ECIAIR Mini Tracks

The Mini Tracks at ECIAIR

      • Malicious Use of Artificial Intelligence: New Challenges for Democratic Institutions and Political Stability
      • AI Ethics, from Design to Certification
      • Human Centred Futures
      • “We need to talk about AI regulation”

Malicious Use of Artificial Intelligence: New Challenges for Democratic Institutions and Political Stability

Mini Track Chair: Prof. Evgeny N. Pashentsev, Lomonosov Moscow State University, Russia  

ECIAIR 2021 Mini Track on the Malicious Use of Artificial Intelligence: New Challenges for Democratic Institutions and Political Stability  

The possibilities for artificial intelligence (AI) are growing at an unprecedented rate. AI has many areas of social utility, from machine translation and medical diagnostics to electronic trading and education. Less investigated are the areas and types of the malicious use of artificial intelligence (MUAI), which deserve further attention. Global, disastrous, rapid and latent consequences of MUAI cannot be excluded. MUAI implies the possibility of exploiting multiple weaknesses of individuals and of human civilization as a whole. For instance, AI could be integrated with a nuclear or biological attack and even improve its effectiveness. However, AI could similarly be used as a highly efficient defence instrument. International experience in monitoring online content and in predictive analytics indicates the possibility of creating an AI system, based on information disseminated in the digital environment, that could not only identify threats to information and psychological security in a timely manner but also offer scenarios for counteraction (including counteracting offensive weapons systems).  

Suggested topics include but are not limited to:

  • Dynamic social and political systems and the malicious use of AI
  • AI in civil and military conflicts
  • AI enhancing terrorist threats and counter-terrorist response
  • Role and practice of the malicious use of AI in contemporary geopolitical confrontation
  • Predictive analytics and prognostic weapons
  • Risk scenarios of the malicious use of AI
  • Spoofing, data extraction, and poisoning of training data to exploit vulnerabilities under the malicious use of AI
  • Artificial Intelligence Online Reputation Management (ORM)
  • AI in Lethal Autonomous Weapons Systems (LAWS)
  • Deepfakes and their possible influence on political warfare
  • Amplification and political agenda setting
  • Emotional AI in political warfare
  • Reputation damage through bot activities
  • Challenges of the malicious use of AI
  • Ways and means to counter the targeted informational and psychological destabilization of democratic institutions through AI

AI Ethics, from Design to Certification

Mini Track Chair: Prof A G Hessami, Vega Systems, UK  

ECIAIR 2021 Mini Track on AI Ethics, from Design to Certification  

With the rapid advancement and application of autonomous decision-making and algorithmic learning systems, often referred to as AI, consideration of the societal values impacted by such artefacts should be underpinned by guidelines, standards and independent certification to engender trust among all stakeholders. This track covers all aspects of the exploration, consultation and articulation of ethical requirements, as well as risk-based design, deployment, monitoring and decommissioning, for whole-life-cycle ethical assurance of AI systems.  

Suggested topics include but are not limited to:  

  • Value Based/Sensitive Design
  • Consideration of Ethics in Autonomous Decision Making
  • Facets of Technology Ethics
  • Independent Verification of Conformity to Ethics
  • Emerging AI Ethics Guidelines, Standards and Certification Criteria

Human Centred Futures

Mini Track Chair: Prof Karen Cham, University of Brighton, UK

ECIAIR 2021 Mini Track on Human Centred Futures  

“Human Centred Futures” is a proposed open call track for full academic and/or position papers, case studies and/or demos regarding all forms of human factors in the social application of robotics and AI for Industry 4.0.  

This includes, but is not limited to, human/machine teaming, novel AI, neural networks, cognitive systems, psychology, ergonomics, human performance measures, sentiment analysis, behavioural analytics, conversion metrics and mitigating bias in VUCA scenarios enabled or accelerated by 5G, 6G and future networks. Verticals include:  

  • digital health, care and wellbeing
  • agri-metrics and geo-data economies
  • serious games, virtualised and simulated training
  • next gen retail, arts and entertainment
  • re-manufacturing and circular economies
  • enterprise and behavioural change applications, etc.

Suggested topics include but are not limited to:  

  • untethered / remote XR applications
  • intelligent DX and psychometrics
  • EV & HITL systems
  • IoT & IoP in smart homes, smart cities, smart planet
  • quantified self Internet of Value (IoV) and Internet of Mind (IoM)

“We need to talk about AI regulation”

Mini Track Chair: Marija Cubric, University of Hertfordshire, UK

ECIAIR 2021 Mini Track on “We need to talk about AI regulation”  

The great Stephen Hawking once said that the emergence of AI could be the "worst event in the history of our civilization" and urged AI developers to "employ best practice" to control its development. While artificial general superintelligence is still a distant prospect, even in the context of the narrow AI on which most current AI development is based, AI has the potential to harm humans physically, as in the case of autonomous weapons; psychologically, by influencing, controlling and manipulating human agency through fake news; and, more generally, at the societal level, for example by introducing bias into decision processes.  

If we accept the premise that AI has elements with destructive potential for the human race, then we should start thinking about a regulatory framework for its development and deployment. Some work in this area is beginning to emerge from various directions, such as the EU proposal for a legal framework for AI, Spiegelhalter’s tests for trustworthy algorithms, Suresh and Guttag’s framework for understanding unintended consequences of machine learning, and a few other conceptual frameworks offering AI ethics guidelines. Still, many remain unconvinced that regulated AI is the way forward, worrying that regulation may stifle innovation and create an uneven playing field based on who controls the regulations.  

For this mini-track we invite papers that provide diverse perspectives on AI regulation, based on research and lessons learned from practice.

Suggested topics include but are not limited to:  

  • new conceptual models for AI regulation
  • systematic reviews of current research on AI regulation
  • case-studies based on existing AI R&D projects focusing on AI regulation
  • stakeholders’ views, opinions, and critical perspectives on the existing initiatives on AI regulations
  • comparative perspectives (e.g. EU, North America, China) on AI regulation