AI on the Edge of Predictability
Dr Andy Edmonds, ThinkBase LLC, Wyoming, USA
AI is much concerned with modelling and predicting difficult, vague or uncertain systems. Those predictions will necessarily be vague and uncertain too. Once filtered through the boardroom, the popular press, or the speech of politicians, the uncertainties tend to vanish. When short-term predictions prove dramatically wrong, whether produced by conventional models or by AI, the public loses faith, and the suspicion arises that the modellers biased their predictions for partisan reasons.
This talk will look at some often-unconsidered sources of uncertainty, at areas where overlooked techniques can provide useful analysis, and at the ethical presentation of uncertainty to lay audiences.
Meaningful Human Control in Robotics and Decision Support Systems
Professor Catholijn Jonker, Delft University of Technology
With the advance of robotics and AI, we are frequently confronted with news reports of incidents in which AI made wrong decisions, and in which questions arise about the role of humans in the AI's decision making. Typically, in these situations the robot or AI is performing a task delegated to it by humans, and the result is used by humans to complete their own tasks. In itself, automating tasks is no problem; it spurred the industrial revolution. With AI, however, we touch upon some interesting boundaries, for which I will discuss the deadly myths of autonomous systems (Bradshaw et al.). It is no surprise that there is now a call for meaningful human control. I will define what we mean by that and give examples of what makes it an interesting challenge. These challenges have led me and other colleagues to embrace the concept of Hybrid Intelligence, i.e., intelligence that combines human and artificial intelligence. With that in mind, I will cast a critical eye on my own work on decision support systems for negotiation.
AI (In)justice in Education: Challenges for Policy, Governance and Society
The legitimisation of algorithmic systems in education poses risks through their ability to drive techno-deterministic scenarios (cradle-to-career pathways) and their potential to control knowledge and thereby create power asymmetries. In Bourdieusian terms, we can see a(nother) dominant power arise – those who own the algorithmic systems – with the capacity to decide who will be the mere ‘technician’ and who the ‘engineer’. Before policymakers continue to encourage AI adoption in education and schools become entirely dependent on data-driven systems, we need to consider: (1) what kinds of people should we try to be to ensure we put AI to good use?; (2) what is the planetary and societal cost of AI systems?; and (3) what real alternatives do we leave our children for progress that doesn’t depend on AI?
Ethics of AI: Does AI lead to a responsibility gap?
Professor Lode Lauwaert
It is often claimed that the use of AI systems gives rise to a so-called responsibility gap. In this presentation, I argue that this conclusion is not correct. To do so, it is first necessary to clarify what is meant by ‘responsibility’ here, what exactly the responsibility gap is, and what the argument for it is.