Trust, Regulation, and Human-in-the-Loop AI: within the European region
Communications of the ACM, April 2022, Vol. 65 No. 4, Pages 64-68
Europe Region Special Section: Big Trends
By Stuart E. Middleton, Emmanuel Letouzé, Ali Hossaini, Adriane Chapman

“The E.U. is an early mover in the race to regulate AI, and with the draft E.U. AI Act, it has adopted an assurance-based regulatory environment using yet-to-be-defined AI assurance standards.”


Artificial intelligence (AI) systems employ learning algorithms that adapt to their users and environment, with learning either pre-trained or allowed to continue adapting during deployment. Because AI can optimize its behavior, a deployed system’s behavior can diverge from its factory model after release, often at the perceived expense of safety, reliability, and human controllability. Since the Industrial Revolution, trust has ultimately resided in regulatory systems set up by governments and standards bodies. Research into human interactions with autonomous machines demonstrates a shift in the locus of trust: we must trust non-deterministic systems such as AI to self-regulate, albeit within boundaries. This radical shift is one of the biggest issues facing the deployment of AI in the European region.
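To make that divergence concrete, the sketch below (our illustration, not from the article) trains a simple online classifier on “factory” data and then lets it keep adapting to a shifted deployment distribution; the learned weights drift away from their release-time state. The data, model choice, and distribution shift are all invented for the example.

```python
# Minimal sketch: a model that keeps learning after release can drift
# away from its "factory" behavior. All details here are illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two-class Gaussian data; `shift` moves class 1, simulating a
    deployment environment that differs from the factory training set."""
    X = np.vstack([rng.normal(loc=-1.0, size=(n, 2)),
                   rng.normal(loc=1.0 + shift, size=(n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

# "Factory" training before release.
model = SGDClassifier(loss="log_loss", random_state=0)
X_train, y_train = sample(500)
model.partial_fit(X_train, y_train, classes=[0, 1])
factory_weights = model.coef_.copy()

# After release, the model keeps adapting to shifted live data.
for _ in range(20):
    X_live, y_live = sample(50, shift=2.0)
    model.partial_fit(X_live, y_live)

# The decision boundary has moved from its factory state.
print("weight drift:", np.linalg.norm(model.coef_ - factory_weights))
```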


Trust has no universally accepted definition, but Rousseau et al. defined it as “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another.” Trust is thus an attitude that an agent will behave as expected and can be relied upon to reach its goal. Trust breaks down after an error or a misunderstanding between the agent and the trusting individual. The psychological state of trust in AI is an emergent property of a complex system, usually involving many cycles of design, training, deployment, performance measurement, regulation, redesign, and retraining.


Trust matters, especially in critical sectors such as healthcare, defense, and security, where duty of care is foremost. Trustworthiness must be planned for, rather than treated as an afterthought. We can trust in AI, as when a doctor uses algorithms to screen medical images. We can also trust with AI, as when journalists reference a social network algorithm to analyze the sources of a news story. Growing adoption of AI into institutional systems relies on citizens trusting these systems and having confidence in how they are designed and regulated.


Regional approaches to managing trust in AI have recently emerged, leading to different regulatory regimes in the U.S., the European region, and China. We review these regulatory divergences. Within the European region, research programs are examining how trust impacts user acceptance of AI; examples include the UKRI Trustworthy Autonomous Systems Hub, the French Confiance.ai project, and the German AI Breakthrough Hub. Europe appears to be developing a “third way” alongside the U.S. and China.


Healthcare contains many examples of AI applications, including online harm risk identification, mental health behavior classification, and automated blood testing. In defense and security, examples include combat management systems and the use of machine learning to identify chemical and biological contamination. There is a growing awareness within critical sectors that AI systems need to address a “public trust deficit” by strengthening the perceived reliability of AI. In the next two sections, we discuss research highlights around two key trends: building safer, more reliable AI systems to engender trust, and putting humans in the loop with AI systems and teams; a minimal illustration of the latter follows. We conclude with a discussion of applications and what we consider the future outlook for this area.
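As a concrete illustration of the human-in-the-loop trend (our sketch, not a method from the article), the code below shows one common pattern: the model acts on high-confidence predictions and defers uncertain ones to a human review queue. The toy model, labels, and confidence threshold are assumptions made for the example.

```python
# Minimal human-in-the-loop sketch: confident predictions are automated,
# uncertain ones are routed to a human reviewer. Details are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class HumanInTheLoopClassifier:
    predict: Callable[[str], Tuple[str, float]]  # item -> (label, confidence)
    confidence_threshold: float = 0.9
    review_queue: List[str] = field(default_factory=list)

    def classify(self, item: str) -> str:
        label, confidence = self.predict(item)
        if confidence >= self.confidence_threshold:
            return label                    # machine decides
        self.review_queue.append(item)      # human decides
        return "DEFERRED_TO_HUMAN"

# Toy model: flags items containing a risk keyword, with made-up confidences.
def toy_model(item: str) -> Tuple[str, float]:
    return ("risk", 0.95) if "harm" in item else ("safe", 0.6)

hitl = HumanInTheLoopClassifier(predict=toy_model)
print(hitl.classify("message mentioning harm"))  # -> "risk"
print(hitl.classify("ambiguous message"))        # -> "DEFERRED_TO_HUMAN"
print(hitl.review_queue)                         # items awaiting human review
```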

Read the Full Article »

About the Authors:

Stuart E. Middleton is a lecturer in computer science at the University of Southampton, Southampton, U.K.

Emmanuel Letouzé is a Marie Curie Fellow at Universitat Pompeu Fabra, Barcelona, Spain.

Ali Hossaini is a Senior Visiting Research Fellow at King’s College London, U.K.

Adriane Chapman is a professor in computer science at the University of Southampton, Southampton, U.K.
