Responsible AI: Bridging From Ethics to Practice

Communications of the ACM, August 2021, Vol. 64 No. 8, Pages 32-35
Viewpoint
By Ben Shneiderman

“These recommendations are meant to increase reliability, safety, and trustworthiness while increasing the benefits of AI technologies.”


The high expectations of AI have triggered worldwide interest and concern, generating 400+ policy documents on responsible AI. Intense discussions over the ethical issues lay a helpful foundation, preparing researchers, managers, policymakers, and educators for constructive discussions that will lead to clear recommendations for building the reliable, safe, and trustworthy systems that will be commercial successes. This Viewpoint focuses on four themes that lead to 15 recommendations for moving forward. The four themes combine AI thinking with human-centered User Experience Design (UXD).


Ethics and Design. Ethical discussions are a vital foundation, but raising the edifice of responsible AI requires design decisions to guide software engineering teams, business managers, industry leaders, and government policymakers. Ethical concerns are catalogued in the Berkman Klein Center report, which offers ethical principles in eight categories: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. These important ethical foundations can be strengthened with actionable design guidelines.


Autonomous Algorithms and Human Control. The recent CRA report on “Assured Autonomy” and the IEEE’s influential report on “Ethically Aligned Design” are strongly devoted to “Autonomous and Intelligent Systems.” The reports emphasize machine autonomy, which becomes safer when human control can be exercised to prevent damage. I share the desire for autonomy by way of elegant and efficient algorithms, while adding well-designed control panels for users and supervisors to ensure safer outcomes. Autonomous aerial drones become more effective as remotely piloted aircraft, and NASA’s Mars Rovers can make autonomous movements, but a whole control room of operators manages the larger picture of what is happening.
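
To make the control-panel idea concrete, here is a minimal Python sketch of supervisory control: an autonomous planner proposes actions, but a human-operated panel can veto them before execution. The DronePlanner and ControlPanel names and their trivial policies are invented for illustration only, not drawn from any deployed system.

    # Hypothetical sketch: human supervisory control over an autonomous planner.
    class DronePlanner:
        """Autonomous component: proposes the next action from telemetry."""
        def propose_action(self, telemetry):
            # Placeholder policy: climb if too low, otherwise hold position.
            return "climb" if telemetry["altitude_m"] < 30 else "hold"

    class ControlPanel:
        """Human-control component: supervisors can pause or veto actions."""
        def __init__(self):
            self.paused = False
            self.vetoed = set()

        def authorize(self, action):
            return (not self.paused) and (action not in self.vetoed)

    def control_loop(planner, panel, telemetry):
        action = planner.propose_action(telemetry)
        if panel.authorize(action):
            return action  # execute autonomously
        return "hover"     # safe fallback pending human input

    panel = ControlPanel()
    panel.vetoed.add("climb")  # supervisor blocks climbing near an airport
    print(control_loop(DronePlanner(), panel, {"altitude_m": 12}))  # -> hover

The design point is that autonomy and human control are not opposites: the planner still acts on its own in the common case, while the panel gives supervisors a well-defined place to intervene.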


Humans in the Group; Computers in the Loop. While people are instinctively social, they benefit from well-designed computers. Some designers favor developing computers as collaborators, teammates, and partners, whereas adding control panels and status displays would make them comprehensible appliances. Machine and deep learning strategies will be more widely used if they are integrated into visual user interfaces, as they are in counterterrorism centers, financial trading rooms, and transportation or utility control centers.
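
As a small illustration of computers in the loop, the following Python sketch routes a classifier’s outputs through a status display: confident predictions proceed automatically, while low-confidence cases are flagged for human review. The classify stand-in, the 0.80 threshold, and the record fields are all hypothetical.

    # Hypothetical sketch: a status display with human review of uncertain cases.
    REVIEW_THRESHOLD = 0.80

    def classify(record):
        # Stand-in for a trained model; returns (label, confidence).
        score = min(0.99, record["amount"] / 10_000)
        return ("flag" if score > 0.5 else "clear"), max(score, 1 - score)

    def status_display(records):
        print(f"{'ID':<6}{'Label':<8}{'Conf.':<8}{'Disposition'}")
        for r in records:
            label, conf = classify(r)
            disposition = "auto" if conf >= REVIEW_THRESHOLD else "HUMAN REVIEW"
            print(f"{r['id']:<6}{label:<8}{conf:<8.2f}{disposition}")

    status_display([{"id": "T1", "amount": 9500},
                    {"id": "T2", "amount": 5200}])

Keeping the disposition visible alongside the model’s confidence makes the system a comprehensible appliance rather than an opaque teammate.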


Explainable AI (XAI) and Comprehensible AI (CAI). Many researchers from AI and HCI have turned to the problem of providing explanations of AI decisions, as required by the European General Data Protection Regulation (GDPR), which stipulates a “right to explanation.” Explanations of why mortgage applications or parole requests are rejected can include local or global descriptions, but a useful complementary approach is to prevent confusion and surprise by making comprehensible user interfaces that enable rapid interactive exploration of decision spaces.
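
For a linear scoring model, a local explanation can be as simple as listing each feature’s contribution: its weight times the applicant’s deviation from the average case. The following Python sketch shows the idea for a hypothetical mortgage model; the weights, means, and applicant values are invented for illustration.

    # Hypothetical sketch: local explanation for a linear mortgage-scoring model.
    WEIGHTS = {"income_k": 0.8, "debt_ratio": -3.5, "late_payments": -1.2}
    MEANS   = {"income_k": 55.0, "debt_ratio": 0.30, "late_payments": 1.0}

    def explain(applicant, bias=0.0):
        # Contribution of each feature: weight * deviation from the average case.
        contributions = {f: WEIGHTS[f] * (applicant[f] - MEANS[f]) for f in WEIGHTS}
        score = bias + sum(contributions.values())
        for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
            print(f"{feature:<15}{c:+7.2f}")
        verdict = "approve" if score >= 0 else "reject"
        print(f"{'score':<15}{score:+7.2f}  ({verdict})")

    explain({"income_k": 42.0, "debt_ratio": 0.55, "late_payments": 3})

An interactive interface could let users adjust these inputs and watch the contributions change, supporting the rapid exploration of decision spaces described above.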


Combining AI with UXD will enable rapid progress toward the goals of reliable, safe, and trustworthy systems. Software engineers, designers, developers, and their managers are practitioners who need more than ethical discussion. They want clear guidance about what to do today as they work toward deadlines with their limited team resources. They operate in competitive markets that reward speed, clarity, and performance.


This Viewpoint is a brief introduction to the 15 recommendations in a recent article in the ACM Transactions on Interactive Intelligent Systems, which bridge the gap between widely discussed ethical principles and practical steps for effective governance that will lead to reliable, safe, and trustworthy AI systems. That article offers detailed descriptions and numerous references. The recommendations, grouped into three levels of governance structures, are meant to provoke discussions that could lead to validated, refined, and widely implemented practices.


About the Author:

Ben Shneiderman is an Emeritus Distinguished University Professor in the Department of Computer Science at the University of Maryland, Founding Director (1983–2000) of the Human-Computer Interaction Laboratory, and a Member of the U.S. National Academy of Engineering.
