Toward Verified Artificial Intelligence

[Image: brick edifice of the letters 'AI'. Credit: Peter Crowther Associates]

Communications of the ACM, July 2022, Vol. 65 No. 7, Pages 46-55
Contributed Articles
By Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry

“Verified AI is the goal of achieving strong, ideally provable, assurances of correctness and trustworthiness of AI systems with respect to mathematically specified requirements. The five challenge areas for verified AI are: environment modeling, formal specification, modeling learning systems, scalable formal engines, and correct-by-construction design.”

Artificial intelligence (AI) is a term used for computational systems that attempt to mimic aspects of human intelligence, including functions we intuitively associate with intelligence, such as learning, problem solving, and thinking and acting rationally (see, for example, Russell and Norvig). We interpret the term AI broadly to include closely related areas such as machine learning (ML). Systems that heavily use AI, henceforth referred to as AI systems, have had a significant societal impact in domains that include healthcare, transportation, finance, social networking, e-commerce, and education.

This growing societal-scale impact has brought with it a set of risks and concerns, including errors in AI software, cyber-attacks, and AI system safety. Therefore, the question of verification and validation of AI systems, and, more broadly, of achieving trustworthy AI, has begun to demand the attention of the research community. We define “verified AI” as the goal of designing AI systems that have strong, ideally provable, assurances of correctness with respect to mathematically specified requirements. How can we achieve this goal?
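
To make "mathematically specified requirements" concrete, consider local robustness of a classifier f around an input x, a standard form of such a specification (the formula below is our illustration, not drawn from the article):

    \forall x' .\; \lVert x' - x \rVert_\infty \le \varepsilon \;\Longrightarrow\; f(x') = f(x)

Read: every input x' within an ε-ball of x must receive the same label as x. Verified AI asks for a proof that the deployed system meets such requirements, rather than statistical evidence from testing alone.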

In this article, we consider the challenge of verified AI from the perspective of formal methods, a field of computer science and engineering concerned with the rigorous mathematical specification, design, and verification of systems. At its core, formal methods is about proof: formulating specifications that form proof obligations; designing systems to meet those obligations; and verifying, via algorithmic proof search, that the systems indeed meet their specifications. A spectrum of formal methods, from specification-driven testing and simulation to model checking and theorem proving, is routinely used in the computer-aided design of integrated circuits (ICs) and has been widely applied to find bugs in software, analyze cyber-physical systems (CPS), and find security vulnerabilities. We review the way formal methods has traditionally been applied, identify the unique challenges arising in AI systems, and present ideas and recent advances toward overcoming these challenges.
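
To illustrate algorithmic proof search on a miniature example, the sketch below uses the Z3 SMT solver's Python bindings (the model, input bounds, and property are our illustrative assumptions, not from the article). It encodes a one-neuron ReLU model exactly as constraints, asserts the negation of an input-output specification, and asks the solver for a counterexample; an unsat answer is a proof that the specification holds for every input in the range.

    # A minimal sketch, assuming Z3 is installed (pip install z3-solver).
    # Toy "learned" model: f(x) = relu(0.5*x + 1.0); spec: x in [-1, 1]
    # implies f(x) in [0, 2]. Both are hypothetical, for illustration only.
    from z3 import Real, If, And, Implies, Not, Solver, unsat

    x, y = Real('x'), Real('y')
    pre = 0.5 * x + 1.0
    model = (y == If(pre >= 0, pre, 0))   # exact piecewise ReLU encoding

    spec = Implies(And(x >= -1, x <= 1), And(y >= 0, y <= 2))

    s = Solver()
    s.add(model, Not(spec))               # search for a counterexample
    if s.check() == unsat:
        print("Proved: the spec holds for all x in [-1, 1].")
    else:
        print("Counterexample found:", s.model())

The same style of query underlies dedicated neural-network verifiers, which scale it to much larger networks using specialized solving techniques; the challenges discussed in this article concern precisely that gap in scale and expressiveness.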

This article seeks to address more than just specific types of AI components, such as deep neural networks (DNNs), or specific methods, such as reinforcement learning (RL). It attempts to cover the broad range of AI systems and their design processes. Additionally, recognizing that formal methods provide but one approach to trustworthy AI, our perspective is meant to complement those from other areas. Our views are largely shaped by problems arising from the use of AI in autonomous and semiautonomous systems, where safety and correctness concerns are more acute, though we believe the ideas presented here apply more broadly. This article is written for formal methods researchers and practitioners as well as for the broader computer science community. For the former, we present our viewpoint on where the real problems lie and how formal methods can have the greatest impact. For the latter, we sketch out our vision of how formal methods can be a key enabler for trustworthy AI.

We begin with a brief background on formal verification, an illustrative example, and a summary of the article's key ideas. We then outline five challenges to verified AI, discussing recent progress and presenting principles to address them.

About the Authors:

Sanjit A. Seshia is a professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley, CA, USA.

Dorsa Sadigh is an assistant professor of Computer Science and Electrical Engineering at Stanford University, CA, USA.

S. Shankar Sastry is the Thomas Siebel Professor of Computer Science and a professor of Electrical Engineering and Computer Sciences, Bioengineering, and Mechanical Engineering at the University of California, Berkeley, CA, USA.
