Changing the Nature of AI Research

Communications of the ACM, September 2022, Vol. 65 No. 9, Pages 8-9
BLOG@CACM
By Subbarao Kambhampati

In many ways, we are living in quite a wondrous time for artificial intelligence (AI), with every week bringing some awe-inspiring feat in yet another tacit knowledge (https://bit.ly/3qYrAOY) task that we were sure would be out of reach of computers for quite some time to come. Of particular recent interest are the large learned systems based on transformer architectures, trained with billions of parameters over massive Web-scale multimodal corpora. Prominent examples include large language models (https://bit.ly/3iGdekA) like GPT-3 and PaLM that respond to free-form text prompts, and text-to-image models like DALL-E and Imagen that can map text prompts to photorealistic images (and even models with claims to more general behavior, such as Gato).

The emergence of these large learned models is also changing the nature of AI research in fundamental ways. Just the other day, some researchers were playing with DALL-E and thought that it seems to have developed a secret language of its own (https://bit.ly/3ahH1Py) which, if we can master it, might allow us to interact with the model better. Other researchers found that GPT-3’s responses to reasoning questions can be improved by adding certain seemingly magical incantations to the prompt (https://bit.ly/3aelxmI), the most prominent of these being “Let’s think step by step.” It is almost as if large learned models like GPT-3 and DALL-E are alien organisms whose behavior we are trying to decipher.
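
To make the “incantation” concrete, here is a minimal sketch of this kind of prompt manipulation (what the literature now calls zero-shot chain-of-thought prompting). It assumes access to GPT-3 through the 2022-era OpenAI Python completions API; the API key, model name, and example question are illustrative placeholders, not a prescribed setup:

    # Minimal sketch of zero-shot chain-of-thought prompting.
    # Assumes the 2022-era OpenAI completions API; the key and model
    # name below are illustrative placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    question = ("A juggler can juggle 16 balls. Half of the balls are golf "
                "balls, and half of the golf balls are blue. How many blue "
                "golf balls are there?")

    def ask(prompt: str) -> str:
        """Send a completion request and return the generated text."""
        response = openai.Completion.create(
            model="text-davinci-002",  # a GPT-3 model, for illustration
            prompt=prompt,
            max_tokens=256,
            temperature=0,
        )
        return response.choices[0].text.strip()

    # Plain prompt: the model tends to answer immediately, often wrongly.
    print(ask(f"Q: {question}\nA:"))

    # Same question with the incantation appended: the model now spells
    # out intermediate steps, which empirically improves its accuracy.
    print(ask(f"Q: {question}\nA: Let's think step by step."))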

This is certainly a strange turn of events for AI. Since its inception, AI has existed in the no-man’s land between engineering (which aims at designing systems for specific functions) and “Science” (which aims to discover the regularities in naturally occurring phenomena). The science part of AI came from its original pretensions to provide insights into the nature of (human) intelligence, while the engineering part came from its focus on intelligent function (getting computers to demonstrate intelligent behavior) rather than on insights about natural intelligence.

This situation is changing rapidly, especially as AI is becoming synonymous with large learned models. Some of these systems are coming to a point where not only do we not know how the models we trained are able to show specific capabilities, we are very much in the dark even about what capabilities they might have (PaLM’s alleged capability of “explaining jokes” is a case in point; see https://bit.ly/3yJk1m4). Often, even their creators are caught off guard by things these systems seem capable of doing. Indeed, probing these systems to get a sense of the scope of their “emergent behaviors” has become quite a trend in AI research of late.

Given this state of affairs, it is increasingly clear that at least part of AI is straying firmly away from its “engineering” roots.

About the Author:

Subbarao Kambhampati is a professor at the School of Computing & AI at Arizona State University, and a former president of the Association for the Advancement of Artificial Intelligence. He studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He can be followed on Twitter @rao2z.