Can Generative AI Bots Be Trusted?

Communications of the ACM, June 2023, Vol. 66 No. 6, Pages 24-27
The Profession of IT
By Peter J. Denning

“A chatbot prompt is a probe into the conversation of a crowd.”

In November 2022, OpenAI released ChatGPT, a major step forward in creative artificial intelligence. ChatGPT is OpenAI’s interface to a “large language model,” a new breed of AI based on a neural network trained on billions of words of text. ChatGPT generates natural-language responses to queries (prompts), drawing on patterns in those texts. By bringing working versions of this technology to the public, ChatGPT has unleashed a huge wave of experimentation and commentary. It has inspired moods of awe, amazement, fear, and perplexity. It has stirred massive consternation around its mistakes, foibles, and nonsense. And it has aroused extensive fear about job losses to AI automation.

Where does this new development fit in the AI landscape? In 2019, Ted Lewis and I proposed a hierarchy of AI machines ranked by learning power (see the accompanying table). We aimed to cut through the chronic hype of AI and show that AI can be discussed without ascribing human qualities to the machines. At the time, no working examples of Creative AI (Level 4) were available to the public. That has changed dramatically with the arrival of “generative AI”: creative AI bots that generate conversational text, images, music, and computer code.

Table. Hierarchy of AI machines, ranked by learning power.

Text-generator bots, also called chatbots, are trained on huge amounts of natural-language text obtainable from the Internet. Their core neural networks produce outputs that have a high probability of being associated with the inputs in the training data. Those outputs are transformed by natural-language processors into genres such as text, summaries, poems, music, and code. Many years of research have come together in these technologies. However, because the workings of these algorithms are not widely known, to many the technology still looks like magic.
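
To make the mechanism less magical, here is a minimal sketch in Python. It stands in a toy bigram table for the billion-parameter neural network; the tiny corpus and all names are illustrative assumptions, not OpenAI’s actual method. The generation loop, though, captures the core idea: each step emits a word that frequently followed the preceding word in the training text.

    import random
    from collections import defaultdict

    # Toy training "corpus" (an illustrative stand-in for billions of words).
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    ).split()

    # Count how often each word follows each other word in the training text.
    follows = defaultdict(lambda: defaultdict(int))
    for prev, word in zip(corpus, corpus[1:]):
        follows[prev][word] += 1

    def sample_next(prev):
        # Choose the next word in proportion to how often it followed `prev`.
        candidates = follows.get(prev, {})
        if not candidates:
            return None
        words = list(candidates)
        weights = [candidates[w] for w in words]
        return random.choices(words, weights=weights, k=1)[0]

    def generate(prompt, length=8):
        # Extend the prompt one probable word at a time. Truth plays no role
        # anywhere in this procedure -- only frequency in the training text.
        words = prompt.split()
        for _ in range(length):
            nxt = sample_next(words[-1])
            if nxt is None:
                break
            words.append(nxt)
        return " ".join(words)

    print(generate("the cat"))  # e.g., "the cat chased the dog sat on the mat ."

Scaled up from word pairs to long contexts, and from counting to deep networks, this is still the essence of what the bots do: emit likely continuations, not verified facts.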

Opinions about the implications of AI bot technology are all over the map. Technology investors and AI developers are enthusiastic. Many others are deeply concerned about trust, authorship, education, jobs, teaming, and inclusion.

Such mistakes may be amusing, but they reveal a deep limitation of AI bots. Indeed, we have no grounds to expect accuracy from these machines. They do not care about truth. They simply generate probable text given the text prompts. They are amusing to play with but dangerous if taken as authoritative.

When we present a chatbot with a prompt, we are probing the space of conversations in which it was trained, seeking a response that is close to what has been said but not necessarily the same. Chatbots can do this kind of probing much faster than humans.
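
This probing is easy to try. The sketch below assumes the open-source Hugging Face transformers library and the small public GPT-2 model (a stand-in, not ChatGPT itself). Sampling several continuations of one prompt returns several nearby points in the space of text the model was trained on: each response is close to what has been said before, but not necessarily identical to anything in the training data.

    from transformers import pipeline, set_seed

    # Load a small public text-generation model (a stand-in for ChatGPT).
    generator = pipeline("text-generation", model="gpt2")
    set_seed(42)  # make the sampled probes reproducible

    # Three samples from the same prompt: three probes into the same
    # neighborhood of the model's training conversations.
    responses = generator(
        "A chatbot prompt is a probe into",
        max_new_tokens=20,
        num_return_sequences=3,
        do_sample=True,    # sample probable words, not just the single top word
        temperature=0.9,   # widen the neighborhood being probed
    )
    for r in responses:
        print(r["generated_text"])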

Chatbot models are notoriously biased toward the conversations of the well-educated and well-off, even within rich countries.

About the Author:

Peter J. Denning is Distinguished Professor of Computer Science at the Naval Postgraduate School in Monterey, CA, Editor of ACM Ubiquity, and a past president of ACM. His most recent book is Computational Thinking (with Matti Tedre, MIT Press, 2019). The author’s views expressed here are not necessarily those of his employer or the U.S. federal government.