The Rise of the Chatbots

pattern of AI chatbots on displays, illustration - Credit: Venomous Vector

Communications of the ACM, July 2023, Vol. 66 No. 7, Pages 16-17
News
By Neil Savage

“How do we keep track of the truth when bots are becoming increasingly skilled liars?”

During the 2016 U.S. presidential race, a Russian “troll farm” calling itself the Internet Research Agency sought to harm Hillary Clinton’s election chances and help Donald Trump reach the White House by using Twitter to spread false news stories and other disinformation, according to a 2020 report from the Senate Intelligence Committee. Most of that content was apparently produced by human beings, a supposition supported by the fact that activity dropped off on Russian holidays.

Soon, though, if not already, such propaganda will be produced automatically by artificial intelligence (AI) systems such as ChatGPT, a chatbot capable of creating human-sounding text.

“Imagine a scenario where you have ChatGPT generating these tweets. The number of fake accounts you could manage for the same price would be much larger,” says V.S. Subrahmanian, a professor of computer science at Northwestern University, whose research focuses on the intersection of AI and security problems. “It’ll potentially scale up the generation of fakes.”

Subrahmanian co-authored a Brookings Institution report released in January that warned the spread of deepfakes—computer-generated content that purports to come from humans—could increase the risk of international conflict, and that the technology is on the brink of being used much more widely. That report focuses on fake video, audio, and images, but text could be a problem as well, he says.

Text generation may not have caused problems so far. “I have not seen any evidence yet that malicious actors have used it in any substantive way,” Subrahmanian says. “But every time a new technology emerges, it is only a matter of time, so we should be prepared for it sooner rather than later.”

There is evidence that cybercriminals are exploring the potential of text generators. A January blog post from security software maker Check Point said that in December, shortly after ChatGPT was released, unsophisticated programmers were using it to generate software code for ransomware and other malware. “Although the tools that we present in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad,” the company wrote.

Meanwhile, WithSecure, a Finnish provider of cybersecurity tools, warned of the threat of so-called “prompt engineering,” in which users coax software like ChatGPT into creating phishing attacks, harassment, and fake news.

ChatGPT, a chatbot based on a large language model (LLM) developed by AI company OpenAI, has generated much excitement, as well as fear, about the advances of AI in general, and there has been a backlash from technologists across many disciplines. There have been calls to pause AI’s development; at press time, a one-sentence open letter to the public, signed by hundreds of the world’s leading AI scientists, researchers, and others (including OpenAI CEO Sam Altman), warned that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Microsoft, which invested in OpenAI, soon incorporated the chatbot into its search engine, Bing, leading to reports of inaccurate and sometimes creepy conversations. Google also put out Bard, its own chatbot based on its LaMDA LLM, which had previously made news when a Google engineer proclaimed it was self-aware (he was subsequently fired).

Despite some early misfires, the text generated by these LLMs can sound remarkably like it was written by humans. “The ability to generate wonderful prose is a big and impressive scientific accomplishment from the ChatGPT team,” Subrahmanian says.

About the Author:

Neil Savage is a science and technology writer based in Lowell, MA, USA.
