What OpenAI Really Wants

Ilya Sutskever, Sam Altman, Mira Murati, and Greg Brockman of OpenAI. Photograph: Jessica Chou

Print Title: “The Transformers”
WIRED, September 5, 2023
Backchannel
By Steven Levy

“The young company sent shock waves around the world when it released ChatGPT. But that was just the start. The ultimate goal: Change everything. Yes. Everything.”


THE AIR CRACKLES with an almost Beatlemaniac energy as the star and his entourage tumble into a waiting Mercedes van. They’ve just ducked out of one event and are headed to another, then another, where a frenzied mob awaits. As they careen through the streets of London—the short hop from Holborn to Bloomsbury—it’s as if they’re surfing one of civilization’s before-and-after moments. The history-making force personified inside this car has captured the attention of the world. Everyone wants a piece of it, from the students who’ve waited in line to the prime minister.


Inside the luxury van, wolfing down a salad, is the neatly coiffed 38-year-old entrepreneur Sam Altman, cofounder of OpenAI; a PR person; a security specialist; and me. Altman is unhappily sporting a blue suit with a tieless pink dress shirt as he whirlwinds through London as part of a monthlong global jaunt through 25 cities on six continents. As he gobbles his greens—no time for a sit-down lunch today—he reflects on his meeting the previous night with French president Emmanuel Macron. Pretty good guy! And very interested in artificial intelligence.


As was the prime minister of Poland. And the prime minister of Spain.


Riding with Altman, I can almost hear the ringing, ambiguous chord that opens “A Hard Day’s Night”—introducing the future. Last November, when OpenAI let loose its monster hit, ChatGPT, it triggered a tech explosion not seen since the internet burst into our lives. Suddenly the Turing test was history, search engines were endangered species, and no college essay could ever be trusted. No job was safe. No scientific problem was immutable.


Altman didn’t do the research, train the neural net, or code the interface of ChatGPT and its more precocious sibling, GPT-4. But as CEO—and a dreamer/doer type who’s like a younger version of his cofounder Elon Musk, without the baggage—he has become the face that one news article after another uses as the visual symbol of humanity’s new challenge. At least those that haven’t led with an eye-popping image generated by OpenAI’s visual AI product, Dall-E. He is the oracle of the moment, the figure that people want to consult first on how AI might usher in a golden age, or consign humans to irrelevance, or worse.


Altman’s van whisks him to four appearances that sunny day in May. The first is stealthy, an off-the-record session with the Round Table, a group of government, academic, and industry types. Organized at the last minute, it’s on the second floor of a pub called the Somers Town Coffee House. Under a glowering portrait of brewmaster Charles Wells (1842–1914), Altman fields the same questions he gets from almost every audience. Will AI kill us? Can it be regulated? What about China? He answers every one in detail, while stealing glances at his phone. After that, he does a fireside chat at the posh Londoner Hotel in front of 600 members of the Oxford Guild. From there it’s on to a basement conference room where he answers more technical questions from about 100 entrepreneurs and engineers. Now he’s almost late to a mid-afternoon onstage talk at University College London. He and his group pull up at a loading zone and are ushered through a series of winding corridors, like the Steadicam shot in Goodfellas. As we walk, the moderator hurriedly tells Altman what he’ll ask. When Altman pops on stage, the auditorium—packed with rapturous academics, geeks, and journalists—erupts.


Altman is not a natural publicity seeker. I once spoke to him right after The New Yorker ran a long profile of him. “Too much about me,” he said. But at University College, after the formal program, he wades into the scrum of people who have surged to the foot of the stage. His aides try to maneuver themselves between Altman and the throng, but he shrugs them off. He takes one question after another, each time intently staring at the face of the interlocutor as if he’s hearing the query for the first time. Everyone wants a selfie. After 20 minutes, he finally allows his team to pull him out. Then he’s off to meet with UK prime minister Rishi Sunak.


Maybe one day, when robots write our history, they will cite Altman’s world tour as a milestone in the year when everyone, all at once, started to make their own personal reckoning with the singularity. Or then again, maybe whoever writes the history of this moment will see it as a time when a quietly compelling CEO with a paradigm-busting technology made an attempt to inject a very peculiar worldview into the global mindstream—from an unmarked four-story headquarters in San Francisco’s Mission District to the entire world.


For Altman and his company, ChatGPT and GPT-4 are merely stepping stones along the way to achieving a simple and seismic mission, one these technologists may as well have branded on their flesh. That mission is to build artificial general intelligence—a concept that’s so far been grounded more in science fiction than science—and to make it safe for humanity. The people who work at OpenAI are fanatical in their pursuit of that goal. (Though, as any number of conversations in the office café will confirm, the “build AGI” bit of the mission seems to offer up more raw excitement to its researchers than the “make it safe” bit.) These are people who do not shy from casually using the term “super-intelligence.” They assume that AI’s trajectory will surpass whatever peak biology can attain. The company’s financial documents even stipulate a kind of exit contingency for when AI wipes away our whole economic system.


It’s not fair to call OpenAI a cult, but when I asked several of the company’s top brass if someone could comfortably work there if they didn’t believe AGI was truly coming—and that its arrival would mark one of the greatest moments in human history—most executives didn’t think so. Why would a nonbeliever want to work here? they wondered. The assumption is that the workforce—now at approximately 500, though it might have grown since you began reading this paragraph—has self-selected to include only the faithful. At the very least, as Altman puts it, once you get hired, it seems inevitable that you’ll be drawn into the spell.


At the same time, OpenAI is not the company it once was. It was founded as a purely nonprofit research operation, but today most of its employees technically work for a profit-making entity that is reportedly valued at almost $30 billion. Altman and his team now face the pressure to deliver a revolution in every product cycle, in a way that satisfies the commercial demands of investors and keeps ahead in a fiercely competitive landscape. All while hewing to a quasi-messianic mission to elevate humanity rather than exterminate it.


That kind of pressure—not to mention the unforgiving attention of the entire world—can be a debilitating force. The Beatles set off colossal waves of cultural change, but they anchored their revolution for only so long: Six years after chiming that unforgettable chord they weren’t even a band anymore. The maelstrom OpenAI has unleashed will almost certainly be far bigger. But the leaders of OpenAI swear they’ll stay the course. All they want to do, they say, is build computers smart enough and safe enough to end history, thrusting humanity into an era of unimaginable bounty.


About the Author:

Steven Levy covers the gamut of tech subjects for WIRED, in print and online, and has been contributing to the magazine since its inception. His weekly column, Plaintext, is exclusive to WIRED subscribers online, but the newsletter version is open to all. He has been writing about technology for more than 30 years, writing columns for Rolling Stone and Macworld, leading technology coverage for Newsweek, and cocreating a tech publication, Backchannel, on Medium. (Backchannel was integrated into WIRED in 2017.) He has written seven books, including Hackers, Crypto, Artificial Life, Insanely Great (a history of the Macintosh), and, most recently, In the Plex, the definitive story of Google. He attended Temple University and has a master’s degree in literature from Penn State. He works from the New York office.
