What Really Happened When Google Ousted Timnit Gebru

Timnit Gebru - Photograph: Djeneba Aduayom

WIRED, June 8, 2021
By Tom Simonite

“She was a star engineer who warned that messy AI can spread racism. Google brought her in. Then it forced her out. Can Big Tech take criticism from within?”


One afternoon in late November of last year, Timnit Gebru was sitting on the couch in her San Francisco Bay Area home, crying.


Gebru, a researcher at Google, had just clicked out of a last-minute video meeting with an executive named Megan Kacholia, who had issued a jarring command. Gebru was the coleader of a group at the company that studies the social and ethical ramifications of artificial intelligence, and Kacholia had ordered Gebru to retract her latest research paper—or else remove her name from its list of authors, along with those of several other members of her team.


The paper in question was, in Gebru’s mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software—most famously exemplified by a system called GPT-3—that was stoking excitement in the tech industry. Google’s own version of the technology was now helping to power the company’s search engine. Jeff Dean, Google’s revered head of research, had encouraged Gebru to think about the approach’s possible downsides. The paper had sailed through the company’s internal review process and had been submitted to a prominent conference. But Kacholia now said that a group of product leaders and others inside the company had deemed the work unacceptable, Gebru recalls. Kacholia was vague about their objections but gave Gebru a week to act. Her firm deadline was the day after Thanksgiving.


Gebru’s distress turned to anger as that date drew closer and the situation turned weirder. Kacholia gave Gebru’s manager, Samy Bengio, a document listing the paper’s supposed flaws, but told him not to send it to Gebru, only to read it to her. On Thanksgiving Day, Gebru skipped some festivities with her family to hear Bengio’s recital. According to Gebru’s recollection and contemporaneous notes, the document didn’t offer specific edits but complained that the paper handled topics “casually” and painted too bleak a picture of the new technology. It also claimed that all of Google’s uses of large language models were “engineered to avoid” the pitfalls that the paper described.


Gebru spent Thanksgiving writing a six-page response, explaining her perspective on the paper and asking for guidance on how it might be revised instead of quashed. She titled her reply “Addressing Feedback from the Ether at Google,” because she still didn’t know who had set her Kafkaesque ordeal in motion, and sent it to Kacholia the next day.



“The big tech companies tried to steal a march on regulators and public criticism by embracing the idea of AI ethics,” Stark says. But as the research matured, it raised bigger questions. “Companies became less able to coexist with internal critical research,” he says. One person who runs an ethical AI team at another tech company agrees. “Google and most places did not count on the field becoming what it did.”


To some, the drama at Google suggested that researchers on corporate payrolls should be subject to different rules than those from institutions not seeking to profit from AI. In April, some founding editors of a new journal of AI ethics published a paper calling for industry researchers to disclose who vetted their work and how, and for whistle-blowing mechanisms to be set up inside corporate labs. “We had been trying to poke on this issue already, but when Timnit got fired it catapulted into a more mainstream conversation,” says Savannah Thais, a researcher at Princeton on the journal’s board who contributed to the paper. “Now a lot more people are questioning: Is it possible to do good ethics research in a corporate AI setting?”


If that mindset takes hold, in-house ethical AI research may forever be held in suspicion—much the way industrial research on pollution is viewed by environmental scientists. Jeff Dean admitted in a May interview with CNET that the company had suffered a real “reputational hit” among people interested in AI ethics work. The rest of the interview dealt mainly with promoting Google’s annual developer conference, where it was soon announced that large language models, the subject of Gebru’s fateful critique, would play a more central role in Google search and the company’s voice assistant. Meredith Whittaker, faculty director of New York University’s AI Now Institute, predicts that there will be a clearer split between work done at institutions like her own and work done inside tech companies. “What Google just said to anyone who wants to do this critical research is, ‘We’re not going to tolerate it,’” she says. (Whittaker herself once worked at Google, where she clashed with management over AI ethics and the Maven Pentagon contract before leaving in 2019.)


Any such divide is unlikely to be neat, given how the field of AI ethics sprouted in a tech industry hothouse. The community is still small, and jobs outside big companies are sparser and much less well paid, particularly for candidates without computer science PhDs. That’s in part because AI ethics straddles the established boundaries of academic departments. Government and philanthropic funding is no match for corporate purses, and few institutions can rustle up the data and computing power needed to match work from companies like Google.


For Gebru and her fellow travelers, the past five years have been vertiginous. For a time, the period seemed revolutionary: Tech companies were proactively exploring flaws in AI, their latest moneymaking marvel—a sharp contrast to how they’d faced up to problems like spam and social network moderation only after coming under external pressure. But now it appeared that not much had changed after all, even if many individuals had good intentions.


Inioluwa Deborah Raji, whom Gebru escorted to Black in AI in 2017, and who now works as a fellow at the Mozilla Foundation, says that Google’s treatment of its own researchers demands a permanent shift in perceptions. “There was this hope that some level of self-regulation could have happened at these tech companies,” Raji says. “Everyone’s now aware that the true accountability needs to come from the outside—if you’re on the inside, there’s a limit to how much you can protect people.”


Gebru, who recently returned home after her unexpectedly eventful road trip, has come to a similar conclusion. She’s raising money to launch an independent research institute modeled on her work on Google’s Ethical AI team and her experience in Black in AI. “We need more support for external work so that the choice is not ‘Do I get paid by the DOD or by Google?’” she says.


Gebru has had offers, but she can’t imagine working within the industry anytime in the near future. She’s been thinking back to conversations she’d had with a friend who warned her not to join Google, saying it was harmful to women and impossible to change. Gebru had disagreed, believing she could nudge things, just a little, toward a more beneficial path. “I kept on arguing with her,” Gebru says. Now, she says, she concedes the point.


About the Author:

Tom Simonite is a senior writer for WIRED in San Francisco covering artificial intelligence and its effects on the world. He once trained an artificial neural network to generate seascapes and is available for commissions. Simonite was previously San Francisco bureau chief at MIT Technology Review, and wrote and edited technology coverage at New Scientist magazine in London. He lives in San Francisco, where he enjoys riding his bike and testing the reactions of prototype self-driving cars.
