“Inside the fight to reclaim AI from Big Tech’s control”
MIT Technology Review, June 14, 2021
by Karen Hao
“For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.”
Timnit Gebru never thought a scientific paper would cause her so much trouble.
In 2020, as the co-lead of Google’s ethical AI team, Gebru had reached out to Emily Bender, a linguistics professor at the University of Washington, and the two decided to collaborate on research about the troubling direction of artificial intelligence. Gebru wanted to identify the risks posed by large language models, one of the most stunning recent breakthroughs in AI research. The models are algorithms trained on staggering amounts of text. Under the right conditions, they can compose what look like convincing passages of prose.
For a few years, tech companies had been racing to build bigger versions and integrate them into consumer products. Google, which invented the technique, was already using one to improve the relevance of search results. OpenAI announced the largest one, called GPT-3, in June 2020 and licensed it exclusively to Microsoft a few months later.
Gebru worried about how fast the technology was being deployed. In the paper she wound up writing with Bender and five others, she detailed the possible dangers. The models were enormously costly to create—both environmentally (they require huge amounts of computational power) and financially; they were often trained on the toxic and abusive language of the internet; and they’d come to dominate research in language AI, elbowing out promising alternatives.
Like other existing AI techniques, the models don’t actually understand language. But because they can manipulate it to retrieve text-based information for users or generate natural conversation, they can be packaged into products and services that make tech companies lots of money.
That November, Gebru submitted the paper to a conference. Soon after, Google executives asked her to retract it, and when she refused, they fired her. Two months later, they also fired her coauthor Margaret Mitchell, the other leader of the ethical AI team.
The dismantling of that team sparked one of the largest controversies within the AI world in recent memory. Defenders of Google argued that the company has the right to supervise its own researchers. But for many others, it solidified fears about the degree of control that tech giants now have over the field. Big Tech is now the primary employer and funder of AI researchers, including, somewhat ironically, many of those who assess its social impacts.
Among the world’s richest and most powerful companies, Google, Facebook, Amazon, Microsoft, and Apple have made AI core parts of their business. Advances over the last decade, particularly in an AI technique called deep learning, have allowed them to monitor users’ behavior; recommend news, information, and products to them; and most of all, target them with ads. Last year Google’s advertising apparatus generated over $140 billion in revenue. Facebook’s generated $84 billion.
The companies have invested heavily in the technology that has brought them such vast wealth. Google’s parent company, Alphabet, acquired the London-based AI lab DeepMind for $600 million in 2014 and spends hundreds of millions a year to support its research. Microsoft signed a $1 billion deal with OpenAI in 2019 for commercialization rights to its algorithms.
At the same time, tech giants have become large investors in university-based AI research, heavily influencing its scientific priorities. Over the years, more and more ambitious scientists have transitioned to working for tech giants full time or adopted a dual affiliation. From 2018 to 2019, 58% of the most cited papers at the top two AI conferences had at least one author affiliated with a tech giant, compared with only 11% a decade earlier, according to a study by researchers in the Radical AI Network, a group that seeks to challenge power dynamics in AI.
The problem is that the corporate agenda for AI has focused on techniques with commercial potential, largely ignoring research that could help address challenges like economic inequality and climate change. In fact, it has made these challenges worse. The drive to automate tasks has cost jobs and led to the rise of tedious labor like data cleaning and content moderation. The push to create ever larger models has caused AI’s energy consumption to explode. Deep learning has also created a culture in which our data is constantly scraped, often without consent, to train products like facial recognition systems. And recommendation algorithms have exacerbated political polarization, while large language models have failed to clean up misinformation.
It’s this situation that Gebru and a growing movement of like-minded scholars want to change. Over the last five years, they’ve sought to shift the field’s priorities away from simply enriching tech companies, by expanding who gets to participate in developing the technology. Their goal is not only to mitigate the harms caused by existing systems but to create a new, more equitable and democratic AI.
About the Author:
Karen Hao is the senior AI editor at MIT Technology Review, covering the field’s cutting-edge research and its impacts on society. She writes a weekly newsletter called The Algorithm, which was named one of the best newsletters on the internet in 2019 by The Webby Awards. Her work has also won a Front Page Award and been short-listed for the Sigma and Ambies Awards. Before joining the publication, she was a tech reporter and data scientist at Quartz and an application engineer at the first startup to spin out of Google X. She received her B.S. in mechanical engineering, with a minor in energy studies, from MIT.