Content Moderation Modulation

User and strident speakers, illustration. Credit: GoodStudio

Communications of the ACM, January 2021, Vol. 64 No. 1, Pages 29-31
Law and Technology
By Kate Klonick

Debates about speech on social networks may be heated, but the governance of these platforms is more like an iceberg. We see, and often argue over, decisions to take down problematic speech or to leave it up. But these final decisions are only the visible tip of vast and mostly submerged systems of technological governance.

The urge to do something is an understandable human reaction, and so is reaching for familiar mechanisms to solve new problems. But current regulatory proposals to change how social network platforms moderate content are not a solution for today’s problems of online speech any more than deck chairs were a solution for the Titanic. To do better, the conversation around online speech must do the careful, thoughtful work of exploring below the surface.

In September 2016, Norwegian author Tom Egeland posted Nick Ut’s famous and award-winning photograph The Terror of War on Facebook. The image depicts a nine-year-old girl running naked and screaming down the street following a napalm attack on her village during the Vietnam War. But shortly after it went up, Facebook removed Egeland’s post for violating its Community Standards on sexually exploitative pictures of minors.

Citing the photograph’s historical and political significance, Egeland decried Facebook for censorship. Because of his moderate celebrity status, the photo’s removal quickly became global news. Facebook was rebuked by the Norwegian prime minister, and in a front-page letter titled “Dear Mark Zuckerberg,” Aftenposten, one of Norway’s main newspapers, chastised the site for running roughshod over history and free speech. In the end, Facebook apologized and restored Egeland’s post.

The incident served as a turning point, both for the platforms and the public. Though sites like YouTube, Reddit, and Facebook had long had policies limiting the content users could post on their platforms, the enforcement of those rules was largely out of the public eye. For many users worldwide, The Terror of War’s high-profile removal was the first time they confronted the potentially deleterious effects of the site’s censorial power. The incident was a foundational lesson not just in how difficult such decisions are but in how high the stakes are if platforms get them wrong.

In turn, the public backlash was a turning point in how Facebook operationalized its policies and their enforcement. When a post is flagged for removal by another user, it is put in a queue and reviewed by a human content moderator, who determines whether it violates the site’s Community Standards. Those content moderators are typically off-site workers in the Philippines, India, or Ireland, reviewing flagged content in call centers 24 hours a day, 7 days a week.
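
As a rough sketch of the workflow just described, the following Python fragment models a user flag entering a review queue and a human moderator recording a keep-or-remove decision. Everything here (class names, the human_review stand-in, and the hard-coded judgment about a newsworthiness exception) is hypothetical and greatly simplified; it is not Facebook’s actual tooling.

from collections import deque
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Post:
    post_id: int
    author: str
    content: str

@dataclass
class FlaggedItem:
    post: Post
    reason: str                     # the rule the reporting user believes was broken
    decision: Optional[str] = None  # filled in by a human moderator: "remove" or "keep"

class ReviewQueue:
    """Flagged posts wait in a queue until a human moderator reviews them."""

    def __init__(self) -> None:
        self._queue = deque()

    def flag(self, post: Post, reason: str) -> None:
        # A user report places the post in the moderation queue.
        self._queue.append(FlaggedItem(post, reason))

    def review_next(self, moderator: Callable[[FlaggedItem], str]) -> FlaggedItem:
        # A human moderator pulls the next item and records a decision.
        item = self._queue.popleft()
        item.decision = moderator(item)
        return item

def human_review(item: FlaggedItem) -> str:
    # Stand-in for the moderator's judgment: here the flagged rule applies,
    # but a newsworthiness exception (as in the Terror of War case) overrides removal.
    violates_rule = True
    newsworthy_exception = True
    if violates_rule and not newsworthy_exception:
        return "remove"
    return "keep"

queue = ReviewQueue()
queue.flag(Post(1, "example_user", "The Terror of War photograph"), reason="nudity of a minor")
print(queue.review_next(human_review).decision)  # -> keep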

The Terror of War photo violated Facebook’s rule on nudity of minors and was thus removable, but it was also a picture of historical and newsworthy significance and thus an exception to removal. Historical value and newsworthiness, however, are highly contextual and culturally defined, and therefore difficult for someone from another culture, as a human content moderator often is, to recognize. The incident also introduced many to the opaque and unaccountable world in which private social media companies govern the public right of freedom of expression.

Since the Terror of War incident, we have had no shortage of reminders of the power of Big Tech and its lack of accountability to the users who rely on its services to speak and interact. Near-constant controversies about social media’s impact on everything from political ads to violent extremism and from data protection to hate speech have led to various attempts at government regulation—some more successful than others.

In September 2019, cybersecurity expert Bruce Schneier gave a talk at the Royal Society in London. It was titled “Why Technologists Need to Get Involved in Public Policy,” but it could just as easily have been called “Why Public Policy Needs to Get Involved in Technology.” At the crescendo of his 15-minute speech, Schneier argued that “Technologists need to realize that when they’re building a platform they’re building a world … and policymakers need to realize that technologists are capable of building a world.”

Schneier was ostensibly talking about cybersecurity, but his point speaks to the chasm in the middle of almost every technology debate raging today, including one of the most visible: the debate over how to regulate (or not regulate) online speech in the age of social media.


About the Author:

Kate Klonick is an Assistant Professor at St. John’s University School of Law, New York, NY, USA.