What To Do About Deepfakes

Communications of the ACM, March 2021, Vol. 64 No. 3, Pages 33-35
Computing Ethics
By Deborah G. Johnson, Nicholas Diakopoulos

“Research and development of synthetic media will be better served if technical experts see themselves as part of the solution, and not the problem.”

Synthetic media technologies are rapidly advancing, making it easier to generate nonveridical media that look and sound increasingly realistic. So-called “deepfakes” (owing to their reliance on deep learning) often present a person saying or doing something they have not said or done. The proliferation of deepfakes creates a new challenge to the trustworthiness of visual experience, and it has already produced harms such as nonconsensual pornography, political disinformation, and financial fraud. Deepfakes can harm viewers by deceiving or intimidating them, harm subjects by damaging their reputations, and harm society by undermining values such as trust in institutions. What can be done to mitigate these harms?
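
For readers unfamiliar with the underlying technique, the following sketch shows the shared-encoder, dual-decoder autoencoder idea behind early face-swap deepfakes. It is a minimal illustration in PyTorch; the layer sizes, the 64x64 resolution, and the omitted training loop are simplifying assumptions, not a description of any particular system.

# Minimal sketch of the shared-encoder, dual-decoder autoencoder idea
# behind early face-swap deepfakes. All sizes are illustrative
# assumptions; the training loop is omitted.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One shared encoder learns identity-agnostic structure (pose, lighting);
# one decoder per identity learns to render that structure as a specific face.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch of person A's face crops

# Training would reconstruct A through decoder_a and B through decoder_b.
# The "swap" at inference: encode a frame of A, decode with B's decoder.
fake_b = decoder_b(encoder(faces_a))
print(fake_b.shape)  # torch.Size([8, 3, 64, 64])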

It will take the efforts of many stakeholders, including platforms, journalists, and policymakers, to counteract the negative effects of deepfakes. Technical experts can and should play an active role. They must marshal their expertise: their understanding of how deepfake technologies work and their insight into how those technologies can be further developed and used. They should direct that expertise toward solutions that preserve the beneficial uses of synthetic media while mitigating its harms. Although successful interventions will likely be interdisciplinary and sociotechnical, technical experts should contribute by designing, developing, and evaluating technical responses and by collaborating with legal, policy, and other stakeholders to implement social responses; one such technical response, automated detection, is sketched below.
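
As one hedged example of a detection-oriented technical response, the sketch below frames deepfake detection as binary classification over face crops. The network, the 64x64 input size, and the 0.5 threshold are illustrative assumptions rather than a vetted detector; a real system would be trained on labeled corpora of authentic and synthetic media.

# Toy real-vs-synthetic classifier over 64x64 face crops, to make the
# detection idea concrete. Architecture and threshold are assumptions;
# a real detector would be trained on labeled authentic/synthetic media.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x))  # one logit per frame

model = FrameClassifier()
frames = torch.rand(4, 3, 64, 64)       # stand-in batch of video face crops
probs = torch.sigmoid(model(frames))    # untrained here, so scores are arbitrary
print((probs > 0.5).squeeze(1))         # per-frame "likely synthetic" flags

Even a strong classifier of this kind is only one component: generation methods adapt to evade detectors, which is one reason successful interventions will likely be sociotechnical rather than purely technical, as noted above.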

The Responsibilities of Technical Experts

Deepfakes pose an age-old challenge for technical experts. As new technologies are being developed, their dangers and benefits are often uncertain, and the dangers loom large. This raises the question of whether technical experts should work at all on or with a technology that has the potential for great harm. One of the best-known and weightiest versions of this dilemma was faced by the scientists involved in the development and use of the atomic bomb. The dilemma also arose for computer scientists as plans for the Strategic Defense Initiative were taking shape, and again when encryption techniques were first debated.

Although some technical experts may decide not to work on or with the synthetic media technologies underlying deepfakes, many will likely attempt to navigate more complicated territory, trying to reap the technology's benefits while avoiding harm. Those who take this route must recognize that their work may enable negative social consequences, and they must take steps to reduce that risk.

About the Authors:

Deborah G. Johnson is Olsson Professor of Applied Ethics, Emeritus, in the Department of Engineering and Society at the University of Virginia in Charlottesville, VA, USA.

Nicholas Diakopoulos is an Associate Professor in Communication Studies and Computer Science (by courtesy) at Northwestern University in Evanston, IL, USA.