The Troubling Future for Facial Recognition Software

[Illustration: numbered frames around the heads of indistinct figures. Credit: Varuna]

Communications of the ACM, March 2022, Vol. 65 No. 3, Pages 35-36
Viewpoint
By Toby Walsh

“There is promising, if somewhat slow, progress on making facial recognition software less biased.”

George Orwell’s novel 1984 got one thing wrong. A surveillance state will not have people watching people, as the Stasi did in East Germany. Computers will be the ones watching. Technology lets surveillance be performed at an industrial scale.

This is already happening in China, where law enforcement uses facial recognition software for everything from catching relatively minor offenders such as jaywalkers to much more disturbing activities such as tracking Uyghurs. The West has also seen a rise in the use of such software. For example, the controversial company Clearview AI has scraped approximately three billion photographs from the Web, which it uses to sell facial recognition services to agencies including the U.S. Federal Bureau of Investigation.

Fortunately, pushback against these developments has begun. In June 2020, IBM announced it would no longer sell, research, or develop facial recognition software. Amazon and Microsoft quickly followed suit, announcing moratoria on selling such services to the police pending federal regulation.

Local and national governments in the U.S. are hitting the pause button. San Francisco, Boston, and several other cities have introduced bans. And the Facial Recognition and Biometric Technology Moratorium Act introduced by Democratic lawmakers in June 2020 attempts, as the name suggests, to impose a moratorium on the use of facial recognition software. Professional societies such as ACM, along with organizations including Human Rights Watch and the UN, have also called for regulation.

A major ethical concern behind many of these calls is bias. Researchers including MIT’s Joy Buolamwini have demonstrated that the technology often works better on men than women, better on white people than Black people, and worst of all on Black women. And while some facial recognition software has been improved in response, significant biases remain. In June 2020, in the first known case of its type, a man in Detroit was arrested in front of his family for burglary because he was mistakenly identified by facial recognition software. It may come as no surprise that the man was Black.

About the Author:

Toby Walsh is Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia, and at CSIRO Data61. He is a fellow of both the ACM and the Australian Academy of Science, and a strong advocate for limits to ensure AI is used to improve our lives. He has authored three books on AI for general audiences, the most recent of which is Machines Behaving Badly: The Morality of AI.