Can the Biases in Facial Recognition Be Fixed; Also, Should They?
Communications of the ACM, March 2021, Vol. 64 No. 3, Pages 20-22
News
By Paul Marks

Joy Buolamwini of the Massachusetts Institute of Technology Media Lab.

“Many facial recognition systems used by law enforcement are shot through with biases. Can anything be done to make them fair and trustworthy?”

In January 2020, Robert Williams of Farmington Hills, MI, was arrested at his home by the Detroit Police Department. He was photographed, fingerprinted, swabbed for DNA, and locked up for 30 hours. His crime? He had not committed one; a facial recognition system operated by the Michigan State Police had wrongly identified him as the thief in a 2018 store robbery. However, Williams looked nothing like the perpetrator captured in the surveillance video, and the case was dropped.

A one-off case? Far from it. Rewind to May 2019, when Detroit resident Michael Oliver was arrested after being identified by the very same police facial recognition unit as the person who stole a smartphone from a vehicle. Again, however, Oliver did not even resemble the person pictured in a smartphone video of the theft. His case, too, was dropped, and Oliver has filed a lawsuit seeking reputational and economic damages from the police.

What Williams and Oliver have in common is that they are both Black, and biases in deep-learning-based facial recognition systems are known to make such technology far more likely to misidentify people of color. “This is not me. You think all Black people look alike?” an incredulous Williams asked detectives who showed him the CCTV picture of the alleged thief, according to The New York Times. In the Detroit Free Press, Oliver recalled detectives showing him the video of the perpetrator and realizing immediately, “It wasn’t me.”
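
The disparities behind such misidentifications are measurable. Audits such as the Gender Shades study and the NIST demographic-effects report (both listed under "See also" below) compare error rates across demographic groups. As a rough sketch of that idea only, and not anything from this article, the following Python computes a per-group false-match rate from invented verification records; the group labels and data are hypothetical.

    from collections import defaultdict

    # Hypothetical audit records: (demographic_group, system_said_match, truly_a_match).
    # A "false match" is the error behind the arrests described above:
    # the system matches a probe image to the wrong person.
    results = [
        ("group_a", True, False),
        ("group_a", False, False),
        ("group_b", True, True),
        ("group_b", False, False),
    ]

    def false_match_rate_by_group(records):
        """False-match rate per group: wrong matches / comparisons that should not match."""
        non_matches = defaultdict(int)     # comparisons whose ground truth is "no match"
        false_matches = defaultdict(int)   # of those, how many the system matched anyway
        for group, predicted, actual in records:
            if not actual:
                non_matches[group] += 1
                if predicted:
                    false_matches[group] += 1
        return {g: false_matches[g] / non_matches[g] for g in non_matches}

    print(false_match_rate_by_group(results))
    # e.g. {'group_a': 0.5, 'group_b': 0.0}; unequal rates across groups are the bias at issue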

It is cases such as these, born of the foisting of privacy-invading mass-surveillance technology on whole populations, that continue to raise major questions over what role facial recognition should have in a civilized society. In an appraisal in the ACM journal XRDS, Luke Stark of Microsoft Research’s Montreal lab dubbed facial recognition the “plutonium of artificial intelligence” and described it as “intrinsically socially toxic.” Regardless of the intentions of its makers, he says, “it needs controls so strict that it should be banned for almost all practical purposes.”

Such controls are now the subject of ongoing legislative efforts in the U.S., the E.U., and the U.K., where lawmakers are attempting to work out how to regulate a technology that the Washington, D.C.-based Georgetown University Law Center has characterized as placing populations in a “perpetual police lineup.” At the same time, activist groups such as Amnesty International are monitoring the rollout of facial recognition at a human rights level, naming and shaming Western firms that supply the technology to China’s surveillance state.

With politicians and pressure groups focused on facial recognition’s regulation, deployment, and human rights issues, where does that leave the technologists who actually make the stuff? Can software design and engineering teams charged with developing such systems address at least some of facial recognition technology’s deep-seated problems?

About the Author:

Paul Marks is a technology journalist, writer, and editor based in London, U.K.

See also:

    • Hill, K. Wrongfully Accused by an Algorithm, The New York Times, June 24, 2020, https://nyti.ms/356Zt8D
    • Anderson, E. Facial Recognition Got Him Arrested for a Crime He Didn’t Commit, Detroit Free Press, July 11, 2020, https://bit.ly/3bnpJwN
    • Buolamwini, J. and Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1–15, 2018, Conference on Fairness, Accountability, and Transparency, https://bit.ly/354ucDu
    • Grother, P., Ngan, M., and Hanaoka, K. Face Recognition Vendor Test Part 3: Demographic Effects, U.S. National Institute of Standards and Technology, December 2019, https://bit.ly/32Uv1vF
    • Report to Congressional Requesters: Facial Recognition Technology: Privacy and Accuracy Issues Related to Commercial Uses, U.S. Government Accountability Office, July 2020, https://bit.ly/2DrR5oV
    • Krishna, A. IBM CEO’s Letter to the U.S. Congress on the Company’s Abandonment of Face Recognition Technology, June 8, 2020, https://ibm.co/3hXDIM3
    • Amazon: A One-Year Moratorium on Police Use of ‘Rekognition’, Amazon’s COVID-19 Blog, June 10, 2020, https://bit.ly/3gTOPUZ
    • Smith, B. Facial Recognition: It’s Time for Action, The Official Microsoft Blog, December 6, 2018, https://bit.ly/3gVScuA