The Department of Defense is issuing AI ethics guidelines for tech contractors

[Image: 3D drone satellite view of the Pentagon, Washington, DC, USA. Ms Tech | Getty]

MIT Technology Review, November 16, 2021
Artificial Intelligence
by Will Douglas Heaven

“The controversy over Project Maven shows the department has a serious trust problem. This is an attempt to fix that.”


In 2018, when Google employees found out about their company’s involvement in Project Maven, a controversial US military effort to develop AI to analyze surveillance video, they weren’t happy. Thousands protested. “We believe that Google should not be in the business of war,” they wrote in a letter to the company’s leadership. Around a dozen employees resigned. Google did not renew the contract in 2019.


Project Maven still exists, and other tech companies, including Amazon and Microsoft, have since taken Google’s place. Yet the US Department of Defense knows it has a trust problem. That’s something it must tackle to maintain access to the latest technology, especially AI—which will require partnering with Big Tech and other nonmilitary organizations.


In a bid to promote transparency, the Defense Innovation Unit, which awards DoD contracts to companies, has released what it calls “responsible artificial intelligence” guidelines that it will require third-party developers to use when building AI for the military, whether that AI is for an HR system or target recognition.


The guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided—both before the system is built and once it is up and running.


“There are no other guidelines that exist, either within the DoD or, frankly, the United States government, that go into this level of detail,” says Bryce Goodman at the Defense Innovation Unit, who coauthored the guidelines.


The work could change how AI is developed by the US government, if the DoD’s guidelines are adopted or adapted by other departments. Goodman says he and his colleagues have given them to NOAA and the Department of Transportation and are talking to ethics groups within the Department of Justice, the General Services Administration, and the IRS.


The purpose of the guidelines is to make sure that tech contractors stick to the DoD’s existing ethical principles for AI, says Goodman. The DoD announced these principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople set up in 2016 to bring the spark of Silicon Valley to the US military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and its current members include Daniela Rus, the director of MIT’s Computer Science and Artificial Intelligence Lab.


Yet some critics question whether the work promises any meaningful reform.

Read the Full Article »

About the Author:

Will Douglas Heaven: I am the senior editor for AI at MIT Technology Review, where I cover new research, emerging trends, and the people behind them. Previously, I was founding editor at Future Now, the BBC's tech-meets-geopolitics website, and chief technology editor at New Scientist magazine. I have a PhD in computer science from Imperial College London and know what it's like to work with robots.

See also in Internet Salmagundi: