Inside the messy ethics of making war with machines

Image: glitch aesthetic of a soldier's face. Credit: Yoshi Sodeoka

Print Title: “AI-Assisted Warfare”
MIT Technology Review, August 16, 2023
Artificial Intelligence
by Arthur Holland Michel

“AI is making its way into decision-making in battle. Who’s to blame when something goes wrong?”

 

In a near-future war—one that might begin tomorrow, for all we know—a soldier takes up a shooting position on an empty rooftop. His unit has been fighting through the city block by block. It feels as if enemies could be lying in silent wait behind every corner, ready to rain fire upon their marks the moment they have a shot.

 

Through his gunsight, the soldier scans the windows of a nearby building. He notices fresh laundry hanging from the balconies. Word comes in over the radio that his team is about to move across an open patch of ground below. As they head out, a red bounding box appears in the top left corner of the gunsight. The device’s computer vision system has flagged a potential target—a silhouetted figure in a window is drawing up, it seems, to take a shot.

 

The soldier doesn’t have a clear view, but in his experience the system has a superhuman capacity to pick up the faintest tell of an enemy. So he sets his crosshair upon the box and prepares to squeeze the trigger.

In a different war, also possibly just over the horizon, a commander stands before a bank of monitors. An alert appears from a chatbot. It brings news that satellites have picked up a truck entering a certain city block that has been designated as a possible staging area for enemy rocket launches. The chatbot has already advised an artillery unit, which it calculates as having the highest estimated “kill probability,” to take aim at the truck and stand by.

 

According to the chatbot, none of the nearby buildings is a civilian structure, though it notes that the determination has yet to be corroborated manually. A drone, which had been dispatched by the system for a closer look, arrives on scene. Its video shows the truck backing into a narrow passage between two compounds. The opportunity to take the shot is rapidly coming to a close.

For the commander, everything now falls silent. The chaos, the uncertainty, the cacophony—all reduced to the sound of a ticking clock and the sight of a single glowing button:

 

“APPROVE FIRE ORDER.”

 

To pull the trigger—or, as the case may be, not to pull it. To hit the button, or to hold off. Legally—and ethically—the role of the soldier’s decision in matters of life and death is preeminent and indispensable. Fundamentally, it is these decisions that define the human act of war.

 

It should come as little surprise, then, that states and civil society have taken up the question of intelligent autonomous weapons—weapons that can select and fire upon targets without any human input—as a matter of serious concern. In May, after close to a decade of discussions, parties to the UN’s Convention on Certain Conventional Weapons agreed, among other recommendations, that militaries using them probably need to “limit the duration, geographical scope, and scale of the operation” to comply with the laws of war. The line was nonbinding, but it was at least an acknowledgment that a human has to play a part—somewhere, sometime—in the immediate process leading up to a killing.

 

But intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. Even the “autonomous” drones and ships fielded by the US and other powers are used under close human supervision. Meanwhile, intelligent systems that merely guide the hand that pulls the trigger have been gaining purchase in the warmaker’s tool kit. And they’ve quietly become sophisticated enough to raise novel questions—ones that are trickier to answer than the well-covered wrangles over killer robots and, with each passing day, more urgent: What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill?

 

For a long time, the idea of supporting a human decision by computerized means wasn’t such a controversial prospect. Retired Air Force lieutenant general Jack Shanahan says the radar on the F-4 Phantom fighter jet he flew in the 1980s was a decision aid of sorts. It alerted him to the presence of other aircraft, he told me, so that he could figure out what to do about them. But to say that the crew and the radar were coequal accomplices would be a stretch.

 

That has all begun to change. “What we’re seeing now, at least in the way that I see this, is a transition to a world [in] which you need to have humans and machines … operating in some sort of team,” says Shanahan.

The rise of machine learning, in particular, has set off a paradigm shift in how militaries use computers to help shape the crucial decisions of warfare—up to, and including, the ultimate decision. Shanahan was the first director of Project Maven, a Pentagon program that developed target recognition algorithms for video footage from drones. The project, which kicked off a new era of American military AI, was launched in 2017 after a study concluded that “deep learning algorithms can perform at near-human levels.” (It also sparked controversy—in 2018, more than 3,000 Google employees signed a letter of protest against the company’s involvement in the project.)

 

With machine-learning-based decision tools, “you have more apparent competency, more breadth” than earlier tools afforded, says Matt Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency. “And perhaps a tendency, as a result, to turn over more decision-making to them.”

Read the Full Article »

About the Author:

Arthur Holland Michel writes about technology. He is based in Barcelona and can be found, occasionally, in New York.
