Online Sleuths Are Using Face Recognition to ID Russian Soldiers

[Photo caption] Russian serviceman operating a T-72B3 tank of the Russian Southern Military District's 150th Rifle Division in a snowy field. Photograph: Erik Romanenko/Getty Images

WIRED, March 10, 2022
By Tom Simonite

“It takes five minutes to put a name to a soldier’s face using little more than a screenshot, but there’s a catch.”

On March 1, Chechnya’s leader, Ramzan Kadyrov, posted a short video on Telegram in which a cheery bearded soldier stood before a line of tanks clanking down a road under an overcast sky. In an accompanying post, Kadyrov assured Ukrainians that the Russian army doesn’t hurt civilians and that Vladimir Putin wants their country to determine its own fate.

In France, the CEO of a law enforcement and military training company called Tactical Systems took a screenshot of the soldier’s face and got to work. Within about an hour, using face recognition services available to anyone online, he identified the soldier as likely Hussein Mezhidov, a Chechen commander close to Kadyrov and involved in Russia’s assault on Ukraine, and found his Instagram account.

“Just having access to a computer and internet, you can basically be like an intelligence agency from a film,” says the CEO, who asked to be identified as YC to avoid potential repercussions for his sleuthing. Tactical Systems’ client list includes the French armed forces, and it offers training in open source intelligence gathering.

Russia’s assault on Ukraine, a conflict between two internet-savvy nations in a region with good cellular coverage, offers rich pickings for open source intelligence, or OSINT. Compiling and cross-referencing social media posts and other public sources can reveal information such as the locations or losses of military units. The abundant online photos that are the legacy of years of social networking and a handful of services that provide easy access to face recognition algorithms allow some startling feats of armchair analysis.

Not long ago, a commander or prisoner of war pictured in a news report might be recognizable only to military and intelligence analysts or the individual’s own colleagues, friends, and family. Today a stranger on the other side of the globe can use a screenshot of a person’s face to track down their name and family photos—or those of a look-alike.

That power to identify people from afar could bring new accountability to armed conflict but also open new avenues for digital attack. Identifying—or misidentifying—people in videos or photos said to be from the front lines could expose them or their families to online harassment or worse. Face algorithms can be wrong, and errors are more common on photos without a clear view of a person’s face, as is often the case for wartime images. Despite those risks, Ukraine has a volunteer “IT Army” of computer experts hacking Russian targets on the country’s behalf.

If distant volunteers can identify combatants using face recognition, government agencies can do the same or much more. “I’m sure there are Russian analysts tracking Twitter and TikTok with access to similar if not more powerful technology who are not sharing what or who they find so openly,” says Ryan Fedasiuk, an adjunct fellow at the Center for a New American Security.

Jameson Spivack, an associate at Georgetown’s Center on Privacy & Technology, says some of the same concerns about government uses of the technology also apply when it’s being used for identifications in war-torn Ukraine.

One is that face recognition performs unreliably on images that don’t capture people head-on, a limitation for both police detectives and those sourcing images from war zones. Another is the potential unintended consequences of correct or incorrect identifications. “Individuals using the technology don’t have the power of the state behind them like law enforcement, but the internet can put the collective power of the mob behind them,” Spivack says.

YC of Tactical Systems agrees. He says that he always takes care to back up algorithms’ assessments with other visual clues or contextual information. In the case of the bearded Chechen, a distinctive notch in the man’s beard helped confirm some matches. “Humans are needed, too,” he says.

About the Author:

Tom Simonite is a senior writer for WIRED in San Francisco covering artificial intelligence and its effects on the world. He once trained an artificial neural network to generate seascapes and is available for commissions. Simonite was previously San Francisco bureau chief at MIT Technology Review, and wrote and edited technology coverage at New Scientist magazine in London. He lives in San Francisco, where he enjoys riding his bike and testing the reactions of prototype self-driving cars.