In Ukraine, Identifying the Dead Comes at a Human Rights Cost
Five days after Russia launched its all-out invasion of Ukraine a year ago this week, US-based facial recognition company Clearview AI offered the Ukrainian government free access to its technology, suggesting it could be used to reunite families, identify Russian operatives, and fight misinformation. Soon after, the Ukrainian government announced that it was using the technology to scan the faces of dead Russian soldiers in order to identify the bodies and notify their families. Mykhailo Fedorov, who until December 2022 was Deputy Prime Minister and Minister of Digital Transformation of Ukraine, tweeted a picture of himself with Clearview AI CEO Hoan Ton-That, thanking the company for its support.
Accounting for the dead and informing families of the fate of their loved ones is a human rights imperative enshrined in international treaties, protocols, and laws such as the Geneva Conventions, as well as in the guiding principles of the International Committee of the Red Cross (ICRC) for the dignified treatment of the dead. It also speaks to much deeper commitments. Caring for the dead is among the oldest human practices, one that makes us human as much as language and the capacity for self-reflection do. The historian Thomas Laqueur, in his epic meditation The Work of the Dead, writes that “ever since humans have discussed the subject, caring for the dead has been seen as fundamental – of religion, of the polity, of the clan, of the tribe, of the ability to mourn, of understanding the finitude of life, of civilization itself.” But identifying the dead with facial recognition technology uses the moral weight of that kind of caring to authorize a technology that raises serious human rights concerns.
In Ukraine, in the wake of Europe’s bloodiest war since World War II, facial recognition appears to be just another tool deployed in the grim task of identifying the fallen, alongside the digitization of morgues, mobile DNA labs, and the exhumation of mass graves.
But does it work? Ton-That says his company’s technology “works effectively regardless of facial damage that a deceased person may have experienced.” There is little research to support this claim, but the authors of a small study found results that were “promising” even for faces in states of decomposition. But forensic anthropologist Luis Fondebrider, a former head of the ICRC’s forensic services who has worked in conflict zones around the world, doubts these claims. “This technology lacks scientific credibility,” he says. “It’s absolutely not universally accepted by the forensic community.” (DNA identification remains the gold standard.) The field of forensics “understands technology and the importance of new developments,” but the rush towards facial recognition is, according to Fondebrider, “a combination of politics and business with very little science.” “There are no magic solutions to identification,” he says.
Using unproven technology to identify fallen soldiers could lead to errors and traumatize families. But even if the forensic use of facial recognition technology were backed by scientific evidence, it should not be used to name the dead. It’s too dangerous for the living.
Organizations such as Amnesty International, the Electronic Frontier Foundation, the Surveillance Technology Oversight Project, and the Immigrant Defense Project have declared facial recognition technology a form of mass surveillance that threatens privacy, amplifies racist policing, endangers the right to protest, and can lead to wrongful arrests. Damini Satija, head of Amnesty International’s Algorithmic Accountability Lab and associate director of Amnesty Tech, says facial recognition technology undermines human rights by “reproducing structural discrimination at scale and automating and entrenching existing societal injustices.” Facial recognition technology is being used in Russia to quell political dissent. It fails to meet legal and ethical standards when used in law enforcement in the UK and US, and it is deployed against marginalized communities around the world.
Clearview AI, which sells its wares primarily to the police, has one of the largest known databases of facial photos, with 20 billion images, and plans to collect another 100 billion images — the equivalent of 14 photos for every person on the planet. The company has promised investors that soon “almost everyone in the world will be identifiable.” Regulators in Italy, Australia, the UK and France have outlawed Clearview’s database and ordered the company to delete its citizens’ photos. In the EU, Reclaim Your Face, a coalition of more than 40 civil society organizations, has called for a total ban on facial recognition technology.
AI ethics researcher Stephanie Hare says Ukraine is “using a tool and promoting a company and a CEO who have behaved not just unethically but illegally.” She suspects the rationale is that “the ends justify the means,” but asks: “Why is it so important that Ukraine be able to identify dead Russian soldiers using Clearview AI? How essential is that to defending Ukraine or winning the war?”
https://www.wired.com/story/russia-ukraine-facial-recognition-technology-death-military/