Facial recognition technology should be regulated, but not banned
The European Commission has proven itself to be an effective regulator in the past. A blanket ban on FRT in law enforcement will only benefit the criminals, Tony Porter and Dr Nicole Benjamin Fink write.
The EU’s AI Act passed a major hurdle in mid-June when the bloc’s lawmakers greenlit what will be the world’s first rules on artificial intelligence.
But one proposal stands apart: a total ban on facial recognition technology, or FRT.
If left to stand, this rule will blindfold the law enforcers who do vital work to protect the most vulnerable in society. It will embolden criminal groups such as those who traffic wildlife and human victims, thereby putting lives at risk.
All surveillance capabilities intrude on human rights to some extent. The question is whether we can regulate the use of FRT effectively to mitigate any impact on these rights.
Protecting privacy versus protecting people is a balance EU lawmakers can and must strike. A blanket ban is the easy, but not the responsible option.
Privacy concerns should face a reality check
MEPs voted overwhelmingly in favour of a ban on the use of live FRT in publicly accessible spaces, and a similar ban on the use of “after the event” FRT unless a judicial order is obtained.
Attention has now shifted to what will no doubt be heated trilogue negotiations between the European Parliament, the Council of the EU and the European Commission.
In essence, FRT uses cameras powered by AI algorithms to analyse a person’s facial features, potentially enabling authorities to identify individuals by matching them against a database of pre-existing images.
Privacy campaigners have long argued that the potential benefits of using such tech are not worth the negative impact on human rights. But many of those arguments don’t stand up to scrutiny. In fact, they’re based on myths that have been conclusively debunked.
The first is that the tech is inaccurate and that it disproportionately disadvantages people of colour.
That may have been true of very early iterations of the technology, but it certainly isn’t today. Corsight’s algorithm, for example, has been benchmarked by the US National Institute of Standards and Technology (NIST) at an accuracy rate of 99.8%.
Separately, a 2020 NIST report found that FRT performs far more effectively across racial and other demographic groups than widely reported, with the most accurate technologies displaying “undetectable” differences between groups.
It’s also falsely claimed that FRT is ineffective. In fact, Interpol said in 2021 that it had used FRT to identify almost 1,500 terrorists, criminals, fugitives, persons of interest and missing persons since 2016, a figure expected to have risen substantially since.
A final myth, that FRT inherently intrudes on rights enshrined in the European Convention on Human Rights, was effectively shot down by the Court of Appeal in London. In that 2020 case, judges ruled that scanning faces and instantly deleting the data when no match is found has a negligible impact on human rights.
It's about stopping the traffickers
On the other hand, if used in compliance with strict regulations, high-quality FRT has the capacity to save countless lives and protect people and communities from harm.
Human trafficking is a trade in misery which enables sexual exploitation, forced labour and other heinous crimes. It’s estimated to affect tens of millions around the world, including children.
But if facial images of known victims or traffickers are caught on camera, police could be alerted in real time to step in.
Given that traffickers usually go to great lengths to hide their identity, and that victims — especially children — rarely possess official IDs, FRT offers a rare opportunity to make a difference.
Wildlife trafficking is similarly clandestine. The global trade was estimated some years ago at €20.9 billion, making it the world’s fourth-biggest illegal activity behind arms, drugs and human trafficking.
With much of the trade carried out by criminal syndicates online, there’s a potential evidence trail if investigators can match facial images of trafficked animals to images posted later to social media.
Buyers can then be questioned about where they procured a particular animal. Apps are already springing up to help track wildlife traffickers in this way.
There is a better way forward
Given what’s at stake here, European lawmakers should be thinking about ways to leverage a technology proven to help reduce societal harm — but in a way that mitigates risks to human rights.
The good news is that it can be done with the right regulatory guardrails. In fact, the EU’s AI Act already provides a strong foundation for this, proposing a standard of excellence for AI technologies to which FRT could be held.
Building on this, FRT should be retained as an operational tool wherever there’s a “substantial” risk to the public and a legitimate basis for protecting citizens from harm.
Its use should always be necessary and proportionate to that pressing need, and subject to a rigorous human rights assessment.
Independent ethical and regulatory oversight must of course be applied, with a centralised supervisory authority put in place. And clear policies should be published setting out details of the proposed use.
Impacted communities should be consulted and data published detailing the success or failure of deployments and human rights assessments.
The European Commission has proven itself to be an effective regulator in the past. So, let’s regulate FRT. A blanket ban will only benefit the criminals.
Tony Porter is the Chief Privacy Officer at Corsight AI and the former UK Surveillance Camera Commissioner, and Dr Nicole Benjamin Fink is the Founder of Conservation Beyond Borders.