Monday, April 19, 2021

PUT ON YOUR BEST FACE
Thousands of US government agencies are using Clearview AI without approval

Daniel Cooper
 2021-04-06

Nearly two thousand government bodies, including police departments and public schools, have been using Clearview AI without oversight. BuzzFeed News reports that employees from 1,803 public bodies used the controversial facial-recognition platform without authorization from their superiors. Reporters contacted a number of agency heads, many of whom said they were unaware their employees were accessing the system.



A database of searches, outlining which agencies were able to access the platform and how many queries each made, was leaked to BuzzFeed by an anonymous source. The outlet has published a version of the database online, allowing you to see how many times each department has used the tool. Clearview AI declined to confirm the authenticity of the data and reportedly refused to answer questions about the leak.

Clearview AI, founded by Hoan Ton-That, markets itself as a searchable facial-recognition database for law enforcement agencies. The New York Times has previously reported on Ton-That's close association with notorious figures from the far right, and the company is backed by early Facebook investor Peter Thiel. Clearview's USP has been to download every image posted to social media without permission to build its database — something the social media companies in question have tried to stop. The company is currently under investigation in both the UK and Australia for its data-collection practices.

The report — which you should read in its entirety — outlines how Clearview has offered generous free trials to individual employees at public bodies. This approach is meant to encourage those employees to incorporate the system into their working day and advocate for their agencies to sign up. But a number of civil liberties, privacy, legal and accuracy questions remain open about how Clearview operates. That has not deterred agencies like ICE from signing up to use the system, although others, like the LAPD, have already banned use of the platform.

Facial recognition tech is supporting mass surveillance. It's time for a ban, say privacy campaigners


A group of 51 digital rights organizations has called on the European Commission to impose a complete ban on the use of facial recognition technologies for mass surveillance – with no exceptions allowed.

The call, which brings together activist groups from across the continent such as Big Brother Watch UK, AlgorithmWatch and the European Digital Society, was coordinated by the advocacy network European Digital Rights (EDRi) in the form of an open letter to the European commissioner for justice, Didier Reynders.

It comes just weeks before the Commission releases its long-awaited new rules on the ethical use of artificial intelligence in Europe, expected on 21 April.


The letter urges the Commissioner to support enhanced protection for fundamental human rights in the upcoming laws, in particular in relation to facial recognition and other biometric technologies, when these tools are used in public spaces to carry out mass surveillance.


According to the coalition, there are no examples where the use of facial recognition for the purpose of mass surveillance can justify the harm that it might cause to individuals' rights, such as the right to privacy, to data protection, to non-discrimination or to free expression.

The technology is often defended as a reasonable tool to deploy in some circumstances, such as monitoring the public in the context of law enforcement, but the signatories to the letter argue that a blanket ban should instead be imposed on all potential use cases.

"Wherever a biometric technology entails mass surveillance, we call for a ban on all uses and applications without exception," Ella Jakubowska, policy and campaigns officer at EDRi, tells ZDNet. "We think that any use that is indiscriminately or arbitrarily targeting people in a public space is always, and without question, going to infringe on fundamental rights. It's never going to meet the threshold of necessity and proportionality."

Based on evidence from within and beyond the EU, EDRi has concluded that the unfettered development of biometric technologies to snoop on citizens has severe consequences for human rights.

It has been reported that in China, for instance, the government is using facial recognition to carry out mass surveillance of the Muslim Uighur population living in Xinjiang, through gate-like scanning systems that record biometric features, as well as smartphone fingerprints to track residents' movements.

But worrying developments of the technology have also occurred much closer to home. Recent research coordinated by EDRi found examples of controversial deployments of biometric technologies for mass surveillance across the vast majority of EU countries.

They range from the use of facial recognition for queue management at Rome and Brussels airports, to German authorities using the technology to surveil G20 protesters in Hamburg. The European Commission also provided a €4.5 million ($5.3 million) grant to deploy a technology dubbed iBorderCtrl at some European border crossings, which analyzed travelers' gestures to flag those who might be lying when trying to enter an EU country illegally.

In recent months, however, some top EU leaders have shown support for legislation that would limit the scope of facial recognition technologies. In a white paper published last year, in fact, the bloc stated that it would consider banning the technology altogether.

The EU's vice-president for digital, Margrethe Vestager, has also said that using facial recognition tools to identify citizens automatically is at odds with the bloc's data protection regime, given that it fails one of the GDPR's key requirements: obtaining an individual's consent before processing their biometric data.

This won't be enough to stop the technology from interfering with human rights, according to EDRi. The GDPR leaves space for exemptions when "strictly necessary", which, coupled with poor enforcement of the rule of consent, has led to examples of facial recognition being used to the detriment of EU citizens, such as those uncovered by EDRi.

"We have evidence of the existing legal framework being misapplied and having enforcement problems. So, although commissioners seem to agree that in principle, these technologies should be banned by the GDPR, that ban doesn't exist in reality," says Jakubowska. "This is why we want the Commission to publish a more specific and clear prohibition, which builds on the existing prohibitions in general data protection law."

EDRi and the 51 organizations that have signed the open letter join a chorus of activist voices that have demanded similar action in the last few years.

Over 43,500 European citizens have signed a "Reclaim Your Face" petition calling for a ban on biometric mass surveillance practices in the EU; and earlier this year, the Council of Europe also called for some applications of facial recognition to be banned, where they have the potential to lead to discrimination.


Pressure is therefore mounting on the European Commission ahead of its publication of new rules on AI, which are expected to shape the EU's place and relevance in what is often described as a race against China and the US.

For Jakubowska, however, this is an opportunity to seize. "These technologies are not inevitable," she says. "We are at an important tipping point where we could actually prevent a lot of future harms and authoritarian technology practices before they go any further. We don't have to wait for huge and disruptive impacts on people's lives before we stop it. This is an incredible opportunity for civil society to interject, at a point where we can still change things."

As part of the open letter, EDRi has also urged the Commission to carefully review other potentially dangerous applications of AI, and to draw red lines where necessary.

Among the use cases flagged as potentially problematic, the signatories named technologies that might impede access to healthcare, social security or justice; systems that make predictions about citizens' behaviors and thoughts; and algorithms capable of manipulating individuals, presenting a threat to human dignity, agency, and collective democracy.
