Thursday, August 26, 2021

Clearview AI Offered Free Facial Recognition Trials To Police All Around The World

As of February 2020, 88 law enforcement and government-affiliated agencies in 24 countries outside the United States had tried the controversial facial recognition technology Clearview AI, according to a BuzzFeed News investigation.



Posted on August 25, 2021, at 10:33 a.m. ET

Law enforcement agencies and government organizations from 24 countries outside the United States used a controversial facial recognition technology called Clearview AI, according to internal company data reviewed by BuzzFeed News.

That data, which runs through February 2020, shows that police departments, prosecutors’ offices, universities, and interior ministries from around the world ran nearly 14,000 searches with Clearview AI’s software. At many law enforcement agencies from Canada to Finland, officers used the software without their higher-ups’ knowledge or permission. After receiving questions from BuzzFeed News, some organizations admitted that the technology had been used without leadership oversight.

In March, a BuzzFeed News investigation based on Clearview AI’s own internal data showed how the New York–based startup distributed its facial recognition tool to thousands of officers and employees at more than 1,800 US taxpayer-funded entities by marketing free trials of its mobile app or desktop software. Clearview claims its software is more accurate than other facial recognition technologies because it is trained on a database of more than 3 billion images scraped from websites and social media platforms, including Facebook, Instagram, LinkedIn, and Twitter.

Law enforcement officers using Clearview can take a photo of a suspect or person of interest, run it through the software, and receive possible matches for that individual within seconds. In documents provided to law enforcement officials, Clearview has claimed that its app is 100% accurate, but BuzzFeed News has seen the software misidentify people, highlighting a larger concern with facial recognition technologies.

Based on new reporting and data reviewed by BuzzFeed News, Clearview AI took its controversial US marketing playbook around the world, offering free trials to employees at law enforcement agencies in countries including Australia, Brazil, and the United Kingdom.

To accompany this story, BuzzFeed News has created a searchable table of 88 international government-affiliated and taxpayer-funded agencies and organizations listed in Clearview’s data as having employees who used or tested the company’s facial recognition service before February 2020.


Some of those entities were in countries where the use of Clearview has since been deemed “unlawful.” Following an investigation, Canada’s data privacy commissioner ruled in February 2021 that Clearview had “violated federal and provincial privacy laws”; it recommended the company stop offering its services to Canadian clients, stop collecting images of Canadians, and delete all previously collected images and biometrics of people in the country.

In the European Union, authorities are assessing whether the use of Clearview violated the General Data Protection Regulation (GDPR), a set of broad online privacy laws that requires companies processing personal data to obtain people’s informed consent. The Dutch Data Protection Authority told BuzzFeed News that it’s “unlikely” that police agencies’ use of Clearview was lawful, while France’s National Commission for Informatics and Freedoms said that it has received “several complaints” about Clearview that are “currently being investigated.” One regulator in Hamburg has already deemed the company’s practices illegal under the GDPR and asked it to delete information on a German citizen.

Despite Clearview being used in at least two dozen other countries, CEO Hoan Ton-That insists the company’s key market is the US.

“While there has been tremendous demand for our service from around the world, Clearview AI is primarily focused on providing our service to law enforcement and government agencies in the United States,” he said in a statement to BuzzFeed News. “Other countries have expressed a dire need for our technology because they know it can help investigate crimes, such as money laundering, financial fraud, romance scams, human trafficking, and crimes against children, which know no borders.”

In the same statement, Ton-That alleged there are “inaccuracies contained in BuzzFeed’s assertions.” He declined to explain what those might be and did not answer a detailed list of questions based on reporting for this story.

Clearview AI has created a powerful facial recognition tool and marketed it to police departments and government agencies. The company has never disclosed the entities that have used its facial recognition software, but a confidential source provided BuzzFeed News with data that appeared to be a list of agencies and companies whose employees have tested or actively used its technology.

Using that data, along with public records and interviews, we have created a searchable database of internationally based taxpayer-funded entities, including law enforcement agencies, prosecutors’ offices, universities, and interior ministries. We have included only those agencies for which the data shows that at least one associated individual ran at least one facial recognition scan as of February 2020.

The database has limitations. Clearview has neither verified nor disputed the underlying data, which begins in 2018 and ends in February 2020; it does not account for any activity after that time or for any additional organizations that may have started using Clearview after February 2020.

Not all searches corresponded to an investigation, and some agencies told us that their employees had merely run test searches to see how well the technology worked. BuzzFeed News created search ranges based on data that showed how many times individuals at a given organization ran photos through Clearview.
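For illustration, the short Python sketch below shows one way per-organization search tallies could be grouped into “more than N searches” ranges of the kind used in this story. The organization names, counts, and breakpoints are hypothetical; BuzzFeed News has not published the exact boundaries it used.

```python
# Hypothetical sketch of grouping raw search counts into published ranges.
# The names, tallies, and breakpoints below are invented for illustration;
# they are not BuzzFeed News' actual data or methodology.
from typing import Dict

raw_counts: Dict[str, int] = {
    "Example Police Service A": 7,
    "Example Police Service B": 134,
    "Example Ministry C": 3412,
}

# Hypothetical lower-bound breakpoints for the published ranges.
BREAKPOINTS = [10, 50, 100, 500, 1000, 3000]

def to_search_range(count: int) -> str:
    """Express an exact tally as a 'more than N searches' range."""
    floor = max((b for b in BREAKPOINTS if count > b), default=None)
    return f"more than {floor} searches" if floor else "10 or fewer searches"

for org, count in raw_counts.items():
    print(f"{org}: {to_search_range(count)}")
```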

We found inaccuracies in the data, including organizations with misspelled or incomplete names, and we moved to correct those issues when they could be confirmed. If we were not able to confirm the existence of an entity, we removed it.

BuzzFeed News gave every agency or organization in this database the opportunity to comment on whether it had used Clearview’s technology and whether the software had led to any arrests.

Of the 88 entities in this database:
36 said they had employees who used or tried Clearview AI.
Officials at 9 of those organizations said they were unaware that their employees had signed up for free trials until questions from BuzzFeed News or our reporting partners prompted them to look.
Officials at another 3 entities at first denied their employees had used Clearview but later determined that some of them had.
10 entities declined to answer questions as to whether their employees had used Clearview.
12 organizations denied any use of Clearview.
30 organizations did not respond to requests for comment.

Responses from the agencies, including whether they denied using Clearview’s technology or did not respond to requests for comment, are included in the table.

Just because an agency appears on the list does not mean BuzzFeed News was able to confirm that it actually used the tool or that its officials approved its employees’ use of Clearview.

By searching this database, you affirm that you understand its limitations.


According to a 2019 internal document first reported by BuzzFeed News, Clearview had planned to pursue “rapid international expansion” into at least 22 countries. But by February 2020, the company’s strategy appeared to have shifted. “Clearview is focused on doing business in the USA and Canada,” Ton-That told BuzzFeed News at that time.

Two weeks later, in an interview on PBS, he clarified that Clearview would never sell its technology to countries that “are very adverse to the US,” before naming China, Russia, Iran, and North Korea.

Since that time, Clearview has become the subject of media scrutiny and multiple government investigations. In July, following earlier reporting from BuzzFeed News that showed that private companies and public organizations had run Clearview searches in Great Britain and Australia, privacy commissioners in those countries opened a joint inquiry into the company for its use of personal data. The investigation is ongoing, according to the UK's Information Commissioner’s Office, which told BuzzFeed News that “no further comment will be made until it is concluded.”

Canadian authorities also moved to regulate Clearview after the Toronto Star, in partnership with BuzzFeed News, reported on the widespread use of the company’s software in the country. In February 2020, federal and local Canadian privacy commissioners launched an investigation into Clearview, and concluded that it represented a “clear violation of the privacy rights of Canadians.”

Earlier this year, those bodies officially declared Clearview’s practices in the country illegal and recommended that the company stop offering its technology to Canadian clients. Clearview disagreed with the findings of the investigation and did not demonstrate a willingness to follow the other recommendations, according to the Office of the Privacy Commissioner of Canada.

Prior to that declaration, employees from at least 41 entities within the Canadian government — the most of any country outside the US — were listed in internal data as having used Clearview. Those agencies ranged from police departments in midsize cities like Timmins, a 41,000-person city where officers ran more than 120 searches, to major metropolitan law enforcement agencies like the Toronto Police Service, which is listed in the data as having run more than 3,400 searches as of February 2020.

[Map: Locations of entities that used Clearview AI. BuzzFeed News]

A spokesperson for the Timmins Police Service acknowledged that the department had used Clearview but said no arrests were ever made on the basis of a search with the technology. The Toronto Police Service did not respond to multiple requests for comment.

Clearview’s data shows that usage was not limited to police departments. The public prosecutions office at the Saskatchewan Ministry of Justice ran more than 70 searches with the software. A spokesperson initially said that employees had not used Clearview but changed her response after a series of follow-up questions.


“After review, we have identified standalone instances where ministry staff did use a trial version of this software,” Margherita Vittorelli, a ministry spokesperson, said. “The Crown has not used Clearview AI to support a prosecution. Given the concerns around the use of this technology, ministry staff have been instructed not to use Clearview AI’s software at this time.”

Some Canadian law enforcement agencies suspended or discontinued their use of Clearview AI not long after the initial trial period; others stopped using it in response to the government investigation. One detective with the Niagara Regional Police Service’s Technological Crimes Unit conducted more than 650 searches on a free trial of the software, according to the data.

“Once concerns surfaced with the Privacy Commissioner, the usage of the software was terminated,” department spokesperson Stephanie Sabourin told BuzzFeed News. She said the detective used the software in the course of an undisclosed investigation without the knowledge of senior officers or the police chief.

The Royal Canadian Mounted Police was among the very few international agencies that had contracted with Clearview and paid to use its software. The agency, which ran more than 450 searches, said in February 2020 that it used the software in 15 cases involving online child sexual exploitation, resulting in the rescue of two children.

In June, however, the Office of the Privacy Commissioner of Canada found that the RCMP’s use of Clearview violated the country’s privacy laws. The office also found that Clearview had “violated Canada’s federal private sector privacy law by creating a databank of more than three billion images scraped from internet websites without users’ consent.” The RCMP disputed that conclusion.

The Canadian Civil Liberties Association, a nonprofit group, said that Clearview had facilitated “unaccountable police experimentation” within Canada.

“Clearview AI’s business model, which scoops up photos of billions of ordinary people from across the internet and puts them in a perpetual police lineup, is a form of mass surveillance that is unlawful and unacceptable in our democratic, rights-respecting nation,” Brenda McPhail, director of the CCLA’s privacy, technology, and surveillance program, told BuzzFeed News.


Like a number of American law enforcement agencies, some international agencies told BuzzFeed News that they couldn’t discuss their use of Clearview. For instance, Brazil’s Public Ministry of Pernambuco, which is listed as having run more than 100 searches, said that it “does not provide information on matters of institutional security.”

But data reviewed by BuzzFeed News shows that individuals at nine Brazilian law enforcement agencies, including the country’s federal police, are listed as having used Clearview, cumulatively running more than 1,250 searches as of February 2020. All declined to comment or did not respond to requests for comment.

The UK’s National Crime Agency, which ran more than 500 searches, according to the data, declined to comment on its investigative techniques; a spokesperson told BuzzFeed News in early 2020 that the organization “deploys numerous specialist capabilities to track down online offenders who cause serious harm to members of the public.” Employees at the country’s Metropolitan Police Service ran more than 150 searches on Clearview, according to internal data. When asked about the department's use of the service, the police force declined to comment.

Documents reviewed by BuzzFeed News also show that Clearview had a fledgling presence in Middle Eastern countries known for repressive governments and human rights concerns. In Saudi Arabia, individuals at the Artificial Intelligence Center of Advanced Studies (also known as Thakaa) ran at least 10 searches with Clearview. In the United Arab Emirates, people associated with Mubadala Investment Company, a sovereign wealth fund in the capital of Abu Dhabi, ran more than 100 searches, according to internal data.

Thakaa did not respond to multiple requests for comment. A Mubadala spokesperson told BuzzFeed News that the company does not use the software at any of its facilities.

Data revealed that individuals at four different Australian agencies tried or actively used Clearview, including the Australian Federal Police (more than 100 searches) and Victoria Police (more than 10 searches), where a spokesperson told BuzzFeed News that the technology was “deemed unsuitable” after an initial exploration.

“Between 2 December 2019 and 22 January 2020, members of the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) registered for a free trial of the Clearview AI facial recognition tool and conducted a limited pilot of the system in order to ascertain its suitability in combating child exploitation and abuse,” Katie Casling, an AFP spokesperson, said in a statement.

The Queensland Police Service and its homicide investigations unit ran more than 1,000 searches as of February 2020, based on data reviewed by BuzzFeed News. The department did not respond to requests for comment.

Clearview marketed its facial recognition system across Europe by offering free trials at police conferences, where it was often presented as a tool to help find predators and victims of child sex abuse.

In October 2019, law enforcement officers from 21 different nations and Interpol gathered at Europol’s European Cybercrime Centre in The Hague in the Netherlands to comb through millions of image and video files of victims intercepted in their home countries as part of a child abuse Victim Identification Taskforce. At the gathering, outside participants who were not Europol staff members presented Clearview AI as a tool that might help in their investigations.

After the two-week conference, which included specialists from Belgium, France, and Spain, some officers appear to have taken what they had learned back home and begun using Clearview.





A Europol spokesperson told BuzzFeed News that it did not endorse the use of Clearview, but confirmed that “external participants presented the tool during an event hosted by Europol.” The spokesperson declined to identify the participants.

“Clearview AI was used during a short test period by a few employees within the Police Authority, including in connection with a course arranged by Europol. The police authority did not know and had not approved the use,” a spokesperson for the Swedish Police Authority told BuzzFeed News in a statement. In February 2021, the Swedish Data Protection Authority concluded an investigation into the police agency’s use of Clearview and fined it $290,000 for violating the Swedish Criminal Data Act.

Leadership at Finland’s National Bureau of Investigation only learned about employees’ use of Clearview after being contacted by BuzzFeed News for this story. After initially denying any usage of the facial recognition software, a spokesperson reversed course a few weeks later, confirming that officers had used the software to run nearly 120 searches.

“The unit tested a US service called Clearview AI for the identification of possible victims of sexual abuse to control the increased workload of the unit by means of artificial intelligence and automation,” Mikko Rauhamaa, a senior detective superintendent with Finland’s National Bureau of Investigation, said in a statement.

Questions from BuzzFeed News prompted the NBI to inform Finland’s Data Protection Ombudsman of a possible data breach, triggering a further investigation. In a statement to the ombudsman, the NBI said its employees had learned of Clearview at a 2019 Europol event, where it was recommended for use in cases of child sexual exploitation. The NBI has since ceased using Clearview.

Data reviewed by BuzzFeed News shows that by early 2020, Clearview had made its way across Europe. Italy’s state police, the Polizia di Stato, ran more than 130 searches, according to the data, though the agency did not respond to a request for comment. A spokesperson for France’s Ministry of the Interior told BuzzFeed News that they had no information on Clearview, despite internal data listing employees associated with the office as having run more than 400 searches.

“INTERPOL’s Crimes Against Children unit uses a range of technologies in its work to identify victims of online child sexual abuse,” a spokesperson for the international police force based in Lyon, France, told BuzzFeed News when asked about the agency’s more than 300 searches. “A small number of officers have used a 30-day free trial account to test the Clearview software. There is no formal relationship between INTERPOL and Clearview, and this software is not used by INTERPOL in its daily work."

Investigating child sex abuse typically warrants the use of powerful tools to save victims and track down perpetrators. But Jake Wiener, a law fellow at the Electronic Privacy Information Center, said that many tools to fight this type of crime already exist, and, unlike Clearview, they don’t involve the unsanctioned mass collection of photos that billions of people post to platforms like Instagram and Facebook.

“If police simply want to identify victims of child trafficking, there are robust databases and methods that already exist,” he said. “They don’t need Clearview AI to do this.”

Since early 2020, regulators in Canada, France, Sweden, Australia, the UK, and Finland have opened investigations into their government agencies’ use of Clearview. Some privacy experts believe Clearview violated the EU’s data privacy laws, known as the GDPR.

To be sure, the GDPR includes some exemptions for law enforcement. It explicitly notes that “covert investigations or video surveillance” can be carried out “for the purposes of the prevention, investigation, detection, or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security…”

But in June 2020, the European Data Protection Board, the independent body that oversees the application of the GDPR, issued guidance that “the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime.”

This January, the Hamburg Commissioner for Data Protection and Freedom of Information in Germany — a country where agencies had no known use of Clearview as of February 2020, according to data — went one step further; it deemed that Clearview itself was in violation of the GDPR and ordered the company to delete biometric information associated with an individual who had filed an earlier complaint.

In his response to questions from BuzzFeed News, Ton-That said Clearview has “voluntarily processed” requests from people within the European Union to have their personal information deleted from the company’s databases. He also noted that Clearview does not have contracts with any EU customers “and is not currently available in the EU.” He declined to specify when Clearview stopped being available in the EU.


[Photo: Clearview AI CEO Hoan Ton-That. CBS This Morning via YouTube]

Christoph Schmon, the international policy director for the Electronic Frontier Foundation, told BuzzFeed News that the GDPR adds a new level of complexity for European police officers who had used Clearview. Under the GDPR, police can’t use personal or biometric data unless doing so is “necessary to protect the vital interests” of a person. But if law enforcement agencies aren’t aware they have officers using Clearview, it's impossible to make such evaluations.

“If authorities have basically not known that their staff tried Clearview — that I find quite astonishing and quite unbelievable, to be honest,” he said. “It’s the job of law enforcement authorities to know the circumstances that they can produce citizen data and an even higher responsibility to be held accountable for any misuse of citizen data.”

"If authorities have basically not known that their staff tried Clearview — that I find quite astonishing."

Many experts and civil rights groups have argued that there should be a ban on governmental use of facial recognition. Regardless of whether facial recognition software is accurate, groups like the Algorithmic Justice League argue that without regulation and proper oversight it can cause overpolicing or false arrests.

“Our general stance is that facial recognition tech is problematic, so governments should never use it,” Schmon said. Not only is there a high chance that police officers will misuse facial recognition, he said, but the technology tends to misidentify people of color at higher rates than it does white people.

Schmon also noted that facial recognition tools don’t provide facts. They provide a probability that a person matches an image. “Even if the probabilities were engineered correctly, it may still reflect biases,” he said. “They are not neutral.”
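To make that point concrete, here is a minimal, generic sketch of how a face recognition system typically decides on a “match”: it scores the similarity between two embedding vectors and compares the score to an operator-chosen threshold. The embeddings, the cosine-similarity measure, and the threshold below are illustrative assumptions, not a description of Clearview’s actual system.

```python
# Generic illustration: face matching yields a similarity score, not a fact.
# The embeddings and threshold here are made up; this is not Clearview's method.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return the cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
probe = rng.normal(size=128)      # embedding of the photo being searched
candidate = rng.normal(size=128)  # embedding of a photo in the database

score = cosine_similarity(probe, candidate)
THRESHOLD = 0.6  # arbitrary cutoff chosen by whoever operates the system

# A "match" is reported only when the score clears the threshold;
# the output is a probabilistic judgment that can be wrong or biased.
print(f"similarity={score:.3f}, reported_match={score >= THRESHOLD}")
```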

Clearview did not answer questions about its claims of accuracy. In a March statement to BuzzFeed News, Ton-That said, “As a person of mixed race, ensuring that Clearview AI is non-biased is of great importance to me.” He added, “Based on independent testing and the fact that there have been no reported wrongful arrests related to the use of Clearview AI, we are meeting that standard.”

Despite the company being investigated and, in some cases, banned around the world, Clearview’s executives appear to have already begun laying the groundwork for further expansion. The company recently raised $30 million, according to the New York Times, and it has made a number of new hires. Last August, cofounders Ton-That and Richard Schwartz, along with other Clearview executives, appeared on registration papers for companies called Standard International Technologies in Panama and Singapore.

In a deposition for an ongoing lawsuit in the US this year, Clearview executive Thomas Mulcaire shed some light on the purpose of those companies. While the subsidiary companies do not yet have any clients, he said, the Panama entity was set up to “potentially transact with law enforcement agencies in Latin America and the Caribbean that would want to use Clearview software.”

Mulcaire also said the newly formed Singapore company could do business with Asian law enforcement agencies. In a statement, Ton-That stopped short of confirming those intentions but provided no other explanation for the move.

“Clearview AI has set up two international entities that have not conducted any business,” he said. ●

CONTRIBUTED REPORTING: Ken Bensinger, Salvador Hernandez, Brianna Sacks, Pranav Dixit, Logan McDonald, John Paczkowski, Mat Honan, Jeremy Singer-Vine, Ben King, Emily Ashton, Hannah Ryan




Antonio Pequeño IV is a BuzzFeed News contributor

Contact Antonio Pequeño IV at antonio.pequeno@buzzfeed.com
