Facial Recognition Systems Scan 23 Million People at US Borders, Come up With Zero Imposters
SCOTT IKEDA·FEBRUARY 23, 2021
Though international travel was down in 2020 due to the protracted coronavirus pandemic, 23 million border crossers entering the United States were still subject to facial recognition scans. According to an annual U.S. Customs and Border Protection (CBP) report, it appears not one of them was determined to be an imposter at any of the country’s airports.
These numbers are generally low each year: the agency caught 100 imposters out of 19 million scans in 2019, and the count has been as high as 285 (among fewer total scans) in the years since more advanced, and more numerous, facial recognition systems were implemented and statistics began being tracked.
Facial recognition system catches no imposters among 4 million more travelers
The system did have some luck with pedestrian border crossers, identifying just under 100 people attempting to cross national borders on foot under an assumed identity. At airports, however, it struck out completely in spite of a substantially increased number of facial recognition scans. Commercial airlines have been using facial recognition since 2017, and it has also been used at seaports for international cruise passengers since 2018.
CBP has been in some trouble over the accuracy of this program since it began publicly reporting numbers several years ago. The Government Accountability Office (GAO) took the CBP to task in late 2020 over generally poor quality of operations, from unhelpful traveler-facing signage and program information in airports to questionable accuracy audits.
James McQuiggan, security awareness advocate at KnowBe4, points out that given the small sample size and the relative newness of the technology, it is impossible to say whether this is a case of unique coronavirus conditions limiting attempts or of the system simply being no good at picking out fraudulent travel documents: “Having no imposters come through in 2020 and with 23 million coming into the U.S. could be due to lockdowns in various countries and restrictions on travel, which might have slowed imposters’ progress to gain entry in the U.S. illegally … While the report does not provide any data or audit results, it’s unclear whether the Government [Accountability] Office (GAO) conducted any test with a ‘fake’ imposter to see if they could bypass the CBP. In cybersecurity or physical security, organizations want to test and monitor their perimeters, whether electronic or physical. These audits can determine any areas of improvement and if the processes and procedures are operating as required.”
It is known that CBP auditing of each airport’s scans is fairly limited: only two flights from each airport are examined by the department each week, and that process can take weeks to detect issues in how scans are being done.
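For a rough sense of how little traffic that sampling rate covers, here is a back-of-envelope sketch. The departures-per-day figure is an assumption for illustration only; the CBP report does not publish one.

```python
# Rough, illustrative estimate of how much traffic a two-flights-per-week
# audit covers. The flights-per-day figure is an assumption for the sake
# of the example, not a number from the CBP report.

ASSUMED_INTL_DEPARTURES_PER_DAY = 40   # hypothetical figure for a large hub
AUDITED_FLIGHTS_PER_WEEK = 2           # per the audit process described above

weekly_departures = ASSUMED_INTL_DEPARTURES_PER_DAY * 7
audit_coverage = AUDITED_FLIGHTS_PER_WEEK / weekly_departures

print(f"Audited share of weekly departures: {audit_coverage:.1%}")
# With the assumed 40 departures/day, roughly 0.7% of flights are audited.
```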
Are border facial recognition systems failing?
The airport facial recognition scan essentially compares the traveler’s face to their passport photo, looking for a match and for signs that the passport may be illegitimate. INTERPOL maintains an international database of reported lost and stolen passports that the US and other countries use as part of these checks, but a 2014 government hearing found that many countries were not reporting these passports to the agency.
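A minimal sketch of those two checks, a lost/stolen passport lookup plus a face-match decision, is shown below. The match score is assumed to come from whatever face-recognition model is in use, and the in-memory set is a stand-in for the INTERPOL list; the actual CBP and INTERPOL systems are not public.

```python
# Sketch of the two checks described above: a lookup against a lost/stolen
# passport list and a face-match decision at a threshold. The data structures
# here are simple stand-ins, not the actual CBP or INTERPOL systems.

from typing import Set

def screen_traveler(passport_number: str,
                    stolen_passports: Set[str],
                    face_match_score: float,
                    match_threshold: float = 0.90) -> str:
    """Return a coarse screening outcome for one traveler."""
    if passport_number in stolen_passports:
        return "refer to officer: passport reported lost or stolen"
    if face_match_score < match_threshold:
        return "refer to officer: live capture does not match passport photo"
    return "cleared by automated check"

# Example usage with made-up values:
sltd = {"X1234567"}  # stand-in for the lost/stolen travel document list
print(screen_traveler("P7654321", sltd, face_match_score=0.96))  # cleared
print(screen_traveler("X1234567", sltd, face_match_score=0.99))  # flagged passport
```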
Stuart Sharp, Vice President of Technical Services at OneLogin, expanded on the screening system in use at airports: “We should not assume that the CBP facial recognition tools have failed simply by a lack of imposter identification, as this may simply be the result of fewer individuals attempting to enter the country as a result of Covid. Nevertheless, while biometrics have a role to play in identification, it does face significant limitations. Most people don’t realise that biometric authentication relies on a probabilistic model, not deterministic. When comparing a facial or fingerprint scan to the stored value, the system accepts a degree of variation. This is called the False Acceptance Rate (FAR) metric, which is the probability that the system will incorrectly identify a user as valid. Realizing that facial recognition is simply verifying that the scan is ‘similar’ to the stored image, you can see that there is a real risk that the CBP tools are not detecting skillful imposters.”
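The trade-off Sharp describes can be made concrete with a small simulation: genuine and impostor comparisons each produce a similarity score, and the system accepts anything above a threshold. The score distributions below are synthetic, not drawn from any real face-recognition system.

```python
# Illustrative sketch of probabilistic biometric matching: accept a comparison
# if its similarity score clears a threshold. Raising the threshold lowers the
# False Acceptance Rate (FAR) but rejects more legitimate travelers (FRR).

import random

random.seed(0)
genuine  = [random.gauss(0.85, 0.05) for _ in range(10_000)]  # same person
impostor = [random.gauss(0.55, 0.10) for _ in range(10_000)]  # different people

for threshold in (0.60, 0.70, 0.80):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    print(f"threshold={threshold:.2f}  FAR={far:.3%}  FRR={frr:.3%}")
```

The operating point is a policy choice: a threshold loose enough to wave through nearly every legitimate traveler necessarily leaves more room for a skillful imposter to score as “similar enough.”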
Facial recognition has also run into its own inherent accuracy problems. An NIST study in late 2019 found that the facial scanning systems used by border control and law enforcement agencies had serious accuracy problems for certain demographics: error rates were notably higher for African-American and Asian subjects, and considerably higher for women and older people than for men and middle-aged subjects. These inaccuracies were traced back to the possibility of biased facial samples used to train the AI algorithms, with the data sets simply not containing enough members of these demographics to identify them accurately. As of late 2020, a number of companies have claimed they are working to fix these inherent bias issues, but due to the secretive nature of the industry it is difficult for the general public to tell how successful these efforts are or whether real-world improvements are being seen.
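Demographic differentials of this kind are typically measured by grouping impostor comparisons by demographic label and computing a false match rate per group at a fixed threshold; a sketch of that bookkeeping is below. The records are synthetic placeholders, not NIST data.

```python
# Sketch of measuring per-demographic false match rates: for each group,
# count how often an impostor pair scores above the acceptance threshold.
# The scores and group labels below are made up for illustration.

from collections import defaultdict

# Each record: (demographic_group, similarity score for an impostor pair)
impostor_scores = [
    ("group_a", 0.62), ("group_a", 0.71), ("group_a", 0.58),
    ("group_b", 0.74), ("group_b", 0.81), ("group_b", 0.69),
]

THRESHOLD = 0.70
by_group = defaultdict(list)
for group, score in impostor_scores:
    by_group[group].append(score >= THRESHOLD)  # True = false match

for group, outcomes in sorted(by_group.items()):
    fmr = sum(outcomes) / len(outcomes)
    print(f"{group}: false match rate {fmr:.1%} at threshold {THRESHOLD}")
# Unequal false match rates across groups are the kind of differential
# the NIST evaluation flagged.
```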
There are also questions about the security of the facial recognition systems used at the border, and the prospect of attackers gaining illicit access to them. The Department of Homeland Security (DHS) experienced a breach by way of a vendor in 2019, which caused the photos and personal data (including license plate numbers) of thousands of border-crossing travelers to be leaked to the dark web. The vendor had apparently stored these images on its own local servers against DHS policy; a rogue employee gained access to the photos between 2018 and 2019 and exfiltrated them, later attempting to ransom the vendor for Bitcoin and leaking the images to the dark web when the ransom was not paid. The incident was not confirmed by DHS until 2020. While there have not been any known incidents of CBP’s facial comparison biometrics or the screening processes used at airports being hacked, the vendor compromise demonstrates that the entire facial recognition system relies on a massive network of subcontractors, each of which represents a potential point of vulnerability.