By JAMEY KEATEN and MATT O'BRIEN, Associated Press
GENEVA (AP) — The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.
FILE - Michelle Bachelet, U.N. High Commissioner for Human Rights, speaks during a press conference at the European headquarters of the United Nations in Geneva, Switzerland, on Dec. 9, 2020. (Martial Trezzini/Keystone via AP, file)
Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications that don’t comply with international human rights law.
Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters, such as by ethnicity or gender.
AI-based technologies can be a force for good, but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.
Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.
“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”
Bachelet didn't call for an outright ban of facial recognition technology, but said governments should halt the scanning of people's features in real time until they can show the technology is accurate, won't discriminate and meets certain privacy and data protection standards.
While countries weren't mentioned by name in the report, China has been among those that have rolled out facial recognition technology — particularly for surveillance in the western region of Xinjiang, home to many of its minority Uyghurs. The key authors of the report said naming specific countries wasn't part of their mandate and doing so could even be counterproductive.
“In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that addresses particular communities,” said Hicks.
She cited several court cases in the United States and Australia where artificial intelligence had been wrongly applied.
The report also voices wariness about tools that try to deduce people's emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation and lacks a scientific basis.
“The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial,” the report says.
The report’s recommendations echo the thinking of many political leaders in Western democracies, who hope to tap into AI's economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.
European regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by European Union officials this year would ban some uses of AI, such as real-time scanning of facial features, and tightly control others that could threaten people's safety or rights.
U.S. President Joe Biden's administration has voiced similar concerns, though it hasn't yet outlined a detailed approach to curtailing them. A newly formed group called the Trade and Technology Council, jointly led by American and European officials, has sought to collaborate on developing shared rules for AI and other tech policy.
Efforts to limit the riskiest uses of AI have been backed by Microsoft and other U.S. tech giants that hope to guide the rules affecting the technology. Microsoft has worked with and provided funding to the U.N. rights office to help improve its use of technology, but funding for the report came through the rights office's regular budget, Hicks said.
Western countries have been at the forefront of expressing concerns about the discriminatory use of AI.
“If you think about the ways that AI could be used in a discriminatory fashion, or to further strengthen discriminatory tendencies, it is pretty scary,” said U.S. Commerce Secretary Gina Raimondo during a virtual conference in June. “We have to make sure we don’t let that happen.”
She was speaking with Margrethe Vestager, the European Commission’s executive vice president for the digital age, who suggested some AI uses should be off-limits completely in “democracies like ours.” She cited social scoring, which can close off someone’s privileges in society, and the “broad, blanket use of remote biometric identification in public space.”
———
O'Brien reported from Providence, Rhode Island.
The latest chapter in a 100-year study says AI’s promises and perils are getting real
A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications of AI technology — and to the ways in which that technology is being abused.
The AI100 project is designed to track trends in artificial intelligence over the course of a century. (Image courtesy of AI100 / Stanford Institute for Human-Centered Artificial Intelligence)
Alan Boyle
The report, titled “Gathering Strength, Gathering Storms,” was issued today as part of the One Hundred Year Study on Artificial Intelligence, or AI100, which is envisioned as a century-long effort to track progress in AI and guide its future development.
AI100 was initiated by Eric Horvitz, Microsoft’s chief scientific officer, and hosted by the Stanford University Institute for Human-Centered Artificial Intelligence. The project is funded by a gift from Horvitz, a Stanford alumnus, and his wife, Mary.
The project’s first report, published in 2016, downplayed concerns that AI would lead to a Terminator-style rise of the machines and warned that fear and suspicion about AI would impede efforts to ensure the safety and reliability of AI technologies. At the same time, it acknowledged that the effects of AI and automation could lead to social disruption.
This year’s update, prepared by a standing committee in collaboration with a panel of 17 researchers and experts, says AI’s effects are increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.
“In the past five years, AI has made the leap from something that mostly happens in research labs or other highly controlled settings to something that’s out in society affecting people’s lives,” Brown University computer scientist Michael Littman, who chaired the report panel, said in a news release.
“That’s really exciting, because this technology is doing some amazing things that we could only dream about five or ten years ago,” Littman added. “But at the same time the field is coming to grips with the societal impact of this technology, and I think the next frontier is thinking about ways we can get the benefits from AI while minimizing the risks.”
Those risks include deep-fake images and videos that are used to spread misinformation or harm people’s reputations; online bots that are used to manipulate public opinion; algorithmic bias that infects AI with all-too-human prejudices; and pattern recognition systems that can invade personal privacy by piecing together data from multiple sources.
The report says computer scientists must work more closely with experts in the social sciences, the legal system and law enforcement to reduce those risks.
One of the benefits of conducting a century-long study is that each report along the way builds on the previous report, said AI100 standing committee chair Peter Stone, who’s a computer scientist at the University of Texas at Austin as well as executive director of Sony AI America.
“The 2021 report is critical to this longitudinal aspect of AI100 in that it links closely with the 2016 report by commenting on what’s changed in the intervening five years,” he said. “It also provides a wonderful template for future study panels to emulate by answering a set of questions that we expect future study panels to re-evaluate at five-year intervals.”
Oren Etzioni, CEO of the Seattle-based Allen Institute for Artificial Intelligence, hailed the AI100 update in an email to GeekWire.
“The report represents a substantial amount of work and insight by top experts both inside and outside of the field,” said Etzioni, who was on the study panel for the 2016 report but played no role in the update. “It eschews sensationalism in favor of measured and scholarly observations. I think the report is correct about the prospect for human-AI collaboration, the need for AI literacy, and the essential role of a strong non-commercial perspective from academia and non-profits.”
Etzioni’s only quibble was over the report’s claim that so far, AI’s economic significance has been “comparatively small — particularly relative to expectations.”
“I do think that the report may understate AI’s economic impact, because AI is often a component technology in products made by Apple, Amazon, Google and other major companies,” he said.