Fundamental human rights could be at risk if AI technologies are used without due caution, the EU's rights agency says. It warns that AI can lead to discriminatory bias and miscarriages of justice if safeguards are lacking.
Artificial intelligence technologies are being employed more and more across the world
More attention should be paid to the possible negative effects of artificial intelligence technologies on people's fundamental rights, the EU's rights agency said in a report issued on Monday.
"AI is not infallible; it is made by people — and humans can make mistakes," said Michael O'Flaherty, director of the Fundamental Rights Agency (FRA), in comments cited on the agency's website.
"The EU needs to clarify how existing rules apply to AI. And organizations need to assess how their technologies can interfere with people's rights both in the development and use of AI," O'Flaherty said.
Neglected rights aspect
The FRA report, entitled "Getting the future right — Artificial intelligence and fundamental rights in the EU," identifies areas where it feels the bloc must create safeguards and mechanisms for holding businesses and public administrations accountable in their use of AI.
It points out the many sectors in which AI is already widely used, including deciding who will receive social benefits, predicting criminality and the risk of illness, and creating targeted advertising.
The report says that much of the focus in developing AI has been on its "potential to support economic growth," while its impact on fundamental rights has been rather neglected.
Facial recognition technology is one use of AI that has aroused considerable controversy
Call for accountability
It is possible that "people are blindly adopting new technologies without assessing their impact before actually using them," David Reichel, one of the experts behind the report, told the AFP news agency.
Reichel told AFP that even when data sets did not include information linked to gender or ethnic origin, there was still "a lot of information that can be linked to protected attributes."
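Reichel's point can be made concrete with a small, purely hypothetical sketch: even after a protected column is removed from a data set, a seemingly neutral feature that correlates with it can still reveal it. The feature names and the correlation strength below are invented for illustration and do not come from the report.

    # Minimal sketch of the "proxy attribute" problem, using synthetic data.
    # The feature names and the 90% correlation are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # A binary protected attribute (e.g. membership of an ethnic group):
    protected = rng.integers(0, 2, n)

    # A seemingly neutral feature (e.g. postcode district) that, through
    # residential patterns, matches the protected attribute 90% of the time:
    postcode = np.where(rng.random(n) < 0.9, protected, 1 - protected)

    # Dropping the protected column does not remove the information;
    # the proxy alone recovers it far above the 50% chance level:
    accuracy = (postcode == protected).mean()
    print(f"Protected attribute recovered from 'neutral' feature: {accuracy:.0%}")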
One example used in the report is employing facial recognition technology for law enforcement. It says even small error rates could lead to many innocent people being falsely picked out if the technology were used in places where large numbers of people are scanned, such as airports or train stations. "A potential bias in error rates could then lead to disproportionately targeting certain groups in society," the report says.
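The arithmetic behind that warning is straightforward. The sketch below uses invented round numbers, not figures from the report, to show how even a 0.1% false match rate at a busy airport would flag around 200 innocent travellers every day:

    # Hypothetical numbers chosen only to illustrate the base-rate effect;
    # they do not come from the FRA report.
    daily_passengers = 200_000    # passengers scanned per day (assumed)
    false_match_rate = 0.001      # 0.1% false positive rate (assumed)
    true_matches_per_day = 10     # genuine watchlist hits per day (assumed)

    false_alarms = daily_passengers * false_match_rate    # 200 people
    total_flags = false_alarms + true_matches_per_day     # 210 flags

    # Fewer than 1 in 20 flags is a real match:
    print(f"Innocent people flagged per day: {false_alarms:.0f}")
    print(f"Share of flags that are genuine: {true_matches_per_day / total_flags:.0%}")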
The report calls for more funding for research into the "potentially discriminatory effects of AI" and for any future legislation on AI to "create effective safeguards."
Above all, it says, the use of AI needs to be more transparent and accountable, and must include the possibility of human review.
Video: The two faces of automatic facial recognition technology https://p.dw.com/p/3mi0X