
Unanswered concerns on the EU AI Act: a dead end?

By Kamilia Amdouni - 03 May 2024



Kamilia Amdouni unpacks some of the major concerns raised by civil society groups on the new European law, and shares insights on potential future developments.

Last March, the European Union (EU) made headlines with the endorsement of the groundbreaking EU Artificial Intelligence Act, a pivotal moment in the regulation of AI within its borders. The journey to this milestone, however, was anything but easy, marked by intense debates and twists and turns. Crafting legislation that effectively governs AI technologies proved to be a daunting task, fraught with complexities ranging from defining AI itself to keeping pace with rapid advancements in generative AI, particularly OpenAI's ChatGPT.

While some heralded the EU AI Act as a crucial step forward, others, in particular civil society groups, voiced concerns over its perceived shortcomings in providing adequate safeguards and demanded stronger protections and monitoring. What are the unresolved concerns raised by associations and non-governmental organisations, and what comes next?

It is clear that the EU will not backpedal, as the legislation is now set for implementation by 2026. Instead, by engaging in multilateral and multistakeholder processes such as the United Nations AI Advisory Body and the Global Digital Compact, civil society groups are playing a pivotal role in shaping global policy on AI. These processes are ambitious and will set concrete, actionable mechanisms that address some of these concerns. This could influence regional policies on AI and perhaps future iterations of the EU AI Act.
High-risk AI systems

The EU AI Act takes a risk-based approach and classifies AI systems according to the threats they pose to people's health and safety or fundamental rights. Biometric identification, access to public services, and border controls are among the eight high-risk categories.

Civil society groups called for a complete ban on AI systems that enable biometric mass surveillance and on predictive policing systems. While these AI systems are widely recognized as highly dangerous, EU regulators did not concede to a ban on their use, citing national security. Instead, they have emphasized that these systems will undergo a thorough compliance evaluation. This assessment will depend on the statements made by AI providers regarding the risks they have evaluated and addressed in order to market their products, and it must be supported and documented by a compulsory Fundamental Rights Impact Assessment (FRIA). Yet precisely who is responsible for carrying out the FRIA, and to what extent, remains somewhat unclear. Conducting a FRIA can also be challenging, as not all deployers of high-risk AI systems have the expertise needed to thoroughly evaluate the risks associated with their deployment. There is also a question of whether the evaluation should encompass all fundamental rights or whether organizations can focus on the subset most likely to be impacted. No methodologies have yet been developed that effectively translate technical descriptions of high-risk AI systems into concrete analyses of abstract concepts like fundamental rights.

The legal gaps created by exemptions for some AI systems are a further key worry. These exemptions allow developers of AI systems to forego assessment if their systems are designed for specific tasks, for enhancing human activities, or for carrying out preparatory tasks for specific use cases. Civil society fears that a lack of reliable metrics could lead AI providers to downplay risks, and that human rights will be weighed against the interests of corporations or other entities. Assessing risk and potential impact therefore requires involving a variety of stakeholders. This approach would help bring in different perspectives and avoid oversimplification, taking into account the unique circumstances, contexts and vulnerabilities of the groups and communities affected. In addition, the documentation and explanation of risks should be supported with concrete evidence from previous incidents, which would help establish a solid foundation for risk assessment.
Human oversight

The EU AI Act imposes stringent requirements on high-risk AI systems, such as data integrity, traceability, transparency, accuracy, and resilience, to mitigate potential threats to fundamental rights and safety. In fostering the ethical and responsible deployment of AI systems, it also recognizes the imperative to integrate human oversight across all stages of their lifecycle, from design to usage.

However, a notable caveat emerges within the Act's provisions on human oversight, which stipulate its implementation "when technically feasible". This language introduces a layer of ambiguity that has left certain stakeholders unsettled and apprehensive about potential trade-offs and risks. AI providers could perceive the addition of human oversight to the product development cycle as burdensome, and could neglect it by invoking technical limitations.

Calls for a more robust human-centred approach have echoed throughout discussions on AI governance, advocating for the integration of meaningful oversight and human judgment across the AI lifecycle. This entails broadening the scope of input from diverse stakeholders to ensure that a variety of perspectives are considered. Furthermore, human judgment should assume a central role in determining specific metrics for AI trustworthiness, including reliability, safety, security, accountability, transparency, explainability, interpretability, privacy protection, and fairness. Establishing precise threshold values for these metrics could offer clear compliance guidelines. There is also a growing consensus on the necessity of regular public reporting of such audits to promote transparency and accountability.
AI liability and redress for victims

In September 2022, the European Commission proposed the 'AI Liability Directive', intending to modernize the EU liability framework by establishing rules that specifically address damages caused by AI systems. Its goal is to afford individuals harmed by AI the same level of protection as those affected by other technologies.

The major concern with the proposed approach is that victims bear the burden of proving non-compliance with the AI Act: they must demonstrate negligent conduct and establish how the damage was caused by the AI system. Placing such responsibility on victims could present significant challenges, particularly given the opacity and technical complexity of AI systems, which make it difficult to substantiate claims for compensation. In addition, the decentralized nature of AI systems and their minimal physical presence across jurisdictions will make it even more challenging for victims to obtain compensation after an incident.

EU lawmakers will also face challenges in addressing certain factors, such as determining the extent of harm, including intangible and indirect damages, and clarifying how liability is assigned throughout the AI value chain. It is worth mentioning that there is a growing push for a strict liability regime for general-purpose AI systems, mainly because of the cybersecurity threats posed by generative AI technologies, which enable intrusion attacks, malware creation, and the spread of AI-powered disinformation. Civil society has emphasized the importance of establishing clear liability frameworks and tackling the complexities involved in order to provide sufficient protection for individuals who may suffer harm from AI-related incidents.
The path forward

The EU AI Act establishes a framework for accountability, outlining the obligations of those involved in the development, provision, and deployment of AI systems throughout key stages of quality management.

However, some civil society organizations have argued that this framework is insufficient and advocate for the creation of impartial oversight mechanisms that would track and document the negative effects of AI systems on people and society. Such a mechanism would ensure accountability by holding responsible parties liable for their actions, foster robust governance, and mitigate potential harm linked to AI technologies. Notably, a similar proposition has garnered attention in the interim report of the United Nations High-Level Advisory Body on Artificial Intelligence. Emphasizing the crucial role of civil society, academia and independent scientists, the report underscores their involvement in providing evidence for policy formulation, assessing impacts, and ensuring accountability during implementation. It suggests establishing a global analytical observatory function to coordinate research on the harms of AI in critical fields such as labor, education, public health, peace and security, and geopolitical stability.

The initial version of the Global Digital Compact (GDC), which aims to establish universal principles for an open, free, and secure digital landscape and to provide a framework for governing artificial intelligence (AI) on a global scale, has been praised by civil society for its ambition and strong foundations. The GDC has also crystallised several solutions that partly reflect the concerns described above, including the development of a common and independent assessment of AI systems' risks and impacts by a broader community of scientific professionals, and the promotion and implementation of AI standards that prioritise human rights. In September 2024, UN Member States plan to adopt the GDC as an international agreement for governing emerging technologies such as AI. It is tempting to wonder whether the coming years will see a shift from the "Brussels effect", where EU regulations influence global policy, to a "Turtle Bay effect", where the UN influences regional policy on AI.

Kamilia Amdouni is an alumna of the University of London, Global Diplomacy. She is a public policy expert working at the intersection of technology, cybersecurity, peace and security and human rights.

