
EU: AI Act fails to set gold standard for human rights

POSTED ON APRIL 04, 2024

As EU institutions are expected to formally adopt the EU Artificial Intelligence Act in April 2024, ARTICLE 19 joins those voicing criticism of how the Act fails to set a gold standard for human rights protection. Over the last three years of negotiations, together with a coalition of digital rights organisations, we called on lawmakers to ensure that AI works for people and that regulation prioritises the protection of fundamental human rights. We believe that in several areas the AI Act is a missed opportunity to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence and many other rights and freedoms are protected when it comes to artificial intelligence.

For the last three years, as part of the European Digital Rights (EDRi) coalition, ARTICLE 19 has demanded that artificial intelligence (AI) works for people and that its regulation prioritises the protection of fundamental human rights. We have put forward our collective vision for an approach where ‘human-centric’ is not just a buzzword, where people on the move are treated with dignity, and where lawmakers are bold enough to draw red lines against unacceptable uses of AI systems.

Following a gruelling negotiation process, EU institutions are expected to formally adopt the final AI Act in April 2024. But while they celebrate, we take a much more critical stance. We want to highlight the many missed opportunities to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence, and many other rights and freedoms are protected when it comes to AI. Here’s our round-up of how the final law fares against our collective demands.

This analysis is based on the latest available version of the AI Act text, dated 6 March 2024. There may still be small changes made before the law’s final adoption.
First, we called on EU lawmakers to empower affected people by upholding a framework of accountability, transparency, accessibility, and redress. How did they do?

Some accessibility barriers have been broken down, but more needs to be done: Article 16 (ja) of the AI Act fulfils our call for accessibility by stating that high-risk AI systems must comply with accessibility requirements. However, we still believe that this should be extended to apply to low- and medium-risk AI systems as well, in order to ensure that the needs of people with disabilities are central to the development of all AI systems which could impact them.

More transparency about certain AI deployments, but big loopholes for the private sector and security agencies: The AI Act establishes a publicly accessible EU database to provide transparency about AI systems that pose higher risks to people’s rights or safety. While originally only providers of high-risk AI systems were subject to transparency requirements, we successfully persuaded decision-makers that deployers of AI systems – those who actually use the system – should also be subject to transparency obligations.
Providers and deployers who place on the market or use AI systems in high-risk areas – such as employment and education – designated by Annex III will be subject to transparency obligations. Providers will be required to register their high-risk system in the database and to enter information about it, such as a description of its intended purpose, a concise description of the information used by the system, and its operating logic. Deployers of high-risk AI systems who are public authorities – or those acting on their behalf – will be obliged to register their use of the system. They will be required to enter information in the database such as a summary of the findings of a fundamental rights impact assessment (FRIA) and a summary of the data protection impact assessment. However, deployers of high-risk AI systems in the private sector will not be required to register their use of high-risk systems – another critical issue;
The major shortcoming of the EU database is that negotiators agreed on a carve-out for law enforcement, migration, asylum, and border control authorities. Providers and deployers of high-risk systems in these areas will be required to register only a limited amount of information, and only in a non-publicly accessible section of the database. Certain important pieces of information, such as the training data used, will not be disclosed at all. This will prevent affected people, civil society, journalists, watchdog organisations and academics from exercising public scrutiny over these high-stakes areas, which are prone to fundamental rights violations, and from holding the authorities involved accountable.

Fundamental rights impact assessments are included, but concerns remain about how meaningful they will be: We successfully convinced EU institutions of the need for fundamental rights impact assessments (FRIAs). However, based on the final AI Act text, we doubt whether the FRIA will actually prevent human rights violations and serve as a meaningful tool of accountability. We see three primary shortcomings:
Lack of a meaningful assessment and of an obligation to prevent negative impacts: while the new rules require deployers of high-risk AI systems to list risks of harm to people, there is no explicit obligation to assess whether these risks are acceptable in light of fundamental rights law, nor to prevent them wherever possible. Regrettably, deployers only have to specify which measures will be taken once risks materialise – likely once the harm has already been done;
No mandatory stakeholder engagement: the requirement to engage external stakeholders, including civil society and people affected by AI, in the assessment process was also removed from the article at the last stages of negotiations. This means that civil society organisations will not have a direct, legally-binding way to contribute to impact assessments;
Transparency exceptions for law enforcement and migration authorities: while in principle, deployers of high-risk AI systems will have to publish a summary of the results of their FRIAs, this will not be the case for law enforcement and migration authorities. The public will not even have access to the mere information that an authority uses a high-risk AI system in the first place. Instead, all information related to the use of AI in law enforcement and migration will only be included in a non-public database, severely limiting constructive public oversight and scrutiny. This is a very concerning development as, arguably, the risks to human rights, civic space and the rule of law are the most severe in these two areas. Moreover, while deployers are obliged to notify the relevant market surveillance authority of the outcome of their FRIA, they are exempt from this notification obligation for ‘exceptional reasons of public security’ – an excuse that is often misused as a justification to carry out disproportionate policing and border management activities.

When it comes to complaints and redress, there are some remedies, but no clear recognition of the ‘affected person’: Civil society has advocated for robust rights and redress mechanisms for individuals and groups affected by high-risk AI systems. We demanded the creation of a new section titled ‘Rights of Affected Persons’, which would delineate specific rights and remedies for individuals impacted by AI systems. However, this section was not created; instead, the Act contains a ‘remedies’ chapter that includes only some of our demands;
This remedies chapter includes the right to lodge complaints with a market surveillance authority, but it lacks teeth, as it remains unclear how effectively these authorities will be able to enforce compliance and hold violators accountable. Similarly, the right to an explanation of individual decision-making processes, particularly for AI systems listed as high-risk, raises questions about the practicality and accessibility of obtaining meaningful explanations from deployers. Furthermore, the effectiveness of these mechanisms in practice remains uncertain, given the absence of provisions such as the right to representation of natural persons, or the ability for public interest organisations to lodge complaints with national supervisory authorities.

The Act allows a double standard when it comes to the human rights of people outside the EU: The AI Act falls short of civil society’s demand to ensure that EU-based AI providers whose systems impact people outside of the EU are subject to the same requirements as those operating inside the EU. The Act does not stop EU-based companies from exporting AI systems which are banned in the EU, creating a huge risk that EU-made technologies which are essentially incompatible with human rights will violate the rights of people in non-EU countries. Additionally, the Act does not require exported high-risk systems to follow the technical, transparency or other safeguards otherwise required when AI systems are intended for use within the EU, again putting the rights of people outside the EU at risk.
Second, we urged EU lawmakers to limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities. How did they do?

The blanket exemption for national security risks undermining other rules: The AI Act and its safeguards will not apply to AI systems that are developed or used solely for the purpose of national security, regardless of whether this is done by a public authority or a private company. This exemption introduces a significant loophole that will automatically exempt certain AI systems from scrutiny and limit the applicability of the human rights safeguards envisioned in the AI Act;
In practical terms, this means that governments could invoke national security to introduce biometric mass surveillance systems without having to apply any of the safeguards envisioned in the AI Act, without conducting a fundamental rights impact assessment, and without ensuring that the AI system meets high technical standards and does not discriminate against certain groups;
Such a broad exemption is not justified under EU treaties and goes against the established jurisprudence of the European Court of Justice. While national security can be a justified ground for exceptions from the AI Act, this has to be assessed case-by-case, in line with the EU Charter of Fundamental Rights. The adopted text, however, makes national security a largely digital rights-free zone. We are concerned about the lack of clear national-level procedures to verify if the national security threat invoked by the government is indeed legitimate and serious enough to justify the use of the system and if the system is developed and used with respect for fundamental rights. The EU has also set a worrying precedent regionally and globally; broad national security exemptions have now been introduced in the newly-adopted Council of Europe Convention on AI.

Predictive policing, live public facial recognition, biometric categorisation and emotion recognition are only partially banned, legitimising these dangerous practices: We called for comprehensive bans against any use of AI that isn’t compatible with rights and freedoms – such as proclaimed AI ‘mind reading’, biometric surveillance systems that treat us as walking bar-codes, or algorithms used to decide whether we are innocent or guilty. All of these examples are now partially banned in the AI Act, which is an important signal that the EU is prepared to draw red lines against unacceptably harmful uses of AI;
At the same time, all of these bans contain significant and disappointing loopholes, which means that they will not achieve their full potential. In some cases, these loopholes risk having the opposite effect from what a ban should: they give the signal that some forms of biometric mass surveillance and AI-fuelled discrimination are legitimate in the EU, which risks setting a dangerous global precedent;
For example, the fact that emotion recognition and biometric categorisation systems are prohibited in the workplace and in education settings, but are still allowed when used by law enforcement and migration authorities, signals the EU’s willingness to test the most abusive and intrusive surveillance systems against the most marginalised in society;
Moreover, when it comes to live public facial recognition, the Act paves the way to legalise some specific uses of these systems for the first time ever in the EU – despite our analysis showing that all public-space uses of these systems constitute an unacceptable violation of everyone’s rights and freedoms.

The serious harms of retrospective facial recognition are largely ignored: Retrospective facial recognition is not banned at all by the AI Act. As we have explained, the use of retrospective (post) facial recognition and other biometric surveillance systems (called ‘remote biometric identification’, or ‘RBI’, in the text) is just as invasive and rights-violating as live (real-time) systems. Yet the AI Act makes a big error in claiming that the extra time available for retrospective uses will mitigate possible harms;
While several lawmakers have argued that they managed to insert safeguards, our analysis is that the safeguards are not meaningful enough and could be easily circumvented by police. In one place, the purported safeguard even suggests that the mere suspicion of any crime having taken place would be enough to justify the use of a post RBI system – a lower threshold than we currently benefit from under EU data protection law.

People on the move are not afforded the same rights as everyone else, with only weak – and at times absent – rules on the use of AI at borders and in migration contexts: In its final version, the EU AI Act sets a dangerous precedent for the use of surveillance technology against migrants, people on the move and marginalised groups. The legislation creates a separate legal framework for the use of AI by migration control authorities, enabling the testing and use of dangerous surveillance technologies at the EU’s borders, disproportionately against racialised people;
None of the bans meaningfully apply to the migration context, and the transparency obligations present ad-hoc exemptions for migration authorities, allowing them to act with impunity and far away from public scrutiny;
The list of high-risk systems fails to capture the many AI systems used in the migration context, as it excludes dangerous systems such as non-remote biometric identification systems, fingerprint scanners, or forecasting tools used to predict, interdict, and curtail migration;
Finally, AI systems used as part of EU large-scale migration databases (e.g. Eurodac, the Schengen Information System, and ETIAS) will not have to be compliant with the Regulation until 2030, which gives plenty of time to normalise the use of surveillance technology.
Third, we urged EU lawmakers to push back on Big Tech lobbying and address environmental impacts. How did they do?

The risk classification framework has become a self-regulatory exercise: Initially, all use cases included in the list of high-risk applications would have had to follow specific obligations. However, as a result of heavy industry lobbying, providers of high-risk systems will now be able to decide whether their systems are high-risk or not, as an additional ‘filter’ was added to the classification system;
Providers will still have to register sufficient documentation in the public database to explain why they do not consider their system to be high-risk. However, this obligation will not apply when they are providing systems to law enforcement and migration authorities, paving the way for the free and deregulated procurement of surveillance systems in policing and border contexts.

The Act takes only a tentative first step to address the environmental impacts of AI: We have serious concerns about how the exponentially growing use of AI systems can have severe impacts on the environment, including through resource consumption, extractive mining and energy-intensive processing. Today, information on the environmental impacts of AI is a closely guarded corporate secret. This makes it difficult to assess the environmental harms of AI and to develop political solutions to reduce carbon emissions and other negative impacts;
The first draft of the AI Act completely neglected these risks, despite civil society and researchers repeatedly calling for the energy consumption of AI systems to be made transparent. To address this problem, the AI Act now requires that providers of GPAI models that are trained with large amounts of data and consume a lot of electricity must document their energy consumption. The Commission now has the task of developing a suitable methodology for measuring the energy consumption in a comparable and verifiable way;
The AI Act also requires that standardised reporting and documentation procedures must be created to ensure the efficient use of resources by some AI systems. These procedures should help to reduce the energy and other resource consumption of high-risk AI systems during their life cycle. These standards are also intended to promote the energy-efficient development of general-purpose AI models;
These reporting standards are a crucial first step towards basic transparency about some of the ecological impacts of AI, first and foremost energy use. But they can only serve as a starting point for more comprehensive policy approaches that address all environmental harms along the AI production process, such as water and mineral consumption. We cannot rely on self-regulation, given how fast the climate crisis is evolving.
What’s next for the AI Act?

The coming year will be decisive for the EU’s AI Act, with different EU institutions, national lawmakers and even company representatives setting standards, publishing interpretive guidelines and driving the Act’s implementation across the EU’s member countries. Some parts of the law – the prohibitions – could become operational as soon as November. It is therefore vital that civil society groups are given a seat at the table, and that this work is not done in opaque settings and behind closed doors.

We urge lawmakers around the world who are also considering bringing in horizontal rules on AI to learn from the EU’s many mistakes outlined above. A meaningful set of protections must ensure that AI rules truly work for individuals, communities, society, rule of law, and the planet.

While this long chapter of lawmaking is now coming to a close, the next chapter of implementation – and trying to get as many wins out of this Regulation as possible – is just beginning. As a group, we are drafting an implementation guide for civil society, coming later this year. We want to express our thanks to the entire AI core group, who have worked tirelessly for over three years to analyse, advocate and mobilise around the EU AI Act. In particular, we thank Sarah Chander, of the Equinox Racial Justice Institute, for her work, dedication, vision and leadership of this group over the last three years.

TECH & RIGHTS

Packed With Loopholes: Why the AI Act Fails to Protect Civic Space and the Rule of Law

The AI Act fails to effectively protect the rule of law and civic space. ECNL, Liberties and the European Civic Forum (ECF) give their analysis of its shortcomings.


by LibertiesEU
April 04, 2024



The unaccountable and opaque use of Artificial Intelligence (AI), especially by public authorities, can undermine civic space and the rule of law. In the European Union, we have already witnessed AI-driven technologies being used to surveil activists, assess whether airline passengers pose a terrorism risk or appoint judges to court cases. The fundamental rights framework as well as rule of law standards require that robust safeguards are in place to protect people and our societies from the negative impacts of AI.

For this reason, the European Centre for Not-for-Profit Law (ECNL), Liberties and the European Civic Forum (ECF) closely monitored and contributed to the discussions on the EU’s Artificial Intelligence Act (AI Act), first proposed in 2021. From the beginning, we advocated for strong protections for fundamental rights and civic space and called on European policymakers to ensure that the AI Act is fully coherent with rule of law standards.

The European Parliament approved the AI Act on 13 March 2024, thus marking the end of a three-year-long legislative process. Yet to come are guidelines and delegated acts to clarify the often vague requirements. In this article, we take stock of the extent to which fundamental rights, civic space and the rule of law will be safeguarded and provide an analysis of key AI Act provisions.
Far from a gold standard for rights-based AI regulation

Our overall assessment is that the AI Act fails to effectively protect the rule of law and civic space, instead prioritising industry interests, security services and law enforcement bodies. While the Act requires AI developers to maintain high standards for the technical development of AI systems (e.g. in terms of documentation or data quality), the measures intended to protect fundamental rights, including key civic rights and freedoms, are insufficient to prevent abuses. They are riddled with far-reaching exceptions that lower protection standards, especially in the areas of law enforcement and migration.

The AI Act was negotiated and finalised in a rush, leaving significant gaps and legal uncertainty, which the European Commission will have to clarify in the next months and years by issuing delegated acts and guidelines. Regulating emerging technology requires flexibility, but the Act leaves too much to the discretion of the Commission, secondary legislation or voluntary codes of conduct. These could easily undermine the safeguards established by the AI Act, further eroding the fundamental rights and rule of law standards in the long term.

CSOs’ contributions will be necessary for a rights-based implementation of the AI Act

The AI Act will enter into effect in stages, with full application expected in 2026. The European Commission will develop guidance and delegated acts specifying various requirements for the implementation, including guidance on the interpretation of prohibitions, as well as a template for conducting fundamental rights impact assessments. It will be crucial for civil society to actively contribute to this process with their expertise and real-life examples. In the next months, we will publish a map of key opportunities where these contributions can be made. We also call on the European Commission and other bodies responsible for the implementation and enforcement of the AI Act to proactively facilitate civil society participation and to prioritise diverse voices including those of people affected by various AI systems, especially those belonging to marginalised groups.

5 flaws of the AI Act from the perspective of civic space and the rule of law

1. Gaps and loopholes can turn prohibitions into empty declarations

2. AI companies’ self-assessment of risks jeopardises fundamental rights protections

3. Standards for fundamental rights impact assessments are weak

4. The use of AI for national security purposes will be a rights-free zone

5. Civic participation in the implementation and enforcement is not guaranteed
The AI Act limitations showcase the need for a European Civil Dialogue Agreement

The legislative process surrounding the AI Act was marred by a significant lack of civil dialogue - the obligation of the EU institutions to engage in an open, transparent, and regular process with representative associations and civil society. To date, there is no legal framework regulating the European civil dialogue, although civil society has been calling for it in various contexts. Since the announcement of the AI Act, civil society has made great efforts to coordinate horizontally to feed into the process, engaging diverse organisations at the national and European levels. In the absence of clear guidelines on how civil society input should be included ahead of the drafting of EU laws and policies, the framework proposed by the European Commission to address the widespread impact of AI technologies on society and fundamental rights was flawed. Throughout the preparatory and political stages, the process remained opaque, with limited transparency regarding decision-making and little opportunity for input from groups representing a rights-based approach, particularly in the Council and during trilogue negotiations. This absence of inclusivity raises concerns about the adopted text’s impact on society at large. It not only undermines people’s trust in the legislative process and the democratic legitimacy of the AI Act but also hampers its key objective to guarantee the safety and fundamental rights of all.

However, in contrast to public interest and fundamental rights advocacy groups, market and for-profit lobbyists and representatives of law enforcement authorities and security services had great influence in the legislative process of the AI Act. This imbalanced representation favoured commercial interests and the narrative of external security threats over the broader societal impacts of AI.

Read our analysis in full here.


Symposium on Military AI and the Law of Armed Conflict: Human-machine Interaction in the Military Domain and the Responsible AI Framework


04.04.24

[Dr Ingvild Bode is Associate Professor at the Centre for War Studies, University of Southern Denmark. She is the Principal Investigator of the European Research Council-funded project AutoNorms: Weaponised Artificial Intelligence, Norms, and Order (08/2020-07/2025) and also serves as the co-chair of the IEEE-SA Research Group on AI and Autonomy in Defence Systems.

Anna Nadibaidze is a researcher for the European Research Council funded AutoNorms project based at the Center for War Studies, University of Southern Denmark.]


Artificial intelligence (AI) technologies are increasingly part of military processes. Militaries use AI technologies, for example, for decision support and in combat operations, including as part of weapon systems. Contrary to some previous expectations – notably popular culture depictions of ‘sentient’ humanoid machines willing to destroy humanity, or of ‘robot wars’ between machines – integrating AI into the military does not mean that AI technologies replace humans. Rather, military personnel interact with AI technologies, likely with increasing frequency, as part of their day-to-day activities, which include the targeting process. Some militaries have adopted the language of human-machine teaming to describe these instances of human-machine interaction. The term can refer to humans interacting either with uncrewed, (semi-)autonomous platforms or with AI-based software systems. Such developments are increasingly promoted as key trends in defence innovation. For instance, the UK Ministry of Defence considers the “effective integration of humans, AI and robotics into warfighting systems—human-machine teams” to be “at the core of future military advantage”.

At the same time, many states highlight that they intend to develop and use these technologies in a ‘responsible’ manner. The framework of Responsible AI in the military domain is growing in importance across policy and expert discourse, moving beyond the focus on autonomous weapon systems that can “select and apply force without human intervention”. Instead, this framework assumes that AI will be integrated into various military processes and interact with humans in different ways, and therefore it is imperative to find ways of doing so responsibly, for instance by ensuring understandability, reliability, and accountability.

Our contribution connects these intersecting trends by offering a preliminary examination of the extent to which the Responsible AI framework addresses challenges attached to changing human-machine interaction in the military domain. To do so, we proceed in two steps: first, we sketch the kinds of challenges raised by instances of human-machine interaction in a military context. We argue that human-machine interaction may fundamentally change the quality of human agency, understood as the ability to make choices and act, in warfare. It does so by introducing a form of distributed agency in military decision-making, including in but not limited to the targeting process. There is therefore a need to examine the types of distributed agency that will emerge, or have already emerged, as computational techniques under the ‘AI’ umbrella term are increasingly integrated into military processes. Second, we consider the extent to which the emerging Responsible AI framework, and the principles associated with it, demonstrates potential to address these challenges.

1. Human-machine Interaction and Distributed Agency

Appropriate forms of human agency and control over use-of-force decision-making are necessary on ethical, legal, and security grounds. (Western) military thinking on human-machine or human-AI teaming recognises this. Human-machine interaction involves sharing cognitive tasks with AI technologies as their use is chiefly associated with the speedy processing of large amounts of data/information. It follows that any decision made in the context of human-machine interaction implies a combination of ‘human’ and ‘machine’ decision-making. This interplay changes how human agency is exercised. Instead of producing zero-sum outcomes, we are likely to encounter a form of distributed agency in military decisions that rely on human-machine interaction. Above all, distributed agency involves a blurring of the distinction between instances of ‘human’ and ‘AI’ agency.

Understanding this distributed agency requires, in the first place, considering the particularities of how ‘human’ and ‘AI’ agents make choices and act, and what this means for interaction dynamics. This is an evolving topic of interest as AI technologies are increasingly integrated into the military domain. The reality of distributed agency is not clear-cut. Any ‘AI agency’ results from human activity throughout the algorithmic design and training process that has become ‘invisible’ at the point of use. This human activity includes the programmers who create the basic algorithmic parameters, the workers who prepare the data required to train machine learning algorithms through a series of iterative micro-tasks, often subsumed as ‘labelling data’, but also the people whose data is used to train such algorithms. It is therefore important to think about ‘human’ and ‘AI’ agency as part of a relational, complex, socio-technical system. From the perspective of the many groups of humans that are part of this system, interacting with AI creates both affordances, or action potentials, and constraints. Studying different configurations of this complex system could then advance our understanding of distributed agency.

These initial insights into how technological affordances and constraints shape distributed agency matter in the military domain because they affect human decision-making, including in a warfare context. What does it actually mean for humans to work with AI technologies? The long-established literature in human-factor analysis describes numerous fundamental obstacles that people face when interacting with complex systems integrating automated and AI technologies. These include “poor understanding of what the systems are doing, high workload when trying to interact with AI systems, poor situation awareness (SA) and performance deficits when intervention is needed, biases in decision making based on system outputs, and degradation”. Such common operational challenges of human-machine interaction raise fundamental political, ethical, legal, social, and security concerns. There are particularly high stakes in the military domain because AI technologies used in this context have the potential to inflict severe harm, such as physical injury, human rights violations, death, and (large-scale) destruction.


2. Responsible AI and Challenges of Human-machine Interaction

The Responsible AI framework has been gaining prominence among policymaking and expert circles of different states, especially the US and its allies. In 2023, the US released its Political Declaration on Responsible Military Use of AI and Autonomy, endorsed by 50 other states as of January 2024. US Deputy Secretary of Defense Kathleen Hicks stated that the new Replicator Initiative, aimed at producing large numbers of all-domain, attritable autonomous systems, will be carried out “while remaining steadfast to [the DoD’s] responsible and ethical approach to AI and autonomous systems, where DoD has been a world leader for over a decade”. At the same time, the concept of responsible military AI use has also been entrenched by the Responsible AI in the Military Domain (REAIM) Summit co-hosted by the Netherlands and the Republic of Korea. More than 55 states supported the Summit’s Call to Action in February 2023, and a second edition of the event is expected in Seoul in 2024.

The Responsible AI framework broadens the debate beyond lethal autonomous weapon systems (LAWS), which have been the focus of discussions at the UN CCW in Geneva throughout the last decade. The effort to consider different uses of AI in the military, including in decision support, is a step towards recognising the challenges of human-machine interaction and potential new forms of distributed agency. These changes are happening in various ways and do not necessarily revolve around ‘full’ autonomy, weapon systems, or humans ‘out of the loop’. Efforts to consider military systems integrating autonomous and AI technologies as part of lifecycle frameworks underline this. Such frameworks demonstrate that situations of human-machine interaction need to be addressed and occur at various lifecycle stages, from research and development, procurement and acquisition, testing, evaluation, verification and validation (TEVV), and potential deployment, to retirement. Addressing such concerns therefore deserves the type of debate offered by the REAIM platform: a multi-stakeholder discussion representing global perspectives on (changing) human-machine interaction in the military.

At the same time, the Responsible AI framework is nebulous and imprecise in its guidance on ensuring that challenges of human-machine interaction are addressed. So far, it functions as a “floating signifier”, in the sense that the concept can be understood in different ways, often depending on the interests of those who interpret it. This was already visible during the first REAIM Summit in The Hague, where most participants agreed on the importance of being responsible, but not on how to get there. Some of the common themes among the REAIM and US initiatives include commitment to international law, accountability, and responsibility, ensuring global security and stability, human oversight over military AI capabilities, as well as appropriate training of personnel involved in interacting with the capabilities. But beyond these broad principles, it remains unclear what constitutes ‘appropriate’ forms of human-machine interaction, and the forms of agency these involve, in relation to acting responsibly and in conformity with international law – that, in itself, offers unclear guidance. It must be noted, however, that defining ‘Responsible AI’ is no easy task because it requires considering the various dimensions of a complex socio-technical system which includes not only the technical aspects but also political, legal, and social ones. It has already been a challenging exercise in the civilian domain to pinpoint the exact characteristics of this concept, although key terms such as explainability, transparency, privacy, and security are often mentioned in Responsible AI strategies.

Importantly, the Responsible AI framework allows for various interpretations of the form, or mechanism, of global governance needed to address the challenges of human-machine interaction in the military. There are divergent approaches on the appropriate direction to take. For instance, US policymakers seek to “codify norms” for the responsible use of AI through the US political declaration, a form of soft law, interpreted by some experts as a way for Washington to promote its vision in its perceived strategic competition with Beijing. Meanwhile, many states favour a global legal and normative framework in the form of hard law, such as a legally binding instrument establishing appropriate forms of human-machine interaction, especially in relation to targeting, including the use of force. The UN’s 2023 New Agenda for Peace urges states not only to develop national strategies on responsible military use of AI, but also to “develop norms, rules and principles…through a multilateral process” which would involve engagement with industry, academia, and civil society. Some states are taking steps in this direction; for instance, Austria co-sponsored a UN General Assembly First Committee resolution on LAWS, which was adopted with overwhelming support in November 2023. Overall, the Responsible AI framework’s inherent ambiguity is an opportunity for those favouring a soft law approach, especially actors who promote political declarations or sets of guidelines and argue that these are enough. Broad Responsible AI guidelines might symbolise a certain commitment or set of obligations, but at this stage they are insufficient to address already existing challenges of human-machine interaction in a security and military context – not least because they may not be connected to a concrete pathway toward operationalisation and implementation.

Note: This essay outlines initial thinking that forms the basis of a new research project called “Human-Machine Interaction: The Distributed Agency of Humans and Machines in Military AI” (HuMach) funded by the Independent Research Fund Denmark. Led by Ingvild Bode, the project will start later in 2024.



Symposium on Military AI and the Law of Armed Conflict: A (Pre)cautionary Note About Artificial Intelligence in Military Decision Making


04.04.24


[Georgia Hinds is a Legal Adviser with the ICRC in Geneva, working on the legal and humanitarian implications of autonomous weapons, AI and other new technologies of warfare. Before joining the ICRC, she worked in the Australian Government, advising on public international law including international humanitarian and human rights law, and international criminal law, and served as a Reservist Officer with the Australian Army. The views expressed on this blog are those of the author alone and do not engage the ICRC, or previous employers, in any form.]

Introduction

Most of us would struggle to define ‘artificial intelligence.’ Fewer still could explain how it functions. And yet AI technologies permeate our daily lives. They also pervade today’s battlefields. Over the past eighteen months, reports of AI-enabled systems being used to inform targeting decisions in contemporary conflicts have sparked debates (including on this platform) around legal, moral and operational issues.

Sometimes called ‘decision support systems’ (DSS), these are computerized tools that are designed to aid human decision-making by bringing together and analysing information, and in some cases proposing options as to how to achieve a goal [see, e.g., Bo and Dorsey]. Increasingly, DSS in the military domain are incorporating more complex forms of AI, and are being applied to a wider range of tasks.

These technologies do not actually make decisions, and they are not necessarily part of weapon systems that deliver force. Nevertheless, they can significantly influence the range of actions and decisions that form part of military planning and targeting processes.

This post considers implications for the design and use of these tools in armed conflict, arising from international humanitarian law (IHL) obligations, particularly the rules governing the conduct of hostilities.

Taking ‘Constant Care’, How Might AI-DSS Help or Hinder?

Broadly, in the conduct of military operations, parties to an armed conflict must take constant care to spare the civilian population, civilians and civilian objects.

The obligation of constant care is an obligation of conduct, to mitigate risk and prevent harm. It applies across the planning or execution of military operations, and is not restricted to ‘attacks’ within the meaning of IHL (paras 2191, 1936, 1875). It includes, for example, ground operations, the establishment of military installations, defensive preparations, the quartering of troops, and search operations. It has been said that this requirement to take ‘constant care’ must “animate all strategic, operational and tactical decision-making.”

In assessing the risk to civilians that may arise from the use of an AI-DSS, a first step must be assessing whether the system is actually suitable for the intended task. Applying AI – particularly machine learning – to problems for which it is not well suited has the potential to actually undermine decision-making (p 19). Automating processes that feed into decision-making can be advantageous where quality data is available and the system is given clear goals (p 12). In contrast, “militaries risk facing bad or tragic outcomes” where they provide AI systems with clear objectives but in uncertain circumstances, or where they use quality data but task AI systems with open-ended judgments. Uncertain circumstances abound in armed conflict, and the contextual, qualitative judgements required by IHL are notoriously difficult. Further, AI systems generally lack the ability to transfer knowledge from one context or domain to another (p 207), making it potentially problematic to apply an AI-DSS in a different armed conflict, or even in different circumstances in the same conflict. It is clear, then, that whilst AI systems may be useful for some tasks in military operations (e.g. in navigation, maintenance and supply chain management), they will be inappropriate for many others.

Predictions about enemy behaviour will likely be far less reliable than those about friendly forces, not only due to a lack of relevant quality data, but also because armed forces will often adopt tactics to confuse or mislead their enemy. Similarly, AI-DSS would struggle to infer something open-ended or ill-defined, like the purpose of a person’s act. A more suitable application could be in support of weaponeering processes, and the modelling of estimated effects, where such systems are already deployed, and where the DSS should have access to greater amounts of data derived from tests and simulations.

Artificial Intelligence to Gain the ‘Best Possible Intelligence’?

Across military planning and targeting processes, the general requirement is that decisions required by IHL’s rules on the conduct of hostilities must be based on an assessment of the information from all sources reasonably available at the relevant time. This includes an obligation to proactively seek out and collect relevant and reasonably available information (p 48). Many military manuals stress that the commander must obtain the “best possible intelligence,” which has been interpreted as requiring information on concentrations of civilian persons, important civilian objects, specifically protected objects and the environment (See Australia’s Manual on the Law of Armed Conflict (1994) §§548 and 549).

What constitutes the best possible intelligence will depend upon the circumstances, but generally commanders should be maximising their available intelligence, surveillance and reconnaissance assets to obtain up-to-date and reliable information.

Considering this requirement to seek out all reasonably available information, it is entirely possible that the use of AI-DSS may assist parties to an armed conflict in satisfying their IHL obligations, by synthesising or otherwise processing certain available sources of information (p 203). Indeed, whilst precautionary obligations do not require parties to possess highly sophisticated means of reconnaissance (pp 797-8), it has been argued (p 147) that, if they do possess AI-DSS and it is feasible to employ them, IHL might actually require their use.

In the context of urban warfare in particular, the ICRC has recommended (p 15) that information about factors such as the presence of civilians and civilian objects should include open-source repositories such as the internet. Further, specifically considering AI and machine learning, the ICRC has concluded that, to the extent that AI-DSS tools can facilitate quicker and more widespread collection and analysis of this kind of information, they could well enable better decisions by humans that minimize risks for civilians in conflict. The use of AI-DSS to support weaponeering, for example, may assist parties in choosing means and methods of attack that can best avoid, or at least minimize, incidental civilian harm.

Importantly, the constant care obligation and the duty to take all feasible precautions in attack are positive obligations, as opposed to other IHL rules which prohibit conduct (eg. the prohibitions on indiscriminate or disproportionate attacks). Accordingly, in developing and using AI-DSS, militaries should be considering not only how such tools can assist to achieve military objectives with less civilian harm, but how they might be designed and used specifically for the objective of civilian protection. This also means identifying or building relevant datasets that can support assessments of risks to, and impacts upon civilians and civilian infrastructure.

Practical Considerations for Those Using AI-DSS

When assessing the extent to which an AI-DSS output reflects current and reliable information sources, commanders must factor in AI’s limitations in terms of predictability, understandability and explainability (see further detail here). These concerns are likely to be especially acute with systems that incorporate machine learning algorithms that continue to learn, potentially changing their functioning during use.

Assessing the reliability of AI-DSS outputs also means accounting for the likelihood that an adversary will attempt to provide disinformation such as ruses and deception, or otherwise frustrate intelligence acquisition activities. AI-DSS currently remain vulnerable to hacking and spoofing techniques that can lead to erroneous outputs, often in ways that are unpredictable and undetectable to human operators.

Further, like any information source in armed conflict, the datasets on which AI-DSS rely may be imperfect, outdated or incomplete. For example, “No Strike Lists” (NSL) can contribute to a verification process by supporting faster identification of certain objects that must not be targeted. However, an NSL will only be effective so long as it is current and complete; the NSL itself is not the reality on the ground. More importantly, the NSL usually only consists of categories of objects that benefit from special protection or the targeting of which is otherwise restricted by policy. However, the protected status of objects in armed conflict can change – sometimes rapidly – and most civilian objects will not appear on the list. In short, then, the presence of an object on an NSL contributes to identifying protected objects when verifying the status of a potential target, but the absence of an object from the list does not imply that it is a military objective.

Parallels can be drawn with AI-DSS tools, which rely upon datasets to produce “a technological rendering of the world as a statistical data relationship” (p 10). The difference is that, whilst NSLs generally rely upon a limited number of databases, AI-DSS tools may be trained on, and may draw upon, such a large volume of datasets that it may be impossible for the human user to verify their accuracy. This makes it especially important for AI-DSS users to be able to understand which underlying datasets are feeding the system, the extent to which this data is likely to be current and reliable, and the weighting given to particular data in the DSS output (paras 19-20). Certain critical datasets (e.g. NSLs) may need to be given overriding prominence by default, whilst, for others, decision-makers may need to have the ability to adjust how they are factored in.

In certain circumstances, it may be appropriate for a decision-maker to seek out expert advice concerning the functioning or underlying data of an AI-DSS. As much has been suggested in the context of cyber warfare, in terms of seeking to understand the effects of a particular cyber operation (p 49).

In any event, it seems unlikely that it would be reasonable for a commander to rely solely on the output of one AI-DSS, especially during deliberate targeting processes where more time is available to gather and cross-check against different and varied sources. Militaries have already indicated that cross-checking of intelligence is standard practice when verifying targets and assessing proportionality, and an important aspect of minimising harm to civilians. This practice should equally be applied when employing AI-DSS, ideally using different kinds of intelligence to guard against the risks of embedded errors within an AI-DSS.

If a commander, planner or staff officer did rely solely on an AI-DSS, the reasonableness of their decision would need to be judged not only in light of the AI-DSS output, but also taking account of other information that was reasonably available.

Conclusion

AI-DSS are often claimed to hold the potential to increase IHL compliance and to produce better outcomes for civilians in armed conflict. In certain circumstances, the use of AI-DSS may well assist parties to an armed conflict in satisfying their IHL obligations, by providing an additional available source of information.

However, these tools may be ill-suited for certain tasks in the messy reality of warfare, especially noting their dependence on quality data and clear goals, and their limited capacity for transfer across different contexts. In some cases, drawing upon an AI-DSS could actually undermine the quality of decision-making, and pose additional risks to civilians.

Further, even though an AI-DSS can draw in and synthesise data from many different sources, this does not absolve a commander of their obligation to proactively seek out information from other reasonably available sources. Indeed, the way in which AI tools function – their limitations in terms of predictability, understandability and explainability – makes it all the more important that their output be cross-checked.

Finally, AI-DSS must only be applied within legal, policy and doctrinal frameworks that ensure respect for international humanitarian law. Otherwise, these tools will only serve to replicate, and arguably exacerbate, unlawful or otherwise harmful outcomes at a faster rate and on a larger scale.


