Thursday, April 04, 2024

EU: AI Act fails to set gold standard for human rights

POSTED ON APRIL 04, 2024

As EU institutions are expected to conclusively adopt the EU Artificial Intelligence Act in April 2024, ARTICLE 19 joins those voicing criticism about how the Act fails to set a gold standard for human rights protection. Over the last three years of negotiation, together with a coalition of digital rights organisations, we called on lawmakers to ensure that AI works for people and that regulation prioritises the protection of fundamental human rights. We believe that in several areas, the AI Act is a missed opportunity to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence and many other rights and freedoms are protected when it comes to artificial intelligence.

For the last three years, as part of the European Digital Rights (EDRi) coalition, ARTICLE 19 has demanded that artificial intelligence (AI) works for people and that its regulation prioritises the protection of fundamental human rights. We have put forward our collective vision for an approach where ‘human-centric’ is not just a buzzword, where people on the move are treated with dignity, and where lawmakers are bold enough to draw red lines against unacceptable uses of AI systems.

Following a gruelling negotiation process, EU institutions are expected to conclusively adopt the final AI Act in April 2024. But while they celebrate, we take a much more critical stance. We want to highlight the many missed opportunities to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence, and many other rights and freedoms are protected when it comes to AI. Here’s our round-up of how the final law fares against our collective demands.

This analysis is based on the latest available version of the AI Act text, dated 6 March 2024. There may still be small changes made before the law’s final adoption.

First, we called on EU lawmakers to empower affected people by upholding a framework of accountability, transparency, accessibility, and redress. How did they do?

Some accessibility barriers have been broken down, but more needs to be done: Article 16 (ja) of the AI Act fulfils our call for accessibility by stating that high-risk AI systems must comply with accessibility requirements. However, we still believe that this should be extended to apply to low- and medium-risk AI systems as well, in order to ensure that the needs of people with disabilities are central in the development of all AI systems which could impact them.

More transparency about certain AI deployments, but big loopholes for the private sector and security agencies: The AI Act establishes a publicly accessible EU database to provide transparency about AI systems that pose higher risks to people’s rights or safety. While originally only providers of high-risk AI systems were subject to transparency requirements, we successfully persuaded decision-makers that deployers of AI systems – those who actually use the system – should also be subject to transparency obligations.
Providers and deployers who place on the market or use AI systems in high-risk areas – such as employment and education – as designated by Annex III will be subject to transparency obligations. Providers will be required to register their high-risk system in the database and to enter information about it such as a description of its intended purpose, a concise description of the information used by the system, and its operating logic. Deployers of high-risk AI systems who are public authorities – or those acting on their behalf – will be obliged to register the use of the system. They will be required to enter information in the database such as a summary of the findings of a fundamental rights impact assessment (FRIA) and a summary of the data protection impact assessment. However, deployers of high-risk AI systems in the private sector will not be required to register their use of high-risk systems – another critical issue;
The major shortcoming of the EU database is that negotiators agreed on a carve-out for law enforcement, migration, asylum, and border control authorities. Providers and deployers of high-risk systems in these areas will be required to register only a limited amount of information, and only in a non-publicly accessible section of the database. Certain important pieces of information, such as the training data used, will not be disclosed at all. This will prevent affected people, civil society, journalists, watchdog organisations and academics from exercising public scrutiny in these high-stakes areas, which are prone to fundamental rights violations, and from holding the authorities accountable.

Fundamental rights impact assessments are included, but concerns remain about how meaningful they will be: We successfully convinced EU institutions of the need for fundamental rights impact assessments (FRIAs). However, based on the final AI Act text, we have doubts whether they will actually prevent human rights violations and serve as a meaningful tool of accountability. We see three primary shortcomings:
Lack of a meaningful assessment and of an obligation to prevent negative impacts: while the new rules require deployers of high-risk AI systems to list risks of harm to people, there is no explicit obligation to assess whether these risks are acceptable in light of fundamental rights law, nor to prevent them wherever possible. Regrettably, deployers only have to specify which measures will be taken once risks materialise, likely once the harm has already been done;
No mandatory stakeholder engagement: the requirement to engage external stakeholders, including civil society and people affected by AI, in the assessment process was also removed from the article at the last stages of negotiations. This means that civil society organisations will not have a direct, legally-binding way to contribute to impact assessments;
Transparency exceptions for law enforcement and migration authorities: while in principle, deployers of high-risk AI systems will have to publish a summary of the results of FRIAs, this will not be the case for law enforcement and migration authorities. The public will not even have access to the mere information that an authority uses a high-risk AI system in the first place. Instead, all information related to the use of AI in law enforcement and migration will only be included in a non-public database, severely limiting constructive public oversight and scrutiny. This is a very concerning development as, arguably, the risks to human rights, civic space and the rule of law are the most severe in these two areas. Moreover, while deployers are obliged to notify the relevant market surveillance authority of the outcome of their FRIA, there is an exemption from this notification obligation for ‘exceptional reasons of public security’. This excuse is often misused as a justification to carry out disproportionate policing and border management activities.

When it comes to complaints and redress, there are some remedies, but no clear recognition of the ‘affected person’: Civil society has advocated for robust rights and redress mechanisms for individuals and groups affected by high-risk AI systems. We demanded the creation of a new section titled ‘Rights of Affected Persons’, which would delineate specific rights and remedies for individuals impacted by AI systems. However, that section has not been created; instead, we have a ‘remedies’ chapter that includes only some of our demands;
This chapter of remedies includes the right to lodge complaints with a market surveillance authority, but lacks teeth, as it remains unclear how effectively these authorities will be able to enforce compliance and hold violators accountable. Similarly, the right to an explanation of individual decision-making processes, particularly for AI systems listed as high-risk, raises questions about the practicality and accessibility of obtaining meaningful explanations from deployers. Furthermore, the effectiveness of these mechanisms in practice remains uncertain, given the absence of provisions such as the right to representation of natural persons, or the ability for public interest organisations to lodge complaints with national supervisory authorities.

The Act allows a double standard when it comes to the human rights of people outside the EU: The AI Act falls short of civil society’s demand to ensure that EU-based AI providers whose systems impact people outside of the EU are subject to the same requirements as those inside the EU. The Act does not stop EU-based companies from exporting AI systems which are banned in the EU, therefore creating a huge risk that EU-made technologies which are essentially incompatible with human rights will violate the rights of people in non-EU countries. Additionally, the Act does not require exported high-risk systems to follow the technical, transparency or other safeguards otherwise required when AI systems are intended for use within the EU, again risking the violation of the rights of people outside of the EU by EU-made technologies.

Second, we urged EU lawmakers to limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities. How did they do?

The blanket exemption for national security risks undermining other rules: The AI Act and its safeguards will not apply to AI systems if they are developed or used solely for the purpose of national security, regardless of whether this is done by a public authority or a private company. This exemption introduces a significant loophole that will automatically exempt certain AI systems from scrutiny and limit the applicability of the human rights safeguards envisioned in the AI Act;
In practical terms, it would mean that governments could invoke national security to introduce biometric mass surveillance systems, without having to apply any safeguards envisioned in the AI Act, without conducting a fundamental rights impact assessment and without ensuring that the AI system meets high technical standards and does not discriminate against certain groups;
Such a broad exemption is not justified under EU treaties and goes against the established jurisprudence of the European Court of Justice. While national security can be a justified ground for exceptions from the AI Act, this has to be assessed case-by-case, in line with the EU Charter of Fundamental Rights. The adopted text, however, makes national security a largely digital rights-free zone. We are concerned about the lack of clear national-level procedures to verify if the national security threat invoked by the government is indeed legitimate and serious enough to justify the use of the system and if the system is developed and used with respect for fundamental rights. The EU has also set a worrying precedent regionally and globally; broad national security exemptions have now been introduced in the newly-adopted Council of Europe Convention on AI.

Predictive policing, live public facial recognition, biometric categorisation and emotion recognition are only partially banned, legitimising these dangerous practices: We called for comprehensive bans against any use of AI that isn’t compatible with rights and freedoms – such as proclaimed AI ‘mind reading’, biometric surveillance systems that treat us as walking bar-codes, or algorithms used to decide whether we are innocent or guilty. All of these examples are now partially banned in the AI Act, which is an important signal that the EU is prepared to draw red lines against unacceptably harmful uses of AI;
At the same time, all of these bans contain significant and disappointing loopholes, which means that they will not achieve their full potential. In some cases, these loopholes risk having the opposite effect from what a ban should: they give the signal that some forms of biometric mass surveillance and AI-fuelled discrimination are legitimate in the EU, which risks setting a dangerous global precedent;
For example, the fact that emotion recognition and biometric categorisation systems are prohibited in the workplace and in education settings, but are still allowed when used by law enforcement and migration authorities, signals the EU’s willingness to test the most abusive and intrusive surveillance systems against the most marginalised in society;
Moreover, when it comes to live public facial recognition, the Act paves the way to legalise some specific uses of these systems for the first time ever in the EU – despite our analysis showing that all public-space uses of these systems constitute an unacceptable violation of everyone’s rights and freedoms.

The serious harms of retrospective facial recognition are largely ignored: Retrospective facial recognition is not banned at all by the AI Act. As we have explained, the use of retrospective (post) facial recognition and other biometric surveillance systems (called ‘remote biometric identification’, or ‘RBI’, in the text) is just as invasive and rights-violating as live (real-time) systems. Yet the AI Act makes a big error in claiming that the extra time involved in retrospective uses will mitigate possible harms;
While several lawmakers have argued that they managed to insert safeguards, our analysis is that the safeguards are not meaningful enough and could be easily circumvented by police. In one place, the purported safeguard even suggests that the mere suspicion of any crime having taken place would be enough to justify the use of a post RBI system – a lower threshold than we currently benefit from under EU data protection law.

People on the move are not afforded the same rights as everyone else, with only weak – and at times absent – rules on the use of AI at borders and in migration contexts: In its final version, the EU AI Act sets a dangerous precedent for the use of surveillance technology against migrants, people on the move and marginalised groups. The legislation develops a separate legal framework for the use of AI by migration control authorities, in order to enable the testing and the use of dangerous surveillance technologies at the EU borders and disproportionately against racialised people;
None of the bans meaningfully apply to the migration context, and the transparency obligations present ad-hoc exemptions for migration authorities, allowing them to act with impunity and far away from public scrutiny;
The list of high-risk systems fails to capture the many AI systems used in the migration context, as it excludes dangerous systems such as non-remote biometric identification systems, fingerprint scanners, or forecasting tools used to predict, interdict, and curtail migration;
Finally, AI systems used as part of EU large-scale migration databases (e.g. Eurodac, the Schengen Information System, and ETIAS) will not have to be compliant with the Regulation until 2030, which gives plenty of time to normalise the use of surveillance technology.

Third, we urged EU lawmakers to push back on Big Tech lobbying and address environmental impacts. How did they do?

The risk classification framework has become a self-regulatory exercise: Initially, all use cases included in the list of high-risk applications would have had to follow specific obligations. However, as a result of heavy industry lobbying, providers of high-risk systems will now be able to decide whether their systems are high-risk or not, as an additional ‘filter’ was added into that classification system;
Providers will still have to register sufficient documentation in the public database to explain why they don’t consider their system to be high-risk. However, this obligation will not apply when they are providing systems to law enforcement and migration authorities, paving the way for the free and deregulated procurement of surveillance systems in the policing and border contexts.

The Act takes only a tentative first step to address environmental impacts of AI: We have serious concerns about how the exponential use of AI systems can have severe impacts on the environment, including through resource consumption, extractive mining and energy-intensive processing. Today, information on the environmental impacts of AI is a closely-guarded corporate secret. This makes it difficult to assess the environmental harms of AI and to develop political solutions to reduce carbon emissions and other negative impacts;
The first draft of the AI Act completely neglected these risks, despite civil society and researchers repeatedly calling for the energy consumption of AI systems to be made transparent. To address this problem, the AI Act now requires that providers of GPAI models that are trained with large amounts of data and consume a lot of electricity must document their energy consumption. The Commission now has the task of developing a suitable methodology for measuring the energy consumption in a comparable and verifiable way;
The AI Act also requires that standardised reporting and documentation procedures must be created to ensure the efficient use of resources by some AI systems. These procedures should help to reduce the energy and other resource consumption of high-risk AI systems during their life cycle. These standards are also intended to promote the energy-efficient development of general-purpose AI models;
These reporting standards are a crucial first step to provide basic transparency about some of the ecological impacts of AI, first and foremost its energy use. But they can only serve as a starting point for more comprehensive policy approaches that address all environmental harms along the AI production process, such as water and mineral use. We cannot rely on self-regulation, given how fast the climate crisis is evolving.

What’s next for the AI Act?

The coming year will be decisive for the EU’s AI Act, with different EU institutions, national lawmakers and even company representatives setting standards, publishing interpretive guidelines and driving the Act’s implementation across the EU’s member countries. Some parts of the law – the prohibitions – could become operational as soon as November. It is therefore vital that civil society groups are given a seat at the table, and that this work is not done in opaque settings and behind closed doors.

We urge lawmakers around the world who are also considering bringing in horizontal rules on AI to learn from the EU’s many mistakes outlined above. A meaningful set of protections must ensure that AI rules truly work for individuals, communities, society, rule of law, and the planet.

While this long chapter of lawmaking is now coming to a close, the next chapter of implementation – and trying to get as many wins out of this Regulation as possible – is just beginning. As a group, we are drafting an implementation guide for civil society, coming later this year. We want to express our thanks to the entire AI core group, who have worked tirelessly for over three years to analyse, advocate and mobilise around the EU AI Act. In particular, we thank Sarah Chander of the Equinox Racial Justice Institute for her work, dedication, vision and leadership of this group over the last three years.


Packed With Loopholes: Why the AI Act Fails to Protect Civic Space and the Rule of Law

The AI Act fails to effectively protect the rule of law and civic space. ECNL, Liberties and the European Civic Forum (ECF) give their analysis of its shortcomings.


by LibertiesEU
April 04, 2024



The unaccountable and opaque use of Artificial Intelligence (AI), especially by public authorities, can undermine civic space and the rule of law. In the European Union, we have already witnessed AI-driven technologies being used to surveil activists, assess whether airline passengers pose a terrorism risk or appoint judges to court cases. The fundamental rights framework as well as rule of law standards require that robust safeguards are in place to protect people and our societies from the negative impacts of AI.

For this reason, the European Centre for Not-for-Profit Law (ECNL), Liberties and the European Civic Forum (ECF) closely monitored and contributed to the discussions on the EU’s Artificial Intelligence Act (AI Act), first proposed in 2021. From the beginning, we advocated for strong protections for fundamental rights and civic space and called on European policymakers to ensure that the AI Act is fully coherent with rule of law standards.

The European Parliament approved the AI Act on 13 March 2024, thus marking the end of a three-year-long legislative process. Yet to come are guidelines and delegated acts to clarify the often vague requirements. In this article, we take stock of the extent to which fundamental rights, civic space and the rule of law will be safeguarded and provide an analysis of key AI Act provisions.

Far from a gold standard for rights-based AI regulation

Our overall assessment is that the AI Act fails to effectively protect the rule of law and civic space, instead prioritising industry interests, security services and law enforcement bodies. While the Act requires AI developers to maintain high standards for the technical development of AI systems (e.g. in terms of documentation or data quality), measures intended to protect fundamental rights, including key civic rights and freedoms, are insufficient to prevent abuses. They are riddled with far-reaching exceptions, lowering protection standards, especially in the area of law enforcement and migration.

The AI Act was negotiated and finalised in a rush, leaving significant gaps and legal uncertainty, which the European Commission will have to clarify in the next months and years by issuing delegated acts and guidelines. Regulating emerging technology requires flexibility, but the Act leaves too much to the discretion of the Commission, secondary legislation or voluntary codes of conduct. These could easily undermine the safeguards established by the AI Act, further eroding the fundamental rights and rule of law standards in the long term.

CSOs’ contributions will be necessary for a rights-based implementation of the AI Act

The AI Act will enter into effect in stages, with full application expected in 2026. The European Commission will develop guidance and delegated acts specifying various requirements for the implementation, including guidance on the interpretation of prohibitions, as well as a template for conducting fundamental rights impact assessments. It will be crucial for civil society to actively contribute to this process with their expertise and real-life examples. In the next months, we will publish a map of key opportunities where these contributions can be made. We also call on the European Commission and other bodies responsible for the implementation and enforcement of the AI Act to proactively facilitate civil society participation and to prioritise diverse voices including those of people affected by various AI systems, especially those belonging to marginalised groups.

5 flaws of the AI Act from the perspective of civic space and the rule of law

1. Gaps and loopholes can turn prohibitions into empty declarations

2. AI companies’ self-assessment of risks jeopardises fundamental rights protections

3. Standards for fundamental rights impact assessments are weak

4. The use of AI for national security purposes will be a rights-free zone

5. Civic participation in the implementation and enforcement is not guaranteed

The AI Act’s limitations showcase the need for a European Civil Dialogue Agreement

The legislative process surrounding the AI Act was marred by a significant lack of civil dialogue - the obligation of the EU institutions to engage in an open, transparent, and regular process with representative associations and civil society. To date, there is no legal framework regulating the European civil dialogue, although civil society has been calling for it in various contexts. Since the announcement of the AI Act, civil society has made great efforts to coordinate horizontally to feed into the process, engaging diverse organisations at the national and European levels. In the absence of clear guidelines on how civil society input should be included ahead of the drafting of EU laws and policies, the framework proposed by the European Commission to address the widespread impact of AI technologies on society and fundamental rights was flawed. Throughout the preparatory and political stages, the process remained opaque, with limited transparency regarding decision-making and little opportunity for input from groups representing a rights-based approach, particularly in the Council and during trilogue negotiations. This absence of inclusivity raises concerns about the adopted text’s impact on society at large. It not only undermines people’s trust in the legislative process and the democratic legitimacy of the AI Act but also hampers its key objective to guarantee the safety and fundamental rights of all.

However, in contrast to public interest and fundamental rights advocacy groups, market and for-profit lobbyists and representatives of law enforcement authorities and security services had great influence in the legislative process of the AI Act. This imbalanced representation favoured commercial interests and the narrative of external security threats over the broader societal impacts of AI.

Read our analysis in full here.


Symposium on Military AI and the Law of Armed Conflict: Human-machine Interaction in the Military Domain and the Responsible AI Framework


04.04.24

[Dr Ingvild Bode is Associate Professor at the Centre for War Studies, University of Southern Denmark. She is the Principal Investigator of the European Research Council-funded project AutoNorms: Weaponised Artificial Intelligence, Norms, and Order (08/2020-07/2025) and also serves as the co-chair of the IEEE-SA Research Group on AI and Autonomy in Defence Systems.

Anna Nadibaidze is a researcher for the European Research Council funded AutoNorms project based at the Center for War Studies, University of Southern Denmark.]


Artificial intelligence (AI) technologies are increasingly part of military processes. Militaries use AI technologies, for example, for decision support and in combat operations, including as part of weapon systems. Contrary to some previous expectations, especially notable popular culture depictions of ‘sentient’ humanoid machines willing to destroy humanity or ‘robot wars’ between machines, integrating AI into the military does not mean that AI technologies replace humans. Rather, military personnel interact with AI technologies, and likely at an increasing frequency, as part of their day-to-day activities, which include the targeting process. Some militaries have adopted the language of human-machine teaming to describe these instances of human-machine interaction. This term can refer to humans interacting with uncrewed, (semi-)autonomous platforms or with AI-based software systems. Such developments are increasingly promoted as key trends in defence innovation. For instance, the UK Ministry of Defence considers the “effective integration of humans, AI and robotics into warfighting systems—human-machine teams” to be “at the core of future military advantage”.

At the same time, many states highlight that they intend to develop and use these technologies in a ‘responsible’ manner. The framework of Responsible AI in the military domain is growing in importance across policy and expert discourse, moving beyond the focus on autonomous weapon systems that can “select and apply force without human intervention”. Instead, this framework assumes that AI will be integrated into various military processes and interact with humans in different ways, and therefore it is imperative to find ways of doing so responsibly, for instance by ensuring understandability, reliability, and accountability.

Our contribution connects these intersecting trends by offering a preliminary examination of the extent to which the Responsible AI framework addresses challenges attached to changing human-machine interaction in the military domain. To do so, we proceed in two steps: first, we sketch the kind of challenges raised by instances of human-machine interaction in a military context. We argue that human-machine interaction may fundamentally change the quality of human agency, understood as the ability to make choices and act, in warfare. It does so by introducing a form of distributed agency in military decision-making, including in but not limited to the targeting process. Therefore, there is a need to examine the types of distributed agency that will emerge, or have already emerged, as computational techniques under the ‘AI’ umbrella term are increasingly integrated into military processes. Second, we consider the extent to which the emerging Responsible AI framework, as well as principles associated with it, demonstrates potential to address these challenges.

1. Human-machine Interaction and Distributed Agency

Appropriate forms of human agency and control over use-of-force decision-making are necessary on ethical, legal, and security grounds. (Western) military thinking on human-machine or human-AI teaming recognises this. Human-machine interaction involves sharing cognitive tasks with AI technologies as their use is chiefly associated with the speedy processing of large amounts of data/information. It follows that any decision made in the context of human-machine interaction implies a combination of ‘human’ and ‘machine’ decision-making. This interplay changes how human agency is exercised. Instead of producing zero-sum outcomes, we are likely to encounter a form of distributed agency in military decisions that rely on human-machine interaction. Above all, distributed agency involves a blurring of the distinction between instances of ‘human’ and ‘AI’ agency.

Understanding this distributed agency could, in the first place, consider particularities of how ‘human’ and ‘AI’ agents make choices and act and what this means for interaction dynamics. This is an evolving topic of interest as AI technologies are increasingly integrated into the military domain. The reality of distributed agency is not clear-cut. Any ‘AI agency’ results from human activity throughout the algorithmic design and training process that has become ‘invisible’ at the point of use. This human activity includes programmers who create the basic algorithmic parameters, workers who prepare the data that training machine learning algorithms requires through a series of iterative micro-tasks often subsumed as ‘labelling data’, but also the people whose data is used to train such algorithms. It is therefore important to think about ‘human’ and ‘AI’ agency as part of a relational, complex, socio-technical system. From the perspective of the many groups of humans that are part of this system, interacting with AI creates both affordances, or action potentials, and constraints. Studying different configurations of this complex system could then advance our understanding of distributed agency.

These initial insights into how technological affordances and constraints shape distributed agency matter in the military domain because they affect human decision-making, including in a warfare context. What does it actually mean for humans to work with AI technologies? The long-established literature in human-factor analysis describes numerous fundamental obstacles that people face when interacting with complex systems integrating automated and AI technologies. These include “poor understanding of what the systems are doing, high workload when trying to interact with AI systems, poor situation awareness (SA) and performance deficits when intervention is needed, biases in decision making based on system outputs, and degradation”. Such common operational challenges of human-machine interaction raise fundamental political, ethical, legal, social, and security concerns. There are particularly high stakes in the military domain because AI technologies used in this context have the potential to inflict severe harm, such as physical injury, human rights violations, death, and (large-scale) destruction.


2. Responsible AI and Challenges of Human-machine Interaction

The Responsible AI framework has been gaining prominence among policymaking and expert circles of different states, especially the US and its allies. In 2023, the US released its Political Declaration on Responsible Military Use of AI and Autonomy, endorsed by 50 other states as of January 2024. US Deputy Secretary of Defense Kathleen Hicks stated that the new Replicator Initiative, aimed at producing large numbers of all-domain, attritable autonomous systems, will be carried out “while remaining steadfast to [the DoD’s] responsible and ethical approach to AI and autonomous systems, where DoD has been a world leader for over a decade”. At the same time, the concept of responsible military AI use has also been entrenched by the Responsible AI in the Military Domain (REAIM) Summit co-hosted by the Netherlands and the Republic of Korea. More than 55 states supported the Summit’s Call to Action in February 2023, and a second edition of the event is expected in Seoul in 2024.

The Responsible AI framework broadens the debate beyond lethal autonomous weapon systems (LAWS), which have been the focus of discussions at the UN CCW in Geneva throughout the last decade. The effort to consider different uses of AI in the military, including in decision support, is a step towards recognising the challenges of human-machine interaction and potential new forms of distributed agency. These changes are happening in various ways and do not necessarily revolve around ‘full’ autonomy, weapon systems, or humans ‘out of the loop’. Efforts to consider military systems integrating autonomous and AI technologies as part of lifecycle frameworks underline this. Such frameworks demonstrate that situations of human-machine interaction need to be addressed and occur at various lifecycle stages, from research & development, procurement & acquisition, and test, evaluation, verification and validation (TEVV) to potential deployment and retirement. Addressing such concerns therefore deserves the type of debate offered by the REAIM platform: a multi-stakeholder discussion representing global perspectives on (changing) human-machine interaction in the military.

At the same time, the Responsible AI framework is nebulous and imprecise in its guidance on ensuring that challenges of human-machine interaction are addressed. So far, it functions as a “floating signifier”, in the sense that the concept can be understood in different ways, often depending on the interests of those who interpret it. This was already visible during the first REAIM Summit in The Hague, where most participants agreed on the importance of being responsible, but not on how to get there. Some of the common themes among the REAIM and US initiatives include commitment to international law, accountability, and responsibility, ensuring global security and stability, human oversight over military AI capabilities, as well as appropriate training of personnel involved in interacting with the capabilities. But beyond these broad principles, it remains unclear what constitutes ‘appropriate’ forms of human-machine interaction, and the forms of agency these involve, in relation to acting responsibly and in conformity with international law – that, in itself, offers unclear guidance. It must be noted, however, that defining ‘Responsible AI’ is no easy task because it requires considering the various dimensions of a complex socio-technical system which includes not only the technical aspects but also political, legal, and social ones. It has already been a challenging exercise in the civilian domain to pinpoint the exact characteristics of this concept, although key terms such as explainability, transparency, privacy, and security are often mentioned in Responsible AI strategies.

Importantly, the Responsible AI framework allows for various interpretations of the form, or mechanism, of global governance needed to address the challenges of human-machine interaction in the military. There are divergent approaches on the appropriate direction to take. For instance, US policymakers seek to “codify norms” for the responsible use of AI through the US political declaration, a form of soft law, interpreted by some experts as a way for Washington to promote its vision in its perceived strategic competition with Beijing. Meanwhile, many states favour a global legal and normative framework in the form of hard law, such as a legally binding instrument establishing appropriate forms of human-machine interaction, especially in relation to targeting, including the use of force. The UN’s 2023 New Agenda for Peace urges states not only to develop national strategies on responsible military use of AI, but also to “develop norms, rules and principles…through a multilateral process” which would involve engagement with industry, academia, and civil society. Some states are trying to take steps in this direction; for instance, Austria took the initiative by co-sponsoring a UN General Assembly First Committee resolution on LAWS, which was adopted with overwhelming support in November 2023. Overall, the Responsible AI framework’s inherent ambiguity is an opportunity for those favouring a soft law approach, especially actors who promote political declarations or sets of guidelines and argue that these are enough. Broad Responsible AI guidelines might symbolise a certain commitment or obligations, but at this stage they are insufficient to address already existing challenges to human-machine interaction in a security and military context – not least because they may not be connected to a concrete pathway toward operationalisation and implementation.

Note: This essay outlines initial thinking that forms the basis of a new research project called “Human-Machine Interaction: The Distributed Agency of Humans and Machines in Military AI” (HuMach) funded by the Independent Research Fund Denmark. Led by Ingvild Bode, the project will start later in 2024.



Symposium on Military AI and the Law of Armed Conflict: A (Pre)cautionary Note About Artificial Intelligence in Military Decision Making


04.04.24


[Georgia Hinds is a Legal Adviser with the ICRC in Geneva, working on the legal and humanitarian implications of autonomous weapons, AI and other new technologies of warfare. Before joining the ICRC, she worked in the Australian Government, advising on public international law including international humanitarian and human rights law, and international criminal law, and served as a Reservist Officer with the Australian Army. The views expressed on this blog are those of the author alone and do not engage the ICRC, or previous employers, in any form.]

Introduction

Most of us would struggle to define ‘artificial intelligence.’ Fewer still could explain how it functions. And yet AI technologies permeate our daily lives. They also pervade today’s battlefields. Over the past eighteen months, reports of AI-enabled systems being used to inform targeting decisions in contemporary conflicts have sparked debates (including on this platform) around legal, moral and operational issues.

Sometimes called ‘decision support systems’ (DSS), these are computerized tools that are designed to aid human decision-making by bringing together and analysing information, and in some cases proposing options as to how to achieve a goal [see, e.g., Bo and Dorsey]. Increasingly, DSS in the military domain are incorporating more complex forms of AI, and are being applied to a wider range of tasks.

These technologies do not actually make decisions, and they are not necessarily part of weapon systems that deliver force. Nevertheless, they can significantly influence the range of actions and decisions that form part of military planning and targeting processes.

This post considers implications for the design and use of these tools in armed conflict, arising from international humanitarian law (IHL) obligations, particularly the rules governing the conduct of hostilities.

Taking ‘Constant Care’, How Might AI-DSS Help or Hinder?

Broadly, in the conduct of military operations, parties to an armed conflict must take constant care to spare the civilian population, civilians and civilian objects.

The obligation of constant care is an obligation of conduct, to mitigate risk and prevent harm. It applies across the planning or execution of military operations, and is not restricted to ‘attacks’ within the meaning of IHL (paras 2191, 1936, 1875). It includes, for example, ground operations, establishment of military installations, defensive preparations, quartering of troops, and search operations. It has been said that this requirement to take ‘constant care’ must “animate all strategic, operational and tactical decision-making.”

In assessing the risk to civilians that may arise from the use of an AI-DSS, a first step must be assessing whether the system is actually suitable for the intended task. Applying AI – particularly machine learning – to problems for which it is not well suited has the potential to actually undermine decision-making (p 19). Automating processes that feed into decision-making can be advantageous where quality data is available and the system is given clear goals (p 12). In contrast, “militaries risk facing bad or tragic outcomes” where they provide AI systems with clear objectives but in uncertain circumstances, or where they use quality data but task AI systems with open-ended judgments. Uncertain circumstances abound in armed conflict, and the contextual, qualitative judgements required by IHL are notoriously difficult. Further, AI systems generally lack the ability to transfer knowledge from one context or domain to another (p 207), making it potentially problematic to apply an AI-DSS in a different armed conflict, or even in different circumstances in the same conflict. It is clear, then, that whilst AI systems may be useful for some tasks in military operations (e.g. navigation, maintenance and supply chain management), they will be inappropriate for many others.

Predictions about enemy behaviour will likely be far less reliable than those about friendly forces, not only due to a lack of relevant quality data, but also because armed forces will often adopt tactics to confuse or mislead their enemy. Similarly, AI-DSS would struggle to infer something open-ended or ill-defined, like the purpose of a person’s act. A more suitable application could be in support of weaponeering processes, and the modelling of estimated effects, where such systems are already deployed, and where the DSS should have access to greater amounts of data derived from tests and simulations.

Artificial Intelligence to Gain the ‘Best Possible Intelligence’?

Across military planning and targeting processes, the general requirement is that decisions required by IHL’s rules on the conduct of hostilities must be based on an assessment of the information from all sources reasonably available at the relevant time. This includes an obligation to proactively seek out and collect relevant and reasonably available information (p 48). Many military manuals stress that the commander must obtain the “best possible intelligence,” which has been interpreted as requiring information on concentrations of civilian persons, important civilian objects, specifically protected objects and the environment (See Australia’s Manual on the Law of Armed Conflict (1994) §§548 and 549).

What constitutes the best possible intelligence will depend upon the circumstances, but generally commanders should be maximising their available intelligence, surveillance and reconnaissance assets to obtain up-to-date and reliable information.

Considering this requirement to seek out all reasonably available information, it is entirely possible that the use of AI-DSS may assist parties to an armed conflict in satisfying their IHL obligations, by synthesising or otherwise processing certain available sources of information (p 203). Indeed, whilst precautionary obligations do not require parties to possess highly sophisticated means of reconnaissance (pp 797-8), it has been argued (p 147) that, if they do possess AI-DSS and it is feasible to employ them, IHL might actually require their use.

In the context of urban warfare in particular, the ICRC has recommended (p 15) that information about factors such as the presence of civilians and civilian objects should include open-source repositories such as the internet. Further, specifically considering AI and machine learning, the ICRC has concluded that, to the extent that AI-DSS tools can facilitate quicker and more widespread collection and analysis of this kind of information, they could well enable better decisions by humans that minimize risks for civilians in conflict. The use of AI-DSS to support weaponeering, for example, may assist parties in choosing means and methods of attack that can best avoid, or at least minimize, incidental civilian harm.

Importantly, the constant care obligation and the duty to take all feasible precautions in attack are positive obligations, as opposed to other IHL rules which prohibit conduct (eg. the prohibitions on indiscriminate or disproportionate attacks). Accordingly, in developing and using AI-DSS, militaries should be considering not only how such tools can assist to achieve military objectives with less civilian harm, but how they might be designed and used specifically for the objective of civilian protection. This also means identifying or building relevant datasets that can support assessments of risks to, and impacts upon civilians and civilian infrastructure.

Practical Considerations for Those Using AI-DSS

When assessing the extent to which an AI-DSS output reflects current and reliable information sources, commanders must factor in AI’s limitations in terms of predictability, understandability and explainability (see further detail here). These concerns are likely to be especially acute with systems that incorporate machine learning algorithms that continue to learn, potentially changing their functioning during use.

Assessing the reliability of AI-DSS outputs also means accounting for the likelihood that an adversary will attempt to provide disinformation such as ruses and deception, or otherwise frustrate intelligence acquisition activities. AI-DSS currently remain vulnerable to hacking and spoofing techniques that can lead to erroneous outputs, often in ways that are unpredictable and undetectable to human operators.

Further, like any information source in armed conflict, the datasets on which AI-DSS rely may be imperfect, outdated or incomplete. For example, “No Strike Lists” (NSL) can contribute to a verification process by supporting faster identification of certain objects that must not be targeted. However, an NSL will only be effective so long as it is current and complete; the NSL itself is not the reality on the ground. More importantly, the NSL usually only consists of categories of objects that benefit from special protection or the targeting of which is otherwise restricted by policy. However, the protected status of objects in armed conflict can change – sometimes rapidly – and most civilian objects will not appear on the list. In short then, the presence of an object on an NSL contributes to identifying protected objects when verifying the status of a potential target, but the absence of an object from the list does not imply that it is a military objective.

Parallels can be drawn with AI-DSS tools, which rely upon datasets to produce “a technological rendering of the world as a statistical data relationship” (p 10). The difference is that, whilst NSLs generally rely upon a limited number of databases, AI-DSS tools may be trained with, and may draw upon, such a large volume of datasets that it may be impossible for the human user to verify their accuracy. This makes it especially important for AI-DSS users to be able to understand what underlying datasets are feeding the system, the extent to which this data is likely to be current and reliable, and the weighting given to particular data in the DSS output (paras 19-20). Certain critical datasets (e.g. NSLs) may need to be labelled with overriding prominence by default, whilst, for others, decision-makers may need to have the ability to adjust how they are factored in.

In certain circumstances, it may be appropriate for a decision-maker to seek out expert advice concerning the functioning or underlying data of an AI-DSS. As much has been suggested in the context of cyber warfare, in terms of seeking to understand the effects of a particular cyber operation (p 49).

In any event, it seems unlikely that it would be reasonable for a commander to rely solely on the output of one AI-DSS, especially during deliberate targeting processes where more time is available to gather and cross-check against different and varied sources. Militaries have already indicated that cross-checking of intelligence is standard practice when verifying targets and assessing proportionality, and an important aspect of minimising harm to civilians. This practice should equally be applied when employing AI-DSS, ideally using different kinds of intelligence to guard against the risks of embedded errors within an AI-DSS.

If a commander, planner or staff officer did rely solely on an AI-DSS, the reasonableness of their decision would need to be judged not only in light of the AI DSS output, but also taking account of other information that was reasonably available.

Conclusion

AI-DSS are often claimed to hold the potential to increase IHL compliance and to produce better outcomes for civilians in armed conflict. In certain circumstances, the use of AI DSS may well assist parties to an armed conflict in satisfying their IHL obligations, by providing an additional available source of information.

However, these tools may be ill-suited for certain tasks in the messy reality of warfare, especially noting their dependence on quality data and clear goals, and their limited capacity for transfer across different contexts. In some cases, drawing upon an AI-DSS could actually undermine the quality of decision-making, and pose additional risks to civilians.

Further, even though an AI-DSS can draw in and synthesise data from many different sources, this does not absolve a commander of their obligation to proactively seek out information from other reasonably available sources. Indeed, the way in which AI tools function – their limitations in terms of predictability, understandability and explainability – makes it all the more important that their output be cross-checked.

Finally, AI-DSS must only be applied within legal, policy and doctrinal frameworks that ensure respect for international humanitarian law. Otherwise, these tools will only serve to replicate, and arguably exacerbate, unlawful or otherwise harmful outcomes at a faster rate and on a larger scale.



USA

New Rule Strengthening Federal Job Protections Could Counter Trump Promises To Remake The Government


The Office of Personnel Management regulations will bar career civil servants from being reclassified as political appointees or as other at-will workers, who are more easily dismissed from their jobs.


Associated Press
Updated on: 4 April 2024 



The government’s chief human resources agency issued a new rule on Thursday making it harder to fire thousands of federal employees, hoping to head off former President Donald Trump’s promises to radically remake the workforce along ideological lines if he wins back the White House in November.

The Office of Personnel Management regulations will bar career civil servants from being reclassified as political appointees or as other at-will workers, who are more easily dismissed from their jobs. It comes in response to Schedule F, an executive order Trump issued in 2020 that sought to allow for reclassifying tens of thousands of the 2.2 million federal employees and thus reduce their job security protections.

President Joe Biden nullified Schedule F upon taking office. But if Trump, a Republican, were to revive it during a second administration, he could dramatically increase the number of federal employees – currently around 4,000 – who are considered political appointees and typically change with each new president.

In a statement issued Thursday, Biden, a Democrat, called the rule a “step toward combatting corruption and partisan interference to ensure civil servants are able to focus on the most important task at hand: delivering for the American people.”

The potential effects of the change are wide-reaching since how many federal employees might have been affected by Schedule F is unclear. The National Treasury Employee Union used freedom of information requests to obtain documents suggesting that workers like office managers and specialists in human resources and cybersecurity might have been among those subject to reclassification.

The new rule moves to counter a future Schedule F order by spelling out procedural requirements for reclassifying federal employees and clarifying that civil service protections accrued by employees can’t be taken away, regardless of job type. It also makes clear that policymaking classifications apply to noncareer, political appointments.

“It will now be much harder for any president to arbitrarily remove the nonpartisan professionals who staff our federal agencies just to make room for hand-picked partisan loyalists,” National Treasury Employees Union President Doreen Greenwald said in a statement.

Good government groups and liberal think tanks and activists have cheered the rule. They viewed cementing federal worker protections as a top priority given that replacing existing government employees with new, more conservative alternatives is a key piece of the conservative Heritage Foundation’s nearly 1,000-page playbook known as Project 2025.

That plan calls for vetting and potentially firing scores of federal workers and recruiting conservative replacements to wipe out what leading Republicans have long decried as the “deep state” governmental bureaucracy.

Skye Perryman, president and CEO of Democracy Forward, which has led a coalition of nearly 30 advocacy organizations supporting the rule, called it “extraordinarily strong” and said it can effectively counter the “highly resourced, anti-democratic groups” behind Project 2025.

“This is not a wonky issue, even though it may be billed that way at times,” Perryman said. “This is really foundational to how we can ensure that the government delivers for people and, for us, that’s what a democracy is about.”

The final rule, which runs to 237 pages, is being published in the Federal Register and is set to formally take effect next month. The Office of Personnel Management first proposed the changes last November, then reviewed and responded to 4,000-plus public comments on them. Officials at some top conservative organizations were among those opposing the new rule, but around two-thirds of the comments were supportive.

If Trump wins another term, his administration could direct the Office of Personnel Management to draft new rules. But the process takes months and requires detailed explanation on why new regulations would be improvements — potentially allowing for legal challenges to be brought by opponents.

Rob Shriver, deputy director of the Office of Personnel Management, said the new rule ensures that federal employee protections “cannot be erased by a technical, HR process” which he said “Schedule F sought to do.”

“This rule is about making sure the American public can continue to count on federal workers to apply their skills and expertise in carrying out their jobs, no matter their personal political beliefs,” Shriver said on a call with reporters.

He noted that 85% of federal workers are based outside the Washington area and are “our friends, neighbors and family members,” who are “dedicated to serving the American people, not political agendas.”

South Korea’s president meets leader of striking doctors as he seeks to end their walkouts


By Hyung-jin Kim - Associated Press - Thursday, April 4, 2024

SEOUL, South Korea (AP) — South Korea’s president met the leader of thousands of striking junior doctors on Thursday and promised to respect their position during future talks over the government’s contentious push to sharply increase medical school admissions.

The meeting between President Yoon Suk Yeol and Park Dan, head of an emergency committee for the Korea Intern Resident Association, was the first of its kind since more than 90% of the country’s 13,000 trainee doctors walked off the job in February, disrupting hospital operations.


During a lengthy televised public address Monday, Yoon defended his plan to recruit 2,000 more medical students each year, from the current cap of 3,058. But he said his government remains open to talks if doctors come up with a unified proposal that logically explains their calls for a much smaller hike of the enrollment quota.

On Thursday, Yoon and Park met for more than two hours, during which “the president said he would respect the position of trainee doctors in the event of talks with the medical circle on medical reform issues including an increase of doctors,” according to Yoon’s office.

It didn’t say whether the government plans any immediate talks with the doctors, or whether Yoon’s comments mean he is willing to reduce the size of his proposed medical school admission increase. But Yoon has faced calls from many, including some in his own conservative ruling party, to make concessions, as the party’s candidates face an uphill battle against their liberal rivals ahead of next week’s crucial parliamentary elections.

During the meeting, Yoon also listened to Park’s views on problems facing South Korea’s medical system, and the two exchanged opinions on how to improve working conditions for interns and medical residents, Yoon’s office said in a statement.

Yoon has said the 2,000-student enrollment increase is the minimum necessary, given that South Korea has one of the world’s most rapidly aging populations and its doctor-to-patient ratio is the lowest among advanced economies.

But many doctors have argued that universities can’t cope with such an abrupt increase in the number of students and that it would ultimately undermine the quality of the country’s medical services. Critics, however, say doctors, who belong to one of the best-paid professions in South Korea, simply worry that a bigger supply of doctors would result in lower incomes in the future.

Public surveys show that a majority of ordinary South Koreans support Yoon’s plan. The doctors’ walkouts have led to hundreds of cancelled surgeries and other medical treatments at hospitals and deepened worries about a prolonged medical impasse. Observers say ordinary people are increasingly fed up with the protracted confrontation between the government and doctors and want the strikes to end.


 

Kenya: Health Crisis Persists in Kenya As Doctors Reject Govt Offer




Nairobi — The health crisis will persist in Kenya after doctors rejected a government offer aimed at ending a two-week-long strike that has severely disrupted health services.

Abidan Mwachi, Chairman of the Kenya Medical Practitioners, Pharmacists, and Dentists Union (KMPDU), announced the rejection on social media platform X, stating firmly, "We decline these proposals in total," citing the government's failure to fulfill its promise to pay salary arrears.

The strike, initiated on March 15 by the KMPDU, which represents more than 7,000 members, demanded the payment of salary arrears and the immediate hiring of trainee doctors, among other things.

In response, the government announced measures to address the doctors' demands, claiming that salary arrears had been settled and trainee doctors would be hired starting Thursday with a budget allocation of Ksh2.4 billion ($18.39 million).

However, Mwachi's rejection underscores the ongoing discord between doctors and the government, prolonging the healthcare crisis, with patients struggling to access care.

Rooted in a 2017 collective bargaining agreement (CBA), doctors' demands include adequate medical insurance cover for themselves and their dependents, along with addressing salary payment delays and compensating doctors pursuing higher degrees while working in public hospitals.

Kenya's health sector, plagued by funding shortages and staffing deficiencies, has endured recurring strikes.

The current standoff exacerbates the disruption in medical services, amplifying concerns over the country's healthcare infrastructure.

OPINION...
Why Israel insists on defending its lies about the October attack


April 4, 2024 at 8:30 am

Pro-Palestinian Jewish American demonstrators rally in New York City, United States on February 22, 2024
[Selçuk Acar/Anadolu Agency]


by Dr Mustafa Fetouri

When you try something and find out that it is useless, it is stupid, and even more useless, to continue with it. At least, that is logical, and that is how the human brain works. Not in Israel, though.

Since the 7 October attack on its military bases and semi-militarised kibbutzim, in which Hamas and other Palestinian fighters strategically surprised the Israeli military in and around the Gaza Strip, Israel has kept putting out the kind of stories that have since been checked and found to be nothing but cheap propaganda. Israeli decision-makers and the country’s media machine appear to be telling themselves to “lie until you believe yourself, and others will soon follow”.

This pattern has been repeated, time and again, in blatant, cheap attempts to further dehumanise the Palestinians – all of them, not just Hamas fighters – by portraying them as savages and cold-blooded murderers bent on killing civilians and raping women.

Since old habits die hard, if they ever do, Israel’s habit of lying is part of its short history of 75 years. From day one, it has used lies, and supportive propagandists around the world, to perpetuate its own fake stories and untrue claims, and the world used to believe – to a degree – every word the Israelis put out about any event involving the Palestinians. It has done so in almost all previous atrocities from 1947 to date, and it is repeating the same practices now in its genocide in Gaza.

Take, for example, the attack on the World Central Kitchen convoy on Monday, 1 April, which killed seven of the charity’s workers, and how Israel quickly admitted responsibility and offered condolences, with Prime Minister Benjamin Netanyahu regretting what happened and describing the airstrike as “unintentional”. The reality is that all charities coordinate their movements with Israeli forces on a minute-by-minute basis, so the Israelis know where they are at any moment, on any given day. So Netanyahu’s “unintentional” description is a big lie!

More shameful still, and with the usual Netanyahu arrogance and contempt for the civilians who help Palestinians, he did not even bother to apologise for what happened, instead casting the killing as something that “happens in war”. He knew no one would believe him, as the world has caught him on camera many times before, lying not only to the public but also to his foreign counterparts, including Joe Biden, who has been supporting the Gaza genocide despite Netanyahu’s open contempt for him.

No one really expected Netanyahu to apologise publicly. For him, it is already too much for his ego to even admit – a rarity anyway – that his forces did actually kill aid workers, including three British citizens, as well as one each from the United States/Canada, Poland and Australia. Apologies were left to the army spokesman and the usually stone-faced President. The victims’ countries have demanded an investigation but, as is usually the case, Israel might investigate and, as usual, it will find that its army was not to blame. It might as well blame the victims for being in the wrong place at the wrong time. And, of course, Israel will never allow any independent investigation of its army, which it always defends as the most moral army on the planet, despite it being responsible for killing more than 32,000 Palestinians since 7 October.

In its relentless efforts to have others share its lies and twisted narratives of any major event, Israel tries to get bodies like the United Nations to support its stories but, in many instances, it ends up achieving the opposite.

A good example of this is the UN envoy on Sexual Violence in Conflict, Ms. Pramila Patten, who visited Israel in January to look at how, according to Israel, Hamas committed rape and other sexual crimes during its raid. After spending some two weeks “investigating” what happened, the lawyer published her 23-page report, which says a lot but offers little substance in terms of facts and proof that Hamas fighters did, indeed, commit war crimes.

Much of the lack of evidence was actually caused by the Israeli authorities, who tried to control what the UN team had access to; what potential eyewitnesses, if any, might say and how they said it; and, above all, where the team could go and whom it was allowed to meet.

Instead of thoroughly investigating the Israeli claims of war crimes, the report talks more about what Hamas did not do than about what its fighters might actually have done. Since 7 October, Israel has been accusing Hamas of things it did not do, rather than of what its fighters really did, which is why most of the Israeli claims have been – and still are – found to be false.

Yet the report should be recognised, alas, for what it did not include rather than for what it did include as fully investigated issues, with findings verified and supported by evidence. From the outset, Ms. Patten states that her report is not “investigative” in nature, despite following the methodologies of similar bodies. In Paragraph 78, the report not only emphasises its non-investigative nature, but also reminds the reader that the entire mission Ms. Patten led to Israel was not a fact-finding one, either. Whatever was not spelled out in the team’s mandate, Israel made sure the team did not get, despite it being there at Israel’s own invitation.

Israel also made sure that Ms. Patten and her team did not collect “information and/or draw conclusions” enabling them to “attribute” some of the alleged violations to “specific armed groups” because such attribution would require a “fully-fledged investigative process”. Full access is not something Israel will ever permit.

While the report said there were “reasonable” grounds to believe that Hamas and others might have committed war crimes, including rape, it also highlights some simple facts: one, its team never met any victims of rape; two, none of the visual evidence it reviewed provided any “tangible” evidence of rape; and three, it ended by urging Israel to allow it full access to “complete” the investigation.

Furthermore, the document calls on Israel to “grant access” to the International Commission of Inquiry for a detailed investigation. According to British lawyer and Secretary-General of the Women’s International League, Madeleine Rees, Israel has rejected this many times before and, in relation to 7 October, it rejected her own requests for an investigation.

The only takeaway one comes away with after reading the report is this: the Israeli narrative must be believed and taken as fact. Today’s Israel still lives in a world where its word used to be taken at face value, without any further scrutiny – the one thing Israel really hates and despises.



Israeli media questions awarding of prize to fraudulent lawyer behind Hamas ‘mass rape’ allegations

March 28, 2024 

Cochav Elkayam-Levy in Jerusalem, 23 November 2023. 
[Photo by NICOLAS MAETERLINCK/BELGA MAG/AFP via Getty Images]

The Israeli newspaper Yedioth Ahronoth has published an exposé accusing an Israeli lawyer, who claimed Hamas fighters committed systematic sexual violence on 7 October, of “fraud, and scamming donors”.

On 21 March, the Ministry of Education awarded the prestigious Israel Prize in the field of Solidarity to Dr. Cochav Elkayam-Levy, a lawyer and political science lecturer at the Hebrew University.

At the time, Education Minister, Yoav Kisch, hailed Elkayam-Levy’s “work in the international arena to expose the atrocities of Hamas” as “a crucial pillar in our ongoing struggle for justice and in our efforts to confront the perpetrators.”

“The people of Israel deeply value your work and extend their heartfelt gratitude to you,” he added.

Elkayam-Levy rose to prominence after claiming to have founded the so-called Civil Commission on October 7 Crimes by Hamas against Women and Children, and after spearheading the spread of misinformation – later debunked – to international media outlets including the New York Times and CNN.

However, on Monday, Ynet, the media outlet affiliated with Yedioth Ahronoth, questioned the ministry’s decision to favour her over other, more professional and reliable, women in this field.

“People have disassociated themselves from her because her research is inaccurate,” an Israeli government official told Ynet.

The government source cited how Elkayam-Levy disseminated a story about Palestinian fighters “slicing the belly of a pregnant woman – a story proven to be untrue, and she spread it in the international media.”


“It’s no joke. Little by little, professionals have begun to distance themselves from her because she is unreliable,” the source added.

It had previously been exposed that Elkayam-Levy also presented images of female Kurdish fighters killed in combat as Jewish Israeli women who had been killed on 7 October.

According to Ynet, Elkayam-Levy has also conned Jewish donors and channelled the money into her personal bank account.

Ynet said Elkayam-Levy appealed for $8 million to fund her non-existent “civil commission” in 2024, of which $1.5 million would go to “management and administration”.

“Rahm Emanuel, the US ambassador to Japan, donated money to her, she took donations from a lot of people, and started asking for money for lectures,” the Israeli official added.

“At first she really was very active, and it was very nice,” the government source told Ynet. “And then she started calling herself ‘civil commission.’ People got confused, members of [the US] Congress turned to people who work with Israel and asked what this was about – did Israel create a commission? It’s a confusing name.”

“And to the question of is there such a thing at all? Is there such a body? The answer is: no,” the source added. “She is the body. She is this civil commission.”

Israel’s Channel 13 has also questioned Elkayam-Levy’s credibility.

“They mention her starting a ‘civil commission’ to raise awareness. It bears mentioning that the name ‘civil commission’ is very bombastic. The commission is her. And she is the commission,” Channel 13’s Raviv Drucker says.

Drucker cited the head of the Israel Prize committee as saying that Elkayam-Levy had been given the award because “she authored the Horrors Report” – a report on the mass rapes.

“But then we realise that there is no Horrors Report. There is simply no such report. It hasn’t been written, not by her and not by anyone,” Drucker explained.

“There is this letter she sent two weeks after the catastrophe, after the slaughter of 7 October. But it was just a collection of newspaper headlines, a letter only a few pages long. There is no such report,” he added.

UN rights council to consider call for arms embargo on Israel

April 4, 2024

Protestors hold a banner that says “Stop Arming Israel”, in Paris, France on 01 March 2024
 [Telmo Pinto/SOPA Images/LightRocket via Getty Images]

The United Nations Human Rights Council will tomorrow consider a draft resolution calling for an arms embargo on Israel, citing “plausible risk of genocide in Gaza.”

The draft resolution’s text condemns “the use of explosive weapons with wide-area effects by Israel” in populated areas of the Gaza Strip and demands that Tel Aviv “uphold its legal responsibility to prevent genocide.”

If the draft resolution is adopted, it will be the first position taken by the Human Rights Council since Israel launched its brutal bombing campaign in October 2023.

Pakistan submitted the text on behalf of 55 of the 56 UN member states that form part of the Organisation of Islamic Cooperation (OIC) – all except Albania. The draft resolution is also co-sponsored by Bolivia, Cuba and the Palestinian mission in Geneva.

The eight-page draft demands that Israel end its occupation of Palestinian territory and immediately lift its “illegal blockade” on the Gaza Strip and all other forms of “collective punishment”.

It calls upon countries to stop the sale or transfer of arms, munitions and other military equipment to Israel, citing “a plausible risk of genocide in Gaza”.

It also voices grave concern at the effects of explosive weapons on hospitals, schools, water, electricity and shelter in Gaza.

The draft resolution also calls for an immediate ceasefire in Gaza and “condemns Israeli actions that may amount to ethnic cleansing” and “the use of starvation of civilians as a method of warfare” and urges all concerned countries to prevent the forced displacement of Palestinians inside the Gaza Strip.

The United Nations Relief and Works Agency for Palestine Refugees (UNRWA) must receive adequate funding, the text of the resolution says.

Finally, it “reaffirms that criticism of Israel’s violations of international law must not be confused with anti-Semitism.”


Military pier project in Gaza could be 'on ice'

Killings cast pall on prospects for US aid plan, which critics say would needlessly put troops and others at risk



KELLEY BEAUCAR VLAHOS
APR 04, 2024

The Israeli killing of seven international aid workers this week has already had a chilling effect on the prospects of President Joe Biden’s aid surge project, which is supposed to deploy the U.S. military to build a causeway off the coast of Gaza to deliver food into the enclave, ostensibly next month.

Meanwhile, fielding questions from reporters at the White House yesterday after the killing of the World Central Kitchen workers, spokeswoman Karine Jean-Pierre said the temporary pier would be operational “in a couple of weeks.”

This is highly ambitious and likely not true. An Army spokesman claims the ships carrying the supplies for both the floating pier and the causeway – which is supposed to be anchored at a yet-to-be-known location on the Gaza beach – are “streaming” (POLITICO’s words, more on that below) toward the region. But the infrastructure still has to be built, and most estimates don’t expect completion until May.

More importantly, POLITICO reports that the United Nations World Food Programme (WFP) was likely tapped by the U.S. to deliver the aid into Gaza once it hit the beach, but is now having second thoughts because of the World Central Kitchen killings. As reported, Chef Jose Andrés’s organization had been coordinating for months with the Israel Defense Forces (IDF), and its convoy was known to the IDF on the day of the deadly strikes. The workers were targeted and blown to bits anyway.

Now, WFP is denying there was any formal agreement between the aid organization and the U.S., and says it wants more assurances of its people’s safety before going ahead with any such contract.

“Any decision regarding the UN participation in the maritime corridor setup needs to be fully agreed on with the humanitarian agencies operating in Gaza, under conditions that allow for safe, sustained and scaled-up assistance to reach people in need,” Steve Taravella, a spokesman for the WFP said Wednesday.

This comes amid public concerns from former military officials that the project leaves U.S. soldiers vulnerable as they build the causeway, anchor it, and work with partners on the deliveries from Cyprus to Gaza. There are unconfirmed reports that the IDF and private contractors would provide a “security bubble” on the ground. But this week’s killings – along with numerous reports about IDF “kill zones” and AI targeting, not to mention the fact that this is an active war zone – give no confidence to observers, who wonder why the administration does not press Israel to simply open up existing aid checkpoints on land instead.

Retired Naval officer Jerry Hendrix was quoted in the Washington Post saying the whole causeway project would leave U.S. troops “highly vulnerable” and calling the plan “stupid.”

Interestingly, the POLITICO story quoted former Assistant Secretary of Defense Mick Mulvaney without noting he was co-founder of Fogbow, which is the private contractor supposedly tapped to help the U.S. military provide logistics and security for the project, according to unconfirmed reports. Even he thinks the World Central Kitchen killings could have “a chilling effect on who will volunteer, who will deliver aid in Gaza,” he told POLITICO.

Mike DiMino, a former CIA analyst and fellow at Defense Priorities, agrees. “The WCK strike completely vindicates the immense and varied potential risks to U.S. personnel that we have repeatedly highlighted,” he told RS Wednesday. “What if (American citizens) are delivering aid ashore, and the IDF suspected a terrorist was among them? Or 'confused' armed security contractors for Hamas operatives? We found out yesterday.”

“Who will want to volunteer for that job now? Looks like the deal is on ice. I'm hoping, frankly, that the WCK incident will kill the pier idea entirely, though I won't hold my breath.”

As for the ships “streaming” into the region, according to satellite readings today, only one is within range of Cyprus, which is where the Army and Navy will begin the process. The fastest among them, the USN Roy Benavidez, is docked in Crete. The USAV Frank Besson, which left the U.S. first on March 10, is off the coast of Algeria. The smaller Army craft Monterrey, Wilson Wharf, Matamoros, and Loux are now sailing through the Canary Islands in the Atlantic. The Naval vessel Bobo is still in Jacksonville, Florida, and the Lopez hasn’t reached Bermuda yet. At this rate, the Besson will get to Cyprus first (after the Benavidez), but not for another week, at least.

Kelley Beaucar Vlahos is Editorial Director of Responsible Statecraft and Senior Advisor at the Quincy Institute.

The views expressed by authors on Responsible Statecraft do not necessarily reflect those of the Quincy Institute or its associates.



CAMP PENDLETON, Calif. (July 24, 2008) Army soldiers prepare to offload the floating causeway for Joint Logistics Over-the-Shore (JLOTS) at Red Beach at Camp Pendleton. JLOTS is a joint U.S. military operation aimed at preparing amphibious assault landings. This is the first JLOTS event at Camp Pendleton since 2002. (U.S. Marine Corps photo by Private 1st Class Jeremy Harris/Released)



NAKBA 2.0 CONTINUES

Israeli army arrests 40 more Palestinians in West Bank raids

At least 8,030 Palestinians detained by Israeli forces in West Bank since last October, according to Palestinian figures

Thursday, April 4, 2024
AA


At least 40 Palestinians were detained by Israeli forces in overnight military raids in the occupied West Bank, according to prisoners' affairs groups on Thursday.

Three women were among the detainees, the Commission of Detainees' Affairs and the Palestinian Prisoner Society said in a joint statement.

The arrests took place in the cities of Ramallah, Jenin, Nablus, Qalqilya, Tubas, Bethlehem and Hebron, the statement said.

The new arrests brought the number of Palestinians detained by the Israeli army in the West Bank since last October to 8,030, according to Palestinian figures.

Tensions have been high across the West Bank since Israel launched a deadly military offensive against the Gaza Strip after a cross-border attack by Hamas on Oct. 7, 2023.

At least 456 Palestinians have since been killed and over 4,750 others injured by Israeli army fire in the occupied territory, according to the Health Ministry.

Israel stands accused of genocide at the International Court of Justice (ICJ), which last week asked it to do more to prevent famine in Gaza, where more than 33,000 people have been killed.
Legal experts urge British government to suspend arms sales to Israel


Armored vehicles made by Britain's BAE Systems. Pressure is growing on the British government to suspend exports of weapons and weapons systems to Israel even though sales in 2022 were just $53.2 million, less than 0.02% of the value of U.S. sales and military aid.
File Photo by Roger L. Wollenberg/UPI | License Photo

April 4 (UPI) -- More than 600 British lawyers, academics and members of the judiciary, including three former high court judges, urged Prime Minister Rishi Sunak on Thursday to suspend arms exports to Israel to "avoid complicity in serious breaches of International Humanitarian Law."

In a 17-page letter they told Sunak that as a signatory of the 1948 Genocide Convention, Britain must stop the weapons sales in light of the International Court of Justice's Jan. 26 provisional finding of "plausible risk of genocide" by Israel in Gaza.

"The ICJ's finding of plausible risk, together with the profound and escalating harm to the Palestinian people in Gaza, constitute a serious risk of genocide sufficient to trigger the U.K.'s legal obligations," the letter states.

The Genocide Convention requires nations to employ all means reasonably available to them to prevent genocide in another state as far as possible.

The letter goes on to say that the ICJ's genocide ruling "placed your government on notice that weapons might be used in its commission and that the suspension of their provision is thus a 'means likely to deter' and/or 'a measure to prevent' genocide."

The weapons embargo is among five demands, which also include pushing harder to secure a permanent cease-fire, ensuring safe access to and delivery of the essentials for life and medical assistance, resuming funding of the U.N. Relief and Works Agency for Palestine Refugees and sanctioning individuals and entities who have incited genocide against Palestinians.

In addition, they want the suspension of a bilateral treaty signed last year to elevate U.K.-Israel ties to a strategic partnership by 2030, and a review into suspending a trade agreement with Israel in order to impose economic sanctions.

The group also argues ongoing arms exports to Israel could be in breach of the 2013 U.N. Arms Trade Treaty that prohibits the supply of weapons to carry out genocide or serious violations of international humanitarian and human rights laws as well as Britain's own export control regime.

"The U.K.'s Strategic Export Licensing Criteria require the U.K. government to refuse to license military equipment for export where there 'is a clear risk that the items might be used to commit or facilitate a serious violation of international humanitarian law,'" it wrote.

"The same principles apply where arms or military equipment might be used to commit or facilitate acts which constitute genocide."

Sunak, who is under public and political pressure over the killing Monday of three British aid workers in Gaza in an Israeli airstrike, has so far resisted calls to reconsider Britain's united stand with Israel.

"I think we've always had a very careful export licensing regime that we adhere to," he said.

"There are a set of rules, regulations and procedures that we'll always follow, and I've been consistently clear with Prime Minister Netanyahu since the start of this conflict that whilst, of course, we defend Israel's right to defend itself and its people against attacks from Hamas, they have to do that in accordance with international humanitarian law."

Senior national political figures and lawmakers in Sunak's Conservative Party, however, are pushing for a change in policy, including former national security adviser Lord Ricketts who said Britain needed to send Israel a strong message by stopping the arms exports.

Former Foreign Office minister Hugo Swire said that while he fully supported arms sales for Israel to defend itself he opposed the "selling of arms which can be -- and now look as if they are being -- used offensively in Gaza."

Welsh MP David Jones said the government must "urgently reassess its supply of arms and deliver a stern warning to Israel about its conduct."

MP Paul Bristow said the thought that British-made weapons could be used "in action that kills innocent civilians in Gaza turns the stomach."

Hampshire MP Flick Drummond called for arms sales to be stopped "for the foreseeable future".

"This has been concerning me for some time. What worries me is the prospect of U.K. arms being used in Israel's actions in Gaza, which I believe have broken international law," she said.


UK is ‘complicit’ in Israel's killing of British aid workers in Gaza, says CAAT
Campaign Against Arms Trade (CAAT)

April 4, 2024 at 9:07 am

A view of a damaged vehicle that had been carrying Western employees after an Israeli attack in Deir al-Balah, Gaza on April 02, 2024
 [Ali Jadallah/Anadolu Agency]

The UK government and arms industry are both complicit in Israel’s killing of seven aid workers in Gaza, including three British citizens, the Campaign Against Arms Trade (CAAT), has alleged. The workers were killed by a strike from a Hermes 450 drone manufactured by Israeli-owned company Elbit Systems. The drone is powered by a UK-made R902(W) Wankel engine, produced by Elbit subsidiary UAV Engines Limited in the UK.

“This government is complicit in the murder of UK aid workers in Gaza,” said CAAT spokesperson Emily Apple. “It has had every opportunity to impose an arms embargo and has refused to do so.” Apple added that while CAAT’s thoughts are with the families and friends of the aid workers killed, they are also with the families and friends of the tens of thousands of Palestinians who have been killed by Israel.

CAAT’s allegation follows revelations that the Foreign, Commonwealth and Development Office (FCDO) in London is hiding legal advice that Israel is breaching International Humanitarian Law (IHL), according to Foreign Affairs Committee chair Alicia Kearns MP. The news of the suppressed legal advice was revealed in the Observer from a recorded speech Kearns made at a fundraiser.

“Not only is our government complicit in genocide, but it also knows that it is,” explained Apple. “Time and again Foreign Secretary David Cameron and FCDO ministers have refused to answer direct questions on the legal advice they’ve received. They have misled parliament and made a mockery of both our democracy and international law.”

According to its own arms export licensing criteria, the UK government must halt arms sales when there is a clear risk they could be used in IHL violations.

On 19 February, the Global Legal Action Network (GLAN) and the Palestinian legal rights NGO Al-Haq were refused permission to take the government to court over its arms sales. The “outrageous” refusal was based on the grounds that the government is carrying out a rolling review process. They have now been granted an oral hearing to argue again for the case to be allowed to proceed.

Since 2015, the UK has licensed £487 million ($617 million) worth of weapons to Israel. However, this does not include equipment exported via open licences. In particular, 15 per cent of the value of every US-made F-35 combat aircraft, which Israel uses to bomb Gaza, is made in the UK, exports for which are covered by an open licence with no limit on the quantity or value of exports. CAAT estimates conservatively that the work on the 36 F-35s exported to Israel up to 2023 has been worth at least £368 million ($466 million) to the UK arms industry.

Campaign Against Arms Trade and Palestine Solidarity Cornwall held an emergency vigil-cum-protest in Falmouth, Cornwall on Wednesday evening. One of the people killed in the attack, James Henderson, was from Falmouth.

“We are devastated to hear that James Henderson — known as Jimmy to his friends — was one of the aid workers killed by a targeted strike from Israel, and our deepest thoughts and condolences are with his family and friends,” said a spokesperson for Palestine Solidarity Cornwall, who confirmed that the campaign group did not speak on their behalf. “We gathered to pay our respects and show our solidarity to James, and to all of the needlessly martyred people of Palestine, as we have been doing week in and out since this genocide escalated in October.”

Palestine Solidarity Cornwall pointed out how “scandalous” it is that almost 200 aid workers have been killed by Israel over the past six months, and warned that the inevitable deterrent effect this will have on the already extremely restricted yet essential aid sector will be devastating for humanitarian aid reaching starving Palestinians in Gaza.

This is a deliberate attempt to ensure the war crime of starvation that Israel has engineered cannot be stalled by foreign aid.

“James is one of the more than 37,000 people to be murdered since the start of October, each one an individual with a life, a story and a family, and our politicians can no longer look away. James’s killing, like those of the 37,000 Palestinians killed – including 14,000 children – could have been prevented by our government, and others around the world, ceasing arms deals with Israel and refusing to support a genocide. They are guilty. This blood is on their hands.”

According to CAAT, it is clear that the UK government has nothing but “contempt” for Palestinian people. “Despite Israel deliberately causing a famine, in which over a million people face starvation, and despite killing tens of thousands of people, this government has chosen to prioritise the profits of arms dealers over Palestinian lives,” spokesperson Apple pointed out.

“Every day people are taking action against arms companies profiting from the genocide Israel is committing. This has to continue. Every single company that supplies weapons or military [equipment] must be held to account. Our government has failed us, and it has failed the Palestinian people, and it has failed its own citizens. It is down to us to take action.”

UN official says attacks on humanitarian institutions and workers prohibited under international law

Martin Griffiths, the UN Under-Secretary-General for Humanitarian Affairs, says that whether Israel’s attack on the World Central Kitchen (WCK) convoy was intentional or not, it remains a crime. Israel attacked a clearly marked WCK convoy despite the organisation coordinating its route with Israeli forces. Seven aid workers were killed, including UK, US-Canadian, Australian, and Palestinian nationals.

April 4, 2024