
Thursday, April 04, 2024

EU: AI Act fails to set gold standard for human rights

POSTED ON APRIL 04, 2024

As EU institutions are expected to conclusively adopt the EU Artificial Intelligence Act in April 2024, ARTICLE 19 joins those voicing criticism of how the Act fails to set a gold standard for human rights protection. Over the last three years of negotiation, together with a coalition of digital rights organisations, we have demanded that AI works for people and that its regulation prioritises the protection of fundamental human rights. We believe that in several areas the AI Act is a missed opportunity to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence and many other rights and freedoms are protected when it comes to artificial intelligence.

For the last three years, as part of the European Digital Rights (EDRi) coalition, ARTICLE 19 has demanded that artificial intelligence (AI) works for people and that its regulation prioritises the protection of fundamental human rights. We have put forward our collective vision for an approach where ‘human-centric’ is not just a buzzword, where people on the move are treated with dignity, and where lawmakers are bold enough to draw red lines against unacceptable uses of AI systems.

Following a gruelling negotiation process, EU institutions are expected to conclusively adopt the final AI Act in April 2024. But while they celebrate, we take a much more critical stance. We want to highlight the many missed opportunities to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence, and many other rights and freedoms are protected when it comes to AI. Here’s our round-up of how the final law fares against our collective demands.

This analysis is based on the latest available version of the AI Act text, dated 6 March 2024. There may still be small changes made before the law’s final adoption.
First, we called on EU lawmakers to empower affected people by upholding a framework of accountability, transparency, accessibility, and redress. How did they do?

Some accessibility barriers have been broken down, but more needs to be done: Article 16 (ja) of the AI Act fulfils our call for accessibility by stating that high-risk AI systems must comply with accessibility requirements. However, we still believe that this should be extended to apply to low- and medium-risk AI systems as well, in order to ensure that the needs of people with disabilities are central in the development of all AI systems which could impact them.

More transparency about certain AI deployments, but big loopholes for the private sector and security agencies: The AI Act establishes a publicly accessible EU database to provide transparency about AI systems that pose higher risks to people’s rights or safety. While originally only providers of high-risk AI systems were subject to transparency requirements, we successfully persuaded decision-makers that deployers of AI systems – those who actually use the system – should also be subject to transparency obligations.
Providers and deployers who place on the market or use AI systems in high-risk areas – such as employment and education, as designated by Annex III – will be subject to transparency obligations. Providers will be required to register their high-risk system in the database and to enter information about it, such as a description of its intended purpose, a concise description of the information used by the system, and its operating logic. Deployers of high-risk AI systems who are public authorities – or those acting on their behalf – will be obliged to register the use of the system. They will be required to enter information in the database such as a summary of the findings of a fundamental rights impact assessment (FRIA) and a summary of the data protection impact assessment. However, deployers of high-risk AI systems in the private sector will not be required to register their use of high-risk systems – another critical issue;
The major shortcoming of the EU database is that negotiators agreed on a carve-out for law enforcement, migration, asylum and border control authorities. Providers and deployers of high-risk systems in these areas will be required to register only a limited amount of information, and only in a non-publicly accessible section of the database. Certain important pieces of information, such as the training data used, will not be disclosed at all. This will prevent affected people, civil society, journalists, watchdog organisations and academics from exercising public scrutiny over these high-stakes areas, which are prone to fundamental rights violations, and from holding the responsible authorities accountable.

Fundamental rights impact assessments are included, but concerns remain about how meaningful they will be: We successfully convinced EU institutions of the need for fundamental rights impact assessments (FRIAs). However, based on the final AI Act text, we have doubts about whether they will actually prevent human rights violations or serve as a meaningful tool of accountability. We see three primary shortcomings:
Lack of a meaningful assessment and of an obligation to prevent negative impacts: while the new rules require deployers of high-risk AI systems to list risks of harm to people, there is no explicit obligation to assess whether these risks are acceptable in light of fundamental rights law, nor to prevent them wherever possible. Regrettably, deployers only have to specify which measures will be taken once risks materialise, likely once the harm has already been done;
No mandatory stakeholder engagement: the requirement to engage external stakeholders, including civil society and people affected by AI, in the assessment process was also removed from the article at the last stages of negotiations. This means that civil society organisations will not have a direct, legally-binding way to contribute to impact assessments;
Transparency exceptions for law enforcement and migration authorities: while in principle deployers of high-risk AI systems will have to publish a summary of the results of FRIAs, this will not be the case for law enforcement and migration authorities. The public will not even have access to the mere information that an authority uses a high-risk AI system in the first place. Instead, all information related to the use of AI in law enforcement and migration will only be included in a non-public database, severely limiting constructive public oversight and scrutiny. This is a very concerning development as, arguably, the risks to human rights, civic space and the rule of law are the most severe in these two areas. Moreover, while deployers are obliged to notify the relevant market surveillance authority of the outcome of their FRIA, there is an exemption from this notification obligation for ‘exceptional reasons of public security’. This excuse is often misused as a justification for carrying out disproportionate policing and border management activities.

When it comes to complaints and redress, there are some remedies, but no clear recognition of the ‘affected person’: Civil society has advocated for robust rights and redress mechanisms for individuals and groups affected by high-risk AI systems. We demanded the creation of a new section titled ‘Rights of Affected Persons’, which would delineate specific rights and remedies for individuals impacted by AI systems. However, this section has not been created; instead, we have a ‘remedies’ chapter that includes only some of our demands;
The remedies chapter includes the right to lodge complaints with a market surveillance authority, but it lacks teeth, as it remains unclear how effectively these authorities will be able to enforce compliance and hold violators accountable. Similarly, the right to an explanation of individual decision-making, particularly for AI systems listed as high-risk, raises questions about the practicality and accessibility of obtaining meaningful explanations from deployers. Furthermore, the effectiveness of these mechanisms in practice remains uncertain, given the absence of provisions such as the right to representation of natural persons, or the ability of public interest organisations to lodge complaints with national supervisory authorities.

The Act allows a double standard when it comes to the human rights of people outside the EU: The AI Act falls short of civil society’s demand to ensure that EU-based AI providers whose systems impact people outside of the EU are subject to the same requirements as those inside the EU. The Act does not stop EU-based companies from exporting AI systems which are banned in the EU, creating a huge risk that the rights of people in non-EU countries will be violated by EU-made technologies that are essentially incompatible with human rights. Additionally, the Act does not require exported high-risk systems to follow the technical, transparency or other safeguards otherwise required when AI systems are intended for use within the EU, again putting the rights of people outside of the EU at risk from EU-made technologies.
Second, we urged EU lawmakers to limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities. How did they do?

The blanket exemption for national security risks undermining other rules: The AI Act and its safeguards will not apply to AI systems that are developed or used solely for the purpose of national security, regardless of whether this is done by a public authority or a private company. This exemption introduces a significant loophole that will automatically exempt certain AI systems from scrutiny and limit the applicability of the human rights safeguards envisioned in the AI Act;
In practical terms, it would mean that governments could invoke national security to introduce biometric mass surveillance systems, without having to apply any safeguards envisioned in the AI Act, without conducting a fundamental rights impact assessment and without ensuring that the AI system meets high technical standards and does not discriminate against certain groups;
Such a broad exemption is not justified under EU treaties and goes against the established jurisprudence of the European Court of Justice. While national security can be a justified ground for exceptions from the AI Act, this has to be assessed case-by-case, in line with the EU Charter of Fundamental Rights. The adopted text, however, makes national security a largely digital rights-free zone. We are concerned about the lack of clear national-level procedures to verify if the national security threat invoked by the government is indeed legitimate and serious enough to justify the use of the system and if the system is developed and used with respect for fundamental rights. The EU has also set a worrying precedent regionally and globally; broad national security exemptions have now been introduced in the newly-adopted Council of Europe Convention on AI.

Predictive policing, live public facial recognition, biometric categorisation and emotion recognition are only partially banned, legitimising these dangerous practices: We called for comprehensive bans against any use of AI that isn’t compatible with rights and freedoms – such as proclaimed AI ‘mind reading’, biometric surveillance systems that treat us as walking barcodes, or algorithms used to decide whether we are innocent or guilty. All of these examples are now partially banned in the AI Act, which is an important signal that the EU is prepared to draw red lines against unacceptably harmful uses of AI;
At the same time, all of these bans contain significant and disappointing loopholes, which means that they will not achieve their full potential. In some cases, these loopholes risk having the opposite effect from what a ban should: they give the signal that some forms of biometric mass surveillance and AI-fuelled discrimination are legitimate in the EU, which risks setting a dangerous global precedent;
For example, the fact that emotion recognition and biometric categorisation systems are prohibited in the workplace and in education settings, but are still allowed when used by law enforcement and migration authorities, signals the EU’s willingness to test the most abusive and intrusive surveillance systems against the most marginalised in society;
Moreover, when it comes to live public facial recognition, the Act paves the way to legalise some specific uses of these systems for the first time ever in the EU – despite our analysis showing that all public-space uses of these systems constitute an unacceptable violation of everyone’s rights and freedoms.

The serious harms of retrospective facial recognition are largely ignored: Retrospective facial recognition is not banned at all by the AI Act. As we have explained, retrospective (post) facial recognition and other biometric surveillance systems (called ‘remote biometric identification’, or ‘RBI’, in the text) are just as invasive and rights-violating as live (real-time) systems. Yet the AI Act makes a big error in claiming that the extra time involved in retrospective uses will mitigate possible harms;
While several lawmakers have argued that they managed to insert safeguards, our analysis is that these safeguards are not meaningful enough and could be easily circumvented by police. In one place, the purported safeguard even suggests that the mere suspicion of any crime having taken place would be enough to justify the use of a post RBI system – a lower threshold than we currently benefit from under EU data protection law.

People on the move are not afforded the same rights as everyone else, with only weak – and at times absent – rules on the use of AI at borders and in migration contexts: In its final version, the EU AI Act sets a dangerous precedent for the use of surveillance technology against migrants, people on the move and marginalised groups. The legislation develops a separate legal framework for the use of AI by migration control authorities, enabling dangerous surveillance technologies to be tested and used at the EU’s borders, disproportionately against racialised people;
None of the bans meaningfully apply to the migration context, and the transparency obligations present ad-hoc exemptions for migration authorities, allowing them to act with impunity and far away from public scrutiny;
The list of high-risk systems fails to capture the many AI systems used in the migration context, as it excludes dangerous systems such as non-remote biometric identification systems, fingerprint scanners, or forecasting tools used to predict, interdict, and curtail migration;
Finally, AI systems used as part of EU large-scale migration databases (e.g. Eurodac, the Schengen Information System, and ETIAS) will not have to be compliant with the Regulation until 2030, which gives plenty of time to normalise the use of surveillance technology.
Third, we urged EU lawmakers to push back on Big Tech lobbying and address environmental impacts. How did they do?

The risk classification framework has become a self-regulatory exercise: Initially, all use cases included in the list of high-risk applications would have had to follow specific obligations. However, as a result of heavy industry lobbying, providers of high-risk systems will now be able to decide whether their system is high-risk or not, as an additional ‘filter’ was added to the classification system;
Providers will still have to register sufficient documentation in the public database to explain why they do not consider their system to be high-risk. However, this obligation will not apply when they are providing systems to law enforcement and migration authorities, paving the way for the free and deregulated procurement of surveillance systems in the policing and border contexts.

The Act takes only a tentative first step to address the environmental impacts of AI: We have serious concerns about how the exponential growth in the use of AI systems can have severe impacts on the environment, including through resource consumption, extractive mining and energy-intensive processing. Today, information on the environmental impacts of AI is a closely guarded corporate secret. This makes it difficult to assess the environmental harms of AI and to develop political solutions to reduce carbon emissions and other negative impacts;
The first draft of the AI Act completely neglected these risks, despite civil society and researchers repeatedly calling for the energy consumption of AI systems to be made transparent. To address this problem, the AI Act now requires that providers of GPAI models that are trained with large amounts of data and consume a lot of electricity must document their energy consumption. The Commission now has the task of developing a suitable methodology for measuring the energy consumption in a comparable and verifiable way;
The AI Act also requires that standardised reporting and documentation procedures must be created to ensure the efficient use of resources by some AI systems. These procedures should help to reduce the energy and other resource consumption of high-risk AI systems during their life cycle. These standards are also intended to promote the energy-efficient development of general-purpose AI models;
These reporting standards are a crucial first step towards basic transparency about some of the ecological impacts of AI, first and foremost energy use. But they can only serve as a starting point for more comprehensive policy approaches that address all environmental harms along the AI production process, such as water consumption and mineral extraction. We cannot rely on self-regulation, given how fast the climate crisis is evolving.
What’s next for the AI Act?

The coming year will be decisive for the EU’s AI Act, with different EU institutions, national lawmakers and even company representatives setting standards, publishing interpretive guidelines and driving the Act’s implementation across the EU’s member countries. Some parts of the law – the prohibitions – could become operational as soon as November. It is therefore vital that civil society groups are given a seat at the table, and that this work is not done in opaque settings and behind closed doors.

We urge lawmakers around the world who are also considering bringing in horizontal rules on AI to learn from the EU’s many mistakes outlined above. A meaningful set of protections must ensure that AI rules truly work for individuals, communities, society, rule of law, and the planet.

While this long chapter of lawmaking is now coming to a close, the next chapter of implementation – and trying to get as many wins out of this Regulation as possible – is just beginning. As a group, we are drafting an implementation guide for civil society, coming later this year. We want to express our thanks to the entire AI core group, who have worked tirelessly for over three years to analyse, advocate and mobilise around the EU AI Act. In particular, we thank the work, dedication and vision of Sarah Chander, of the Equinox Racial Justice Institute, for her leadership of this group in the last three years.

TECH & RIGHTS

Packed With Loopholes: Why the AI Act Fails to Protect Civic Space and the Rule of Law

The AI Act fails to effectively protect the rule of law and civic space. ECNL, Liberties and the European Civic Forum (ECF) give their analysis of its shortcomings.


by LibertiesEU
April 04, 2024



The unaccountable and opaque use of Artificial Intelligence (AI), especially by public authorities, can undermine civic space and the rule of law. In the European Union, we have already witnessed AI-driven technologies being used to surveil activists, assess whether airline passengers pose a terrorism risk or appoint judges to court cases. The fundamental rights framework as well as rule of law standards require that robust safeguards are in place to protect people and our societies from the negative impacts of AI.

For this reason, the European Centre for Not-for-Profit Law (ECNL), Liberties and the European Civic Forum (ECF) closely monitored and contributed to the discussions on the EU’s Artificial Intelligence Act (AI Act), first proposed in 2021. From the beginning, we advocated for strong protections for fundamental rights and civic space and called on European policymakers to ensure that the AI Act is fully coherent with rule of law standards.

The European Parliament approved the AI Act on 13 March 2024, thus marking the end of a three-year-long legislative process. Yet to come are guidelines and delegated acts to clarify the often vague requirements. In this article, we take stock of the extent to which fundamental rights, civic space and the rule of law will be safeguarded and provide an analysis of key AI Act provisions.
Far from a gold standard for rights-based AI regulation

Our overall assessment is that the AI Act fails to effectively protect the rule of law and civic space, instead prioritising industry interests, security services and law enforcement bodies. While the Act requires AI developers to maintain high standards for the technical development of AI systems (e.g. in terms of documentation or data quality), measures intended to protect fundamental rights, including key civic rights and freedoms, are insufficient to prevent abuses. They are riddled with far-reaching exceptions, lowering protection standards, especially in the area of law enforcement and migration.

The AI Act was negotiated and finalised in a rush, leaving significant gaps and legal uncertainty, which the European Commission will have to clarify in the next months and years by issuing delegated acts and guidelines. Regulating emerging technology requires flexibility, but the Act leaves too much to the discretion of the Commission, secondary legislation or voluntary codes of conduct. These could easily undermine the safeguards established by the AI Act, further eroding the fundamental rights and rule of law standards in the long term.

CSOs’ contributions will be necessary for a rights-based implementation of the AI Act

The AI Act will enter into effect in stages, with full application expected in 2026. The European Commission will develop guidance and delegated acts specifying various requirements for the implementation, including guidance on the interpretation of prohibitions, as well as a template for conducting fundamental rights impact assessments. It will be crucial for civil society to actively contribute to this process with their expertise and real-life examples. In the next months, we will publish a map of key opportunities where these contributions can be made. We also call on the European Commission and other bodies responsible for the implementation and enforcement of the AI Act to proactively facilitate civil society participation and to prioritise diverse voices including those of people affected by various AI systems, especially those belonging to marginalised groups.

5 flaws of the AI Act from the perspective of civic space and the rule of law

1. Gaps and loopholes can turn prohibitions into empty declarations

2. AI companies’ self-assessment of risks jeopardises fundamental rights protections

3. Standards for fundamental rights impact assessments are weak

4. The use of AI for national security purposes will be a rights-free zone

5. Civic participation in the implementation and enforcement is not guaranteed
The AI Act’s limitations showcase the need for a European Civil Dialogue Agreement

The legislative process surrounding the AI Act was marred by a significant lack of civil dialogue - the obligation of the EU institutions to engage in an open, transparent, and regular process with representative associations and civil society. To date, there is no legal framework regulating the European civil dialogue, although civil society has been calling for it in various contexts. Since the announcement of the AI Act, civil society has made great efforts to coordinate horizontally to feed into the process, engaging diverse organisations at the national and European levels. In the absence of clear guidelines on how civil society input should be included ahead of the drafting of EU laws and policies, the framework proposed by the European Commission to address the widespread impact of AI technologies on society and fundamental rights was flawed. Throughout the preparatory and political stages, the process remained opaque, with limited transparency regarding decision-making and little opportunity for input from groups representing a rights-based approach, particularly in the Council and during trilogue negotiations. This absence of inclusivity raises concerns about the adopted text’s impact on society at large. It not only undermines people’s trust in the legislative process and the democratic legitimacy of the AI Act but also hampers its key objective to guarantee the safety and fundamental rights of all.

However, in contrast to public interest and fundamental rights advocacy groups, market and for-profit lobbyists and representatives of law enforcement authorities and security services had great influence in the legislative process of the AI Act. This imbalanced representation favoured commercial interests and the narrative of external security threats over the broader societal impacts of AI.

Read our analysis in full here.


Symposium on Military AI and the Law of Armed Conflict: Human-machine Interaction in the Military Domain and the Responsible AI Framework


04.04.24 | 

[Dr Ingvild Bode is Associate Professor at the Centre for War Studies, University of Southern Denmark. She is the Principal Investigator of the European Research Council-funded project AutoNorms: Weaponised Artificial Intelligence, Norms, and Order (08/2020-07/2025) and also serves as the co-chair of the IEEE-SA Research Group on AI and Autonomy in Defence Systems.

Anna Nadibaidze is a researcher for the European Research Council funded AutoNorms project based at the Center for War Studies, University of Southern Denmark.]


Artificial intelligence (AI) technologies are increasingly part of military processes. Militaries use AI technologies, for example, for decision support and in combat operations, including as part of weapon systems. Contrary to some previous expectations, especially notable popular culture depictions of ‘sentient’ humanoid machines willing to destroy humanity or ‘robot wars’ between machines, integrating AI into the military does not mean that AI technologies replace humans. Rather, military personnel interact with AI technologies, and likely at an increasing frequency, as part of their day-to-day activities, which include the targeting process. Some militaries have adapted the language of human-machine teaming to describe these instances of human-machine interaction. This term can refer to humans interacting with either uncrewed (semi-)autonomous platforms or AI-based software systems. Such developments are increasingly promoted as key trends in defence innovation. For instance, the UK Ministry of Defence considers the “effective integration of humans, AI and robotics into warfighting systems—human-machine teams” to be “at the core of future military advantage”.

At the same time, many states highlight that they intend to develop and use these technologies in a ‘responsible’ manner. The framework of Responsible AI in the military domain is growing in importance across policy and expert discourse, moving beyond the focus on autonomous weapon systems that can “select and apply force without human intervention”. Instead, this framework assumes that AI will be integrated into various military processes and interact with humans in different ways, and therefore it is imperative to find ways of doing so responsibly, for instance by ensuring understandability, reliability, and accountability.

Our contribution connects these intersecting trends by offering a preliminary examination of the extent to which the Responsible AI framework addresses challenges attached to changing human-machine interaction in the military domain. To do so, we proceed in two steps: first, we sketch the kind of challenges raised by instances of human-machine interaction in a military context. We argue that human-machine interaction may fundamentally change the quality of human agency, understood as the ability to make choices and act, in warfare. It does so by introducing a form of distributed agency in military decision-making, including in but not limited to the targeting process. Therefore, there is a need to examine the types of distributed agency that will emerge, or have already emerged, as computational techniques under the ‘AI’ umbrella term are increasingly integrated into military processes. Second, we consider the extent to which the emerging Responsible AI framework, as well as principles associated with it, demonstrates potential to address these challenges.

1. Human-machine Interaction and Distributed Agency

Appropriate forms of human agency and control over use-of-force decision-making are necessary on ethical, legal, and security grounds. (Western) military thinking on human-machine or human-AI teaming recognises this. Human-machine interaction involves sharing cognitive tasks with AI technologies as their use is chiefly associated with the speedy processing of large amounts of data/information. It follows that any decision made in the context of human-machine interaction implies a combination of ‘human’ and ‘machine’ decision-making. This interplay changes how human agency is exercised. Instead of producing zero-sum outcomes, we are likely to encounter a form of distributed agency in military decisions that rely on human-machine interaction. Above all, distributed agency involves a blurring of the distinction between instances of ‘human’ and ‘AI’ agency.

Understanding this distributed agency could, in the first place, consider particularities of how ‘human’ and ‘AI’ agents make choices and act and what this means for interaction dynamics. This is an evolving topic of interest as AI technologies are increasingly integrated into the military domain. The reality of distributed agency is not clear-cut. Any ‘AI agency’ results from human activity throughout the algorithmic design and training process that has become ‘invisible’ at the point of use. This human activity includes programmers who create the basic algorithmic parameters, workers who prepare the data that training machine learning algorithms requires through a series of iterative micro-tasks often subsumed as ‘labelling data’, but also the people whose data is used to train such algorithms. It is therefore important to think about ‘human’ and ‘AI’ agency as part of a relational, complex, socio-technical system. From the perspective of the many groups of humans that are part of this system, interacting with AI creates both affordances or action potentials and constraints. Studying different configurations of this complex system could then advance our understanding of distributed agency.

These initial insights into how technological affordances and constraints shape distributed agency matter in the military domain because they affect human decision-making, including in a warfare context. What does it actually mean for humans to work with AI technologies? The long-established literature in human-factor analysis describes numerous fundamental obstacles that people face when interacting with complex systems integrating automated and AI technologies. These include “poor understanding of what the systems are doing, high workload when trying to interact with AI systems, poor situation awareness (SA) and performance deficits when intervention is needed, biases in decision making based on system outputs, and degradation”. Such common operational challenges of human-machine interaction raise fundamental political, ethical, legal, social, and security concerns. There are particularly high stakes in the military domain because AI technologies used in this context have the potential to inflict severe harm, such as physical injury, human rights violations, death, and (large-scale) destruction.


2. Responsible AI and Challenges of Human-machine Interaction

The Responsible AI framework has been gaining prominence among policymaking and expert circles of different states, especially the US and its allies. In 2023, the US released its Political Declaration on Responsible Military Use of AI and Autonomy, endorsed by 50 other states as of January 2024. US Deputy Secretary of Defense Kathleen Hicks stated that the new Replicator Initiative, aimed at producing large numbers of all-domain, attritable autonomous systems, will be carried out “while remaining steadfast to [the DoD’s] responsible and ethical approach to AI and autonomous systems, where DoD has been a world leader for over a decade”. At the same time, the concept of responsible military AI use has also been entrenched by the Responsible AI in the Military Domain (REAIM) Summit co-hosted by the Netherlands and the Republic of Korea. More than 55 states supported the Summit’s Call to Action in February 2023, and a second edition of the event is expected in Seoul in 2024.

The Responsible AI framework broadens the debate beyond lethal autonomous weapon systems (LAWS), which have been the focus of discussions at the UN CCW in Geneva throughout the last decade. The effort to consider different uses of AI in the military, including in decision support, is a step towards recognising the challenges of human-machine interaction and potential new forms of distributed agency. These changes are happening in various ways and do not necessarily revolve around ‘full’ autonomy, weapon systems, or humans ‘out of the loop’. Efforts to consider military systems integrating autonomous and AI technologies as part of lifecycle frameworks underline this. Such frameworks demonstrate that situations of human-machine interaction need to be addressed and occur at various lifecycle stages, from research & development, procurement & acquisition and TEVV through potential deployment to retirement. Addressing such concerns therefore deserves the type of debate offered by the REAIM platform: a multi-stakeholder discussion representing global perspectives on (changing) human-machine interaction in the military.

At the same time, the Responsible AI framework is nebulous and imprecise in its guidance on ensuring that challenges of human-machine interaction are addressed. So far, it functions as a “floating signifier”, in the sense that the concept can be understood in different ways, often depending on the interests of those who interpret it. This was already visible during the first REAIM Summit in The Hague, where most participants agreed on the importance of being responsible, but not on how to get there. Some of the common themes among the REAIM and US initiatives include commitment to international law, accountability, and responsibility, ensuring global security and stability, human oversight over military AI capabilities, as well as appropriate training of personnel involved in interacting with the capabilities. But beyond these broad principles, it remains unclear what constitutes ‘appropriate’ forms of human-machine interaction, and the forms of agency these involve, in relation to acting responsibly and in conformity with international law – that, in itself, offers unclear guidance. It must be noted, however, that defining ‘Responsible AI’ is no easy task because it requires considering the various dimensions of a complex socio-technical system which includes not only the technical aspects but also political, legal, and social ones. It has already been a challenging exercise in the civilian domain to pinpoint the exact characteristics of this concept, although key terms such as explainability, transparency, privacy, and security are often mentioned in Responsible AI strategies.

Importantly, the Responsible AI framework allows for various interpretations of the form, or mechanism, of global governance needed to address the challenges of human-machine interaction in the military. There are divergent approaches to the appropriate direction to take. For instance, US policymakers seek to “codify norms” for the responsible use of AI through the US political declaration, a form of soft law, interpreted by some experts as a way for Washington to promote its vision in its perceived strategic competition with Beijing. Meanwhile, many states favour a global legal and normative framework in the form of hard law, such as a legally binding instrument establishing appropriate forms of human-machine interaction, especially in relation to targeting, including the use of force. The UN’s 2023 New Agenda for Peace urges states not only to develop national strategies on responsible military use of AI, but also to “develop norms, rules and principles…through a multilateral process” which would involve engagement with industry, academia, and civil society. Some states are trying to take steps in this direction; for instance, Austria took the initiative by co-sponsoring a UN General Assembly First Committee resolution on LAWS, which was adopted with overwhelming support in November 2023. Overall, the Responsible AI framework’s inherent ambiguity is an opportunity for those favouring a soft law approach, especially actors who promote political declarations or sets of guidelines and argue that these are enough. Broad Responsible AI guidelines might symbolise a certain commitment or obligations, but at this stage they are insufficient to address already existing challenges to human-machine interaction in a security and military context – not least because they may not be connected to a concrete pathway toward operationalisation and implementation.

Note: This essay outlines initial thinking that forms the basis of a new research project called “Human-Machine Interaction: The Distributed Agency of Humans and Machines in Military AI” (HuMach) funded by the Independent Research Fund Denmark. Led by Ingvild Bode, the project will start later in 2024.



Symposium on Military AI and the Law of Armed Conflict: A (Pre)cautionary Note About Artificial Intelligence in Military Decision Making


04.04.24 | 


[Georgia Hinds is a Legal Adviser with the ICRC in Geneva, working on the legal and humanitarian implications of autonomous weapons, AI and other new technologies of warfare. Before joining the ICRC, she worked in the Australian Government, advising on public international law including international humanitarian and human rights law, and international criminal law, and served as a Reservist Officer with the Australian Army. The views expressed on this blog are those of the author alone and do not engage the ICRC, or previous employers, in any form.]

Introduction

Most of us would struggle to define ‘artificial intelligence.’ Fewer still could explain how it functions. And yet AI technologies permeate our daily lives. They also pervade today’s battlefields. Over the past eighteen months, reports of AI-enabled systems being used to inform targeting decisions in contemporary conflicts have sparked debates (including on this platform) around legal, moral and operational issues.

Sometimes called ‘decision support systems’ (DSS), these are computerized tools that are designed to aid human decision-making by bringing together and analysing information, and in some cases proposing options as to how to achieve a goal [see, e.g., Bo and Dorsey]. Increasingly, DSS in the military domain are incorporating more complex forms of AI, and are being applied to a wider range of tasks.

These technologies do not actually make decisions, and they are not necessarily part of weapon systems that deliver force. Nevertheless, they can significantly influence the range of actions and decisions that form part of military planning and targeting processes.

This post considers implications for the design and use of these tools in armed conflict, arising from international humanitarian law (IHL) obligations, particularly the rules governing the conduct of hostilities.

Taking ‘Constant Care’, How Might AI-DSS Help or Hinder?

Broadly, in the conduct of military operations, parties to an armed conflict must take constant care to spare the civilian population, civilians and civilian objects.

The obligation of constant care is an obligation of conduct, to mitigate risk and prevent harm. It applies across the planning and execution of military operations, and is not restricted to ‘attacks’ within the meaning of IHL (paras 2191, 1936, 1875). It includes, for example, ground operations, establishment of military installations, defensive preparations, quartering of troops, and search operations. It has been said that this requirement to take ‘constant care’ must “animate all strategic, operational and tactical decision-making.”

In assessing the risk to civilians that may arise from the use of an AI-DSS, a first step must be assessing whether the system is actually suitable for the intended task. Applying AI – particularly machine learning – to problems for which it is not well suited has the potential to actually undermine decision-making (p 19). Automating processes that feed into decision-making can be advantageous where quality data is available and the system is given clear goals (p 12). In contrast, “militaries risk facing bad or tragic outcomes” where they provide AI systems with clear objectives but in uncertain circumstances, or where they use quality data but task AI systems with open-ended judgments. Uncertain circumstances abound in armed conflict, and the contextual, qualitative judgements required by IHL are notoriously difficult. Further, AI systems generally lack the ability to transfer knowledge from one context or domain to another (p 207), making it potentially problematic to apply an AI-DSS in a different armed conflict, or even in different circumstances in the same conflict. It is clear, then, that whilst AI systems may be useful for some tasks in military operations (e.g. in navigation, maintenance and supply chain management), they will be inappropriate for many others.

Predictions about enemy behaviour will likely be far less reliable than those about friendly forces, not only due to a lack of relevant quality data, but also because armed forces will often adopt tactics to confuse or mislead their enemy. Similarly, AI-DSS would struggle to infer something open-ended or ill-defined, like the purpose of a person’s act. A more suitable application could be in support of weaponeering processes, and the modelling of estimated effects, where such systems are already deployed, and where the DSS should have access to greater amounts of data derived from tests and simulations.

Artificial Intelligence to Gain the ‘Best Possible Intelligence’?

Across military planning and targeting processes, the general requirement is that decisions required by IHL’s rules on the conduct of hostilities must be based on an assessment of the information from all sources reasonably available at the relevant time. This includes an obligation to proactively seek out and collect relevant and reasonably available information (p 48). Many military manuals stress that the commander must obtain the “best possible intelligence,” which has been interpreted as requiring information on concentrations of civilian persons, important civilian objects, specifically protected objects and the environment (See Australia’s Manual on the Law of Armed Conflict (1994) §§548 and 549).

What constitutes the best possible intelligence will depend upon the circumstances, but generally commanders should be maximising their available intelligence, surveillance and reconnaissance assets to obtain up-to-date and reliable information.

Considering this requirement to seek out all reasonably available information, it is entirely possible that the use of AI DSS may assist parties to an armed conflict in satisfying their IHL obligations, by synthesising or otherwise processing certain available sources of information (p 203). Indeed, whilst precautionary obligations do not require parties to possess highly sophisticated means of reconnaissance (pp 797-8), it has been argued that (p 147), if they do possess AI-DSS and it is feasible to employ them, IHL might actually require their use.

In the context of urban warfare in particular, the ICRC has recommended (p 15) that information about factors such as the presence of civilians and civilian objects should include open-source repositories such as the internet. Further, specifically considering AI and machine learning, the ICRC has concluded that, to the extent that AI-DSS tools can facilitate quicker and more widespread collection and analysis of this kind of information, they could well enable better decisions by humans that minimize risks for civilians in conflict. The use of AI-DSS to support weaponeering, for example, may assist parties in choosing means and methods of attack that can best avoid, or at least minimize, incidental civilian harm.

Importantly, the constant care obligation and the duty to take all feasible precautions in attack are positive obligations, as opposed to other IHL rules which prohibit conduct (e.g. the prohibitions on indiscriminate or disproportionate attacks). Accordingly, in developing and using AI-DSS, militaries should be considering not only how such tools can assist in achieving military objectives with less civilian harm, but how they might be designed and used specifically for the objective of civilian protection. This also means identifying or building relevant datasets that can support assessments of risks to, and impacts upon, civilians and civilian infrastructure.

Practical Considerations for Those Using AI-DSS

When assessing the extent to which an AI-DSS output reflects current and reliable information sources, commanders must factor in AI’s limitations in terms of predictability, understandability and explainability (see further detail here). These concerns are likely to be especially acute with systems that incorporate machine learning algorithms that continue to learn, potentially changing their functioning during use.

Assessing the reliability of AI-DSS outputs also means accounting for the likelihood that an adversary will attempt to provide disinformation such as ruses and deception, or otherwise frustrate intelligence acquisition activities. AI-DSS currently remain vulnerable to hacking and spoofing techniques that can lead to erroneous outputs, often in ways that are unpredictable and undetectable to human operators.

Further, like any information source in armed conflict, the datasets on which AI-DSS rely may be imperfect, outdated or incomplete. For example, “No Strike Lists” (NSL) can contribute to a verification process by supporting faster identification of certain objects that must not be targeted. However, a NSL will only be effective so long as it is current and complete; the NSL itself is not the reality on the ground. More importantly, the NSL usually only consists of categories of objects that benefit from special protection or whose targeting is otherwise restricted by policy. However, the protected status of objects in armed conflict can change – sometimes rapidly – and most civilian objects will not appear on the list at all. In short, then, the presence of an object on a NSL contributes to identifying protected objects when verifying the status of a potential target, but the absence of an object from the list does not imply that it is a military objective.

Parallels can be drawn with AI-DSS tools, which rely upon datasets to produce “a technological rendering of the world as a statistical data relationship” (p 10). The difference is that, whilst NSLs generally rely upon a limited number of databases, AI-DSS tools may be trained with, and may draw upon, such a large volume of datasets that it may be impossible for the human user to verify their accuracy. This makes it especially important for AI-DSS users to be able to understand which underlying datasets are feeding the system, the extent to which this data is likely to be current and reliable, and the weighting given to particular data in the DSS output (paras 19-20). Certain critical datasets (e.g. NSLs) may need to be given overriding prominence by default, whilst, for others, decision-makers may need to have the ability to adjust how they are factored in.

In certain circumstances, it may be appropriate for a decision-maker to seek out expert advice concerning the functioning or underlying data of an AI-DSS. As much has been suggested in the context of cyber warfare, in terms of seeking to understand the effects of a particular cyber operation (p 49).

In any event, it seems unlikely that it would be reasonable for a commander to rely solely on the output of one AI-DSS, especially during deliberate targeting processes where more time is available to gather and cross-check against different and varied sources. Militaries have already indicated that cross-checking of intelligence is standard practice when verifying targets and assessing proportionality, and an important aspect of minimising harm to civilians. This practice should equally be applied when employing AI-DSS, ideally using different kinds of intelligence to guard against the risks of embedded errors within an AI-DSS.

If a commander, planner or staff officer did rely solely on an AI-DSS, the reasonableness of their decision would need to be judged not only in light of the AI-DSS output, but also taking account of other information that was reasonably available.

Conclusion

AI-DSS are often claimed to hold the potential to increase IHL compliance and to produce better outcomes for civilians in armed conflict. In certain circumstances, the use of AI-DSS may well assist parties to an armed conflict in satisfying their IHL obligations, by providing an additional available source of information.

However, these tools may be ill-suited for certain tasks in the messy reality of warfare, especially noting their dependence on quality data and clear goals, and their limited capacity for transfer across different contexts. In some cases, drawing upon an AI-DSS could actually undermine the quality of decision-making, and pose additional risks to civilians.

Further, even though an AI-DSS can draw in and synthesise data from many different sources, this does not absolve a commander of their obligation to proactively seek out information from other reasonably available sources. Indeed, the way in which AI tools function – their limitations in terms of predictability, understandability and explainability – make it all the more important that their output be cross-checked.

Finally, AI-DSS must only be applied within legal, policy and doctrinal frameworks that ensure respect for international humanitarian law. Otherwise, these tools will only serve to replicate, and arguably exacerbate, unlawful or otherwise harmful outcomes at a faster rate and on a larger scale.



Sunday, March 31, 2024

Why April Fools Day in France Involves Fish Pranks

It’s a long and fishy history.

BY AMELIA PARENTEAU
MARCH 31, 2024

"Allow me to address to you / With my deepest tenderness / This beautiful fish, fresh and discreet / To which I have confided my secret," says this April Fish card in French. 

IF YOU FIND YOURSELF IN France on April 1, don’t be surprised if something seems fishy. Maybe someone gives you a chocolate or a pastry in the shape of a cod? Perhaps you find a paper haddock stuck to your back, and then everyone erupts into laughter and starts pointing and shouting “poisson d’avril”? Don’t be alarmed, you’ve simply immersed yourself in the centuries-long French tradition of April Fool’s Day, known as poisson d’avril or “April Fish.”

“The idea of April Fool’s Day, or April 1, as a special day is murky,” says Jack Santino, a folklorist and Professor Emeritus at Bowling Green State University in Ohio. “Every country has its own historical event they think gave rise to it.” But France’s tradition is the only one that involves aquatic life. Historians have many theories about the origins of this piscine tradition, but no overall consensus. The most common theories are connected to pagan celebrations of the vernal equinox, Christianity, a 16th-century calendar change, and the start of the French fishing season.

April fools may trace back to Ancient Rome, but France’s fish part is harder to pin down. 

Some historians date this tradition back to the Ancient Roman pagan festival of Hilaria, a celebration marking the vernal equinox with games and masquerades. Santino says ancient Roman and Celtic celebrations of the vernal equinox are likely forerunners. Connections to those rituals “provide a kind of cultural vocabulary that people can draw on,” according to Santino. However, he believes they probably don’t have a direct connection to the fish part.

For some, that’s where Christianity comes in. The “ichthus” fish—an ancient Hellenic Christian acronym for “Jesus Christ, Son of God, Savior”—is nowadays widely recognized as a symbol of Christianity, but was originally used as a secret marker of Christian affiliation. Moreover, the Lenten forty-day period between Ash Wednesday and Easter Sunday prohibits the consumption of meat, so fish is often served as a substitute protein during this period.

The depiction of Lent from 1893 shows how long fish has been a major part of the Christian tradition. 

As the end of Lent often occurs on or near April 1, celebrations including fish imagery would be apt to mark the end of the fasting season. Some even go so far as to surmise that poisson d’avril is a corruption of the word “passion,” as in “passion of the Christ,” into “poisson,” the French word for fish. Despite these cultural associations, Santino points out there is no actual evidence for this link to Christianity.

Then there’s the popular calendar change theory that has been widely discounted by experts today, but still comes up. In 1564, King Charles IX of France issued the Edict of Roussillon, which moved the start of the calendar year from somewhere between March 25 and April 1 (different provinces kept their own calendars) to January 1.


Pope Gregory XIII standardized January 1 as the beginning of the calendar year throughout the entire Christian empire with the adoption of the Gregorian calendar in 1582. One might surmise that those who still observed the start of the new year on April 1 rather than January 1 were the “April Fools” in question and therefore subject to pranks. However, references to poisson d’avril predate the 1564 edict, occurring in print as early as 1466, which debunks this explanation.
The fish are paper now, but people used to hook real dead fish onto the backs of fishermen.
 JACK GAROFALO/GETTY IMAGES

Another plausible theory involves actual fishing. As the days get longer in the northern hemisphere, the return of spring also marks the beginning of the fishing season in France, on or near the first day of April. Some posit that the prank of offering a fish was to tease fishermen who, at this time, either had no fish or an incredible abundance. They would either have to wait around for spawning fish to be of legal size before catching them or, once it was finally time, they would be overwhelmed by catching so many fish rushing upstream. According to this theory, real herrings were the original sea critter of choice for the prank, and the trick was to hook a dead herring onto a fisherman’s back and see how long it took him to notice, as the fish began to progressively stink over the course of the day.

The poisson d’avril tradition took another turn in the early 20th century, when friends and lovers would exchange decorative postcards featuring ornate images of fish. The majority of these cards were inscribed with funny rhyming messages that were often flirtatious and suggestive, but cloaked in humor. While most cards depict young women, flowers, and fish, the ocean and other marine animals are occasionally featured, as well as references to advances in technology, such as airplanes and automobiles. Pierre Ickowicz, chief curator of the Château de Dieppe Museum in Normandy, which houses an impressive collection of these cards, says the card exchange tradition seems to have died out shortly after World War I. The museum’s 1,716 postcards are mainly from the 1920s and 1930s.

Poisson d’avril postcards from the 1920s and ’30s were full of flirtation and fish. WELLCOME COLLECTION/PUBLIC DOMAIN; FOTOTECA GILARDI/GETTY IMAGES

These days in France, the most common observers of poisson d’avril are schoolchildren, who delight in taping paper fish to the backs of their siblings, classmates, and teachers. Although the execution has varied over time, from dead herring accessories to postcards to paper fish, the prankster nature has been consistent.

“This idea of playing pranks on people is something that would be obnoxious if it weren’t socially condoned on certain days,” says Santino. He notes that times of transition are often connected to rites of passage where societal rules can be broken. “If poisson d’avril has to do with a recognition of springtime, I would link it to the idea of a celebratory transition into a new period of time, and part of that celebration means we can do things that are not usually allowed.”

Today, people celebrate poisson d’avril both in neighboring Italy and in Quebec, Canada, a former French colony. The exact origins remain murky, but the fish endures. Whether or not you participate in any kind of trickster behavior on the first of April, there’s surely some relief today that an actual dead, stinky fish is no longer a regular part of April Fools’ Day—or at least hopefully that bit of history doesn’t plant any devilish ideas.

Children are the main culprits today, but anyone can end up with a paper fish on their back on April 1. KEYSTONE-FRANCE/GETTY IMAGES; LAURENT SOLA/GETTY IMAGES

Friday, March 15, 2024

Patrick in the Anthropocene

 BY LEE HALL

MARCH 15, 2024



Image source: Dave (CC BY-ND 2.0 Deed)

Ah, March! Relief. Renewal. Green beer, so we can get merry and forget about the ancestors.


U.S. canal and railroad developers put Irish migrants to work in the 1850s. Workers who succumbed to injuries, exhaustion, and disease were buried without ceremony. In Malvern, Pennsylvania, grave researcher Frank Watson spoke of teenaged and adult workers buried in a human “trash heap.” A Chicago-area mass-grave marker observes:


“They arrived sick and penniless, and took hard and dangerous jobs building the Chicago & Alton Railroad. Known but to God, they rest here in individual anonymity – far from the old homes of their heirs – yet forever short of the new homes of their hopes.”

Some years ago I came across my father’s family name in an old news story about Irish workers buried near a railway. And I wondered: Were we related? My father’s family swore they came from French nobility, "not from the shanty Irish like your mother."

But JFK became president the year I was born and the narrative was bound to shift. JFK too had forebears who fled the potato famine. (We say “famine” so we can forget it was deliberate starvation. As Sinéad O’Connor reminded everyone, Ireland’s food was shipped to England; Irish people caught eating anything except potatoes could be shot dead.)

The Fitzgeralds and Kennedys first worked in Boston as common tradespeople. Eventually, they’d run shops and bars, and make successful bids for political posts. Who, in the all-encompassing quest for Standard of Living, had time to look back?

Noel Ignatiev explored the way Irish Catholics climbed up the U.S. class ladder in How the Irish Became White. That book might have been a user’s manual for my forebears as it explains how they did in fact become white. Meanwhile, JFK publicly vowed that the USA would be first on the moon. And it was. JFK’s key project leader was Wernher von Braun, who’d developed Nazi Germany’s “vengeance weapons”—the V-2 rockets.

Humans had entered some new phase, some kind of hyper-self-domestication. Trains weren’t built so much to move ordinary people as to deliver freight and luxury goods. Apollo 11 got resources that could have funded vital social networks. Gil Scott-Heron called it. Making a nation (for some) Number One eclipsed real values.

But getting back to the shamrocks and beer…Now comes Saint Patrick’s Day in all its whiskey-soaked and dollar-shop green glory.

A Star Performance

Patrick—the bishop Patricius—claimed credit for converting Ireland to Catholicism in the 5th century:

“Never before did they know of God except to serve idols and unclean things. But now, they have become the people of the Lord, and are called children of God. The sons and daughters of the leaders of the Irish are seen to be monks and virgins of Christ!”

Now, if the Anthropocene Awards are ever produced for star-quality performances, nominate Patrick. Why bother to learn from others when you can stamp out their knowledge instead? And this was superhero-level stamping-out. Unless, more likely, Patrick is just a diversion, superimposed on history to blot out the druidic take on the universe.

The Collins Dictionary traced the root meaning of the word druid to the term oak-wise. We need more oak wisdom.

But a 5th-century Roman Catholic “patron saint of Ireland” had no use for it. And in a later century—Sinéad told us all about this, too—the British pope Adrian IV handed Ireland to England, setting the stage for the British to eventually starve the Irish people and ban the Irish language, forcing them to leave, die, or live with no memory of their cultural story.

Of course, the Anthropocene epoch is riddled with crimes against humanity; who was Patrick, but one of history’s common tormentors? Patrick, like any other conqueror, could have championed a different route, guided by the connections humans knew before the times of nations and borders, before we authorized some to routinely confine and control others.

Point of Contention

Naomi Klein, a few years ago, objected to the term Anthropocene:

“Diagnoses like this erase the very existence of human systems that organised life differently: systems that insist that humans must think seven generations in the future; must be not only good citizens but also good ancestors; must take no more than they need and give back to the land in order to protect and augment the cycles of regeneration.”

So, in essence, Klein has a "Not all humans…" take. There's an idea that indigenous communities have no connection to human-driven extinctions or geological crises. Is that so? Indigenous humans set out to domesticate living communities more than 10,000 years ago.

Now, a quest to declare the Anthropocene an official geological epoch has stalled as experts debate just how far back they’ll pin the starting point. The official working group focuses on measurable, physical evidence of human-caused changes—microplastics, coal, pesticides—and situates the start of the Anthropocene in 1952, pointing to the global plutonium fallout from nuclear weapon testing.

Wait, though. We were deep into the Anthropocene by the 1950s. I Love Lucy was already on in 1952. It was the year Hasbro unveiled Mr. Potato Head, that breakthrough use of plastic which turned children into TV advertising consumers. The first patent for a bar code product was issued that year. In 1952, according to a study guide from the Michigan Farm Bureau: “The first Herringbone parlor is used. This helped farmers move a row of cows in together for milking in one clean space.”

Cows, you might remember, are the descendants of some of Earth’s most formidable animals—the aurochs. Living in their natural habitat, carrying out their evolution on their terms, was their birthright. But we humans developed breeding technologies to make them smaller, turn them into our underlings, fence them in, kill them, eat their flesh and drink what we could pull from their teats. By the mid-1600s, we had killed off the last of their free-living ancestors.

We could say humans entered our current, late stage of hyper-self-domestication by the time petkeeping became popular, back in the Elizabethan era. The Anthropocene was fully fledged much earlier. Never mind. As my friend Patricia Fairey emailed, “At the current rate the Anthropocene won’t last long.”

And now, the vernal equinox approaches. Let’s turn off our computers, go out to the oaks, and welcome it.


Lee Hall holds an LL.M. in environmental law with a focus on climate change, and has taught law as an adjunct at Rutgers–Newark and at Widener–Delaware Law. Lee is an author, public speaker, and creator of the Studio for the Art of Animal Liberation on Patreon.

Monday, March 11, 2024

Spring equinox 2024: When it is and why it's also called the vernal equinox

Tiffany Acosta
Arizona Republic


Spring is blooming and with it comes the spring equinox. This celestial event occurs annually, marking the moment when the Earth's axis is neither tilted away from nor toward the sun, resulting in nearly equal lengths of day and night across the globe.

This phenomenon symbolizes the transition from winter to spring in the Northern Hemisphere and from summer to autumn in the Southern Hemisphere.

Beyond its astronomical significance, the spring equinox holds cultural, spiritual and metaphorical importance for many people worldwide. Throughout history, cultures have marked this occasion with festivals and ceremonies.

Here is everything you need to know about the spring equinox.

When is the spring equinox 2024?

The spring equinox officially starts at 8:06 p.m. Arizona time on Tuesday, March 19.

What is the difference between spring equinox and vernal equinox?

According to NASA, the terms "spring equinox" and "vernal equinox" refer to the same astronomical event and are used interchangeably. Both terms describe the moment when the sun crosses the celestial equator, moving from south to north.

Why is it called vernal equinox?

The term "vernal equinox" comes from Latin: "vernal" derives from ver, meaning spring, and "equinox" denotes the equal length of day and night. "Vernal equinox" emphasizes the seasonal aspect, while "spring equinox" is the more generic term for the equinox that occurs in springtime.

Is spring equinox always March 21?

No. The spring equinox does not always occur on March 21. While March 21 is often cited as the date of the spring equinox, it can fall on March 19, 20 or 21, depending on the year and time zone, according to Almanac.com. This variation is due to the complexities of Earth's orbit around the sun and the adjustments made in the calendar system to account for these movements.

What happens at the spring equinox?

The spring equinox marks the moment when the sun crosses the celestial equator, heading northward. On this occasion, day and night are approximately of equal duration all over the Earth, according to the National Weather Service.

The spring equinox is considered the beginning of spring in the Northern Hemisphere. Cultures around the world have celebrated this event for centuries through various rituals, festivals and traditions, often focusing on themes of fertility, growth and the balance between light and dark.

Will spring come early 2024?

Sorry, Punxsutawney Phil, but predicting whether spring will come early in a specific year depends on numerous factors such as weather patterns, atmospheric conditions and regional climate dynamics.

While the spring equinox occurs at a fixed point in time each year, the arrival of warmer temperatures, the blooming of flowers and other signs of spring can vary.


Some years may experience earlier spring due to warmer weather patterns or climate variability, while others may see colder temperatures lingering longer.

The spring equinox typically falls on March 20 or 21, but in a leap year like 2024, when February has an extra day, the equinox can occur a bit earlier; in 2024 it falls on March 19.

What are the 4 equinox dates?

Here are the 2024 equinox and solstice dates, according to the National Weather Service:
Spring (vernal) equinox: March 19, 2024, at 8:06 p.m.
Summer solstice: June 20, 2024, at 2:51 p.m.
Autumn equinox: Sept. 22, 2024, at 6:43 a.m.
Winter solstice: Dec. 21, 2024, at 2:20 a.m.

All times are Arizona time.

What does the spring equinox symbolize?

The spring equinox symbolizes renewal and rejuvenation, the transition from darkness to light as nature emerges from the dormancy of winter.

Many cultures observe the spring equinox with festivals and rituals centered around fertility, abundance and the renewal of life, according to Almanac.com.

Ancient monuments such as the Sphinx in Egypt and Angkor Wat in Cambodia align with the equinox, showcasing humanity's historical reverence for this celestial event.

The spring equinox is also regarded as a time for balance, harmony and personal growth.




Why is it called equinox?

The term "equinox" comes from the Latin words "aequus," meaning equal, and "nox," meaning night. It is so called because, during an equinox, day and night are approximately equal in length.

It's a moment of balance and symmetry in the Earth's orbit around the sun, symbolizing the cyclical nature of time and the changing of seasons.

Wednesday, February 28, 2024


Leap of imagination: how February 29 reminds us of our mysterious relationship with time and space

THE CONVERSATION
Published: February 27, 2024 

If you find it intriguing that February 28 will be followed this week by February 29, rather than March 1 as it usually is, spare a thought for those alive in 1582. Back then, Thursday October 4 was followed by Friday October 15.

Ten whole days were snatched from the present when Pope Gregory XIII issued a papal bull to “restore” the calendar from discrepancies that had crept into the Julian calendar, introduced by Julius Caesar in 45 BCE.

The new Gregorian calendar returned the northern hemisphere’s vernal equinox to its “proper” place, around March 21. (The equinox is when the Earth’s axis is tilted neither toward nor away from the sun, and is used to determine the date of Easter.)

The Julian calendar had observed a leap year every four years, giving an average year of 365.25 days – about 11 minutes longer than the actual solar year – so over the centuries the calendar drifted out of alignment with celestial events and the astronomical seasons.


In the Gregorian calendar, leap days are added only to years that are multiples of four – like 2024 – with an exception for century years: those evenly divisible by 100 but not by 400, such as 1700, skip the leap day, while those divisible by 400, such as 2000, keep it.
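That rule translates directly into a few lines of code. Here is a minimal Python sketch of the test just described (Python's standard calendar.isleap implements the same logic):

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: every multiple of 4 is a leap year,
    except century years that are not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Years mentioned above:
assert is_leap_year(2024)       # multiple of 4
assert not is_leap_year(1700)   # divisible by 100 but not 400
assert is_leap_year(2000)       # divisible by 400
```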

Simply put, leap days exist because it doesn’t take a neat 365 days for Earth to orbit the Sun. It takes 365.2422 days. Tracking the movement of celestial objects through space in an orderly pattern doesn’t quite work, which is why we have February – time’s great mop.
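As a rough check on that figure, the Gregorian pattern of leap days averages out to 365.2425 days per calendar year, only about 0.0003 days longer than the 365.2422 quoted above – a small Python sketch:

```python
# Average Gregorian year: 365 days, plus a leap day every 4 years,
# minus one every 100 years, plus one back every 400 years.
gregorian_year = 365 + 1/4 - 1/100 + 1/400   # 365.2425 days
tropical_year = 365.2422                      # figure quoted above

drift = gregorian_year - tropical_year        # ~0.0003 days per year
print(f"Average Gregorian year: {gregorian_year} days")
print(f"Residual drift: {drift:.4f} days/year, about 1 day every {1/drift:,.0f} years")
```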

Father Time: statue of Pope Gregory XIII in Bologna, Italy. Getty Images

Time and space

This is just part of the history of how February – the shortest month, and originally the last month in the Roman calendar – came to have the job of absorbing those inconsistencies in the temporal calculations of the world’s most commonly used calendar.

There is plenty of science, maths and astrophysics explaining the relationship between time and the planet we live on. But I like to think leap years and days offer something even more interesting to consider: why do we have calendars anyway?

And what have they got to do with how we understand the wonder and strangeness of our existence in the universe? Because calendars tell a story, not just about time, but also about space.

Our reckoning of time on Earth is through our spatial relationship to the Sun, Moon and stars. Time, and its place in our lives, sits somewhere between the scientific, the celestial and the spiritual.

Read more: Why does a leap year have 366 days?

It is notoriously slippery, subjective and experiential. It is also marked, tracked and determined in myriad ways across different cultures, from tropical to solar to lunar calendars.

It is the Sun that measures a day and gives us our first reference point for understanding time. But it is the Moon, as a major celestial body, that extends our perception of time. By stretching a span of one day into something longer, it offers us a chance for philosophical reflection.

The Sun (or its effect at least) is either present or not present. The Moon, however, goes through phases of transformation. It appears and disappears, changing shape and hinting that one night is not exactly like the one before or after.

The Moon also has a distinct rhythm that can be tracked and understood as a pattern, giving us another sense of duration. Time is just that – overlapping durations: instants, seconds, minutes, hours, days, weeks, months, years, decades, lifetimes, centuries, ages.

Rhythm of the night: the Moon is central to our perception of time passing. Getty Images


The elusive Moon


It is almost impossible to imagine how time might feel in the absence of all the tools and gadgets we use to track, control and corral it. But it’s also hard to know what we might do in the absence of time as a unit of productivity – a measurable, dispensable resource.

The closest we might come is simply to imagine what life might feel like in the absence of the Moon. Each day would rise and fall, in a rhythm of its own, but without visible reference to anything else. Just endless shifts from light to dark.

Nights would be almost completely dark without the light of the Moon. Only stars at a much further distance would puncture the inky sky. The world around us would change – trees would grow, mammals would age and die, land masses would shift and change – but all would happen in an endless cycle of sunrise to sunset.

Read more: Scientists are hoping to redefine the second – here's why

The light from the Sun takes eight minutes to reach Earth, so the sunlight we see is always eight minutes in the past.

I remember sitting outside when I first learned this, and wondering what the temporal delay might be between me and other objects: a plum tree, trees at the end of the street, hills in the distance, light on the horizon when looking out over the ocean, stars in the night sky.

Moonlight, for reference, takes about 1.3 seconds to get to Earth. Light always travels at the same speed; it is entirely constant. The difference in how long sunlight and moonlight take to reach Earth comes down to the space in between.
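Those two delays are easy to verify with a quick calculation. The distances in the sketch below are round average figures assumed for illustration; they are not given in the article:

```python
# Light-travel times from the Sun and the Moon, using average distances
# (assumed round figures, not values from the article).
SPEED_OF_LIGHT_KM_S = 299_792.458   # speed of light in a vacuum
SUN_DISTANCE_KM = 149_600_000       # ~1 astronomical unit
MOON_DISTANCE_KM = 384_400          # average Earth–Moon distance

sun_delay_s = SUN_DISTANCE_KM / SPEED_OF_LIGHT_KM_S    # ~499 seconds
moon_delay_s = MOON_DISTANCE_KM / SPEED_OF_LIGHT_KM_S  # ~1.28 seconds

print(f"Sunlight is about {sun_delay_s / 60:.1f} minutes old when it reaches us.")
print(f"Moonlight is about {moon_delay_s:.2f} seconds old.")
```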

Time, on the other hand, is anything but constant. There are countless ways we characterise it. The mere fact we have so many calendars and ways of describing perceptual time hints at our inability to pin it down.

Calendars give us the impression we can, and have, made time predictable and understandable. Leap years, days and seconds serve as a periodic reminder that we haven’t.

Author
Emily O'Hara
Senior Lecturer, Spatial Design + Temporary Practices, Auckland University of Technology