
Monday, June 17, 2024

UK

2024 manifesto versus 1997: ‘There are big similarities, but big differences’

Photo: @Keir_Starmer

Since the first polls of the official election campaign came out earlier this month, there has been talk of Labour repeating its 1997 landslide.

And now that we have Labour’s 2024 manifesto, further similarities are in evidence. Like Tony Blair’s 1997 New Labour manifesto, Keir Starmer’s document makes education and health the central policy priorities. Both manifestos promise to be tough on crime, abolish hereditary peers in the House of Lords and devolve more powers to the regions.

There are also some important differences. Chief among these is that 2024 Labour is promising less spending than 1997 Labour, and yet more state intervention. This divergence marks a significant development in the party’s approach to economic growth, and a welcome departure from the market-led, neoliberal consensus on growth shared by the Conservatives and New Labour.

Both manifestos promise to improve the quality of education for school children, expand pre-school learning, expand lifelong education, improve pupil-teacher ratios and reform tertiary education. However, Blair promised, in nominal terms, twice as much spending on education in 1997 as Starmer is promising in 2024 (£3 billion against £1.5bn).

Both manifestos promise to invest in the NHS and to cut waiting lists. Both manifestos promise to introduce a living wage. The minimum wage was a Labour electoral promise in 1997, and the party introduced it in 1999. In 2024, Labour is promising to legislate for a minimum wage that is a real living wage.

Both manifestos state that Labour are not big spenders, but wise spenders. In both manifestos, the party is very careful to distance itself from its “tax and spend” image.

In both manifestos, the party promises not to raise income tax. As in 1997, the party defends this choice on the grounds that average working families already face a high tax burden. Here, one might also find a similarity between 1997 and 2024 in what the manifestos do not explicitly say.

In 1997, the Labour manifesto attacked the Conservatives for cutting capital gains tax, without making a specific pledge in either direction. In 2024 Starmer and his future chancellor of the exchequer haven’t promised they won’t raise capital gains tax.

Spotting the change

So, in many ways, Starmer has taken Labour back to Tony Blair’s third way social democracy. This is not surprising as he, like his predecessor, is trying to build the winning middle and upper middle class coalition that will bring the party to power.

But I believe the 2024 manifesto does actually contain some radical policy proposals. Blair and New Labour were very much tied to the neoliberal dogma of free markets, where economic growth is primarily driven by the private sector. Housebuilders and investors were supported with predictable and favourable tax policies, while the government helped them further with its own investments in human capital (education and health).

Starmer’s Labour party goes further. The 2024 Labour manifesto promises to create Great British Energy, a state-owned energy company that will invest in green energy. This is a significant departure from providing incentives to private companies; it is a recognition that the state has a significant, independent role to play in energy transition.

It also promises to create a national wealth fund that will invest in public infrastructure, such as ports and hydrogen technology. Once more, this is a considerably more statist approach to public investment than we have seen in the UK. Both these policy promises depart significantly from the 1997 New Labour manifesto and economic growth plan. They are bolder at endorsing state-led growth initiatives.

Why is it here that divergence with New Labour becomes apparent? There are probably a few reasons, including the failure of the private sector to invest in infrastructure and increase productivity.

But the 2024 Labour manifesto should primarily be understood as representing Labour’s electoral coalition – of working and middle class voters. Yet, unlike in 1997, it is not a coalition solely bound by cheap credit and ever-rising asset and property prices, but by a need for the state to intervene to bring back growth via a centrally planned industrial strategy.

Since the early 1980s and Thatcher’s right to buy scheme, Conservative and Labour governments alike have deregulated the banking and financial sector, making mortgages cheaper and more accessible. This credit-led house price inflation benefits landlords and the better-off middle class and widens inequality.

The unaffordability of housing feeds political polarisation, making New Labour’s 1997 coalition hard to repeat. Starmer and Labour are being called upon to offer a new working and middle class coalition that cannot be driven by consumption. It can only be driven by investment, higher skills and higher wages.

It is the ultimate case of a “supply-side” social democratic strategy that aims to reconcile two things – the demand for higher wages and quality of life among the working and middle classes and the fiscal frugality demanded by capital holders and higher income earners.

As political scientist Carles Boix argued in his seminal work, this coalition is best forged through wise investment in human and physical capital and macroeconomic stability. The key questions for Starmer at this stage are whether this coalition is a lasting one, and whether his team can achieve the much sought-after economic growth that will bring this plan to realisation.

This article is republished from The Conversation under a Creative Commons license. Read the original article on the site here.


‘A troubled history: Leaders and their manifestos, from MacDonald to Wilson’



Steven Fielding 
 15th June, 2024
© Allan Warren/CC BY-SA 3.0

“Labour policy is directed to the creation of a humane and civilised society.” So announced Labour’s Appeal to the Nation, the party’s 1923 manifesto.

This was the one on which it fought the general election that saw Ramsay MacDonald become Prime Minister and form the first Labour government in January 1924, inaugurating what the party recently described as its Century of Achievement.

A document of little more than 1,000 words, it was very different to Labour’s 23,000-word manifesto, which was launched this week across all media platforms. The 1923 manifesto was a wonderfully vague document with hand-waving references to Labour’s commitment to the scientific organisation of industry, the abolition of slum housing and equality between men and women.

It even concluded with an appeal to voters ‘to oppose the squalid materialism that dominates the world today’. If this suited the windy rhetoric of Labour’s leader, the manifesto still promised the party would nationalise the coal, rail and electricity industries as well as impose a wealth tax on fortunes in excess of £5,000 (the equivalent of £400,000 today).

MacDonald signed off on these commitments, which pleased many in his party, because he expected to see Stanley Baldwin’s Conservatives returned to government.

MacDonald saw the manifesto like the Greens do today

Just like the Green Party leadership today, he saw the election as a propaganda opportunity to build up the party vote and win a few more seats. Just like everybody else, MacDonald was shocked to find himself, thanks to the vagaries of three-party competition under first past the post, forming a minority administration.

The alacrity with which MacDonald set aside the most eye-catching policies in the manifesto was criticised by the Labour left, some of whom urged him to present it in full to the Commons and dare Liberal and Conservative MPs to vote it down.

That, they inevitably would have done, after which the left wanted MacDonald to call another election, thinking it would mobilise more support for their idea of socialism.

READ MORE: ‘No surprises, but fear not: Labour manifesto is the start, not the end’

Instead, hoping to demonstrate they could be trusted to hold the reins of government, MacDonald and his Cabinet followed a more cautious course, trying to win support in the Commons for a variety of more modest reforms, the most notable of which was the 1924 Housing Act. This provided central government funds to subsidise the building of over half a million council homes until it was repealed in 1933.

The legislation had not even been thought of when the manifesto had been written. As a result of MacDonald’s fast-and-loose attitude to the manifesto, Labour increased its vote at the October 1924 election by nearly 25 per cent, although it still lost office.

Wilson had no intention of delivering manifesto wealth tax pledge

Since then, Labour leaders coming from opposition into government have had a variable and complicated relationship with their party’s manifesto. At one extreme was Clement Attlee, whose government more than lived up to the promises of Let Us Face the Future in 1945 – although much of that programme had already been put into practice during the Second World War.

At the other end of the spectrum was Harold Wilson. Labour’s February 1974 manifesto promised to introduce a wealth tax as part of ‘a fundamental and irreversible shift in the balance of power and wealth in favour of working people and their families’. This Wilson had absolutely no intention of delivering.

READ MORE: ‘Labour manifesto shows a new centrism – with the state key to driving growth’

Like MacDonald, Wilson was taken aback when he found himself unexpectedly at the head of a minority government even though Edward Heath’s Conservatives had won more votes. Even after winning a modest Commons majority in October, Wilson set his face against the platform on which he had ostensibly won power.

For this ‘betrayal’ Wilson and his successor Jim Callaghan were not forgiven by the Labour left, which took its revenge after Labour lost in 1979, blaming defeat on the failure of the Parliamentary leadership to live up to the manifesto’s socialist ambition. Labour’s manifesto this time round has been written with the intention of providing Keir Starmer with a relatively ‘serious’ and ‘fully-costed’ programme for government.

Could Labour still opt for a wealth tax?

Evoking Labour’s 1997 manifesto – and not just in how heavily it features pictures of the Labour leader – it falls short of what many in the party would like to see. But it makes concrete if modest commitments, the success of which can subsequently be measured by a sceptical electorate.

There is certainly no talk of a wealth tax this time, as part of Starmer’s attempt to avoid any hostages to fortune this side of the election that might still be exploited by the Conservatives and their many allies in the media.

READ MORE: ‘The manifesto’s not perfect, but at the launch you could feel change is coming’

In the eyes of some of the left that’s because Starmer has got in his ‘betrayal’ early by setting aside many of the ten pledges he made to win the leadership in 2020 and abandoning the commitment to spend £28 billion annually on green projects.

It is also a reaction to the much-criticised ‘over-loaded’ 2019 Corbynite manifesto, which looked incredible to many of those voters the party wants to recapture in 2024. The one benefit of this under-loaded manifesto is that what it promises it will likely do – Labour will be in deep trouble if not – and by under-promising it could over-deliver.

Who knows, maybe there will – at long last – be a wealth tax of sorts in Rachel Reeves’ first Budget?

Find out more through our wider 2024 Labour party manifesto coverage so far…

OVERVIEW:

Manifesto launch: Highlights, reaction and analysis as it happened

Saturday, May 25, 2024

What’s Really Wrong with Thames Water?



By Leonard Hyman & William Tilles - May 22, 2024


Thames Water again? We’re not in the water business. But this story has a moral, so read on. There are many reasons for the current predicament in the UK’s water utility business. This situation resembles one of those detective stories where almost everyone is a suspect. The victim, of course, is the British public suffering under boil water alerts and facing increasingly polluted beaches, lakes, and rivers. But who is to blame? The question itself is complex and difficult to answer. Each day another news article appears expressing outrage about excessive management compensation, capital misallocation, or regulatory malfeasance while concluding with a somber note about worsening pollution in some local lake or stream. Making these problems seem overly complex and difficult to resolve has a numbing effect on the public and encourages a kind of learned helplessness. This also permits politicians to avoid accountability for the mess that they or their predecessors created.

But let’s get back to Thames Water. Their obvious problem is water pollution. The equally obvious solution is a much higher level of capital spending on sewage treatment and related facilities. And that’s where the easy answers end. Why? First, because no one trusts the existing management and board, who are ultimately responsible for the dire state of corporate affairs, to properly supervise the billions in needed remedial capital expenditures. Second, no one trusts the regulator either. Ofwat has shown a profound indifference to its public responsibilities while permitting a grotesque overleveraging of its utilities. This excessive debt has financially crippled these once solidly investment-grade companies. Dealing with this excessive debt is a big part of what has to be resolved. Bankrupting the holding company, Kemble, and all related non-operating entities would be simplest, but that creates another problem. Lastly, the public seems to have lost faith in the private sector’s ability to handle this, but it is even less clear how a re-nationalization of this industry would occur. That’s why these water utility issues are difficult. We have simultaneous problems with misguided management still focused on capital extraction, wildly incompetent regulators, and a public increasingly convinced that privatization was a big mistake and that renationalizing these industries is the best outcome. And all of these issues are linked.

Of all the institutional failures here, the apparent abdication of Ofwat from even rudimentary capital structure regulation is to us the most baffling. As we’ve written before, the water industry is one of the safest and least risky types of utility. Everyone needs their product and demand is predictable. But from an analytical perspective, low business risk implies the ability to take on more financial leverage and vice versa. High-risk businesses should have very little leverage, if any. Said differently, business risk and financial risk are inverse correlates. The Thames Water management seems to have taken this idea of a low-risk water utility business and pushed it to its logical, leverage-able limit. And now they, and the British public, are stuck.

Lastly, the problem regulators have is one of public trust, or the lack of it. Why should people in the UK trust them to properly supervise a large remedial capital program for improved water quality when they have basically ignored all semblance of regulatory propriety in the past? To us that’s the true appeal of renationalization. It gets around a dysfunctional British regulatory body.

Here’s the lesson for us. Years of disregard for the public service function, allowed by a permissive regulator, finally reached the point where even politicians took notice and denounced the market-embracing setup they had previously championed. Who loses? Probably the utility’s owners. Now imagine, across the pond, American electricity customers without service for days in sweltering heat or severe cold, again and again, while regulators sit by helplessly. Can the public trust the regulators or the management to correct the situation? Maybe the key lesson from the Thatcher era of utility privatization is simply that private enterprises can’t provide public services. Or at least not without vigilant regulators. These deregulated utilities keep malfunctioning. When does an enterprising politician turn this into an election issue? So far, not yet. But the summer has not yet begun. Maybe Thames Water should be a case study at some Edison Electric Institute management retreat. Learn what not to do.

By Leonard Hyman and William Tilles for Oilprice.com

Thursday, April 04, 2024

Javier Milei’s Amputation Regime for Argentina

The country’s new president has imposed a set of brutal austerity measures as part of a so-called “chainsaw plan.” The carnage is already mounting.

JACOB SUGARMAN
THE NATION
Javier Milei, Argentina’s new president, lifts a chainsaw during an election rally on September 25, 2023, in Buenos Aires. (Photo by Tomas Cuesta / Getty Images)

BUENOS AIRES—The crowd marches languorously down Diagonal Norte toward Argentina’s presidential palace bearing a series of cardboard characters painted a metallic grey. Together, they spell out the phrase “Son 30,000”—the estimated number of people who were killed or forcibly disappeared during the US-backed dictatorship that ruled from 1976 to 1983.

It’s just after noon on March 24, the country’s Day of Memory for Truth and Justice, and hundreds of thousands have taken to the streets to commemorate the victims of the Argentine junta and declare “nunca más” (never again). But this year’s march is uniquely fraught. In November, Argentina elected economist and television personality cum politician Javier Milei—a self-styled anarcho-capitalist who openly denies the junta’s crimes.

Earlier that morning, while demonstrators flooded Plaza de Mayo outside, the Casa Rosada released a nearly 13-minute video on X, formerly Twitter, providing a “complete” accounting of the period, with testimonials accusing left-wing guerrilla groups of acts of terror. Six days before, the human rights organization HIJOS (Hijos por la Identidad y la Justicia contra el Olvido y el Silencio—“Children for Identity and Justice against Forgetting and Silence”) published a statement announcing that one of its members had been bound and sexually assaulted, and that her assailants had spray-painted the letters “VLLC” on the wall of her home. (Milei’s personal slogan is “Viva La Libertad Carajo,” or “Long Live Freedom Dammit.”)

“He’s an idiot,” said Beatriz Conde, a 73-year-old retiree from nearby Avellaneda. “I should apologize to the idiots, poor things. Milei is worse. He has no heart. He’s garbage.”

Conde noted that she’s been unable to survive on her pension, whose value has plummeted since Milei took office in December. She has also struggled to secure her prescription medication, which is in increasingly short supply.

“I’m here because the country is collapsing, and this guy is going to be the death of its elderly,” she continued.

Francisco Manterola, a 32-year-old history teacher from Buenos Aires, was similarly concerned that his country was careening toward catastrophe. Manterola, who had dressed his 6-month-old daughter in a white handkerchief like those worn by the Mothers of Plaza de Mayo, said that he was afraid of losing his job as an archivist at the Ministry of the Interior—one of four he’s currently working to make ends meet.

“We hope it doesn’t happen, but they’re firing people indiscriminately,” he said. “We’re nothing more than numbers on an Excel spreadsheet to this government.”

In late February, Milei appeared at the Conservative Political Action Conference in National Harbor, Maryland, to take his victory lap after defeating former economy minister Sergio Massa in Argentina’s runoff presidential election. There, he gave a bear hug to a diffident Donald Trump and delivered a jeremiad against the perils of socialism to a half-empty auditorium at the Gaylord National Resort & Convention Center. Prior to that, he made a pilgrimage to Israel to express his support for Prime Minister Benjamin Netanyahu’s campaign in Gaza and detail his own plans to move Argentina’s embassy to West Jerusalem—an announcement he has since walked back.

In Argentina, however, things have grown increasingly dire. A 54 percent devaluation of the peso announced days after Milei entered the Casa Rosada, coupled with a 36.6 percent inflation rate for 2024 through February, has priced essential goods out of the reach of working families while driving thousands more into destitution. A recent report from the Argentine Catholic University’s Social Debt Observatory found that 57 percent of the country is now living below the poverty line—the country’s highest rate since 2004.

Although that figure cannot be laid at the feet of an administration that has only been in power since December, it’s clear that Milei’s so-called “chainsaw plan” is already creating carnage. In addition to removing subsidies for services like transportation, gas, and electricity, Milei has deregulated broad swaths of the Argentine economy via an 86-page executive order, lifting basic controls on supermarket prices and creating limitations on severance pay and maternity leave as part of a frontal assault on workers’ rights. (The Senate subsequently voted down a massive “omnibus bill” that would have endowed Milei with sweeping legislative authority.)

Despite the brutality of these measures, however, Milei has often appeared to delight in the outrage he’s engendered if not the harm he has inflicted. Speaking to a group of school children at his alma mater in the middle-class neighborhood of Villa Devoto in February, he joked about the size of a donkey’s genitalia while railing against public education and legalized abortion. Then, at this year’s International Economic Forum of the Americas in Buenos Aires in March, the Murray Rothbard acolyte made a crack about bureaucrats “fisting” the general public during a speech in which he boasted that his government had frozen 200,000 social welfare programs and fired 50,000 state workers, with plans to terminate the contracts of an additional 70,000.

Argentina doesn’t have a president so much as a troll in chief—a pickup-truck decal of the comic-strip kid Calvin peeing made flesh. (Indeed, Milei has threatened to “piss” on provincial governors who refuse to back his reforms.) And whether or not he ultimately succeeds at enacting his legislative agenda, an entire nation remains at his mercy.

“The government believes that it’s doing the dirty work now and that its austerity program will bear fruit later,” said Martin Burgos, an economist at the Latin American Faculty of Social Sciences. “The question is whether the path that it has chosen is going to produce positive results, and there’s reason for skepticism. Monthly inflation hit 20.6 percent in January, and while it dipped to 13.2 percent in February, these numbers are ultimately unsustainable.”

The Milei administration has been quick to tout its success in achieving Argentina’s first monthly budget surpluses since 2012, but this “victory” has come at a steep human cost. A February report from the Argentine Institute of Fiscal Analysis found that roughly one-third of the country’s 2.7-trillion-peso (approximately $3.1 billion) reduction in public spending came from cuts in the form of government pension payments failing to keep pace with inflation. The reduction itself was the largest in 30 years.

“You have to look at how Argentina got here in the first place,” said Mark Weisbrot, codirector of the Center for Economic Policy Research. “In June 2018, Argentina took out a $57 billion loan from the International Monetary Fund, the IMF’s largest loan ever. In accordance with the agreement, the Mauricio Macri administration cut spending and raised interest rates in order to restore market confidence. Instead, these policies pushed the country into a recession and the borrowed money fled the country. The government then doubled down with more fiscal and monetary tightening and sank the economy even further, triggering an inflation-depreciation spiral.”

“Milei has so far made it worse,” Weisbrot continued. “Annual inflation has been more than 700 percent during his first three months, and the peso has fallen by more than half.”
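To see how a “more than 700 percent” annual figure can follow from monthly rates in the teens and twenties, the short sketch below compounds the monthly figures cited above (20.6 percent for January and 13.2 percent for February) with an assumed December 2023 rate of roughly 25.5 percent, then projects the result over twelve months. This is illustrative arithmetic only, not part of the article’s reporting, and the December figure is an assumption added for context.

```python
# Illustrative arithmetic only (not from the article): how monthly inflation
# rates compound into an annualised figure of "more than 700 percent".
# January (20.6%) and February (13.2%) are cited above; the December 2023
# rate of roughly 25.5% is an assumption added here for context.
monthly_rates = [0.255, 0.206, 0.132]  # Dec 2023 (assumed), Jan 2024, Feb 2024

three_month_factor = 1.0
for rate in monthly_rates:
    three_month_factor *= 1 + rate  # compound month over month

annualised_factor = three_month_factor ** (12 / 3)  # project three months over a year

print(f"Three-month inflation: {three_month_factor - 1:.1%}")  # roughly 71%
print(f"Annualised inflation:  {annualised_factor - 1:.1%}")   # roughly 760%
```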

Although the official value of the peso has dropped precipitously, the dollar has remained comparatively weak in the country’s black market, at least in part because so many Argentines have been converting their savings to pay for these exorbitant new costs. Burgos warns that if inflation continues apace, and Argentina grows more expensive in dollars as well, it could see a decrease in exports, which would further deplete reserves.

“A new devaluation would, in turn, generate more inflation, and the problems that we’ve been having would only be amplified, because the speed of these inflation increases will accelerate,” he added.

For the Argentine middle class, the crisis has already arrived. Cecilia Fanti, who opened a 65-square-foot bookstore in 2017 and now runs two successful outlets in Buenos Aires, is fearful about the future of her business if the recession drags on for more than a year. Sales at Céspedes Libros were down 32 percent in February amid rampant inflation and increased production costs.

“Books are expensive,” Fanti told The Nation. “This is true all over the world, but an average book now costs 20,000 pesos (around $22), which is approximately 10 percent of a minimum monthly salary in Argentina. Who can afford that?”

Exacerbating matters, the Milei administration has already attempted to repeal a law protecting independent bookstores from supermarkets and other large retailers by ensuring that books are sold at the same price everywhere. The demise of the “omnibus bill” has provided these shops with a reprieve of sorts, but there’s no guarantee Milei won’t strike the law down via executive order.

“This is a government of brutes,” said Fanti. “They don’t understand culture because they have no interest in culture. To them, it’s the stuff of communists and leftists. They want to deregulate books like they would any other product, for purely dogmatic reasons. But beyond questions of ideology, these people can’t comprehend that film and publishing are functioning industries that generate jobs and money.”

Last month, the Milei administration suspended operational funding for the National Institute of Cinema and Audiovisual Arts, citing its commitment to a “zero-deficit budget.” The reasoning behind the decision, which is expected to scuttle both domestic film productions and international film festivals, drew criticism from no less than France’s prestigious film magazine Cahiers du Cinéma. Milei has similarly pledged to shut down the national wire service Télam—a move that would limit the public’s access to news around the country and put upwards of 700 journalists out of work.

One such journalist is Silvina Molina, the 56-year-old editor of the news agency’s Society section and its former Gender editor. Like the rest of her colleagues, she learned of Milei’s plans when he announced them during an address to Congress on March 1—mere hours after the government’s director of public media oversight, Diego Chaher, had assured staff that the agency’s finances were in order. That Monday, Molina was locked out of her office with a police unit guarding the entrance. Now, after 13 years at Télam, she is uncertain not only about her own future but how she’ll be able to care for her mother, who suffers from Alzheimer’s. In March, Molina learned that the monthly cost of her mother’s care would be increasing by 150,000 pesos—or 43 percent.

“If you talk to my colleagues, you’ll hear plenty of stories like this,” she told The Nation. “The uncertainty is really difficult. Not knowing whether you have a job or whether you’ll be able to find work. It’s very distressing.”

Uncertainty and distress are perhaps the only constants for the residents of Villa 31, one of Buenos Aires’s most notorious slums. On the Saturday before Argentina’s national day of remembrance, Viviana Rodriguez’s soup kitchen is teeming with children between the ages of 5 and 10, with a few adults lingering outside the entrance. A cauldron of chicken and potato stew is bubbling on a gas stove in a narrow kitchen, and the sound of cumbia emanates softly from the space’s big-screen television. The kids occupy themselves with felt-tip markers and a few bowls of cheese puffs while a rooster crows outside on a street named for the former first lady and populist icon Eva Perón.

Rodriguez, 53, has lived in Villa 31 for 35 years and operated the soup kitchen since 2018, offering a range of services from rudimentary healthcare to support for the victims of gender-based violence. But since Milei assumed office, she has been forced to cut the kitchen’s days of operation from three to one. As she tells it, there’s simply not enough food to go around.

“We’re not getting the goods we need,” she said. “We used to get assistance from the federal government. Now there’s nothing at all.”

Rodriguez, who is part of the social organization Movimiento Libres del Sur (Freemen of the South Movement), observed that while there were other soup kitchens in the area open during the week, each was completely overrun. Meanwhile, her calls to government officials had gone unanswered.

“What are we going to do?” she asked, surveying the intimate scene in front of her. “To tell you the truth, we’re in a real bad way.”

Just over 100 days into his first term, the pain from Milei’s brand of governance is only beginning.

EU: AI Act fails to set gold standard for human rights

POSTED ON APRIL 04, 2024

As EU institutions are expected to conclusively adopt the EU Artificial Intelligence Act in April 2024, ARTICLE 19 joins those voicing criticism about how the Act fails to set a gold standard for human rights protection. Over the last three years of negotiation, together with the coalition of digital rights organisations, we called on lawmakers to ensure that AI works for people and that regulation prioritises the protection of fundamental human rights. We believe that in several areas, the AI Act is a missed opportunity to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence and many other rights and freedoms are protected when it comes to artificial intelligence.

For the last three years, as part of the European Digital Rights (EDRi) coalition, ARTICLE 19 has demanded that artificial intelligence (AI) works for people and that its regulation prioritises the protection of fundamental human rights. We have put forward our collective vision for an approach where ‘human-centric’ is not just a buzzword, where people on the move are treated with dignity, and where lawmakers are bold enough to draw red lines against unacceptable uses of AI systems.

Following a gruelling negotiation process, EU institutions are expected to conclusively adopt the final AI Act in April 2024. But while they celebrate, we take a much more critical stance. We want to highlight the many missed opportunities to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence, and many other rights and freedoms are protected when it comes to AI. Here’s our round-up of how the final law fares against our collective demands.

This analysis is based on the latest available version of the AI Act text, dated 6 March 2024. There may still be small changes made before the law’s final adoption.
First, we called on EU lawmakers to empower affected people by upholding a framework of accountability, transparency, accessibility, and redress. How did they do?

Some accessibility barriers have been broken down, but more needs to be done: Article 16 (ja) of the AI Act fulfills our call for accessibility by stating that high-risk AI systems must comply with accessibility requirements. However, we still believe that this should be extended to apply to low and medium-risk AI systems as well, in order to ensure that the needs of people with disabilities are central in the development of all AI systems which could impact them.

More transparency about certain AI deployments, but big loopholes for the private sector and security agencies: The AI Act establishes a publicly-accessible EU database to provide transparency about AI systems that pose higher risks to people’s rights or safety. While originally only providers of high-risk AI systems were subject to transparency requirements, we successfully persuaded decision-makers that deployers of AI systems – those who actually use the system – shall also be subject to transparency obligations.
Providers and deployers who place on the market or use AI systems in high-risk areas – such as employment and education – as designated by Annex III will be subject to transparency obligations. Providers will be required to register their high-risk system in the database and to enter information about it such as the description of its intended purpose, a concise description of the information used by the system, and its operating logic. Deployers of high-risk AI systems who are public authorities – or those acting on their behalf – will be obliged to register the use of the system. They will be required to enter information in the database such as a summary of the findings of a fundamental rights impact assessment (FRIA) and a summary of the data protection impact assessment. However, deployers of high-risk AI systems in the private sector will not be required to register the use of high-risk systems – another critical issue;
The major shortcoming of the EU database is that negotiators agreed on a carve-out for law enforcement, migration, asylum, and border control authorities. Providers and deployers of high-risk systems in these areas will be required to register only a limited amount of information, and only in a non-publicly accessible section of the database. Certain important pieces of information, such as the training data used, will not be disclosed at all. This will prevent affected people, civil society, journalists, watchdog organisations and academics from exercising public scrutiny in these high-stakes areas, which are prone to fundamental rights violations, and from holding those authorities accountable.

Fundamental rights impact assessments are included, but concerns remain about how meaningful they will be: We successfully convinced EU institutions of the need for fundamental rights impact assessments (FRIAs). However, based on the final AI Act text, we doubt whether they will actually prevent human rights violations and serve as a meaningful tool of accountability. We see three primary shortcomings:
Lack of meaningful assessment and the obligation to prevent negative impacts: while the new rules require deployers of high-risk AI systems to list risks of harm to people, there is no explicit obligation to assess whether these risks are acceptable in light of fundamental rights law, nor to prevent them wherever possible. Regrettably, deployers only have to specify which measures will be taken once risks materialise, likely once the harm has already been done;
No mandatory stakeholder engagement: the requirement to engage external stakeholders, including civil society and people affected by AI, in the assessment process was also removed from the article at the last stages of negotiations. This means that civil society organisations will not have a direct, legally-binding way to contribute to impact assessments;
Transparency exceptions for law enforcement and migration authorities: while in principle, deployers of high-risk AI systems will have to publish the summary of the results of FRIAs, this will not be the case for law enforcement and migration authorities. The public will not even have access to mere information that an authority uses a high-risk AI system in the first place. Instead, all information related to the use of AI in law enforcement and migration will only be included in a non-public database, severely limiting constructive public oversight and scrutiny. This is a very concerning development as, arguably, the risks to human rights, civic space and rule of law are the most severe in these two areas. Moreover, while deployers are obliged to notify the relevant market surveillance authority of the outcome of their FRIA, there is an exemption to comply with this obligation to notify for ‘exceptional reasons of public security’. This excuse is often misused as a justification to carry on disproportionate policing and border management activities.

When it comes to complaints and redress, there are some remedies, but no clear recognition of the ‘affected person’: Civil society has advocated for robust rights and redress mechanisms for individuals and groups affected by high-risk AI systems. We have demanded the creation of a new section titled ‘Rights of Affected Persons’, which would delineate specific rights and remedies for individuals impacted by AI systems. However, the section has not been created; instead, we have a ‘remedies’ chapter that includes only some of our demands;
This chapter of remedies includes the right to lodge complaints with a market surveillance authority, but lacks teeth, as it remains unclear how effectively these authorities will be able to enforce compliance and hold violators accountable. Similarly, the right to an explanation of individual decision-making processes, particularly for AI systems listed as high-risk, raises questions about the practicality and accessibility of obtaining meaningful explanations from deployers. Furthermore, the effectiveness of these mechanisms in practice remains uncertain, given the absence of provisions such as the right to representation of natural persons, or the ability for public interest organisations to lodge complaints with national supervisory authorities.

The Act allows a double standard when it comes to the human rights of people outside the EU: The AI Act falls short of civil society’s demand to ensure that EU-based AI providers whose systems impact people outside of the EU are subject to the same requirements as those inside the EU. The Act does not stop EU-based companies from exporting AI systems which are banned in the EU, therefore creating a huge risk of violating the rights of people in non-EU countries by EU-made technologies that are essentially incompatible with human rights. Additionally, the Act does not require exported high-risk systems to follow the technical, transparency or other safeguards otherwise required when AI systems are intended for use within the EU, again risking the violation of rights of people outside of the EU by EU-made technologies.
Second, we urged EU lawmakers to limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities. How did they do?

The blanket exemption for national security risks undermining other rules: The AI Act and its safeguards will not apply to AI systems if they are developed or used solely for the purpose of national security, and regardless of whether this is done by a public authority or a private company. This exemption introduces a significant loophole that will automatically exempt certain AI systems from scrutiny and limit the applicability of human rights safeguards envisioned in the AI Act;
In practical terms, it would mean that governments could invoke national security to introduce biometric mass surveillance systems, without having to apply any safeguards envisioned in the AI Act, without conducting a fundamental rights impact assessment and without ensuring that the AI system meets high technical standards and does not discriminate against certain groups;
Such a broad exemption is not justified under EU treaties and goes against the established jurisprudence of the European Court of Justice. While national security can be a justified ground for exceptions from the AI Act, this has to be assessed case-by-case, in line with the EU Charter of Fundamental Rights. The adopted text, however, makes national security a largely digital rights-free zone. We are concerned about the lack of clear national-level procedures to verify if the national security threat invoked by the government is indeed legitimate and serious enough to justify the use of the system and if the system is developed and used with respect for fundamental rights. The EU has also set a worrying precedent regionally and globally; broad national security exemptions have now been introduced in the newly-adopted Council of Europe Convention on AI.

Predictive policing, live public facial recognition, biometric categorisation and emotion recognition are only partially banned, legitimising these dangerous practices: We called for comprehensive bans against any use of AI that isn’t compatible with rights and freedoms – such as proclaimed AI ‘mind reading’, biometric surveillance systems that treat us as walking bar-codes, or algorithms used to decide whether we are innocent or guilty. All of these examples are now partially banned in the AI Act, which is an important signal that the EU is prepared to draw red lines against unacceptably harmful uses of AI;
At the same time, all of these bans contain significant and disappointing loopholes, which means that they will not achieve their full potential. In some cases, these loopholes risk having the opposite effect from what a ban should: they give the signal that some forms of biometric mass surveillance and AI-fuelled discrimination are legitimate in the EU, which risks setting a dangerous global precedent;
For example, the fact that emotion recognition and biometric categorisation systems are prohibited in the workplace and in education settings, but are still allowed when used by law enforcement and migration authorities, signals the EU’s willingness to test the most abusive and intrusive surveillance systems against the most marginalised in society;
Moreover, when it comes to live public facial recognition, the Act paves the way to legalise some specific uses of these systems for the first time ever in the EU – despite our analysis showing that all public-space uses of these systems constitute an unacceptable violation of everyone’s rights and freedoms.

The serious harms of retrospective facial recognition are largely ignored: this practice is not banned at all by the AI Act. As we have explained, the use of retrospective (post) facial recognition and other biometric surveillance systems (called ‘remote biometric identification’, or ‘RBI’, in the text) is just as invasive and rights-violating as live (real-time) systems. Yet the AI Act makes a big error in claiming that the extra time for retrospective uses will mitigate possible harms;
While several lawmakers have argued that they managed to insert safeguards, our analysis is that the safeguards are not meaningful enough and could be easily circumvented by police. In one place, the purported safeguard even suggests that simply the suspicion of any crime having taken place would be enough to justify the use of a post RBI system – a lower threshold than we currently benefit from under EU data protection law.

People on the move are not afforded the same rights as everyone else, with only weak – and at times absent – rules on the use of AI at borders and in migration contexts: In its final version, the EU AI Act sets a dangerous precedent for the use of surveillance technology against migrants, people on the move and marginalised groups. The legislation develops a separate legal framework for the use of AI by migration control authorities, in order to enable the testing and the use of dangerous surveillance technologies at the EU borders and disproportionately against racialised people;
None of the bans meaningfully apply to the migration context, and the transparency obligations present ad-hoc exemptions for migration authorities, allowing them to act with impunity and far away from public scrutiny;
The list of high-risk systems fails to capture the many AI systems used in the migration context, as it excludes dangerous systems such as non-remote biometric identification systems, fingerprint scanners, or forecasting tools used to predict, interdict, and curtail migration;
Finally, AI systems used as part of EU large-scale migration databases (e.g. Eurodac, the Schengen Information System, and ETIAS) will not have to be compliant with the Regulation until 2030, which gives plenty of time to normalise the use of surveillance technology.
Third, we urged EU lawmakers to push back on Big Tech lobbying and address environmental impacts. How did they do?

The risk classification framework has become a self-regulatory exercise: Initially, all use cases included in the list of high-risk applications would have had to follow specific obligations. However, as a result of heavy industry lobbying, providers of high-risk systems will now be able to decide whether their systems are high-risk or not, as an additional ‘filter’ was added to that classification system;
Providers will still have to register sufficient documentation in the public database to explain why they don’t consider their system to be high-risk. However, this obligation will not apply when they are providing systems to law enforcement and migration authorities. This paves the way for the free and deregulated procurement of surveillance systems in the policing and border contexts.

The Act takes only a tentative first step to address environmental impacts of AI: We have serious concerns about how the exponential use of AI systems can have severe impacts on the environment, including through resource consumption, extractive mining and energy-intensive processing. Today, information on the environmental impacts of AI is a closely-guarded corporate secret. This makes it difficult to assess the environmental harms of AI and to develop political solutions to reduce carbon emissions and other negative impacts;
The first draft of the AI Act completely neglected these risks, despite civil society and researchers repeatedly calling for the energy consumption of AI systems to be made transparent. To address this problem, the AI Act now requires that providers of GPAI models that are trained with large amounts of data and consume a lot of electricity must document their energy consumption. The Commission now has the task of developing a suitable methodology for measuring the energy consumption in a comparable and verifiable way;
The AI Act also requires that standardised reporting and documentation procedures must be created to ensure the efficient use of resources by some AI systems. These procedures should help to reduce the energy and other resource consumption of high-risk AI systems during their life cycle. These standards are also intended to promote the energy-efficient development of general-purpose AI models;
These reporting standards are a crucial first step to provide basic transparency about some ecological impacts of AI, first and foremost the energy use. But they can only serve as a starting point for more comprehensive policy approaches that address all environmental harms along the AI production process, such as water and minerals. We cannot rely on self-regulation, given how fast the climate crisis is evolving.
What’s next for the AI Act?

The coming year will be decisive for the EU’s AI Act, with different EU institutions, national lawmakers and even company representatives setting standards, publishing interpretive guidelines and driving the Act’s implementation across the EU’s member countries. Some parts of the law – the prohibitions – could become operational as soon as November. It is therefore vital that civil society groups are given a seat at the table, and that this work is not done in opaque settings and behind closed doors.

We urge lawmakers around the world who are also considering bringing in horizontal rules on AI to learn from the EU’s many mistakes outlined above. A meaningful set of protections must ensure that AI rules truly work for individuals, communities, society, rule of law, and the planet.

While this long chapter of lawmaking is now coming to a close, the next chapter of implementation – and trying to get as many wins out of this Regulation as possible – is just beginning. As a group, we are drafting an implementation guide for civil society, coming later this year. We want to express our thanks to the entire AI core group, who have worked tirelessly for over three years to analyse, advocate and mobilise around the EU AI Act. In particular, we thank the work, dedication and vision of Sarah Chander, of the Equinox Racial Justice Institute, for her leadership of this group in the last three years.

TECH & RIGHTS

Packed With Loopholes: Why the AI Act Fails to Protect Civic Space and the Rule of Law

The AI Act fails to effectively protect the rule of law and civic space. ECNL, Liberties and the European Civic Forum (ECF) give their analysis of its shortcomings.


by LibertiesEU
April 04, 2024



The unaccountable and opaque use of Artificial Intelligence (AI), especially by public authorities, can undermine civic space and the rule of law. In the European Union, we have already witnessed AI-driven technologies being used to surveil activists, assess whether airline passengers pose a terrorism risk or appoint judges to court cases. The fundamental rights framework as well as rule of law standards require that robust safeguards are in place to protect people and our societies from the negative impacts of AI.

For this reason, the European Centre for Not-for-Profit Law (ECNL), Liberties and the European Civic Forum (ECF) closely monitored and contributed to the discussions on the EU’s Artificial Intelligence Act (AI Act), first proposed in 2021. From the beginning, we advocated for strong protections for fundamental rights and civic space and called on European policymakers to ensure that the AI Act is fully coherent with rule of law standards.

The European Parliament approved the AI Act on 13 March 2024, thus marking the end of a three-year-long legislative process. Yet to come are guidelines and delegated acts to clarify the often vague requirements. In this article, we take stock of the extent to which fundamental rights, civic space and the rule of law will be safeguarded and provide an analysis of key AI Act provisions.
Far from a gold standard for rights-based AI regulation

Our overall assessment is that the AI Act fails to effectively protect the rule of law and civic space, instead prioritising industry interests, security services and law enforcement bodies. While the Act requires AI developers to maintain high standards for the technical development of AI systems (e.g. in terms of documentation or data quality), measures intended to protect fundamental rights, including key civic rights and freedoms, are insufficient to prevent abuses. They are riddled with far-reaching exceptions, lowering protection standards, especially in the area of law enforcement and migration.

The AI Act was negotiated and finalised in a rush, leaving significant gaps and legal uncertainty, which the European Commission will have to clarify in the next months and years by issuing delegated acts and guidelines. Regulating emerging technology requires flexibility, but the Act leaves too much to the discretion of the Commission, secondary legislation or voluntary codes of conduct. These could easily undermine the safeguards established by the AI Act, further eroding the fundamental rights and rule of law standards in the long term.

CSOs’ contributions will be necessary for a rights-based implementation of the AI Act

The AI Act will enter into effect in stages, with full application expected in 2026. The European Commission will develop guidance and delegated acts specifying various requirements for the implementation, including guidance on the interpretation of prohibitions, as well as a template for conducting fundamental rights impact assessments. It will be crucial for civil society to actively contribute to this process with their expertise and real-life examples. In the next months, we will publish a map of key opportunities where these contributions can be made. We also call on the European Commission and other bodies responsible for the implementation and enforcement of the AI Act to proactively facilitate civil society participation and to prioritise diverse voices including those of people affected by various AI systems, especially those belonging to marginalised groups.

5 flaws of the AI Act from the perspective of civic space and the rule of law

1. Gaps and loopholes can turn prohibitions into empty declarations

2. AI companies’ self-assessment of risks jeopardises fundamental rights protections

3. Standards for fundamental rights impact assessments are weak

4. The use of AI for national security purposes will be a rights-free zone

5. Civic participation in the implementation and enforcement is not guaranteed
The AI Act limitations showcase the need for a European Civil Dialogue Agreement

The legislative process surrounding the AI Act was marred by a significant lack of civil dialogue - the obligation of the EU institutions to engage in an open, transparent, and regular process with representative associations and civil society. To date, there is no legal framework regulating the European civil dialogue, although civil society has been calling for it in various contexts. Since the announcement of the AI Act, civil society has made great efforts to coordinate horizontally to feed into the process, engaging diverse organisations at the national and European levels. In the absence of clear guidelines on how civil society input should be included ahead of the drafting of EU laws and policies, the framework proposed by the European Commission to address the widespread impact of AI technologies on society and fundamental rights was flawed. Throughout the preparatory and political stages, the process remained opaque, with limited transparency regarding decision-making and little opportunity for input from groups representing a rights-based approach, particularly in the Council and during trilogue negotiations. This absence of inclusivity raises concerns about the adopted text’s impact on society at large. It not only undermines people’s trust in the legislative process and the democratic legitimacy of the AI Act but also hampers its key objective to guarantee the safety and fundamental rights of all.

However, in contrast to public interest and fundamental rights advocacy groups, market and for-profit lobbyists and representatives of law enforcement authorities and security services had great influence in the legislative process of the AI Act. This imbalanced representation favoured commercial interests and the narrative of external security threats over the broader societal impacts of AI.

Read our analysis in full here.


Symposium on Military AI and the Law of Armed Conflict: Human-machine Interaction in the Military Domain and the Responsible AI Framework


04.04.24

[Dr Ingvild Bode is Associate Professor at the Centre for War Studies, University of Southern Denmark. She is the Principal Investigator of the European Research Council-funded project AutoNorms: Weaponised Artificial Intelligence, Norms, and Order (08/2020-07/2025) and also serves as the co-chair of the IEEE-SA Research Group on AI and Autonomy in Defence Systems.

Anna Nadibaidze is a researcher for the European Research Council-funded AutoNorms project based at the Center for War Studies, University of Southern Denmark.]


Artificial intelligence (AI) technologies are increasingly part of military processes. Militaries use AI technologies, for example, for decision support and in combat operations, including as part of weapon systems. Contrary to some previous expectations, especially popular culture depictions of ‘sentient’ humanoid machines willing to destroy humanity or ‘robot wars’ between machines, integrating AI into the military does not mean that AI technologies replace humans. Rather, military personnel interact with AI technologies, likely at an increasing frequency, as part of their day-to-day activities, which include the targeting process. Some militaries have adapted the language of human-machine teaming to describe these instances of human-machine interaction. This term can refer to humans interacting with uncrewed, (semi-)autonomous platforms or with AI-based software systems. Such developments are increasingly promoted as key trends in defence innovation. For instance, the UK Ministry of Defence considers the “effective integration of humans, AI and robotics into warfighting systems—human-machine teams” to be “at the core of future military advantage”.

At the same time, many states highlight that they intend to develop and use these technologies in a ‘responsible’ manner. The framework of Responsible AI in the military domain is growing in importance across policy and expert discourse, moving beyond the focus on autonomous weapon systems that can “select and apply force without human intervention”. Instead, this framework assumes that AI will be integrated into various military processes and interact with humans in different ways, and therefore it is imperative to find ways of doing so responsibly, for instance by ensuring understandability, reliability, and accountability.

Our contribution connects these intersecting trends by offering a preliminary examination of the extent to which the Responsible AI framework addresses challenges attached to changing human-machine interaction in the military domain. To do so, we proceed in two steps: first, we sketch the kinds of challenges raised by instances of human-machine interaction in a military context. We argue that human-machine interaction may fundamentally change the quality of human agency, understood as the ability to make choices and act, in warfare. It does so by introducing a form of distributed agency in military decision-making, including, but not limited to, the targeting process. Therefore, there is a need to examine the types of distributed agency that will emerge, or have already emerged, as computational techniques under the ‘AI’ umbrella term are increasingly integrated into military processes. Second, we consider the extent to which the emerging Responsible AI framework, as well as principles associated with it, demonstrates potential to address these challenges.

1. Human-machine Interaction and Distributed Agency

Appropriate forms of human agency and control over use-of-force decision-making are necessary on ethical, legal, and security grounds. (Western) military thinking on human-machine or human-AI teaming recognises this. Human-machine interaction involves sharing cognitive tasks with AI technologies, as their use is chiefly associated with the speedy processing of large amounts of data/information. It follows that any decision made in the context of human-machine interaction implies a combination of ‘human’ and ‘machine’ decision-making. This interplay changes how human agency is exercised. Rather than a zero-sum allocation of agency between human and machine, we are likely to encounter a form of distributed agency in military decisions that rely on human-machine interaction. Above all, distributed agency involves a blurring of the distinction between instances of ‘human’ and ‘AI’ agency.

Efforts to understand this distributed agency should, in the first place, consider the particularities of how ‘human’ and ‘AI’ agents make choices and act, and what this means for interaction dynamics. This is an evolving topic of interest as AI technologies are increasingly integrated into the military domain. The reality of distributed agency is not clear-cut. Any ‘AI agency’ results from human activity throughout the algorithmic design and training process that has become ‘invisible’ at the point of use. This human activity includes programmers who create the basic algorithmic parameters, workers who prepare the data that training machine learning algorithms requires through a series of iterative micro-tasks often subsumed as ‘labelling data’, but also the people whose data is used to train such algorithms. It is therefore important to think about ‘human’ and ‘AI’ agency as part of a relational, complex, socio-technical system. From the perspective of the many groups of humans that are part of this system, interacting with AI creates both affordances (action potentials) and constraints. Studying different configurations of this complex system could then advance our understanding of distributed agency.

These initial insights into how technological affordances and constraints shape distributed agency matter in the military domain because they affect human decision-making, including in a warfare context. What does it actually mean for humans to work with AI technologies? The long-established literature in human-factor analysis describes numerous fundamental obstacles that people face when interacting with complex systems integrating automated and AI technologies. These include “poor understanding of what the systems are doing, high workload when trying to interact with AI systems, poor situation awareness (SA) and performance deficits when intervention is needed, biases in decision making based on system outputs, and degradation”. Such common operational challenges of human-machine interaction raise fundamental political, ethical, legal, social, and security concerns. There are particularly high stakes in the military domain because AI technologies used in this context have the potential to inflict severe harm, such as physical injury, human rights violations, death, and (large-scale) destruction.


2. Responsible AI and Challenges of Human-machine Interaction

The Responsible AI framework has been gaining prominence among policymaking and expert circles of different states, especially the US and its allies. In 2023, the US released its Political Declaration on Responsible Military Use of AI and Autonomy, endorsed by 50 other states as of January 2024. US Deputy Secretary of Defense Kathleen Hicks stated that the new Replicator Initiative, aimed at producing large numbers of all-domain, attritable autonomous systems, will be carried out “while remaining steadfast to [the DoD’s] responsible and ethical approach to AI and autonomous systems, where DoD has been a world leader for over a decade”. At the same time, the concept of responsible military AI use has also been entrenched by the Responsible AI in the Military Domain (REAIM) Summit co-hosted by the Netherlands and the Republic of Korea. More than 55 states supported the Summit’s Call to Action in February 2023, and a second edition of the event is expected in Seoul in 2024.

The Responsible AI framework broadens the debate beyond lethal autonomous weapon systems (LAWS), which have been the focus of discussions at the UN CCW in Geneva throughout the last decade. The effort to consider different uses of AI in the military, including in decision support, is a step towards recognising the challenges of human-machine interaction and potential new forms of distributed agency. These changes are happening in various ways and do not necessarily revolve around ‘full’ autonomy, weapon systems, or humans ‘out of the loop’. Efforts to consider military systems integrating autonomous and AI technologies as part of lifecycle frameworks underline this. Such frameworks demonstrate that situations of human-machine interaction occur, and need to be addressed, at various lifecycle stages, from research & development, procurement & acquisition, and testing, evaluation, verification and validation (TEVV) to potential deployment and retirement. Addressing such concerns therefore deserves the type of debate offered by the REAIM platform: a multi-stakeholder discussion representing global perspectives on (changing) human-machine interaction in the military.

At the same time, the Responsible AI framework is nebulous and imprecise in its guidance on ensuring that challenges of human-machine interaction are addressed. So far, it functions as a “floating signifier”, in the sense that the concept can be understood in different ways, often depending on the interests of those who interpret it. This was already visible during the first REAIM Summit in The Hague, where most participants agreed on the importance of being responsible, but not on how to get there. Some of the common themes among the REAIM and US initiatives include commitment to international law, accountability, and responsibility, ensuring global security and stability, human oversight over military AI capabilities, as well as appropriate training of personnel involved in interacting with the capabilities. But beyond these broad principles, it remains unclear what constitutes ‘appropriate’ forms of human-machine interaction, and the forms of agency these involve, in relation to acting responsibly and in conformity with international law – that, in itself, offers unclear guidance. It must be noted, however, that defining ‘Responsible AI’ is no easy task because it requires considering the various dimensions of a complex socio-technical system which includes not only the technical aspects but also political, legal, and social ones. It has already been a challenging exercise in the civilian domain to pinpoint the exact characteristics of this concept, although key terms such as explainability, transparency, privacy, and security are often mentioned in Responsible AI strategies.

Importantly, the Responsible AI framework allows for various interpretations of the form, or mechanism, of global governance needed to address the challenges of human-machine interaction in the military. There are divergent views on the appropriate direction to take. For instance, US policymakers seek to “codify norms” for the responsible use of AI through the US political declaration, a form of soft law, interpreted by some experts as a way for Washington to promote its vision in its perceived strategic competition with Beijing. Meanwhile, many states favour a global legal and normative framework in the form of hard law, such as a legally binding instrument establishing appropriate forms of human-machine interaction, especially in relation to targeting, including the use of force. The UN’s 2023 New Agenda for Peace urges states not only to develop national strategies on responsible military use of AI, but also to “develop norms, rules and principles…through a multilateral process” which would involve engagement with industry, academia, and civil society. Some states are taking steps in this direction; Austria, for instance, co-sponsored a UN General Assembly First Committee resolution on LAWS, which was adopted with overwhelming support in November 2023. Overall, the Responsible AI framework’s inherent ambiguity is an opportunity for those favouring a soft law approach, especially actors who promote political declarations or sets of guidelines and argue that these are enough. Broad Responsible AI guidelines might symbolise a certain commitment or set of obligations, but at this stage they are insufficient to address already existing challenges of human-machine interaction in a security and military context – not least because they may not be connected to a concrete pathway toward operationalisation and implementation.

Note: This essay outlines initial thinking that forms the basis of a new research project called “Human-Machine Interaction: The Distributed Agency of Humans and Machines in Military AI” (HuMach) funded by the Independent Research Fund Denmark. Led by Ingvild Bode, the project will start later in 2024.



Symposium on Military AI and the Law of Armed Conflict: A (Pre)cautionary Note About Artificial Intelligence in Military Decision Making


04.04.24 | 


[Georgia Hinds is a Legal Adviser with the ICRC in Geneva, working on the legal and humanitarian implications of autonomous weapons, AI and other new technologies of warfare. Before joining the ICRC, she worked in the Australian Government, advising on public international law including international humanitarian and human rights law, and international criminal law, and served as a Reservist Officer with the Australian Army. The views expressed on this blog are those of the author alone and do not engage the ICRC, or previous employers, in any form.]

Introduction

Most of us would struggle to define ‘artificial intelligence.’ Fewer still could explain how it functions. And yet AI technologies permeate our daily lives. They also pervade today’s battlefields. Over the past eighteen months, reports of AI-enabled systems being used to inform targeting decisions in contemporary conflicts have sparked debates (including on this platform) around legal, moral and operational issues.

Sometimes called ‘decision support systems’ (DSS), these are computerized tools that are designed to aid human decision-making by bringing together and analysing information, and in some cases proposing options as to how to achieve a goal [see, e.g., Bo and Dorsey]. Increasingly, DSS in the military domain are incorporating more complex forms of AI, and are being applied to a wider range of tasks.

These technologies do not actually make decisions, and they are not necessarily part of weapon systems that deliver force. Nevertheless, they can significantly influence the range of actions and decisions that form part of military planning and targeting processes.

This post considers implications for the design and use of these tools in armed conflict, arising from international humanitarian law (IHL) obligations, particularly the rules governing the conduct of hostilities.

Taking ‘Constant Care’, How Might AI-DSS Help or Hinder?

Broadly, in the conduct of military operations, parties to an armed conflict must take constant care to spare the civilian population, civilians and civilian objects.

The obligation of constant care is an obligation of conduct, to mitigate risk and prevent harm. It applies across the planning and execution of military operations, and is not restricted to ‘attacks’ within the meaning of IHL (paras 2191, 1936, 1875). It includes, for example, ground operations, establishment of military installations, defensive preparations, quartering of troops, and search operations. It has been said that this requirement to take ‘constant care’ must “animate all strategic, operational and tactical decision-making.”

In assessing the risk to civilians that may arise from the use of an AI-DSS, a first step must be to assess whether the system is actually suitable for the intended task. Applying AI – particularly machine learning – to problems for which it is not well suited has the potential to actually undermine decision-making (p 19). Automating processes that feed into decision-making can be advantageous where quality data is available and the system is given clear goals (p 12). In contrast, “militaries risk facing bad or tragic outcomes” where they provide AI systems with clear objectives but in uncertain circumstances, or where they use quality data but task AI systems with open-ended judgments. Uncertain circumstances abound in armed conflict, and the contextual, qualitative judgements required by IHL are notoriously difficult. Further, AI systems generally lack the ability to transfer knowledge from one context or domain to another (p 207), making it potentially problematic to apply an AI-DSS in a different armed conflict, or even in different circumstances in the same conflict. It is clear, then, that whilst AI systems may be useful for some tasks in military operations (e.g. navigation, maintenance, and supply chain management), they will be inappropriate for many others.

Predictions about enemy behaviour will likely be far less reliable than those about friendly forces, not only due to a lack of relevant quality data, but also because armed forces will often adopt tactics to confuse or mislead their enemy. Similarly, AI-DSS would struggle to infer something open-ended or ill-defined, like the purpose of a person’s act. A more suitable application could be in support of weaponeering processes, and the modelling of estimated effects, where such systems are already deployed, and where the DSS should have access to greater amounts of data derived from tests and simulations.

Artificial Intelligence to Gain the ‘Best Possible Intelligence’?

Across military planning and targeting processes, the general requirement is that decisions required by IHL’s rules on the conduct of hostilities must be based on an assessment of the information from all sources reasonably available at the relevant time. This includes an obligation to proactively seek out and collect relevant and reasonably available information (p 48). Many military manuals stress that the commander must obtain the “best possible intelligence,” which has been interpreted as requiring information on concentrations of civilian persons, important civilian objects, specifically protected objects and the environment (See Australia’s Manual on the Law of Armed Conflict (1994) §§548 and 549).

What constitutes the best possible intelligence will depend upon the circumstances, but generally commanders should be maximising their available intelligence, surveillance and reconnaissance assets to obtain up-to-date and reliable information.

Considering this requirement to seek out all reasonably available information, it is entirely possible that the use of AI-DSS may assist parties to an armed conflict in satisfying their IHL obligations, by synthesising or otherwise processing certain available sources of information (p 203). Indeed, whilst precautionary obligations do not require parties to possess highly sophisticated means of reconnaissance (pp 797-8), it has been argued (p 147) that, if they do possess AI-DSS and it is feasible to employ them, IHL might actually require their use.

In the context of urban warfare in particular, the ICRC has recommended (p 15) that information about factors such as the presence of civilians and civilian objects should include open-source repositories such as the internet. Further, specifically considering AI and machine learning, the ICRC has concluded that, to the extent that AI-DSS tools can facilitate quicker and more widespread collection and analysis of this kind of information, they could well enable better decisions by humans that minimize risks for civilians in conflict. The use of AI-DSS to support weaponeering, for example, may assist parties in choosing means and methods of attack that can best avoid, or at least minimize, incidental civilian harm.

Importantly, the constant care obligation and the duty to take all feasible precautions in attack are positive obligations, as opposed to other IHL rules which prohibit conduct (e.g. the prohibitions on indiscriminate or disproportionate attacks). Accordingly, in developing and using AI-DSS, militaries should be considering not only how such tools can assist in achieving military objectives with less civilian harm, but also how they might be designed and used specifically for the objective of civilian protection. This also means identifying or building relevant datasets that can support assessments of risks to, and impacts upon, civilians and civilian infrastructure.

Practical Considerations for Those Using AI-DSS

When assessing the extent to which an AI-DSS output reflects current and reliable information sources, commanders must factor in AI’s limitations in terms of predictability, understandability and explainability (see further detail here). These concerns are likely to be especially acute with systems that incorporate machine learning algorithms that continue to learn, potentially changing their functioning during use.

Assessing the reliability of AI-DSS outputs also means accounting for the likelihood that an adversary will attempt to provide disinformation such as ruses and deception, or otherwise frustrate intelligence acquisition activities. AI-DSS currently remain vulnerable to hacking and spoofing techniques that can lead to erroneous outputs, often in ways that are unpredictable and undetectable to human operators.

Further, like any information source in armed conflict, the datasets on which AI-DSS rely may be imperfect, outdated or incomplete. For example, “No Strike Lists” (NSL) can contribute to a verification process by supporting faster identification of certain objects that must not be targeted. However, an NSL will only be effective so long as it is current and complete; the NSL itself is not the reality on the ground. More importantly, the NSL usually only consists of categories of objects that benefit from special protection or the targeting of which is otherwise restricted by policy. However, the protected status of objects in armed conflict can change – sometimes rapidly – and most civilian objects will not appear on the list. In short, then, the presence of an object on an NSL contributes to identifying protected objects when verifying the status of a potential target, but the absence of an object from the list does not imply that it is a military objective.

Parallels can be drawn with AI-DSS tools, which rely upon datasets to produce “a technological rendering of the world as a statistical data relationship” (p 10). The difference is that, whilst NSLs generally rely upon a limited number of databases, AI-DSS tools may be trained with, and may draw upon, such a large volume of datasets that it may be impossible for the human user to verify their accuracy. This makes it especially important for AI-DSS users to be able to understand what underlying datasets are feeding the system, the extent to which this data is likely to be current and reliable, and the weighting given to particular data in the DSS output (paras 19-20). Certain critical datasets may need to be, by default, labelled with overriding prominence (e.g. NSLs), whilst, for others, decision-makers may need to have the ability to adjust how they are factored in.
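To make the direction of this inference concrete, here is a minimal sketch in Python. All names, thresholds and data are hypothetical illustrations, not any fielded system: it simply shows how an NSL-style lookup could be framed so that absence from the list is surfaced as "unverified" rather than treated as clearance to attack, and so that the currency of the underlying data is flagged to the human decision-maker.

```python
from datetime import date, timedelta

# Hypothetical NSL-style dataset: protected/restricted object identifiers
# plus the date the list was last refreshed.
NO_STRIKE_LIST = {
    "entries": {"city_hospital_12", "unicef_warehouse_3"},
    "last_updated": date(2024, 3, 1),
}

# Assumed policy threshold for treating the list as current (illustrative only).
MAX_AGE = timedelta(days=30)


def nsl_check(object_id: str, nsl: dict, today: date) -> dict:
    """Frame an NSL lookup so that absence never implies 'military objective'."""
    stale = (today - nsl["last_updated"]) > MAX_AGE
    if object_id in nsl["entries"]:
        # Presence on the list: flag as protected/restricted; must not be targeted.
        return {"object": object_id,
                "status": "protected_or_restricted_do_not_target",
                "nsl_data_stale": stale}
    # Absence from the list only means the status is unverified:
    # most civilian objects will never appear on an NSL.
    return {"object": object_id,
            "status": "unverified_further_assessment_required",
            "nsl_data_stale": stale}


print(nsl_check("city_hospital_12", NO_STRIKE_LIST, date(2024, 4, 4)))
print(nsl_check("apartment_block_7", NO_STRIKE_LIST, date(2024, 4, 4)))
```

The sketch only illustrates the asymmetry of the inference and the need to expose data currency to the user; the contextual, qualitative and legal judgements that such an output feeds into remain with the humans involved in the targeting process.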

In certain circumstances, it may be appropriate for a decision-maker to seek out expert advice concerning the functioning or underlying data of an AI-DSS. As much has been suggested in the context of cyber warfare, in terms of seeking to understand the effects of a particular cyber operation (p 49).

In any event, it seems unlikely that it would be reasonable for a commander to rely solely on the output of one AI-DSS, especially during deliberate targeting processes where more time is available to gather and cross-check against different and varied sources. Militaries have already indicated that cross-checking of intelligence is standard practice when verifying targets and assessing proportionality, and an important aspect of minimising harm to civilians. This practice should equally be applied when employing AI-DSS, ideally using different kinds of intelligence to guard against the risks of embedded errors within an AI-DSS.

If a commander, planner or staff officer did rely solely on an AI-DSS, the reasonableness of their decision would need to be judged not only in light of the AI-DSS output, but also by taking account of other information that was reasonably available.

Conclusion

AI-DSS are often claimed to hold the potential to increase IHL compliance and to produce better outcomes for civilians in armed conflict. In certain circumstances, the use of AI-DSS may well assist parties to an armed conflict in satisfying their IHL obligations, by providing an additional available source of information.

However, these tools may be ill-suited for certain tasks in the messy reality of warfare, especially noting their dependence on quality data and clear goals, and their limited capacity for transfer across different contexts. In some cases, drawing upon an AI-DSS could actually undermine the quality of decision-making, and pose additional risks to civilians.

Further, even though an AI-DSS can draw in and synthesise data from many different sources, this does not absolve a commander of their obligation to proactively seek out information from other reasonably available sources. Indeed, the way in which AI tools function – their limitations in terms of predictability, understandability and explainability – makes it all the more important that their output be cross-checked.

Finally, AI-DSS must only be applied within legal, policy and doctrinal frameworks that ensure respect for international humanitarian law. Otherwise, these tools will only serve to replicate, and arguably exacerbate, unlawful or otherwise harmful outcomes at a faster rate and on a larger scale.