Showing posts sorted by date for query EQUINOX.

Friday, November 01, 2024

Samhain to Soulmass: The Pagan origins of familiar Halloween rituals


Beverley D'Silva
BBC
OCTOBER 30, 2024


From outrageous costumes to trick or treat: the unexpected ancient roots of Halloween's most popular – and most esoteric – traditions.


With its goblins, goosebumps and rituals – from bobbing for apples to dressing up as vampires and ghosts – Halloween is one of the world's biggest holidays. It's celebrated across the world, from Poland to the Philippines, and nowhere as extravagantly as in the US, where in 2023 $12.2 billion (£9.4 billion) was spent on sweets, costumes and decorations. The West Hollywood Halloween Costume Carnival is one of the biggest street parties of its kind; Hollywood parties such as George Clooney's tequila brand's bash make a big social splash; and model Heidi Klum is renowned for the bizarre disguises she wears to her own party, such as her iconic giant squirming worm outfit.


Heidi Klum wore a worm costume for Halloween 2022 in NYC – scary disguises were originally intended to ward off evil spirits (Credit: Getty Images)

With US stars turning out again for the biggest dressing-up show after the Oscars' red carpet, it's no surprise Halloween is often viewed as a modern US invention. In fact, it dates back more than 2,000 years, to Ireland and an ancient Celtic fire festival called Samhain. The exact origins of Samhain predate written records but according to the Horniman Museum: "There are Neolithic tombs in Ireland that are aligned with the Sun on the mornings of Samhain and Imbolc [in February], suggesting these dates have been important for thousands of years".


Usually celebrated from 31 October to 1 November, Samhain (pronounced "sow-win", meaning summer's end) focused its religious rituals on fire as winter approached. Anthropologist and pagan Lyn Baylis tells the BBC: "Fire rituals to bring light into the darkness were vital to Samhain, which was the second most important fire festival in the Pagan Celtic world, the first being Beltane, on 1 May." Samhain and Beltane are part of the Wheel of the Year, an annual cycle of eight seasonal festivals observed in Paganism (a "polytheistic or pantheistic nature-worshipping religion", says the Pagan Federation).


The ancient Celtic festival of Samhain is still celebrated in some places, including Glastonbury Tor, pictured in 2017 (Credit: Getty Images)

Samhain was the pivotal point of the Celtic Pagan new year, a time of rebirth – and death. "Pagans had three harvests: Lammas, harvest of the corn, on 1 August; the one of fruit and vegetables at autumn equinox, 21 September; and Halloween, the third," says Baylis. At this time animals that couldn't survive winter were culled, to ensure the other animals' survival. "So there was a lot of death around that time, and people knew there would be deaths in their villages during the harsh winter months." Other countries, notably Mexico, celebrate The Day of the Dead around this time to honour the deceased.


At Samhain, Celtic Pagans in Ireland would put out their home fires and light one giant bonfire in the village, around which they danced and acted out stories of death, regeneration and survival. As the whole village joined the dance, animals and crops were burned as sacrifices to Celtic deities, to thank them for the previous year's harvest and encourage their goodwill for the next.


It was believed that at this time the veil between this world and the spirit world was at its thinnest – allowing the spirits of the dead to pass through and mingle with the living. The sacred energy of the rituals, it was believed, allowed the living and the dead to communicate, and gave Druid priests and Celtic shamans heightened perception.

And this is where the dress-up factor came in – costumes and ugly masks were worn to scare away malevolent spirits believed to have been set free from the realm of the dead. This was also known as "mumming" or "guising".

Those early Samhain dressing-up rituals began to change when Pope Gregory I (590-604) sent a mission from Rome to Britain to convert the Pagan Anglo-Saxons to Christianity. The Gregorian mission decreed that Samhain festivities must incorporate Christian saints "to ward off the sprites and evil creatures of the night", says Baylis. All Souls' Day was created by the Church "so people could still call on their dead to aid them"; 1 November became known as All Hallows, and 31 October became All Hallows' Eve, later known as Halloween.

"There is a long tradition of costuming of sorts that goes back to Hallow Mass when people prayed for the dead," explains Nicholas Rogers, a history professor at York University in Canada. "But they also prayed for fertile marriages." Centuries later boy choristers in the churches dressed up as virgins, he says. "So there was a certain degree of cross-dressing in the ceremony of All Hallow's Eve."


New York City Halloween parade participants in the early 1980s (Credit: Getty Images)


The Victorians loved a ghost story, and adopted non-religious Halloween costumes for adults. Later, after World War Two, the day centred on children dressing up, a ritual still alive today at trick-or-treating time. Since the 1970s, adults dressing up for Halloween has become widespread again, not just in creepy and ugly costumes, but also hyper-sexualised ones. According to Time, these risqué outfits emerged because of the "transgressive" mood of the occasion, when "you can get away with it without it being seen as particularly offensive". In the classic teen film Mean Girls, it's jokingly said that "in girl world" Halloween is the "one night a year when girls can dress like a total slut and no other girls can say anything about it". It's not just in "girl world" that Halloween has a disinhibiting effect – it is a hugely popular holiday in the LGBTQ+ community, and is often referred to as "Gay Christmas". In New York, the city famously comes alive every year with a Halloween parade featuring participants in elaborate and outlandish costumes.

Playing with fire

Echoes of Samhain also live on today in fire practices. Carving lanterns from root vegetables was one tradition, although turnips, not pumpkins, were first used. The practice is said to have grown from a Celtic myth, about a man named Jack who made a pact with the devil, but who was so deceitful that he was banned from heaven and hell – and condemned to roam the darkness, with only a burning coal in a carved-out turnip to light the way.


The ritual of carving lanterns out of pumpkins came from the myth of a man called Jack who made a pact with the devil (Credit: Getty Images)


In Ireland, people made lanterns, placing turnips with carved faces in their window to ward off an apparition called "Jack of the Lantern" or Jack-o'-Lantern. In the 19th Century, Irish immigrants took the custom with them to the US. In the small Somerset village of Hinton St George in the UK, turnips or mangolds are still used, and elaborately carved "punkies" are paraded on "punkie night", always the last Thursday of October. In the UK town of Ottery St Mary there is still an annual "flaming tar barrels" ritual – a custom once practised widely across Britain at the time of Samhain, where flaming barrels were carried through the streets to chase away evil spirits.

Leaving food and sweetly spiced "soul cakes" or "soulmass" cakes on the doorstep was said to ward off bad spirits. Households deemed less generous with their offerings would receive a "trick" played on them by bad spirits. This has translated into modern-day trick or treating. Whether soul cakes came from the ancient Celts or the Church is open to argument, but the idea was that, as they were eaten, prayers and blessings were said for the dearly departed. From Medieval times, "souling" was a Christian tradition in English towns at Halloween and Christmas; and soulers (mainly children and the poor) went door to door singing and saying prayers for souls in exchange for ale, cakes and apples.

Apple bobbing – dipping your face into water to bite an apple – dates back to the 14th Century, according to historian Lisa Morton: "An illuminated manuscript, The Luttrell Psalter, depicted it in a drawing." Others date the custom back further, to the Romans' conquest of Britain (from AD 43) and the apple trees that they imported. Pomona was the Roman goddess of fruitful abundance and fertility, and hence, it is argued, apple bobbing's ties to love and romance. In one version, the bobber (usually female) tries to bite into an apple bearing her suitor's name; if she bites it on the first go, she is destined for love; two goes means her romance will start but falter; three means it will never get started.

It is thought that apple bobbing originated in the 14th Century – or possibly even further back (Credit: Getty Images)


British rituals, at the heart of Halloween traditions, are the subject of Ben Edge's book, Folklore Rising, illustrated with his mystical paintings. Edge says that he has observed a "resurgence of people becoming interested in ritual and folklore… I call it a folk renaissance, and I see it as a genuine movement led by younger people".


He cites such artists as Shovel Dance Collective, "non-binary, cross-dressing and singing traditional working men's songs of the land". There is also Weird Walk, a project "exploring the ancient paths, sacred sites and folklore of the British Isles… through walking, storytelling and mythologising." If interest in folk rituals is on the rise, so too are the numbers turning to such traditions as Paganism and Druidry, both adhering to the Wheel of the Year, and Samhain, "dedicated to remembering those who have passed on, connecting with the ancestors, and preparing ourselves spiritually and psychologically for the long nights of winter ahead".

The Flaming Tar Barrels of Ottery St Mary (2020) is featured in artist Ben Edge's book about ancient traditions, Folklore Rising (Credit: Ben Edge)

Philip Carr-Gomm, a psychologist, author and practising druid, says that he has witnessed a "steady growth" in interest around Druidry over the past few decades. "We now have 30,000 members, across six languages," he tells the BBC.

The need for ritual, connectedness and community is at the heart of many Halloween traditions, says Baylis: "One of the most important aspects of Halloween for us is remembering loved ones. We light a candle, possibly say the name of the person or put a picture of them on an altar. It's a sacred time and ceremony, but you don't have to be a Pagan to be involved. The important thing is that it comes from a place of protection and love."

Tuesday, October 29, 2024

GM CEO Mary Barra on the politics of EVs, the future of AVs, and moving away from China

SAN FRANCISCO, CALIFORNIA - OCTOBER 29: (L-R) Matt Rosoff and Mary Barra, Chair & CEO of General Motors, speak onstage during TechCrunch Disrupt 2024 Day 2 at Moscone Center on October 29, 2024 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

 TechCrunch
Rebecca Bellan
Tue, October 29, 2024 


“I never thought the propulsion of a vehicle would become a political issue,” GM Chairman and CEO Mary Barra said on stage at TechCrunch Disrupt on Tuesday.

While the executive didn’t expand on this statement, former President Trump has railed against EVs and claimed, wrongly, that there is a mandate to make and sell electric vehicles in the United States.


“General Motors’s goal is to just keep providing great vehicles, keep supporting the charging infrastructure to be more robust… and opening up the Tesla charging network, as well, so people choose it because it’s a great vehicle,” Barra continued. “And that’s the journey we’re working on, while we’re getting battery costs down. We’re still looking for battery innovation to get energy density up, cost down – all those things are going to be unlocks.”

Lowering battery costs could help lower the price of EVs. Affordability is top of mind for Barra, who said it is a major factor for consumers.

“That’s why we’re so excited to have the Equinox and the Blazer on that because we’re getting into the affordable range, especially when you look at an Equinox EV that will be starting in that mid $30,000 range,” she said.

“But they want affordability with the right range,” she continued. And that sweet spot, she said, is really 300 miles before people start getting range anxiety.

Finally, consumers want access to charging stations that work, are well lit, and are easy to pay at.

“Charging is just going to continue to get better,” she said, noting that GM has spent hundreds of millions of dollars to help boost charging infrastructure in partnership with companies like EVgo.

While EVs are a key component to Barra's vision for GM, autonomy, cybersecurity and a strategy for China are also important. Here are some of the highlights.
Cruise will still help GM transform the industry

“I became the CEO in 2014, and in 2015 we actually spent quite a bit of time in Silicon Valley, at Stanford and other places within the leadership team,” Barra said. “What were the technologies that were really going to transform our industry? We started looking at those. And autonomy was one.”

She noted that when GM acquired Cruise in 2016, the automaker kept its hands out of the AV startup’s business and let it develop as a startup would, which did lead Cruise to commercialize a fully driverless robotaxi service in San Francisco. Cruise’s permits to operate were suspended after a safety incident last October.

Nonetheless, Barra is still bullish on the possibilities for AVs to drive safety. She also believes the Cruise investment will continue to help GM one day provide personal autonomous vehicles.

GM will someday make a purpose-built AV

Following Cruise’s safety incident, the company scrapped its plans to produce the Origin, a purpose-built AV with no steering wheel or pedals. While Barra acknowledged that there are challenges to getting such a vehicle onto public roads – namely federal motor vehicle safety standards – she believes an AV built without human controls is still in the future for GM.

Keeping data secure

Electric vehicles today are computers on wheels, and as a result, they collect a lot of data, including the environment in which the car is driving, how the car is performing, and driving behavior. That data could contain sensitive information, which is something Barra says GM is prioritizing.

“I take cybersecurity really seriously from a vehicle perspective, because again, if something goes wrong, it can have dire consequences,” Barra said. “So that’s something we’ve been investing in for years. Privacy as well, treating data with respect and continuing to raise the bar on how to manage data and make sure we’re doing the right thing.”


It should be noted that GM is among a number of automakers that were sharing consumer driving data with insurance companies, the NYT reported earlier this year. GM has since stopped sharing that data with LexisNexis Risk Solutions and Verisk, two data brokers that created risk profiles for the insurance industry, and has also hired an executive to oversee customer privacy.

Moving away from China

Barra has referred to competing for EV market share in China as a “race to the bottom,” and she doubled down on those comments Tuesday.

“There’s a lot going on from a political perspective,” she said. “Our business in China is shifting.”

During the third quarter, GM’s China JV lost $137 million, compared with a $192 million profit a year ago. That’s because it’s tough to compete with domestic brands that have government backing to produce excellent vehicles at low costs.

“The competition is just continuing,” she said. “From a pricing perspective, it’s lower and lower and lower. And so you have to look at what’s the sustainable business? Because the situation that’s there right now is not sustainable. We have 100 or so companies, less than a handful are profitable.”

Sunday, October 13, 2024

What Ideas From the Paleolithic Are Still With Us in the Modern World?

An interview with renowned economic historian Michael Hudson on where our calendar comes from, his collaborations with the late intellectual David Graeber, and the long-lost practice of forgiving debt.
October 11, 2024
Source: Originally published by Z. Feel free to share widely.





Is the order of the modern alphabet connected to how our shared ancestors counted the phases of the moon and its effect on tides 50,000 years ago? Did the first stirrings of government and bureaucracy emerge from the efforts of early astronomers to reconcile solar and lunar calendars? These are the kinds of questions that have kept economic historian Michael Hudson up at night.

On the surface, learning about the origins of the methods people use to bring order to their lives—such as time, weights and measures, and our financial systems—seems like just another history lesson. One ancient practice leading to another, resulting in guesswork of what people did before the last Ice Age.

But it goes beyond interesting. It’s very useful. The more we can parse out and extrapolate the beliefs and attitudes of previous eras, the more we might be able to step out of present behavior patterns and perceive social problems we keep creating because we thought we had to.

A deeper reach into human history is now possible, thanks to a growing body of archaeological and scholarly research collected in recent decades. Many experts in related fields have speculated that this research will have a large social impact as it percolates through centers of influence and we become accustomed to relying on a wider, global human historical evidence base as a reference. Society will greatly benefit from minds that are trained to think in deeper timescales than a millennium or two—archaeology and the biological sciences increasingly permit useful insights and pattern observations into human history at a depth spanning millions of years.

Hudson’s research has already made inroads into modern life. Many contemporary economists rely on his understanding of financial history in the Ancient Near East. Hudson’s collaboration with the late anthropologist and activist David Graeber inspired Graeber to launch the debt cancellation movement during Occupy Wall Street. Graeber’s book Debt: The First 5,000 Years is a popularized adaptation of Hudson’s research on the early financial systems of the Near East, and it encouraged Graeber to follow up and coauthor the bestselling book The Dawn of Everything, an overview of new interpretations in archaeology and anthropology about the many paths society can take.

I reached out to Hudson for a conversation on these topics, starting with his reflections on what drew him into prehistory in the early 1970s, and his collaborations with Harvard prehistorian Alex Marshack.

Jan Ritch-Frel: Alex Marshack was well-known for his idea that many of the social institutions we live by today are derived in large part from the “thought matrix of the Paleolithic”—the ideas and attitudes, social systems, and means of recording and transmitting information developed over thousands of millennia until the most recent Ice Age. How did you two find each other?

Michael Hudson: I had read in the New York Times about Alex Marshack’s analysis of carvings on a bone found in France, made approximately 35,000 years ago, with markings that he viewed as tracing the lunar month, not mere decorations. We became friends. He was living and working in New York City, under an arrangement whereby NYU and Harvard provided housing for each other’s faculty.

Marshack was working from the Paleolithic forward, the time before the last Ice Age, to see how it shaped the Neolithic and Near Eastern Bronze Age. My approach was to study the Bronze Age because my study was about the origins of money and debt and its cancellation. And then to work back in time to see how these practices began.

Marshack was most focused on how the measurement of time began before there was any arithmetic. Counting began with a calendrical point of reference. Marshack showed that lunar months initially were pre-mathematical, indicating symbolic literacy proliferated in the Paleolithic. He developed the idea that a motive was to arrange meetings—groups separated by distance tracking the passage of time to convene at pre-agreed locations. I was interested in the calendar as an organizing principle of archaic society: its division into tribes, and as providing a model of the cosmos that guided the structuring of social organization.

I had been writing on ancient debt cancellations, and the idea of economic renewal on a periodic basis. We both had this basic question—how did this awareness of time turn into actual counting and provide a basis for ordering of other systems, from social organization to music? Marshack showed what I’d been writing to the head of the Peabody Museum at Harvard University, who invited me up for a meeting, and soon enough I was a research fellow there too.

I began my work on how order was created by trying to think about how the calendar became the basic organizing principle certainly for the entire Bronze Age, and no doubt leading up to it.

Ritch-Frel: The words “month,” “measure,” and “menstruation” are all derived from the word for moon in Proto-Indo-European (“mehns,” according to scholars of that early Bronze Age language), which is ancestral to many of Eurasia’s major languages spoken today. Going back to Marshack’s research direction of looking at the thought matrix of the Paleolithic, what answers was he looking for in the evidence from the past?

Hudson: Marshack saw the centrality of social and prosocial behavior as a driver among separate groups—today’s humans thrive on the interaction between groups. The management of that, diplomatically and administratively through a calendar process, had to be a key basis for survival across time; it had an ordering function, meeting the need for dispersed populations to come together for trade and intermarriage.

Marshack believed that Paleolithic leaders would have understood that this lunar calendar and the notations associated with it were technologies of chieftains, of governance. Oftentimes, leadership comes down to organizing meetings and the rules these meetings have. The lunar calendar was the basis for figuring out when separate groups were all going to meet together at some annual interval, and maybe there were meetings at the monthly or seasonal interval, such as the equinoxes or solstices. And it was probably based on a new moon.

Here’s a case of the thought matrix of the Paleolithic shaping societies that we call ancestral: Marshack and I came to interpret that the key meeting date would be a new moon—time was thought of as a baby, the moon grows and becomes older. This goes right down to the Roman calendar. The new year was the shortest day of the year. When the year is born, it’s the smallest before it grows. The idea of a life course of a year, with weather, people, and animals traveling along with it was at the heart of the Paleolithic thought matrix. Marshack, for example, studied the amount of attention and care Paleolithic cave painters of Europe put into drawing animals to indicate a particular time of year. If there was a painting of a fish, it would have the long jaw that fish developed in the mating season. You could look at whether the animals were molting or not. Paleolithic artists across the world were always careful to note that.

To show you how the year’s 12 lunar months were a format often adopted for organizing other social structures, let’s consider the social models we see in the Near East and the Mediterranean that are recorded in the Bronze Age: As populations settled into increasingly sedentary communities, a typical form of association was the amphictyony, divided into 12, four or six “tribes” or regions. These tribal divisions enabled the rotation of chiefs by the month or season so that all members of the amphictyony would be equal. “Foreign relations” were standardized carefully to provide equality.

Ritch-Frel: I am mindful that when people elect to use an ordering system for some part of life, it’s based on good reputation and on there being a convention that connected social groups share. If people decide to organize society into groups using a 12-month lunar calendar logic, that’s a measure of its latency in the wider human culture, and it is still with us today. This Paleolithic tradition organizes the backgammon board we play on today: designed by Sassanid Persians, it is rooted in the lunar calendar logic of 12. We don’t pay much attention to ordering systems once they’re in place, as long as they work.

Hudson: Certainly by the Neolithic, people began to count everything. Even if they didn’t have systems of mathematics, they were counting—and trying to find correlations and associations with natural phenomena around them, from weather to the behavior of animals. For instance, an archaic cosmologist might count the number of teeth of a horse and attempt to correlate that with something that shared the same number.

The assumption was that maybe we could control things by taking some proxy that shared the same number or some other cosmological characteristic with another, and we could have a ritual on earth that would somehow manipulate the heavens and our environment in the way that we wanted to.

We might call that pseudoscience—confusing similarity with true correlation, confusing correlation with causation. While many of us might make a living in science using higher-grade scientific standards, there’s quite a lot of that still going on today—in conversations with family and friends, in sports and its statistics, and fortune telling is an industry that’s still going strong.

Ritch-Frel: We can regard this general instinct as leading to know-how and in some cases part of science, as the process gets refined.

Hudson: Think of it as experimentation: “Let’s see if we can do this and see what works.” They were experimenting, but the logic was to think in terms of a system, and I think that’s what made the Bronze Age societies work.

The key to archaic science was to think in terms of a cosmos, in which everything was interrelated. The so-called Astronomical Diaries of Babylonia correlated grain prices, the level of the Euphrates, and other economic phenomena, including royal disturbances and behavior, much as modern astrology seeks to do. They were seeking order, and they started by correlating everything they could, including the movements of the planets.

Today, we think in the decimal system. But it’s not automatic to assume 10 fingers as the basis for how hunter-gatherers are going to count; even in cases of using the body as a memory device. Some Indonesian societies, for example, counted across the span of their outstretched arms, with 28 spots. That would be a measure of using the body to follow the phases of the moon. I also noted that these tended to track with a range in the number of letters in the alphabet that we see in many languages today, in the mid-20s and 30s. It seems that before numbers, something like the alphabet was used to name the moon’s phases.

The number of letters in many early alphabets that we know of corresponded with the lunar months. And the most important characteristic of the alphabet is its sequential order. We don’t say AMD, we say ABC. They’re always in the same order. Does that contain an older pattern? The key is the fixed sequence, a pre-mathematical organizational system.

We know that many Paleolithic communities across Eurasia and the Americas were following the phases of the moon. And we know from Neolithic structures such as Stonehenge that people were also focusing on the key solar intervals: the solstices, which marked the birth of the year on the shortest day, and the equinoxes, the turning points of the seasons.

There was a permanent need to combine a lunar calendar, which governed local social life, with a solar calendar, which told the story of the seasons, separated by solstices and equinoxes. And, of course, that was a big problem, because imagine the frustration when they realized that the lunar and solar years don’t correspond exactly: A lunar year has 354 days, and a solar one has 365. The mismatch between the 354-day lunar year and the 365-day solar year (as well as the leap year) could lead to divergences in cosmology and social ritual using the calendar as a basic organizing principle. The solstices and the seasons, often highly social events with important rites and traditions, would be more complicated to schedule and would be pushed to different dates as the years went by.
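To make that gap concrete, here is a minimal arithmetic sketch (mine, not from the interview) of how quickly the two calendars drift apart:

```python
# Illustrative sketch of lunar/solar calendar drift (not from the interview).
LUNAR_YEAR = 354   # 12 lunar months of roughly 29.5 days
SOLAR_YEAR = 365   # the cycle of seasons, solstices and equinoxes

drift_per_year = SOLAR_YEAR - LUNAR_YEAR  # 11 days of lag per year

for year in range(1, 4):
    print(f"After year {year}, the lunar calendar lags the seasons by {year * drift_per_year} days")

# After about three years the lag approaches a full lunar month (~29.5 days),
# which is why lunisolar calendars periodically insert an extra "intercalary"
# month to pull festival dates back into line with the seasons.
```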

Marshack thought that once arithmetic was developed, some priest-like individuals or chiefs began counting everything, looking for a pattern, an explanation. “Let’s see what works.”

I became curious about how Mesopotamia and others blended their cosmological calendars and kept their traditions on schedule and societies harmonized. We know that many of the lunar years remained the basis for many religions all the way from Mesopotamian practices to Jewish practices, down to today, and yet there was also the solar year.

Ritch-Frel: As Near Eastern societies became more complex in the 3rd and 4th Millennium BCE, how did they reconcile all this? And how did the calendrical system become imbued into an arithmetic basis of weights and measures and rations?

Hudson: The early Sumerian cities like Uruk or Lagash frequently experienced the upheavals of warfare and disease. That meant there were large numbers of widows, orphans, and slaves in these cities. The place they found for them was basically in large weaving workshops around the temples. A large, exploited workforce producing textiles required an administrative system to feed the labor pool over the course of the year—a new calendar system.

Leaders worked with their astronomers and cosmologists to develop this administrative calendar to feed this workforce population. It seems that the convention of 12 months per year, born of the lunar calendar, was assumed; the question came down to how many days there are in each month. Neither the 354-day lunar nor the 365-day solar calendar worked—because of variability in length, the need to correct them to follow the seasons, or the inconvenience that neither number divides evenly by 12. There couldn’t be oversights in the administrative calendar that missed a day—mistakes made in provisioning food for people are quickly noticed.

It seems natural they’d want to land on a month length that both served the administrative needs and could be correlated with the 354-day lunar calendar and the 365-day solar calendar. After trial and error, 30 rations per month and 12 months per year produced a social logic of 360, pretty close to the two ancient cosmologies.

The standard ancient daily ration in these early Mesopotamian cities for the workers and enslaved people was two cups of grain per day per person. Using the administrative 30-day calendar, 60 cups of grain was one month’s ration. A slave or a temple worker required 60 cups of grain a month—it became a rule of thumb for the city leaders and managers. One month’s rations, 60 cups, is a unit of weight, a bushel. That key weight, organized around the number 60, had a forcing effect on how the commodity grain was exchanged for silver. It led to silver being organized in weight units of 60, called a mena, so that the trades for weights of grain and silver could correspond easily.
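As a quick check of that arithmetic, here is a minimal sketch (mine, not Hudson's) of how the ration numbers and the 360-day administrative year line up:

```python
# Illustrative sketch of the Mesopotamian ration arithmetic described above.
CUPS_PER_DAY = 2          # standard daily grain ration per worker
DAYS_PER_MONTH = 30       # administrative month
MONTHS_PER_YEAR = 12      # convention inherited from the lunar calendar

cups_per_month = CUPS_PER_DAY * DAYS_PER_MONTH          # 60 cups: one month's ration
days_per_admin_year = DAYS_PER_MONTH * MONTHS_PER_YEAR  # 360 days

print(cups_per_month)        # 60 -- the same count that organizes the silver "mena"
print(days_per_admin_year)   # 360 -- sits between the 354-day lunar and 365-day solar years
print(days_per_admin_year % MONTHS_PER_YEAR)  # 0 -- unlike 354 or 365, 360 divides evenly by 12
```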

The palace calendar became the administrative ration calendar model, the 12-month, 30-day calendar. And there was administrative efficiency. They saw correspondence in the rations with the units they used for weights and measures, and for calculating loans and mercantile trade. Naturally, if silver and grain are organized on the basis of 60, it was convenient for minds trained to calculate on the basis of 60 to use it as the numbering structure for interest rates. You can see how units of measure, once they become convention, have an easy time traveling across categories of activity. To hammer it home, the time units for payment plan structures on early Mesopotamian debt were derived from Paleolithic time units: monthly, borrowing from the lunar calendar; quarterly, borrowing from the four annual seasons divided by solstice and equinox; or annually using the solar calendar.

That annual part is the next phase to discuss: as you’ll remember, the 360-day calendar is a social artifice that needed a yearly process to align it with the 354- and 365-day calendars. The incompatibility between these calendar years was treated as a time of anarchy, which required harmonization—long before the administrative calendar was invented. The process of bringing order to chaos was also brought over from the Paleolithic—it was as familiar a convention as the 12-lunar-month calendar. The resumption of a new solar year was treated as an occasion for setting affairs back in order and clearing up old dues—not just getting the calendar to align, but also the social imbalances and unresolved appeals to justice inside groups and among them. The cleaning of the slates, which listed debts and obligations in increasingly large settlements, would have drawn its justification from this Paleolithic process.

The importance of recording grain supplies and the related mercantile trades and the lending system around them, the palace administrative calendar, and forecasting lunar and solar cycles to find concordance dates for future calendar years put pressure on the astronomers and cosmologists of the Bronze and Iron ages to develop fuller arithmetic, quadratic equations, and even analogue computers with gears to determine the movement of the sun and the moon and other heavenly bodies that served as useful fixed points for their calculations.

Ritch-Frel: The process is important here, and so is this example for understanding how existing human social conventions like the Paleolithic lunar calendar form the basis for future ones. How did Bronze Age rulers adapt Neolithic and earlier traditions of resetting the annual calendar, old debts, and unresolved justice?

Hudson: Archaic societies knew well that social order required active intervention to restore order. Unlike the calendar, realignment in the social economy was not achieved automatically. The birth of a new year was a tool and natural marker to clean up debts and obligations from the year before. This became especially important with the spread of interest-bearing debt in trade and agriculture: It was necessary to prevent an oligarchy.

Cosmology is a system. And calendrical cosmology is a system with an inherent source of disorder: the gap between the solar and lunar years. Certainly, in both Mesopotamia and Egypt, the gap between the lunar year and the solar year was treated as a time out of time—when repair of social inequality and imbalance could be addressed.

Debt cancellations were normal practice throughout the Bronze Age in the form of royal proclamations of clean slates. Not only were debts wiped out, but bondservants were free to return to their own families (and enslaved people were also returned to their debtor owners), and lands that had been lost through debt or other misfortune were returned to their former holders. The logic of the statements in the proclamations follows a thought line of: as above, so below; on earth as it is in heaven. It was useful to cloak this Paleolithic chaos-into-order calendar convention in the social-economic principles that the new agricultural society lived by.

And while you’re dealing with this cosmology trying to create order and restore order in terms of time, how do you prevent the disorder from the increase in wealth that occurs as technology and population grow and societies become more and more productive and wealthy? That was a big challenge to civilization. The Asian societies met it very well. The Middle Eastern societies met it very well.

They had a system that was able to keep time, and generally prevent or remedy social polarization. They wanted to have a system that maintained order on a continuous basis without creating disorder. And that’s what led me to work with David Graeber and other people trying to think, well, how is it that you’d have some very archaic societies that very often lasted a lot longer than the ones we have today? And as Graeber pointed out in his more recent book, The Dawn of Everything, there are many Mesoamerican, and generally speaking, Native American communities that had a very careful standardization of social poles—you didn’t want there to be wealthy people, it creates egotism, it tends to be abusive to other people.

Ritch-Frel: Can you share a bit about your collaborations with David Graeber?

Hudson: Graeber’s basic aim was to show how some societies had avoided polarization and inequality as social wealth developed. How do we explain the origins of inequality and how do we prevent it? We had talked originally about economic historian Karl Polanyi and his circle’s attempt to go beyond the economic orthodoxy that social organization began with individuals bartering and lending money based on its rate of return. He took the viewpoint that there was a wider society in motion that was shaping our economic structures, not just merchants and customers.

Well, he had read my books, and I mean, we had long discussions and he said, he wrote Debt: The First 5,000 Years largely to popularize my work, and because he realized that debt was the great polarizing fact of antiquity. And that’s why he pushed the Occupy Wall Street movement to focus on debt cancellations.

One of David’s activist tactics was to buy people’s defaulted debts for 1 cent on the dollar, debts that everybody thought were uncollectible. There are marketplaces for defaulted debt that lenders have given up on, and there’s a secondary market for debt-collecting divisions of banks that want to take their chances, buying the debt at very steep discounts. And Graeber wanted to raise money to buy these debts and tell the debtors: you don’t owe this money anymore; look, we paid it all off for you.

What David and his friends couldn’t have bargained for is just how depraved and corrupt the banks were—the banks had sold the same collection rights to many different collectors. The debtors were still being harassed by debt collectors even after their loans were bought off.

The tactic didn’t work, but the idea was right. David and I both wanted to advocate debt cancellations here because that’s what’s destroying the economy today. Western civilization never developed the means of canceling debts in the way that the Near East and other parts of Asia did.

Today, we are smothered in a fake storyline, a fake origin myth for economics. Margaret Thatcher typifies this attitude. You have to pay the debts. You have to let the rich people take over because they get wealthy. And unequal wealth is what civilization is all about. The ability of wealthy people to crush and destroy civilization is Western progress.

The myth goes like this:

In the beginning, there were individual entrepreneurs who tried to make money; the government then stepped in, wouldn’t let them make money, and canceled the debts, so nobody would lend money anymore and economies couldn’t develop. But fortunately, our modern economy figured out how to grow: the payment of debts is a must, and that gives security to the creditors. We can’t have a free market, wealth-creating economy if the 1 percent can’t drive the 99 percent into debt. And that’s why the stock and bond markets and the real estate market have gone up while the rest of the American economy, the 99 percent, has gone down since 2008.

Meanwhile, if you look under the hood of the Bronze Age, the Neolithic that preceded it, and the Paleolithic before it—the evidence overwhelmingly points to a default: mutual aid, and common wealth.

Our leading economists say civilization couldn’t have begun this way: “If you began this way, how could you ever have the security of creditors to make the loans, to help everything develop?” They’ve just never lived in that world, so, therefore, it’s unimaginable for them.

Ritch-Frel: A fuller account of human history that stretches millions of years into the geological time scale, across a wider geographic area, is part of the infrastructure humans need to pave a road back to more resilient and equal societies. What have you gathered as you have followed the evolution of social insurance and mutual aid systems into government administration, modern banking, and finance? Did you spot paths not taken that lead to more humanistic outcomes?

Hudson: In my opinion, the key driver of Western economic history is the shifting and unstable political relationships that grew out of the financial dynamic of debts growing at compound interest faster than the economies can pay. Casting the net wider, we can see that it was a tenet of Chinese law, Indian law, and Middle Eastern law, to prevent an independent financial oligarchy from developing.

How did we lose all of that?

A series of historical events, of course, rooted in what we call the Classical Era in the Mediterranean. When Phoenician and neighboring sea traders expanded their trading posts into the Mediterranean and mixed with various colonies, they enforced the concept of charging interest on debts, and the chieftains of city-states and colonies adopted this policy without the debt cancellation cure that centralized rulers had adopted across the Near East. The traders just wanted their silver; they weren’t terribly bothered by the upheavals in the social order that occur when you don’t cancel debt. The economies of Greece and Rome and their political heirs in Western Europe were all about creating a financial oligarchy and sanctifying debts instead of sanctifying the cancellation of debt.

By explaining the Mesopotamian and other Near Eastern royal proclamations canceling debts and reestablishing order, it is possible to show people another path—one that has worked for thousands of years, and emerged out of that Paleolithic thought matrix. What we call Western civilization and progress is a detour from the direction that human civilization had been traveling for a much longer time.

This whole detour of not being able to control the egotism borne by wealth and the development of a creditor class—who eventually gain control of the land and the basic needs of life—is a civilizational problem.

This article was produced by Human Bridges, a project of the Independent Media Institute.

Wednesday, October 02, 2024

COUNTERINTUITIVE

GM reports US sales dip, but says EVs grew



By AFP
October 1, 2024




General Motors reported lower overall sales but the introduction of the Chevrolet Equinox EV helped boost electric auto sales - Copyright AFP/File Geoff Robins

General Motors reported a dip in third-quarter US auto sales Tuesday, but pointed to growth in sales of electric vehicles and said retail pricing remained steady.

The big Detroit automaker reported 659,601 US sales during the period, down 2.2 percent from the year-ago quarter but marking a slightly smaller decline than analysts projected.

Sales were mixed among the truck and SUV products that have supported GM profits in recent years.

Whereas GM scored an uptick in sales of GMC Sierra pickup trucks, its top-selling Silverado line experienced a dip.

GM described its EV portfolio as “growing faster than the market” with sales jumping 46 percent in the third quarter, topping 32,000.

GM and Ford have both slowed some investments in EVs due to moderating demand for the vehicles.

GM said average vehicle pricing of $49,349 was in line with its second quarter, with incentives also holding steady.

The automaker has 627,048 vehicles in inventory heading into the fourth quarter, well above the level of a year ago, when Detroit automakers were contending with a labor strike. However, that level is still below pre-pandemic supplies.

Garrett Nelson, an analyst at CFRA Research, described GM’s sales as “broadly in line” with US auto industry performance in the period.

Cox Automotive predicted a 2.1 percent sales drop among US automakers in the period, with some volatility due to election season offset by a lift from interest rate cuts.

“We remain optimistic that new-vehicle sales could improve marginally through the final quarter of 2024,” said Charlie Chesbrough, senior economist at Cox.

Monday, September 09, 2024

 

‘Ice bucket challenge’ reveals that bacteria can anticipate the seasons


Bacteria use their internal 24-hour clocks to anticipate the arrival of new seasons



John Innes Centre

Image: Dr Luisa Jabbur – "There is something very precious about looking at a set of plates with bacteria on them and realizing that in that moment you know something that nobody else knows." (Credit: John Innes Centre)




Bacteria use their internal 24-hour clocks to anticipate the arrival of new seasons, according to research carried out with the assistance of an ‘ice bucket challenge.’ 

This discovery may have profound implications for understanding the role that circadian rhythms – a molecular version of a clock – play in adapting species to climate change, from migrating animals to flowering plants.  

The team behind the findings gave populations of blue-green algae (cyanobacteria) different artificial day lengths at a constant warm temperature. Samples on plates received either short days, equinox days (equal light and dark), or long days, for eight days.  

After this treatment, the blue-green algae were plunged into ice for two hours and survival rates monitored.   

Samples that had been exposed to a succession of short days (eight hours light and 16 hours dark) in preparation for the icy challenge achieved survival rates of 75%, up to three times higher than those of colonies that had not been primed in this way. 

One short day was not enough to increase the bacteria’s resistance to cold. Only after several short days, and optimally six to eight days, did the bacteria’s life chances significantly improve. 

In cyanobacteria from which the genes that make up the biological clock had been removed, survival rates were the same regardless of day length. This indicates that photoperiodism (the ability to measure the day-night cycle and change one’s physiology in anticipation of the upcoming season) is critical in preparing bacteria for longer-term environmental changes such as a new season or shifts in climate. 

“The findings indicate that bacteria in nature use their internal clocks to measure day length and when the number of short days reaches a certain point, as they do in autumn/fall, they ‘switch’ to a different physiology in anticipation of the wintry challenges that lie ahead,” explained first author of the study, Dr Luísa Jabbur, who was a researcher at Vanderbilt University, Tennessee, in the laboratory of Prof. Carl Johnson when this study took place, and is now a BBSRC Discovery Fellow at the John Innes Centre.  
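The behaviour described here, where one short day does nothing but a run of roughly six to eight flips a switch, can be sketched as a toy counter (my illustration, not the study's model; the threshold value is an assumption drawn from the reported six-to-eight-day optimum):

```python
# Toy model of "count successive short days, then switch physiology".
SHORT_DAY_HOURS = 8   # hours of light in the study's short-day treatment
THRESHOLD = 6         # roughly six to eight successive short days triggered the switch

def winter_ready(day_lengths_hours):
    """Return True once enough successive short days have been experienced."""
    streak = 0
    for light_hours in day_lengths_hours:
        streak = streak + 1 if light_hours <= SHORT_DAY_HOURS else 0
        if streak >= THRESHOLD:
            return True
    return False

print(winter_ready([8] * 8))       # True: a run of eight short days primes cold resistance
print(winter_ready([12, 12, 8]))   # False: one short day is not enough
```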

The Johnson lab has a long history of studying the circadian clock of cyanobacteria, both from a mechanistic and an ecological perspective. 

Previous studies have shown that bacteria have a version of a biological clock, which could allow them to measure differences in day-night length, offering an evolutionary advantage. 

This study, which appears in Science, is the first time that anyone has shown that photoperiodism in bacteria has evolved to anticipate seasonal cues.  

Based on these findings a whole new horizon of scientific exploration awaits. A key question is: how does an organism with a lifespan of between six and 24 hours evolve a mechanism that enables it not merely to react to, but to anticipate, future conditions? 

“It’s like they are signalling to their daughter cells and their granddaughter cells, passing information that the days are getting short, you need to do something,” said Dr Jabbur. 

Dr Jabbur and colleagues at the John Innes Centre will, as part of her BBSRC Discovery Fellowship, use cyanobacteria as a fast-reproducing model species to understand how photoperiodic responses might evolve in other species during climate change, with hoped-for applications to major crops.  

A key part of this work will be to understand more about the molecular memory systems by which information is passed from generation to generation in species. Research will investigate the possibility that an accumulation of compounds during the night on short days acts as a molecular switch that triggers change to a different physiology or phenotype.  

For Dr Jabbur the findings amount to an early-career scientific breakthrough in the face of initial scepticism from her scientific mentor and the corresponding author of the paper, Professor Carl Johnson. 

“As well as being a fascinating person and an inspiration, Carl sings in the Nashville Symphony Chorus, and he has an operatic laugh! It echoed round the department when I first outlined my idea for the icy challenge, to see if photoperiod was a cue for cyanobacteria in their natural element,” said Dr Jabbur. 

“To be fair he told me to go away and try it, and as I went, he showed me a sign on his door with the Frank Westheimer quote: ‘Progress is made by young scientists who carry out experiments that old scientists say would not work.’ 

“It did work, first time. Then I repeated the experiments. There is something very precious about looking at a set of plates with bacteria on them and realizing that in that moment you know something that nobody else knows.” 

The study, "Bacteria can anticipate the seasons: Photoperiodism in cyanobacteria", appears in Science.  

Monday, August 19, 2024

Why August 19, 2024's Super Blue Moon Is So Rare And When To See The Next One

August 2024's full moon is a special event, combining a Supermoon, a Blue Moon, and the Sturgeon Moon.

Outlook International Desk
Updated on: 19 August 2024 



Representative image (Photo: Pinterest)

Tonight’s full moon is not just a regular celestial event—it’s a rare combination of three fascinating phenomena: the Supermoon, the Blue Moon, and the Sturgeon Moon. Let’s break down what makes this August 19th full moon so special.


What Is A Supermoon?

A Supermoon happens when the moon is at its closest point to Earth, known as perigee, during its full moon phase. This means it looks bigger and brighter than usual. The term "Supermoon" was introduced by astrologer Richard Nolle in 1979. NASA explains that because the moon’s orbit around Earth is not a perfect circle, its distance from Earth varies. Supermoons occur three to four times a year, but they only make up about 25% of all full moons.
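The size difference follows from simple geometry: apparent diameter scales with the inverse of distance. Here is a rough sketch (my figures are approximate mean perigee and apogee distances, not from the article):

```python
# Illustrative sketch: why a perigee ("super") full moon looks bigger and brighter.
# Approximate mean lunar distances in km; actual values vary from orbit to orbit.
PERIGEE = 363_300
APOGEE = 405_500

size_ratio = APOGEE / PERIGEE       # apparent diameter scales as 1/distance
brightness_ratio = size_ratio ** 2  # apparent brightness scales as 1/distance^2

print(f"~{(size_ratio - 1) * 100:.0f}% larger in apparent diameter than at apogee")  # ~12%
print(f"~{(brightness_ratio - 1) * 100:.0f}% brighter than at apogee")               # ~25%
```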


What About The Blue Moon?

The term "Blue Moon" has two meanings: monthly and seasonal. August's full moon is a seasonal Blue Moon, which is rare. Typically, there are three full moons in each astronomical season (from solstice to equinox or vice versa). When there are four, the third one is called a Blue Moon. The next seasonal Blue Moon won’t appear until May 2027.


Monthly Blue Moons, which happen roughly every 2-3 years, refer to the second full moon in a single calendar month. While Blue Moons are uncommon, a Supermoon that is also a Blue Moon is even rarer, with occurrences ranging from every 10 to 20 years. The next pairing of a Supermoon and Blue Moon will occur in January and March 2037.
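Both definitions reduce to simple rules over a list of full moon dates. Here is a minimal sketch (mine, not the article's; real dates would come from an astronomical ephemeris, and the sample dates below are approximate and shift with time zone):

```python
from datetime import date

def monthly_blue_moons(full_moons):
    """Monthly rule: the second full moon within a single calendar month."""
    seen, blue = set(), []
    for d in sorted(full_moons):
        key = (d.year, d.month)
        if key in seen:
            blue.append(d)  # a month's second full moon is "blue"
        seen.add(key)
    return blue

def seasonal_blue_moon(season_full_moons):
    """Seasonal rule: in a season with four full moons, the THIRD is blue."""
    moons = sorted(season_full_moons)
    return moons[2] if len(moons) == 4 else None

# Summer 2024 (solstice to equinox) had four full moons;
# the third, 19 August, is the seasonal Blue Moon.
summer_2024 = [date(2024, 6, 21), date(2024, 7, 21),
               date(2024, 8, 19), date(2024, 9, 17)]
print(seasonal_blue_moon(summer_2024))  # 2024-08-19
```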


The Sturgeon Moon

The name "Sturgeon Moon" comes from Native American tribes who used lunar names to track seasonal changes. August’s full moon was named after the large sturgeon fish found in the Great Lakes and Lake Champlain, which were most easily caught during this time of year. Other names for August’s full moon include the Black Cherries Moon, Corn Moon, and Mountain Shadows Moon. Unfortunately, sturgeon populations have declined due to overfishing and habitat loss.

Will The Moon Actually Appear Blue?

Despite the name, the moon won’t turn blue tonight. When you see images of a blue moon, the blue color is usually added through filters or photo editing. Real blue moons are incredibly rare and usually result from specific atmospheric conditions, such as volcanic ash. The term "Blue Moon" has been in use since at least 1528, but naturally occurring blue moons are few and far between.

What’s Happening In The Night Sky?

If you’re keen on stargazing this month, here are some highlights:



August 19: Watch the full moon.


August 20: The moon will move past Saturn, rising in the east and traveling west throughout the night.


August 27: A crescent moon will join Mars and Jupiter before sunrise for a spectacular trio in the eastern sky.


All Month: The Lagoon Nebula is visible with binoculars or a telescope in the constellation Sagittarius, near "The Teapot" star pattern.


Enjoy tonight's full moon—whether you’re a seasoned stargazer or just curious about the cosmos, this August full moon is a celestial event not to be missed!

Thursday, April 04, 2024

EU: AI Act fails to set gold standard for human rights

POSTED ON APRIL 04, 2024

As EU institutions are expected to conclusively adopt the EU Artificial Intelligence Act in April 2024, ARTICLE 19 joins those voicing criticism about how the Act fails to set a gold standard for human rights protection. Over the last three years of negotiation, together with the coalition of digital rights organisations, we called on lawmakers to demand that AI works for people and that regulation prioritises the protection of fundamental human rights. We believe that in several areas, the AI Act is a missed opportunity to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence and many other rights and freedoms are protected when it comes to artificial intelligence.

For the last three years, as part of the European Digital Rights (EDRi) coalition, ARTICLE 19 has demanded that artificial intelligence (AI) works for people and that its regulation prioritises the protection of fundamental human rights. We have put forward our collective vision for an approach where ‘human-centric’ is not just a buzzword, where people on the move are treated with dignity, and where lawmakers are bold enough to draw red lines against unacceptable uses of AI systems.

Following a gruelling negotiation process, EU institutions are expected to conclusively adopt the final AI Act in April 2024. But while they celebrate, we take a much more critical stance. We want to highlight the many missed opportunities to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence, and many other rights and freedoms are protected when it comes to AI. Here’s our round-up of how the final law fares against our collective demands.

This analysis is based on the latest available version of the AI Act text, dated 6 March 2024. There may still be small changes made before the law’s final adoption.
First, we called on EU lawmakers to empower affected people by upholding a framework of accountability, transparency, accessibility, and redress. How did they do?

Some accessibility barriers have been broken down, but more needs to be done: Article 16(ja) of the AI Act fulfils our call for accessibility by stating that high-risk AI systems must comply with accessibility requirements. However, we still believe that this should be extended to apply to low- and medium-risk AI systems as well, in order to ensure that the needs of people with disabilities are central in the development of all AI systems which could impact them.

More transparency about certain AI deployments, but big loopholes for the private sector and security agencies: The AI Act establishes a publicly accessible EU database to provide transparency about AI systems that pose higher risks to people’s rights or safety. While originally only providers of high-risk AI systems were subject to transparency requirements, we successfully persuaded decision-makers that deployers of AI systems – those who actually use the system – shall also be subject to transparency obligations.
Transparency obligations fall on providers and deployers who place on the market or use AI systems in high-risk areas – such as employment and education – as designated by Annex III. Providers will be required to register their high-risk system in the database and to enter information about it, such as a description of its intended purpose, a concise description of the information used by the system, and its operating logic. Deployers of high-risk AI systems who are public authorities – or those acting on their behalf – will be obliged to register their use of the system. They will be required to enter information into the database, such as a summary of the findings of a fundamental rights impact assessment (FRIA) and a summary of the data protection impact assessment. However, deployers of high-risk AI systems in the private sector will not be required to register their use of high-risk systems – another critical issue;
The major shortcoming of the EU database is that negotiators agreed on a carve-out for law enforcement, migration, asylum, and border control authorities. Providers and deployers of high-risk systems in these areas will be required to register only a limited amount of information, and only in a non-publicly accessible section of the database. Certain important pieces of information, such as the training data used, will not be disclosed at all. This will prevent affected people, civil society, journalists, watchdog organisations and academics from exercising public scrutiny over these high-stakes areas, which are prone to fundamental rights violations, and from holding the responsible authorities accountable.
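To make the two registration regimes described above concrete, here is a minimal sketch of what such database records could look like. It is purely illustrative: the class and field names are our own shorthand for the items listed in the text, not terms from the Act or any Commission implementation.

from dataclasses import dataclass

@dataclass
class ProviderEntry:
    # Items the text says providers must register for high-risk systems.
    intended_purpose: str
    information_used: str      # concise description of the system's inputs
    operating_logic: str
    publicly_visible: bool     # False under the law-enforcement/migration carve-out

@dataclass
class PublicAuthorityDeployerEntry:
    # Items the text says public-authority deployers must register.
    fria_summary: str          # fundamental rights impact assessment findings
    dpia_summary: str          # data protection impact assessment summary
    publicly_visible: bool

Under the carve-out, entries from law enforcement, migration, asylum and border control authorities would carry publicly_visible = False and omit items such as training data, while private-sector deployers would have no entry at all.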

Fundamental rights impact assessments are included, but concerns remain about how meaningful they will be: We successfully convinced EU institutions of the need for fundamental rights impact assessments (FRIAs). However, based on the final AI Act text, we doubt whether they will actually prevent human rights violations and serve as a meaningful tool of accountability. We see three primary shortcomings:
Lack of meaningful assessment and the obligation to prevent negative impacts: while the new rules require deployers of high-risk AI systems to list risks of harm to people, there is no explicit obligation to assess whether these risks are acceptable in light of fundamental rights law, nor to prevent them wherever possible. Regrettably, deployers only have to specify which measures will be taken once risks materialise, likely once the harm has already been done;
No mandatory stakeholder engagement: the requirement to engage external stakeholders, including civil society and people affected by AI, in the assessment process was also removed from the article at the last stages of negotiations. This means that civil society organisations will not have a direct, legally-binding way to contribute to impact assessments;
Transparency exceptions for law enforcement and migration authorities: while in principle, deployers of high-risk AI systems will have to publish the summary of the results of FRIAs, this will not be the case for law enforcement and migration authorities. The public will not even have access to the mere information that an authority uses a high-risk AI system in the first place. Instead, all information related to the use of AI in law enforcement and migration will only be included in a non-public database, severely limiting constructive public oversight and scrutiny. This is a very concerning development as, arguably, the risks to human rights, civic space and the rule of law are most severe in these two areas. Moreover, while deployers are obliged to notify the relevant market surveillance authority of the outcome of their FRIA, there is an exemption from this notification obligation for ‘exceptional reasons of public security’. This excuse is often misused as a justification to carry out disproportionate policing and border management activities.

When it comes to complaints and redress, there are some remedies, but no clear recognition of the ‘affected person’: Civil society has advocated for robust rights and redress mechanisms for individuals and groups affected by high-risk AI systems. We demanded the creation of a new section titled ‘Rights of Affected Persons’, which would delineate specific rights and remedies for individuals impacted by AI systems. However, this section was not created; instead, we have a ‘remedies’ chapter that includes only some of our demands;
This remedies chapter includes the right to lodge complaints with a market surveillance authority, but it lacks teeth: it remains unclear how effectively these authorities will be able to enforce compliance and hold violators accountable. Similarly, the right to an explanation of individual decision-making, particularly for AI systems listed as high-risk, raises questions about the practicality and accessibility of obtaining meaningful explanations from deployers. Furthermore, the effectiveness of these mechanisms in practice remains uncertain, given the absence of provisions such as the right to representation of natural persons, or the ability for public interest organisations to lodge complaints with national supervisory authorities.

The Act allows a double standard when it comes to the human rights of people outside the EU: The AI Act falls short of civil society’s demand to ensure that EU-based AI providers whose systems impact people outside the EU are subject to the same requirements as those inside the EU. The Act does not stop EU-based companies from exporting AI systems which are banned in the EU, creating a huge risk that the rights of people in non-EU countries will be violated by EU-made technologies that are essentially incompatible with human rights. Additionally, the Act does not require exported high-risk systems to follow the technical, transparency or other safeguards otherwise required when AI systems are intended for use within the EU, again risking the violation of the rights of people outside the EU.
Second, we urged EU lawmakers to limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities. How did they do?

The blanket exemption for national security risks undermining other rules: The AI Act and its safeguards will not apply to AI systems that are developed or used solely for the purpose of national security, regardless of whether this is done by a public authority or a private company. This exemption introduces a significant loophole that will automatically exempt certain AI systems from scrutiny and limit the applicability of the human rights safeguards envisioned in the AI Act;
In practical terms, it would mean that governments could invoke national security to introduce biometric mass surveillance systems, without having to apply any safeguards envisioned in the AI Act, without conducting a fundamental rights impact assessment and without ensuring that the AI system meets high technical standards and does not discriminate against certain groups;
Such a broad exemption is not justified under EU treaties and goes against the established jurisprudence of the European Court of Justice. While national security can be a justified ground for exceptions from the AI Act, this has to be assessed case-by-case, in line with the EU Charter of Fundamental Rights. The adopted text, however, makes national security a largely digital rights-free zone. We are concerned about the lack of clear national-level procedures to verify if the national security threat invoked by the government is indeed legitimate and serious enough to justify the use of the system and if the system is developed and used with respect for fundamental rights. The EU has also set a worrying precedent regionally and globally; broad national security exemptions have now been introduced in the newly-adopted Council of Europe Convention on AI.

Predictive policing, live public facial recognition, biometric categorisation and emotion recognition are only partially banned, legitimising these dangerous practices: We called for comprehensive bans against any use of AI that isn’t compatible with rights and freedoms – such as proclaimed AI ‘mind reading’, biometric surveillance systems that treat us as walking bar-codes, or algorithms used to decide whether we are innocent or guilty. All of these examples are now partially banned in the AI Act, which is an important signal that the EU is prepared to draw red lines against unacceptably harmful uses of AI;
At the same time, all of these bans contain significant and disappointing loopholes, which means that they will not achieve their full potential. In some cases, these loopholes risk having the opposite effect from what a ban should: they give the signal that some forms of biometric mass surveillance and AI-fuelled discrimination are legitimate in the EU, which risks setting a dangerous global precedent;
For example, the fact that emotion recognition and biometric categorisation systems are prohibited in the workplace and in education settings, but are still allowed when used by law enforcement and migration authorities, signals the EU’s willingness to test the most abusive and intrusive surveillance systems against the most marginalised in society;
Moreover, when it comes to live public facial recognition, the Act paves the way to legalise some specific uses of these systems for the first time ever in the EU – despite our analysis showing that all public-space uses of these systems constitute an unacceptable violation of everyone’s rights and freedoms.

The serious harms of retrospective facial recognition are largely ignored: Retrospective facial recognition is not banned at all by the AI Act. As we have explained, the use of retrospective (post) facial recognition and other biometric surveillance systems (called ‘remote biometric identification’, or ‘RBI’, in the text) is just as invasive and rights-violating as live (real-time) systems. Yet the AI Act makes a big error in claiming that the extra time involved in retrospective uses will mitigate possible harms;
While several lawmakers have argued that they managed to insert safeguards, our analysis is that the safeguards are not meaningful enough and could easily be circumvented by police. In one place, the purported safeguard even suggests that the mere suspicion of any crime having taken place would be enough to justify the use of a post RBI system – a lower threshold than we currently benefit from under EU data protection law.

People on the move are not afforded the same rights as everyone else, with only weak – and at times absent – rules on the use of AI at borders and in migration contexts: In its final version, the EU AI Act sets a dangerous precedent for the use of surveillance technology against migrants, people on the move and marginalised groups. The legislation develops a separate legal framework for the use of AI by migration control authorities, enabling the testing and use of dangerous surveillance technologies at the EU’s borders, disproportionately against racialised people;
None of the bans meaningfully apply to the migration context, and the transparency obligations present ad-hoc exemptions for migration authorities, allowing them to act with impunity and far away from public scrutiny;
The list of high-risk systems fails to capture the many AI systems used in the migration context, as it excludes dangerous systems such as non-remote biometric identification systems, fingerprint scanners, or forecasting tools used to predict, interdict, and curtail migration;
Finally, AI systems used as part of EU large-scale migration databases (e.g. Eurodac, the Schengen Information System, and ETIAS) will not have to be compliant with the Regulation until 2030, which gives plenty of time to normalise the use of surveillance technology.
Third, we urged EU lawmakers to push back on Big Tech lobbying and address environmental impacts. How did they do?

The risk classification framework has become a self-regulatory exercise: Initially, all use cases included in the list of high-risk applications would have had to follow specific obligations. However, as a result of heavy industry lobbying, providers of high-risk systems will now be able to decide whether their systems are high-risk or not, as an additional ‘filter’ was added to the classification system;
Providers will still have to register sufficient documentation in the public database to explain why they do not consider their system to be high-risk. However, this obligation will not apply when they supply systems to law enforcement and migration authorities, paving the way for the free and deregulated procurement of surveillance systems in the policing and border contexts. A simplified sketch of this disclosure loophole follows below.
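Rendered as decision logic, the loophole looks roughly like this. This is our own simplified, illustrative reading of the provisions described above, not text from the Act; the function and sector names are invented.

def filter_claim_must_be_public(customer_sector: str) -> bool:
    # A provider invoking the Annex III 'filter' (claiming its system is
    # not high-risk) must document that reasoning in the public database -
    # except when supplying law enforcement or migration authorities.
    exempt_sectors = {"law enforcement", "migration"}
    return customer_sector not in exempt_sectors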

The Act takes only a tentative first step to address the environmental impacts of AI: We have serious concerns about how the exponential use of AI systems can have severe impacts on the environment, including through resource consumption, extractive mining and energy-intensive processing. Today, information on the environmental impacts of AI is a closely guarded corporate secret. This makes it difficult to assess the environmental harms of AI and to develop political solutions to reduce carbon emissions and other negative impacts;
The first draft of the AI Act completely neglected these risks, despite civil society and researchers repeatedly calling for the energy consumption of AI systems to be made transparent. To address this problem, the AI Act now requires that providers of GPAI models that are trained with large amounts of data and consume a lot of electricity must document their energy consumption. The Commission now has the task of developing a suitable methodology for measuring the energy consumption in a comparable and verifiable way;
The AI Act also requires that standardised reporting and documentation procedures must be created to ensure the efficient use of resources by some AI systems. These procedures should help to reduce the energy and other resource consumption of high-risk AI systems during their life cycle. These standards are also intended to promote the energy-efficient development of general-purpose AI models;
These reporting standards are a crucial first step towards basic transparency about some of the ecological impacts of AI, first and foremost energy use. But they can only serve as a starting point for more comprehensive policy approaches that address all environmental harms along the AI production process, such as water consumption and mineral extraction. We cannot rely on self-regulation, given how fast the climate crisis is evolving.
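As a rough illustration of what the documentation duty could amount to in practice, the record below sketches the kind of minimal, comparable energy report described above. The field names are hypothetical; the Commission’s actual methodology is still to be developed.

from dataclasses import dataclass

@dataclass
class GPAIEnergyReport:
    model_name: str
    training_energy_kwh: float   # measured or estimated energy used for training
    methodology: str             # must allow comparable, verifiable measurement
    reporting_period: str        # e.g. the training run or a calendar year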
What’s next for the AI Act?

The coming year will be decisive for the EU’s AI Act, with different EU institutions, national lawmakers and even company representatives setting standards, publishing interpretive guidelines and driving the Act’s implementation across the EU’s member countries. Some parts of the law – the prohibitions – could become operational as soon as November. It is therefore vital that civil society groups are given a seat at the table, and that this work is not done in opaque settings and behind closed doors.

We urge lawmakers around the world who are also considering bringing in horizontal rules on AI to learn from the EU’s many mistakes outlined above. A meaningful set of protections must ensure that AI rules truly work for individuals, communities, society, rule of law, and the planet.

While this long chapter of lawmaking is now coming to a close, the next chapter of implementation – and trying to get as many wins out of this Regulation as possible – is just beginning. As a group, we are drafting an implementation guide for civil society, coming later this year. We want to express our thanks to the entire AI core group, who have worked tirelessly for over three years to analyse, advocate and mobilise around the EU AI Act. In particular, we thank Sarah Chander of the Equinox Racial Justice Institute for her work, dedication and vision in leading this group over the last three years.

TECH & RIGHTS

Packed With Loopholes: Why the AI Act Fails to Protect Civic Space and the Rule of Law

The AI Act fails to effectively protect the rule of law and civic space. ECNL, Liberties and the European Civic Forum (ECF) give their analysis of its shortcomings.


by LibertiesEU
April 04, 2024



The unaccountable and opaque use of Artificial Intelligence (AI), especially by public authorities, can undermine civic space and the rule of law. In the European Union, we have already witnessed AI-driven technologies being used to surveil activists, assess whether airline passengers pose a terrorism risk or appoint judges to court cases. The fundamental rights framework as well as rule of law standards require that robust safeguards are in place to protect people and our societies from the negative impacts of AI.

For this reason, the European Centre for Not-for-Profit Law (ECNL), Liberties and the European Civic Forum (ECF) closely monitored and contributed to the discussions on the EU’s Artificial Intelligence Act (AI Act), first proposed in 2021. From the beginning, we advocated for strong protections for fundamental rights and civic space and called on European policymakers to ensure that the AI Act is fully coherent with rule of law standards.

The European Parliament approved the AI Act on 13 March 2024, thus marking the end of a three-year-long legislative process. Yet to come are guidelines and delegated acts to clarify the often vague requirements. In this article, we take stock of the extent to which fundamental rights, civic space and the rule of law will be safeguarded and provide an analysis of key AI Act provisions.
Far from a golden standard for a rights-based AI regulation

Our overall assessment is that the AI Act fails to effectively protect the rule of law and civic space, instead prioritising industry interests, security services and law enforcement bodies. While the Act requires AI developers to maintain high standards for the technical development of AI systems (e.g. in terms of documentation or data quality), measures intended to protect fundamental rights, including key civic rights and freedoms, are insufficient to prevent abuses. They are riddled with far-reaching exceptions, lowering protection standards, especially in the area of law enforcement and migration.

The AI Act was negotiated and finalised in a rush, leaving significant gaps and legal uncertainty, which the European Commission will have to clarify in the next months and years by issuing delegated acts and guidelines. Regulating emerging technology requires flexibility, but the Act leaves too much to the discretion of the Commission, secondary legislation or voluntary codes of conduct. These could easily undermine the safeguards established by the AI Act, further eroding the fundamental rights and rule of law standards in the long term.

CSOs’ contributions will be necessary for a rights-based implementation of the AI Act

The AI Act will enter into effect in stages, with full application expected in 2026. The European Commission will develop guidance and delegated acts specifying various requirements for the implementation, including guidance on the interpretation of prohibitions, as well as a template for conducting fundamental rights impact assessments. It will be crucial for civil society to actively contribute to this process with their expertise and real-life examples. In the next months, we will publish a map of key opportunities where these contributions can be made. We also call on the European Commission and other bodies responsible for the implementation and enforcement of the AI Act to proactively facilitate civil society participation and to prioritise diverse voices including those of people affected by various AI systems, especially those belonging to marginalised groups.

5 flaws of the AI Act from the perspective of civic space and the rule of law

1. Gaps and loopholes can turn prohibitions into empty declarations

2. AI companies’ self-assessment of risks jeopardises fundamental rights protections

3. Standards for fundamental rights impact assessments are weak

4. The use of AI for national security purposes will be a rights-free zone

5. Civic participation in the implementation and enforcement is not guaranteed
The AI Act’s limitations showcase the need for a European Civil Dialogue Agreement

The legislative process surrounding the AI Act was marred by a significant lack of civil dialogue – the obligation of the EU institutions to engage in an open, transparent, and regular process with representative associations and civil society. To date, there is no legal framework regulating the European civil dialogue, although civil society has been calling for it in various contexts. Since the announcement of the AI Act, civil society has made great efforts to coordinate horizontally to feed into the process, engaging diverse organisations at the national and European levels. In the absence of clear guidelines on how civil society input should be included ahead of the drafting of EU laws and policies, the framework proposed by the European Commission to address the widespread impact of AI technologies on society and fundamental rights was flawed. Throughout the preparatory and political stages, the process remained opaque, with limited transparency regarding decision-making and little opportunity for input from groups representing a rights-based approach, particularly in the Council and during trilogue negotiations. This absence of inclusivity raises concerns about the adopted text’s impact on society at large. It not only undermines people’s trust in the legislative process and the democratic legitimacy of the AI Act but also hampers its key objective to guarantee the safety and fundamental rights of all.

However, in contrast to public interest and fundamental rights advocacy groups, market and for-profit lobbyists and representatives of law enforcement authorities and security services had great influence in the legislative process of the AI Act. This imbalanced representation favoured commercial interests and the narrative of external security threats over the broader societal impacts of AI.

Read our analysis in full here.


Symposium on Military AI and the Law of Armed Conflict: Human-machine Interaction in the Military Domain and the Responsible AI Framework


04.04.24

[Dr Ingvild Bode is Associate Professor at the Centre for War Studies, University of Southern Denmark. She is the Principal Investigator of the European Research Council-funded project AutoNorms: Weaponised Artificial Intelligence, Norms, and Order (08/2020-07/2025) and also serves as the co-chair of the IEEE-SA Research Group on AI and Autonomy in Defence Systems.

Anna Nadibaidze is a researcher for the European Research Council funded AutoNorms project based at the Center for War Studies, University of Southern Denmark.]


Artificial intelligence (AI) technologies are increasingly part of military processes. Militaries use AI technologies, for example, for decision support and in combat operations, including as part of weapon systems. Contrary to some previous expectations – notably popular-culture depictions of ‘sentient’ humanoid machines willing to destroy humanity, or of ‘robot wars’ between machines – integrating AI into the military does not mean that AI technologies replace humans. Rather, military personnel interact with AI technologies, likely at an increasing frequency, as part of their day-to-day activities, which include the targeting process. Some militaries have adopted the language of human-machine teaming to describe these instances of human-machine interaction. The term can refer to humans interacting with uncrewed (semi-)autonomous platforms or with AI-based software systems. Such developments are increasingly promoted as key trends in defence innovation. For instance, the UK Ministry of Defence considers the “effective integration of humans, AI and robotics into warfighting systems—human-machine teams” to be “at the core of future military advantage”.

At the same time, many states highlight that they intend to develop and use these technologies in a ‘responsible’ manner. The framework of Responsible AI in the military domain is growing in importance across policy and expert discourse, moving beyond the focus on autonomous weapon systems that can “select and apply force without human intervention”. Instead, this framework assumes that AI will be integrated into various military processes and interact with humans in different ways, and therefore it is imperative to find ways of doing so responsibly, for instance by ensuring understandability, reliability, and accountability.

Our contribution connects these intersecting trends by offering a preliminary examination of the extent to which the Responsible AI framework addresses challenges attached to changing human-machine interaction in the military domain. To do so, we proceed in two steps: first, we sketch the kinds of challenges raised by instances of human-machine interaction in a military context. We argue that human-machine interaction may fundamentally change the quality of human agency, understood as the ability to make choices and act, in warfare. It does so by introducing a form of distributed agency in military decision-making, including in but not limited to the targeting process. There is therefore a need to examine the types of distributed agency that will emerge, or have already emerged, as computational techniques under the ‘AI’ umbrella term are increasingly integrated into military processes. Second, we consider the extent to which the emerging Responsible AI framework, and the principles associated with it, demonstrates potential to address these challenges.

1. Human-machine Interaction and Distributed Agency

Appropriate forms of human agency and control over use-of-force decision-making are necessary on ethical, legal, and security grounds. (Western) military thinking on human-machine or human-AI teaming recognises this. Human-machine interaction involves sharing cognitive tasks with AI technologies as their use is chiefly associated with the speedy processing of large amounts of data/information. It follows that any decision made in the context of human-machine interaction implies a combination of ‘human’ and ‘machine’ decision-making. This interplay changes how human agency is exercised. Instead of producing zero-sum outcomes, we are likely to encounter a form of distributed agency in military decisions that rely on human-machine interaction. Above all, distributed agency involves a blurring of the distinction between instances of ‘human’ and ‘AI’ agency.

Understanding this distributed agency could, in the first place, consider particularities of how ‘human’ and ‘AI’ agents make choices and act and what this means for interaction dynamics. This is an evolving topic of interest as AI technologies are increasingly integrated into the military domain. The reality of distributed agency is not clear-cut. Any ‘AI agency’ results from human activity throughout the algorithmic design and training process that has become ‘invisible’ at the point of use. This human activity includes the programmers who create the basic algorithmic parameters, the workers who prepare the data that training machine learning algorithms requires through a series of iterative micro-tasks often subsumed as ‘labelling data’, and also the people whose data is used to train such algorithms. It is therefore important to think about ‘human’ and ‘AI’ agency as part of a relational, complex, socio-technical system. From the perspective of the many groups of humans that are part of this system, interacting with AI creates both affordances (action potentials) and constraints. Studying different configurations of this complex system could then advance our understanding of distributed agency.

These initial insights into how technological affordances and constraints shape distributed agency matter in the military domain because they affect human decision-making, including in a warfare context. What does it actually mean for humans to work with AI technologies? The long-established literature in human-factor analysis describes numerous fundamental obstacles that people face when interacting with complex systems integrating automated and AI technologies. These include “poor understanding of what the systems are doing, high workload when trying to interact with AI systems, poor situation awareness (SA) and performance deficits when intervention is needed, biases in decision making based on system outputs, and degradation”. Such common operational challenges of human-machine interaction raise fundamental political, ethical, legal, social, and security concerns. There are particularly high stakes in the military domain because AI technologies used in this context have the potential to inflict severe harm, such as physical injury, human rights violations, death, and (large-scale) destruction.


2. Responsible AI and Challenges of Human-machine Interaction

The Responsible AI framework has been gaining prominence among policymaking and expert circles of different states, especially the US and its allies. In 2023, the US released its Political Declaration on Responsible Military Use of AI and Autonomy, endorsed by 50 other states as of January 2024. US Deputy Secretary of Defense Kathleen Hicks stated that the new Replicator Initiative, aimed at producing large numbers of all-domain, attritable autonomous systems, will be carried out “while remaining steadfast to [the DoD’s] responsible and ethical approach to AI and autonomous systems, where DoD has been a world leader for over a decade”. At the same time, the concept of responsible military AI use has also been entrenched by the Responsible AI in the Military Domain (REAIM) Summit co-hosted by the Netherlands and the Republic of Korea. More than 55 states supported the Summit’s Call to Action in February 2023, and a second edition of the event is expected in Seoul in 2024.

The Responsible AI framework broadens the debate beyond lethal autonomous weapon systems (LAWS), which have been the focus of discussions at the UN CCW in Geneva throughout the last decade. The effort to consider different uses of AI in the military, including in decision support, is a step towards recognising the challenges of human-machine interaction and potential new forms of distributed agency. These changes are happening in various ways and do not necessarily revolve around ‘full’ autonomy, weapon systems, or humans ‘out of the loop’. Efforts to consider military systems integrating autonomous and AI technologies as part of lifecycle frameworks underline this. Such frameworks demonstrate that situations of human-machine interaction occur, and need to be addressed, at various lifecycle stages: from research & development, procurement & acquisition, and testing, evaluation, verification and validation (TEVV), through potential deployment, to retirement. Addressing such concerns therefore deserves the type of debate offered by the REAIM platform: a multi-stakeholder discussion representing global perspectives on (changing) human-machine interaction in the military.

At the same time, the Responsible AI framework is nebulous and imprecise in its guidance on ensuring that challenges of human-machine interaction are addressed. So far, it functions as a “floating signifier”, in the sense that the concept can be understood in different ways, often depending on the interests of those who interpret it. This was already visible during the first REAIM Summit in The Hague, where most participants agreed on the importance of being responsible, but not on how to get there. Some of the common themes among the REAIM and US initiatives include commitment to international law, accountability, and responsibility, ensuring global security and stability, human oversight over military AI capabilities, as well as appropriate training of personnel involved in interacting with the capabilities. But beyond these broad principles, it remains unclear what constitutes ‘appropriate’ forms of human-machine interaction, and the forms of agency these involve, in relation to acting responsibly and in conformity with international law – that, in itself, offers unclear guidance. It must be noted, however, that defining ‘Responsible AI’ is no easy task because it requires considering the various dimensions of a complex socio-technical system which includes not only the technical aspects but also political, legal, and social ones. It has already been a challenging exercise in the civilian domain to pinpoint the exact characteristics of this concept, although key terms such as explainability, transparency, privacy, and security are often mentioned in Responsible AI strategies.

Importantly, the Responsible AI framework allows for various interpretations of the form, or mechanism, of global governance needed to address the challenges of human-machine interaction in the military. There are divergent approaches on the appropriate direction to take. For instance, US policymakers seek to “codify norms” for the responsible use of AI through the US political declaration, a form of soft law, interpreted by some experts as a way for Washington to promote its vision in its perceived strategic competition with Beijing. Meanwhile, many states favour a global legal and normative framework in the form of hard law, such as a legally binding instrument establishing appropriate forms of human-machine interaction, especially in relation to targeting, including the use of force. The UN’s 2023 New Agenda for Peace urges states not only to develop national strategies on the responsible military use of AI, but also to “develop norms, rules and principles…through a multilateral process” which would involve engagement with industry, academia, and civil society. Some states are taking steps in this direction; for instance, Austria co-sponsored a UN General Assembly First Committee resolution on LAWS, which was adopted with overwhelming support in November 2023. Overall, the Responsible AI framework’s inherent ambiguity is an opportunity for those favouring a soft law approach, especially actors who promote political declarations or sets of guidelines and argue that these are enough. Broad Responsible AI guidelines might symbolise a certain commitment or obligations, but at this stage they are insufficient to address already existing challenges of human-machine interaction in a security and military context – not least because they may not be connected to a concrete pathway toward operationalisation and implementation.

Note: This essay outlines initial thinking that forms the basis of a new research project called “Human-Machine Interaction: The Distributed Agency of Humans and Machines in Military AI” (HuMach) funded by the Independent Research Fund Denmark. Led by Ingvild Bode, the project will start later in 2024.



Symposium on Military AI and the Law of Armed Conflict: A (Pre)cautionary Note About Artificial Intelligence in Military Decision Making


04.04.24


[Georgia Hinds is a Legal Adviser with the ICRC in Geneva, working on the legal and humanitarian implications of autonomous weapons, AI and other new technologies of warfare. Before joining the ICRC, she worked in the Australian Government, advising on public international law including international humanitarian and human rights law, and international criminal law, and served as a Reservist Officer with the Australian Army. The views expressed on this blog are those of the author alone and do not engage the ICRC, or previous employers, in any form.]

Introduction

Most of us would struggle to define ‘artificial intelligence.’ Fewer still could explain how it functions. And yet AI technologies permeate our daily lives. They also pervade today’s battlefields. Over the past eighteen months, reports of AI-enabled systems being used to inform targeting decisions in contemporary conflicts have sparked debates (including on this platform) around legal, moral and operational issues.

Sometimes called ‘decision support systems’ (DSS), these are computerized tools that are designed to aid human decision-making by bringing together and analysing information, and in some cases proposing options as to how to achieve a goal [see, e.g., Bo and Dorsey]. Increasingly, DSS in the military domain are incorporating more complex forms of AI, and are being applied to a wider range of tasks.

These technologies do not actually make decisions, and they are not necessarily part of weapon systems that deliver force. Nevertheless, they can significantly influence the range of actions and decisions that form part of military planning and targeting processes.

This post considers implications for the design and use of these tools in armed conflict, arising from international humanitarian law (IHL) obligations, particularly the rules governing the conduct of hostilities.

Taking ‘Constant Care’, How Might AI-DSS Help or Hinder?

Broadly, in the conduct of military operations, parties to an armed conflict must take constant care to spare the civilian population, civilians and civilian objects.

The obligation of constant care is an obligation of conduct, to mitigate risk and prevent harm. It applies across the planning and execution of military operations, and is not restricted to ‘attacks’ within the meaning of IHL (paras 2191, 1936, 1875). It includes, for example, ground operations, the establishment of military installations, defensive preparations, the quartering of troops, and search operations. It has been said that this requirement to take ‘constant care’ must “animate all strategic, operational and tactical decision-making.”

In assessing the risk to civilians that may arise from the use of an AI-DSS, a first step must be assessing whether the system is actually suitable for the intended task. Applying AI – particularly machine learning – to problems for which it is not well suited has the potential to actually undermine decision-making (p 19). Automating processes that feed into decision-making can be advantageous where quality data is available and the system is given clear goals (p 12). In contrast, “militaries risk facing bad or tragic outcomes” where they provide AI systems with clear objectives but in uncertain circumstances, or where they use quality data but task AI systems with open-ended judgements. Uncertain circumstances abound in armed conflict, and the contextual, qualitative judgements required by IHL are notoriously difficult. Further, AI systems generally lack the ability to transfer knowledge from one context or domain to another (p 207), making it potentially problematic to apply an AI-DSS in a different armed conflict, or even in different circumstances in the same conflict. It is clear, then, that whilst AI systems may be useful for some tasks in military operations (e.g. navigation, maintenance and supply chain management), they will be inappropriate for many others.

Predictions about enemy behaviour will likely be far less reliable than those about friendly forces, not only due to a lack of relevant quality data, but also because armed forces will often adopt tactics to confuse or mislead their enemy. Similarly, AI-DSS would struggle to infer something open-ended or ill-defined, like the purpose of a person’s act. A more suitable application could be in support of weaponeering processes, and the modelling of estimated effects, where such systems are already deployed, and where the DSS should have access to greater amounts of data derived from tests and simulations.

Artificial Intelligence to Gain the ‘Best Possible Intelligence’?

Across military planning and targeting processes, the general requirement is that decisions required by IHL’s rules on the conduct of hostilities must be based on an assessment of the information from all sources reasonably available at the relevant time. This includes an obligation to proactively seek out and collect relevant and reasonably available information (p 48). Many military manuals stress that the commander must obtain the “best possible intelligence,” which has been interpreted as requiring information on concentrations of civilian persons, important civilian objects, specifically protected objects and the environment (See Australia’s Manual on the Law of Armed Conflict (1994) §§548 and 549).

What constitutes the best possible intelligence will depend upon the circumstances, but generally commanders should be maximising their available intelligence, surveillance and reconnaissance assets to obtain up-to-date and reliable information.

Considering this requirement to seek out all reasonably available information, it is entirely possible that the use of AI DSS may assist parties to an armed conflict in satisfying their IHL obligations, by synthesising or otherwise processing certain available sources of information (p 203). Indeed, whilst precautionary obligations do not require parties to possess highly sophisticated means of reconnaissance (pp 797-8), it has been argued that (p 147), if they do possess AI-DSS and it is feasible to employ them, IHL might actually require their use.

In the context of urban warfare in particular, the ICRC has recommended (p 15) that information about factors such as the presence of civilians and civilian objects should include open-source repositories such as the internet. Further, specifically considering AI and machine learning, the ICRC has concluded that, to the extent that AI-DSS tools can facilitate quicker and more widespread collection and analysis of this kind of information, they could well enable better decisions by humans that minimize risks for civilians in conflict. The use of AI-DSS to support weaponeering, for example, may assist parties in choosing means and methods of attack that can best avoid, or at least minimize, incidental civilian harm.

Importantly, the constant care obligation and the duty to take all feasible precautions in attack are positive obligations, as opposed to other IHL rules which prohibit conduct (eg. the prohibitions on indiscriminate or disproportionate attacks). Accordingly, in developing and using AI-DSS, militaries should be considering not only how such tools can assist to achieve military objectives with less civilian harm, but how they might be designed and used specifically for the objective of civilian protection. This also means identifying or building relevant datasets that can support assessments of risks to, and impacts upon civilians and civilian infrastructure.

Practical Considerations for Those Using AI-DSS

When assessing the extent to which an AI-DSS output reflects current and reliable information sources, commanders must factor in AI’s limitations in terms of predictability, understandability and explainability (see further detail here). These concerns are likely to be especially acute with systems that incorporate machine learning algorithms that continue to learn, potentially changing their functioning during use.

Assessing the reliability of AI-DSS outputs also means accounting for the likelihood that an adversary will attempt to provide disinformation such as ruses and deception, or otherwise frustrate intelligence acquisition activities. AI-DSS currently remain vulnerable to hacking and spoofing techniques that can lead to erroneous outputs, often in ways that are unpredictable and undetectable to human operators.

Further, like any information source in armed conflict, the datasets on which AI-DSS rely may be imperfect, outdated or incomplete. For example, “No Strike Lists” (NSL) can contribute to a verification process by supporting faster identification of certain objects that must not be targeted. However, an NSL will only be effective so long as it is current and complete; the NSL itself is not the reality on the ground. More importantly, the NSL usually consists only of categories of objects that benefit from special protection or the targeting of which is otherwise restricted by policy. However, the protected status of objects in armed conflict can change – sometimes rapidly – and most civilian objects will not appear on the list. In short, then, the presence of an object on an NSL contributes to identifying protected objects when verifying the status of a potential target, but the absence of an object from the list does not imply that it is a military objective.
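That asymmetry can be stated precisely: an NSL lookup is a one-way test, not a two-way classifier. The short sketch below is purely illustrative – the names are invented, and no real list works as a simple set – but it captures why absence from the list must map to ‘unknown’ rather than ‘targetable’.

from enum import Enum

class NSLResult(Enum):
    PROTECTED = "on the list: targeting restricted or prohibited"
    UNKNOWN = "absent from the list: no inference permitted"

def check_against_nsl(object_id: str, nsl: set[str]) -> NSLResult:
    # Presence positively identifies a protected object; absence carries
    # no information and must never be read as "military objective".
    return NSLResult.PROTECTED if object_id in nsl else NSLResult.UNKNOWN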

Parallels can be drawn with AI-DSS tools, which rely upon datasets to produce “a technological rendering of the world as a statistical data relationship” (p 10). The difference is that, whilst NSLs generally rely upon a limited number of databases, AI-DSS tools may be trained with, and may draw upon, such a large volume of datasets that it may be impossible for the human user to verify their accuracy. This makes it especially important for AI-DSS users to be able to understand which underlying datasets feed the system, the extent to which this data is likely to be current and reliable, and the weighting given to particular data in the DSS output (paras 19-20). Certain critical datasets (e.g. NSLs) may need to be given overriding prominence by default, whilst, for others, decision-makers may need the ability to adjust how they are factored in.

In certain circumstances, it may be appropriate for a decision-maker to seek out expert advice concerning the functioning or underlying data of an AI-DSS. As much has been suggested in the context of cyber warfare, in terms of seeking to understand the effects of a particular cyber operation (p 49).

In any event, it seems unlikely that it would be reasonable for a commander to rely solely on the output of one AI-DSS, especially during deliberate targeting processes where more time is available to gather and cross-check against different and varied sources. Militaries have already indicated that cross-checking of intelligence is standard practice when verifying targets and assessing proportionality, and an important aspect of minimising harm to civilians. This practice should equally be applied when employing AI-DSS, ideally using different kinds of intelligence to guard against the risks of embedded errors within an AI-DSS.
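The cross-checking practice described above can be made concrete with a small sketch. It is illustrative only – the data model and threshold are invented assumptions, not doctrine – but it shows the key design choice: corroboration should come from sources that are independent of the DSS and drawn from different intelligence disciplines, so that a single embedded error cannot confirm itself.

def corroborated(dss_assessment: str, sources: list[dict], min_disciplines: int = 2) -> bool:
    # Count only sources that agree with the DSS output AND come from
    # distinct intelligence disciplines (e.g. imagery, human intelligence,
    # signals intelligence), each gathered independently of the DSS.
    agreeing_disciplines = {
        s["discipline"] for s in sources if s["assessment"] == dss_assessment
    }
    return len(agreeing_disciplines) >= min_disciplines

# Example: one agreeing discipline is not enough to treat the output as verified.
sources = [
    {"discipline": "imagery", "assessment": "military objective"},
    {"discipline": "human intelligence", "assessment": "civilian object"},
]
assert corroborated("military objective", sources) is False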

If a commander, planner or staff officer did rely solely on an AI-DSS, the reasonableness of their decision would need to be judged not only in light of the AI-DSS output, but also taking account of other information that was reasonably available.

Conclusion

AI-DSS are often claimed to hold the potential to increase IHL compliance and to produce better outcomes for civilians in armed conflict. In certain circumstances, the use of AI-DSS may well assist parties to an armed conflict in satisfying their IHL obligations, by providing an additional available source of information.

However, these tools may be ill-suited for certain tasks in the messy reality of warfare, especially noting their dependence on quality data and clear goals, and their limited capacity for transfer across different contexts. In some cases, drawing upon an AI-DSS could actually undermine the quality of decision-making, and pose additional risks to civilians.

Further, even though an AI-DSS can draw in and synthesise data from many different sources, this does not absolve a commander of their obligation to proactively seek out information from other reasonably available sources. Indeed, the way in which AI tools function – their limitations in terms of predictability, understandability and explainability – makes it all the more important that their output be cross-checked.

Finally, AI-DSS must only be applied within legal, policy and doctrinal frameworks that ensure respect for international humanitarian law. Otherwise, these tools will only serve to replicate, and arguably exacerbate, unlawful or otherwise harmful outcomes at a faster rate and on a larger scale.