Showing posts sorted by date for query ORCA ATTACKS.

Saturday, May 25, 2024

Orcas aren’t attacking boats — they’re just playful teens, scientists say

What might seem like killer whales orchestrating vengeful and coordinated attacks on ships is probably a playful fad among bored teen orcas, scientists say.


By María Luisa Paúl
May 25, 2024 at 12:04 a.m. EDT

Hundreds of dangerous boat-ramming incidents over the past five years have cast orcas as deep-sea villains plotting to take back the ocean.

But the killer whales causing mayhem off Europe’s Iberian Peninsula might actually just be bored teenagers — at least, that’s the leading theory among a group of more than a dozen orca experts who have spent years studying the incidents.

Since 2020, members of a small group of killer whales have rammed into at least 673 vessels off the coasts of Portugal, Spain and Morocco — causing some to sink. The Spanish and Portuguese governments responded by tasking a group of experts with determining what was causing the whales to strike rudders, which are used to steer ships, and how to stop it.

The group, which includes biologists, government officials and marine industry representatives, on Friday released a report outlining their hypothesis: The orcas just want to have fun, and in the vast — and rather empty — open waters, the boats’ rudders are a prime toy.

“This looks like play,” said Naomi Rose, a senior scientist at the Animal Welfare Institute who was part of the working group. “It’s a very dangerous game they’re playing, obviously. But it’s a game.”

In most cases, the scientists found, the orcas approaching the vessels come from a group of about 15, mostly juvenile, whales. They typically approach slowly, almost as if to just bump the rudders with their noses and heads. But even young orcas average between 9 and 14 feet long, so the rudders would often get damaged or destroyed when the whales touched them, said Alex Zerbini, who chairs the scientific committee at the International Whaling Commission, a global body focused on whale conservation.

“There’s nothing in the behavior of the animals that suggests that they’re being aggressive,” said Zerbini, who is also part of the working group. “As they play with the rudder, they don’t understand that they can damage the rudder and that damaging the rudder will affect human beings. It’s more playful than intentional.”

Though orcas are known for their whimsical antics — like using jellyfish, algae and prey as toys — the researchers believe their playfulness has reached new levels in the Iberian Peninsula because of the rebound in the bluefin tuna population, their main source of food. In past decades, when orcas faced a tuna shortage, much of their time was spent trying to hunt down food. But once the tuna population bounced back, whales suddenly “have all this leisure time on their hands because they don’t have to eat every fish they find,” Rose said.

The Rock of Gibraltar, seen from the Spanish city of La Linea in 2019. The ramming of a small boat by an orca in the Strait of Gibraltar earlier this year prompted authorities to recommend that small vessels stick to the coastline. (Javier Fergo/AP)

It’s not yet clear why the orcas are attracted to rudders or how they became fascinated by them in the first place. Still, Zerbini said it could have started with one curious, young killer whale that was perhaps enthralled by the bubbles surrounding a moving ship.

“Maybe that individual touched a rudder and felt that it was something fun to play with,” he said. “And, after playing, it began propagating the behavior among the group until it became as widespread as it is now.”

In other words, it became a ridiculous fad — not unlike, say, the viral Tide pod or cinnamon challenges.

It wouldn’t be the first time that killer whales mimicked a particular craze. In the past, some populations have taken to wearing dead salmon as hats or playing games of chicken, Rose said. And, just like human fads, the trends have a tendency to make comebacks years later, she added.

“My guess is that juveniles who see their older siblings or parents wearing salmon hats or doing some other fad sometimes remember these things as adults and think, ‘This is funny. Let’s do it again,’” she said. “These animals are cultural and sophisticated thinkers, and they’re just incredibly social.”

Since 2020, members of a small group of killer whales have rammed into at least 673 vessels off the coasts of Portugal, Spain and Morocco. (iStock)

Orcas, Rose said, are similar to people in many ways. For instance, each population has a particular culture, language and food staple. Orcas and people also mature at a similar pace and, much like humans, female whales do so faster than males.

When it comes to the rudder bumping, Rose said, most of the whales involved are male juveniles and teens, meaning they are between the ages of 5 and 18. Fully grown males — over the age of 25 — are not participating in the antics. And while some adult female whales have been spotted at the scene of the incidents, “they seem to be just sort of keeping an eye on their kids, who are doing the actual playing,” she added.

For sailors, though, the practice is no game. Rose said she worries about frustrated mariners launching flares or other devices to deter whales. Not only could those measures deafen or harm whales, they might backfire by “making the game even more fun for them,” she said.

“The more dangerous it is for the orcas, the more thrill they seem to get out of it,” she said.

So what’s a better way to stop the boat-ramming? According to researchers: taking away the orcas’ toys — or, at least, making them less fun to play with.

The working group proposed several methods that will be tested this summer, Zerbini said. One involves replacing rudders’ typically smooth surfaces with abrasive or bumpy materials. They will also test a device that makes banging sounds around vessels and have suggested that boats hang rows of weighted lines, which orcas dislike.

“We don’t want to see more boats being sunk and we don’t want to see people in distress,” Zerbini said. “But we also don’t want to see the animals being hurt. And we have to remember that this is their habitat and we’re in the way.”




María Luisa Paúl is a reporter on The Washington Post's Morning Mix team. She joined The Post as an intern on the General Assignment desk and has previously reported at the Miami Herald and el Nuevo Herald.

Wednesday, May 15, 2024

NATURE FIGHTS BACK!

Tanker Rescues Sailors off Gibraltar as Orcas Sink First Boat in 2024

ORCA LIBERATION FRONT 

Orcas off Spain (file image courtesy MITMA)

PUBLISHED MAY 14, 2024 1:46 PM BY THE MARITIME EXECUTIVE

 

 

The now-infamous pod of orcas that frequents the waters near Gibraltar struck again this weekend, sinking its first vessel of 2024. Scientists remain puzzled as to why this one pod of “killer whales” has repeatedly gone after vessels in the region over the past four years. The prevailing supposition remains that this is “playful behavior.”

Two sailors issued a distress call on Sunday morning, May 12, while approximately 14 nautical miles from Cape Espartel, near the southern entrance to the Strait of Gibraltar. They told the authorities that they had felt sudden blows to the hull and rudder of their 15-meter (49-foot) boat, the Alboran Cognac. The vessel had begun taking on water, and they feared additional impacts could cause a more severe inflow and hasten the loss of the boat.

The Spanish Coast Guard sent a helicopter and contacted the product tanker Lascaux (11,674 dwt), which was sailing in the area. The Malta-registered tanker was directed to assist the sailors.

The boat was in Moroccan waters at the time, and Moroccan authorities instructed the sailors to don their life jackets, turn on their AIS signal, and prepare a radio beacon in case it was required. The tanker located the boat and took the two sailors aboard. Their boat sank after the rescue, and the two were taken to Gibraltar, where they recounted their tale.

 

Spanish authorities issued a map warning sailors of the danger zone

 

According to reports, while this was the first incident of 2024, at least seven vessels have been wrecked over the past four years; five sailboats and two Moroccan fishing boats have reported incidents with the orcas.

Scientists believe it is a single pod of perhaps 15 animals that inhabits the waters between the north of the Iberian Peninsula and the region around Morocco and the Strait of Gibraltar. Some think the culprits are juveniles swimming with two adults, part of a larger group of approximately 35 orcas.

The scientists point out that the animals are highly intelligent. Although media reports have characterized these as revenge attacks, the behavior is more likely instinctive and playful.

Spanish authorities warn boats not to enter a zone around the strait between April and August, when most of the encounters take place. In the event of an interaction, whether with a motorboat or a sailing boat, they advise not stopping the vessel and instead navigating toward the coast, into shallower waters.



Wednesday, April 17, 2024

OLF (ORCA LIBERATION FRONT)
Infamous boat-sinking orcas spotted hundreds of miles from where they should be, baffling scientists

Harry Baker
Tue, April 16, 2024 

Orcas swimming near a boat.

Orcas that have been terrorizing boats in southwest Europe since 2020 were recently spotted circling a vessel in Spain for the first time this year. The close encounter, which took place hundreds of miles from where the cetaceans should currently be, hints that this group is switching up its tactics — and scientists have no idea why.

The Iberian subpopulation of orcas (Orcinus orca) is a small group of around 40 individuals that lives off the coast of Spain and Portugal, as well as in the Strait of Gibraltar — a narrow body of water between southern Spain and North Africa that separates the Atlantic Ocean and Mediterranean Sea.

Since 2020, individuals from this group have been approaching and occasionally attacking boats, sometimes causing serious damage to the vessels and even sinking them. The most recent sinking occurred on Oct. 31, 2023, but the orcas have sent at least three other boats to the bottom of the sea. However, no humans have been injured or killed.

Related: Orcas are learning terrifying new behaviors. Are they getting smarter?

On April 10, three of these orcas were spotted persistently swimming near a large yacht off the coast of Malpica in Galicia, northern Spain, local news site Diario de Pontevedra reported. The trio did not attack the vessel, but local conservation group Orca Ibérica GTOA, which has been closely monitoring the Iberian subpopulation, warned boaters to "take caution when passing through" the area.

The encounter was surprising as the orcas don't normally venture this far north until mid to late summer, Spanish science news site gCiencia reported.

"Theoretically, they are in the Strait [of Gibraltar] in the spring and should reach the north [of Spain] at the end of the summer," Alfredo López Fernandez, a biologist at the University of Aveiro in Portugal and representative of the Atlantic Orca Working Group, told gCiencia in the translated article. "There is an absolute lack of knowledge" about why this is happening, he added.

A map showing how far the orcas have had to swim to get to Spain

Other orcas have also been spotted further east along the Spanish coastline toward Biscay and further south in Portuguese waters over the last few weeks, gCiencia reported. The orcas normally only enter these areas to follow tuna, their preferred prey. It is unclear if the tuna have arrived early this year.

So far, the orcas have not attacked any boats. But López Fernandez believes this could start within the next few months. However, he says it is hard to predict when and where these encounters will occur.

Scientists still don't know exactly why these attacks started. Some researchers believe that the first attacks may have been perpetrated by a lone female named "White Gladis," who may have been pregnant when she started harassing the boats. But regardless of how it started, the behavior quickly spread among the group.

So far, at least 16 different individuals have attacked boats. Eyewitnesses also claim to have seen orcas teaching other individuals how to attack boats, with an emphasis on attacking vessels' rudders to immobilize them.


A juvenile orca swims away from the yacht with a large piece of fiberglass from the rudder in its mouth.

There is also a suggestion that the behavior may have spread outside the population after a boat in Scotland was attacked by a different group in June 2023. However, it is impossible to prove this attack was connected to the others.


As the number of attacks has increased, boat owners have started using firecrackers and even guns to scare off the orcas, gCiencia reported. However, scientists like López Fernandez have urged restraint because the subpopulation is "in danger of extinction."

"We want to transmit real and truthful information," López Fernandez said. "We're not going to hide that the orcas can touch the boats and sometimes break something, but we also have to be aware that what we have in front of us is not a monster."

Tuesday, February 27, 2024

 

‘Emergent’ AI Behavior and Human Destiny

Reprinted from TomDispatch:

Make no mistake, artificial intelligence (AI) has already gone into battle in a big-time way. The Israeli military is using it in Gaza on a scale previously unknown in wartime. They’ve reportedly been employing an AI target-selection platform called (all too unnervingly) “the Gospel” to choose many of their bombing sites. According to a December report in the Guardian, the Gospel “has significantly accelerated a lethal production line of targets that officials have compared to a ‘factory.’” The Israeli Defense Forces (IDF) claim that it “produces precise attacks on infrastructure associated with Hamas while inflicting great damage to the enemy and minimal harm to noncombatants.” Significantly enough, using that system, the IDF attacked 15,000 targets in Gaza in just the first 35 days of the war. And given the staggering damage done and the devastating death toll there, the Gospel could, according to the Guardian, be thought of as an AI-driven “mass assassination factory.”

Meanwhile, of course, in the Ukraine War, both the Russians and the Ukrainians have been hustling to develop, produce, and unleash AI-driven drones with deadly capabilities. Only recently, in fact, Ukrainian President Volodymyr Zelensky created a new branch of his country’s armed services specifically focused on drone warfare and is planning to produce more than one million drones this year.  According to the Independent, “Ukrainian forces are expected to create special staff positions for drone operations, special units, and build effective training. There will also be a scaling-up of production for drone operations, and inclusion of the best ideas and top specialists in the unmanned aerial vehicles domain, [Ukrainian] officials have said.”

And all of this is just the beginning when it comes to war, AI-style, which is going to include the creation of “killer robots” of every imaginable sort. But as the U.S., Russia, China, and other countries rush to introduce AI-driven battlefields, let TomDispatch regular Michael Klare, who has long been focused on what it means for the globe’s major powers to militarize AI, take you into a future in which (god save us all!) robots could be running (yes, actually running!) the show. ~ Tom Engelhardt


“Emergent” AI Behavior and Human Destiny

What Happens When Killer Robots Start Communicating with Each Other?

by Michael Klare

Yes, it’s already time to be worried — very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice-versa. Then we would be on a destructively different planet in a fashion that might seem almost unimaginable today. Sadly, though, it’s anything but unimaginable, given the work on artificial intelligence (AI) and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided “autonomous” weapons systems — combat drones that can employ lethal force independently of any human officers meant to command them. Called “killer robots” by critics, such devices include a variety of uncrewed or “unmanned” planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its “collaborative combat aircraft,” an unmanned aerial vehicle (UAV) intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield and so should be banned in order to protect noncombatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.

However, neither side in this debate has addressed the most potentially unnerving aspect of using them in battle: the likelihood that, sooner or later, they’ll be able to communicate with each other without human intervention and, being “intelligent,” will be able to come up with their own unscripted tactics for defeating an enemy — or something else entirely. Such computer-driven groupthink, labeled “emergent behavior” by computer scientists, opens up a host of dangers not yet being considered by officials in Geneva, Washington, or at the U.N.

For the time being, most of the autonomous weaponry being developed by the American military will be unmanned (or, as they sometimes say, “uninhabited”) versions of existing combat platforms and will be designed to operate in conjunction with their crewed counterparts. While they might also have some capacity to communicate with each other, they’ll be part of a “networked” combat team whose mission will be dictated and overseen by human commanders. The Collaborative Combat Aircraft, for instance, is expected to serve as a “loyal wingman” for the manned F-35 stealth fighter, while conducting high-risk missions in contested airspace. The Army and Navy have largely followed a similar trajectory in their approach to the development of autonomous weaponry.

The Appeal of Robot “Swarms”

However, some American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs — all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.

“Emerging robotic technologies will allow tomorrow’s forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today’s networked forces,” predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). “Networked, cooperative autonomous systems,” he wrote then, “will be capable of true swarming — cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole.”

As Scharre made clear in his prophetic report, any full realization of the swarm concept would require the development of advanced algorithms that would enable autonomous combat systems to communicate with each other and “vote” on preferred modes of attack. This, he noted, would involve creating software capable of mimicking ants, bees, wolves, and other creatures that exhibit “swarm” behavior in nature. As Scharre put it, “Just like wolves in a pack present their enemy with an ever-shifting blur of threats from all directions, uninhabited vehicles that can coordinate maneuver and attack could be significantly more effective than uncoordinated systems operating en masse.”
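To make that coordination idea concrete, here is a deliberately minimal sketch, in Python, of agents forming noisy local preferences and settling on a plurality choice. The option names, agent count, and random scores are all invented for illustration; this is only a toy rendering of the "voting" pattern Scharre describes, not a description of any actual military software.

```python
# Toy illustration only (no real system): each simulated agent picks a
# preferred option from its own noisy "observations," the preferences are
# tallied, and the group adopts the plurality choice.
import random
from collections import Counter

OPTIONS = ["option_A", "option_B", "option_C"]  # hypothetical candidates


def local_preference(agent_id: int) -> str:
    """Return this agent's preferred option based on noisy local scores.

    agent_id is unused here; it stands in for the per-agent sensor state
    a richer model would carry.
    """
    scores = {opt: random.random() for opt in OPTIONS}  # stand-in for sensor data
    return max(scores, key=scores.get)


def swarm_vote(num_agents: int = 9) -> str:
    """Tally every agent's preference and return the plurality winner."""
    votes = Counter(local_preference(i) for i in range(num_agents))
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    random.seed(0)
    print("Swarm consensus:", swarm_vote())
```

Even in this toy form, the outcome is dictated by no single agent, which is part of what makes the real-world version of such coordination hard to predict or audit.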

In 2014, however, the technology needed to make such machine behavior possible was still in its infancy. To address that critical deficiency, the Department of Defense proceeded to fund research in the AI and robotics field, even as it also acquired such technology from private firms like Google and Microsoft. A key figure in that drive was Robert Work, a former colleague of Paul Scharre’s at CNAS and an early enthusiast of swarm warfare. Work served from 2014 to 2017 as deputy secretary of defense, a position that enabled him to steer ever-increasing sums of money to the development of high-tech weaponry, especially unmanned and autonomous systems.

From Mosaic to Replicator

Much of this effort was delegated to the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s in-house high-tech research organization. As part of a drive to develop AI for such collaborative swarm operations, DARPA initiated its “Mosaic” program, a series of projects intended to perfect the algorithms and other technologies needed to coordinate the activities of manned and unmanned combat systems in future high-intensity combat with Russia and/or China.

“Applying the great flexibility of the mosaic concept to warfare,” explained Dan Patt, deputy director of DARPA’s Strategic Technology Office, “lower-cost, less complex systems may be linked together in a vast number of ways to create desired, interwoven effects tailored to any scenario. The individual parts of a mosaic are attritable [dispensable], but together are invaluable for how they contribute to the whole.”

This concept of warfare apparently undergirds the new “Replicator” strategy announced by Deputy Secretary of Defense Kathleen Hicks just last summer. “Replicator is meant to help us overcome [China’s] biggest advantage, which is mass. More ships. More missiles. More people,” she told arms industry officials last August. By deploying thousands of autonomous UAVs, USVs, UUVs, and UGVs, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China’s military, the People’s Liberation Army (PLA). “To stay ahead, we’re going to create a new state of the art… We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

To obtain both the hardware and software needed to implement such an ambitious program, the Department of Defense is now seeking proposals from traditional defense contractors like Boeing and Raytheon as well as AI startups like Anduril and Shield AI. While large-scale devices like the Air Force’s Collaborative Combat Aircraft and the Navy’s Orca Extra-Large UUV may be included in this drive, the emphasis is on the rapid production of smaller, less complex systems like AeroVironment’s Switchblade attack drone, now used by Ukrainian troops to take out Russian tanks and armored vehicles behind enemy lines.

At the same time, the Pentagon is already calling on tech startups to develop the necessary software to facilitate communication and coordination among such disparate robotic units and their associated manned platforms. To facilitate this, the Air Force asked Congress for $50 million in its fiscal year 2024 budget to underwrite what it ominously enough calls Project VENOM, or “Viper Experimentation and Next-generation Operations Model.” Under VENOM, the Air Force will convert existing fighter aircraft into AI-governed UAVs and use them to test advanced autonomous software in multi-drone operations. The Army and Navy are testing similar systems.

When Swarms Choose Their Own Path

In other words, it’s only a matter of time before the U.S. military (and presumably China’s, Russia’s, and perhaps those of a few other powers) will be able to deploy swarms of autonomous weapons systems equipped with algorithms that allow them to communicate with each other and jointly choose novel, unpredictable combat maneuvers while in motion. Any participating robotic member of such swarms would be given a mission objective (“seek out and destroy all enemy radars and anti-aircraft missile batteries located within these [specified] geographical coordinates”) but not be given precise instructions on how to do so. That would allow them to select their own battle tactics in consultation with one another. If the limited test data we have is anything to go by, this could mean employing highly unconventional tactics never conceived for (and impossible to replicate by) human pilots and commanders.

The propensity for such interconnected AI systems to engage in novel, unplanned outcomes is what computer experts call “emergent behavior.” As ScienceDirect, a digest of scientific journals, explains it, “An emergent behavior can be described as a process whereby larger patterns arise through interactions among smaller or simpler entities that themselves do not exhibit such properties.” In military terms, this means that a swarm of autonomous weapons might jointly elect to adopt combat tactics none of the individual devices were programmed to perform — possibly achieving astounding results on the battlefield, but also conceivably engaging in escalatory acts unintended and unforeseen by their human commanders, including the destruction of critical civilian infrastructure or communications facilities used for nuclear as well as conventional operations.
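That definition is easier to picture with a toy model. In the sketch below, a deliberately simple one-dimensional simulation with made-up parameters, each agent follows a single local rule: drift toward the average position of the agents it can see. The clustering that results is a group-level pattern that no individual rule specifies, which is the sense of "emergent" used above.

```python
# Minimal, self-contained illustration of emergent behavior: one trivial local
# rule per agent yields a group-level pattern (tight clusters) that no single
# rule prescribes. Parameters are arbitrary; this models nothing real.
import random


def step(positions, sight=20.0, pull=0.2):
    """One update: every agent drifts toward the mean of the agents it can see."""
    updated = []
    for x in positions:
        visible = [y for y in positions if abs(y - x) <= sight]
        center = sum(visible) / len(visible)  # each agent always sees itself
        updated.append(x + pull * (center - x))
    return updated


if __name__ == "__main__":
    random.seed(42)
    agents = [random.uniform(0.0, 100.0) for _ in range(20)]  # scattered start
    for _ in range(100):
        agents = step(agents)
    print("Final positions:", sorted(round(a, 1) for a in agents))
```

The point of the sketch is only that the group-level pattern is written nowhere in the individual rules; that property is what makes emergent behavior in armed, networked systems so difficult to anticipate or constrain.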

At this point, of course, it’s almost impossible to predict what an alien group-mind might choose to do if armed with multiple weapons and cut off from human oversight. Supposedly, such systems would be outfitted with failsafe mechanisms requiring that they return to base if communications with their human supervisors were lost, whether due to enemy jamming or for any other reason. Who knows, however, how such thinking machines would function in demanding real-world conditions or if, in fact, the group-mind would prove capable of overriding such directives and striking out on its own.

What then? Might they choose to keep fighting beyond their preprogrammed limits, provoking unintended escalation — even, conceivably, of a nuclear kind? Or would they choose to stop their attacks on enemy forces and instead interfere with the operations of friendly ones, perhaps firing on and devastating them (as Skynet does in the classic science fiction Terminator movie series)? Or might they engage in behaviors that, for better or infinitely worse, are entirely beyond our imagination?

Top U.S. military and diplomatic officials insist that AI can indeed be used without incurring such future risks and that this country will only employ devices that incorporate thoroughly adequate safeguards against any future dangerous misbehavior. That is, in fact, the essential point made in the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” issued by the State Department in February 2023. Many prominent security and technology officials are, however, all too aware of the potential risks of emergent behavior in future robotic weaponry and continue to issue warnings against the rapid utilization of AI in warfare.

Of particular note is the final report that the National Security Commission on Artificial Intelligence issued in February 2021. Co-chaired by Robert Work (back at CNAS after his stint at the Pentagon) and Eric Schmidt, former CEO of Google, the commission recommended the rapid utilization of AI by the U.S. military to ensure victory in any future conflict with China and/or Russia. However, it also voiced concern about the potential dangers of robot-saturated battlefields.

“The unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” the report noted. This could occur for a number of reasons, including “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems [that is, emergent behaviors] on the battlefield.” Given that danger, it concluded, “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems.”

When the leading advocates of autonomous weaponry tell us to be concerned about the unintended dangers posed by their use in battle, the rest of us should be worried indeed. Even if we lack the mathematical skills to understand emergent behavior in AI, it should be obvious that humanity could face a significant risk to its existence, should killing machines acquire the ability to think on their own. Perhaps they would surprise everyone and decide to take on the role of international peacekeepers, but given that they’re being designed to fight and kill, it’s far more probable that they might simply choose to carry out those instructions in an independent and extreme fashion.

If so, there could be no one around to put an R.I.P. on humanity’s gravestone.

Follow TomDispatch on Twitter and join us on Facebook. Check out the newest Dispatch Books, John Feffer’s new dystopian novel, Songlands (the final one in his Splinterlands series), Beverly Gologorsky’s novel Every Body Has a Story, and Tom Engelhardt’s A Nation Unmade by War, as well as Alfred McCoy’s In the Shadows of the American Century: The Rise and Decline of U.S. Global Power, John Dower’s The Violent American Century: War and Terror Since World War II, and Ann Jones’s They Were Soldiers: How the Wounded Return from America’s Wars: The Untold Story.

Michael T. Klare, a TomDispatch regular, is the five-college professor emeritus of peace and world security studies at Hampshire College and a senior visiting fellow at the Arms Control Association. He is the author of 15 books, the latest of which is All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change.

Copyright 2024 Michael Klare
