
Saturday, February 01, 2020

Opposition bids to ban 'killer robots' foiled by Merkel's coalition

Opposition calls for Germany to seek an international ban on fully autonomous weapons systems have been sunk in parliament by members of Angela Merkel's coalition. The pleas came from the Greens and ex-communist Left.
March 2019: Campaign to Stop Killer Robots at Berlin's Brandenburg Gate
Coalition parties used their Bundestag majority on Friday to scupper a set of pleas from opposition parties to work towards a global ban on autonomous weaponry with no human input.
The opposition Greens had demanded that Merkel's coalition press for progress on stalled talks — via the United Nations' 1980 Convention on Certain Conventional Weapons (CCW) — with a view to developing a ban on "lethal autonomous weapons systems" and avoiding a potential new arms race.
Since 2014, eight meetings have been held in Geneva with no headway, largely due, says Human Rights Watch (HRW), to US and Russian insistence that definitions first be clarified. More than 120 nations favor a ban via 11 guiding principles, with a follow-on conference due in 2021.
HRW's Mary Wareham, who heads the Campaign to Stop Killer Robots, told DW that "unacceptable" Russian and US standpoints amounted to the superpowers not wanting "to see any legal outcome, a new treaty or protocol."
"You could program the weapon system to go out and to select and attack an entire group or category of people, which is a very dangerous proposition," said Wareham, adding that the US had already looked at "targeting military age males in Yemen."
No research funding from EU, urge Greens
In its motion, the opposition Left party had demanded that Germany itself institute a moratorium on such autonomous weapons development, coupled with a push for an international ban.
The Greens, in another defeated motion, had also demanded that Germany seek an amendment to the European Defense Fund, created by the EU in 2017, to block EU research spending on such weapons.
That motion, too, was rejected in parliament by Merkel's coalition, which in its so-called "coalition contract" of 2018, a document setting out the parties' combined plans for this period of government, had said it did not want such weaponry.
Loophole for artificial intelligence
In committee stages, Merkel's conservative Christian Democratic Union (CDU) and Bavarian sister party the Christian Social Union (CSU) said they wanted existing international law upheld but were "open to the use of artificial intelligence, also in the military area."
Her coalition partners, the center-left Social Democrats (SPD), parliament was told, wanted lethal autonomous weapons prohibited but warned against "too hasty" decisions. 
Instead, the SPD preferred a public hearing on what are often euphemistically called "killer robots" in Germany. Critics say German arms manufacturers have been hawking new weapons with autonomous functions at defense sales expos.  
Kyiv, 2016: Ukrainian-made combat robot 'Piranya' at defense trade fair
Greens parliamentarian Katja Keul told parliament in Berlin Friday that since 2016 a government expert group had merely mulled over "whether" and "how" to regulate such weapons.
Automation would take lethal capability out of soldiers' direct control and put it "in the hands of private IT companies," said Keul.
It violated human dignity as a basic right when a human life became merely the "object" of a machine-based decision, said Keul.
Berlin: Greens parliamentarian Katja Keul in the Bundestag (picture-alliance/dpa/B. von Jutrczenka)
Coalition of the willing is needed, says Keul
"What a horrific vision, machines killing people en masse, without resistance, self-determined and efficient," said Left parliamentarian Kathrin Vogler, adding that this scenario was becoming a "very concrete" prospect. She called on Merkel's coalition to ensure that a European Parliament resolution on abolishing automated weapons systems "be implemented."
'Sober' scrutiny, says coalition
Christian Schmidt, speaking for Merkel's CDU-CSU parliamentary group, referred to Germany's past experience of the 1970s when former East Germany used automated devices to shoot Germans trying to flee to the West.
"Those were offensive weapons of the NVA, the border troops of the GDR [East Germany]," said Schmidt, who also referred to World War One mechanized warfare and insisted that modern weaponry required "sober" scrutiny via a "different, stronger ethos."
"Offensive weapon systems [are] what we don't want whatsoever," said Schmidt, a former state secretary in Germany's Defense Ministry.
Analysts say military robots are no longer confined to science fiction but are fast emerging from design desks to development in engineering laboratories and could be ready for deployment within a few years. Semiautomated weaponry, most notably aerial drones, has already become a core component in modern militaries — but still with a human operator in control remotely.

Tuesday, April 30, 2024

Vienna conference urges regulation of AI weapons


By AFP
April 30, 2024

Austrian Foreign Minister Alexander Schallenberg warned autonomous weapons systems would 'soon fill the world's battlefields' - Copyright AFP/File STR

The world should establish a set of rules to regulate AI weapons while they're still in their infancy, a global conference said on Tuesday, calling the issue an "Oppenheimer moment" for our time.

Like gunpowder and the atomic bomb, artificial intelligence (AI) has the capacity to revolutionise warfare, analysts say, making human disputes unimaginably different — and a lot more deadly.

“This is our generation’s ‘Oppenheimer moment’ where geopolitical tensions threaten to lead a major scientific breakthrough down a very dangerous path for the future of humanity,” read the summary at the end of the two-day conference in Vienna.

US physicist Robert Oppenheimer helped invent nuclear weapons during World War II.

Austria organised and hosted the two-day conference in Vienna, which brought together some 1,000 participants, including political leaders, experts and members of civil society, from more than 140 countries.

A final statement said the group “affirms our strong commitment to work with urgency and with all interested stakeholders for an international legal instrument to regulate autonomous weapons systems”.

“We have a responsibility to act and to put in place the rules that we need to protect humanity… Human control must prevail in the use of force”, said the summary, which is to be sent to the UN secretary general.

Using AI, all sorts of weapons can be transformed into autonomous systems, thanks to sophisticated sensors governed by algorithms that allow a computer to “see”.

This will enable the locating, selecting and attacking of human targets — or targets containing human beings — without human intervention.

Most such weapons are still at the idea or prototype stage, but Russia's war in Ukraine has offered a glimpse of their potential.

Remotely piloted drones are not new, but they are becoming increasingly independent and are being used by both sides.

“Autonomous weapons systems will soon fill the world’s battlefields,” Austrian Foreign Minister Alexander Schallenberg said on Monday when opening the conference.

He warned now was the “time to agree on international rules and norms to ensure human control”.

Austria, a neutral country keen to promote disarmament in international forums, in 2023 introduced the first UN resolution to regulate autonomous weapons systems, which was supported by 164 states.

Wednesday, July 07, 2021

Israel’s Drone Swarm Over Gaza Should Worry Everyone

It's time global leaders set new rules for these future weapons, which are already being used to kill.



A drone's view of the ruins of buildings in Gaza City levelled by an Israeli air strike during the recent military conflict between Israel and Hamas, June 11, 2021.
MAJDI FATHI/NURPHOTO VIA GETTY IMAGES


BY ZAK KALLENBORN
JULY 7, 2021 

DEFENSEONE.COM


In a world first, Israel used a true drone swarm in combat during the conflict in May with Hamas in Gaza. It was a significant new benchmark in drone technology, and it should be a wakeup call for the United States and its allies to mitigate the risk these weapons create for national defense and global stability.

Israel's use of them is just the beginning. Reporting does not suggest the Israel Defense Forces deployed any particularly sophisticated capability. A small number of drones manufactured by Elbit Systems reportedly coordinated searches and were used alongside mortars and ground-based missiles to strike "dozens" of targets miles from the border. The drones helped expose enemy hiding spots and relayed information back to an app, which processed the data along with other intelligence. Future swarms will not be so simple.

Often the phrase “drone swarm” means multiple drones being used at once. But in a true drone swarm, the drones communicate and collaborate, making collective decisions about where to go and what to do. In a militarized drone swarm, instead of 10 or 100 distinct drones, the swarm forms a single, integrated weapon system guided by some form of artificial intelligence.
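To make that distinction concrete, here is a minimal, hypothetical sketch in Python. The drone positions and detections are invented and bear no relation to any real system or vendor; the point is only that the drones pool what each one sees into a single shared picture, and the group, rather than any one operator, allocates targets, so it behaves as one integrated system instead of a collection of separate aircraft.

```python
# Hypothetical sketch of swarm-style collective decision-making: drones
# pool their local detections into one shared picture, then the group
# assigns each target to the closest free drone. All positions and
# detections are invented for illustration.
import math

drone_positions = {"d1": (0, 0), "d2": (5, 5), "d3": (9, 1)}

# Each drone sees only part of the scene; the swarm merges the views.
local_detections = {
    "d1": {"t1": (1, 2)},
    "d2": {"t1": (1, 2), "t2": (6, 4)},
    "d3": {"t3": (8, 0)},
}
shared_picture = {}
for seen in local_detections.values():
    shared_picture.update(seen)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Greedy collective assignment: the nearest free drone takes each target.
assignments, free = {}, set(drone_positions)
for target, tpos in shared_picture.items():
    drone = min(free, key=lambda d: dist(drone_positions[d], tpos))
    assignments[target] = drone
    free.remove(drone)

print(assignments)  # {'t1': 'd1', 't2': 'd2', 't3': 'd3'}
```

Ten independent drones each need their own operator and their own picture of the battlefield; a swarm running even this crude kind of logic needs neither.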

So, drone swarms are here, and we should be worried. But how best to reduce the risk these weapons pose?

The United States should lead the global community in a new conversation to discuss and debate whether new norms or international treaties are needed specifically to govern and limit the use of drone swarms. Current proposals to ban autonomous weapons outright would cover autonomous drone swarms; however, such a treaty would not likely cover the drone swarm Israel used. Despite some media reports to the contrary, there is no indication the swarm made autonomous decisions on who to kill (whether a small, human-controlled swarm like this should be banned is a different issue). And it's unlikely the great powers will agree to a broad prohibition on autonomous weapons. Narrow restrictions on high-risk autonomous weapons like anti-personnel drone swarms may have more appeal, particularly if they create asymmetric effects that threaten, but do not help, great powers.

Global militaries should expand work to develop, test, and share counter-swarm technology. Effective counter-drone systems need to be low cost, quick recharging, and able to hit multiple targets at once. Such systems should be deployed around high-risk target areas, like airports, critical infrastructure, and heads of state. As the threat is fundamentally international, states should also provide their cutting-edge counter-swarm capabilities to partners and allies who are at risk.

Keeping drone swarms from the hands of terrorists will require a separate effort. States may adopt measures akin to United Nations Security Council Resolution 1540 on preventing terrorist acquisition of chemical, biological, radiological, and nuclear weapons that apply to drone swarms (or just expand UNSCR 1540). Local, national, and international law enforcement agencies should also search for indicators of terrorists seeking drone swarm capabilities, such as large drone purchases and known extremist work to develop or modify drone control systems.

In recent years, the threat of drone swarms has grown alongside their increasing sophistication. In 2016, the Department of Defense launched 103 Perdix drones out of three F/A-18 Super Hornets. The drones operated using a "collective brain," gathering into various formations, flying across a test battlefield, and reforming into new configurations. Notably, the system was designed by students at the Massachusetts Institute of Technology. If drone swarms are simple enough that students can make them, conflict zones across the world can expect to see them soon. In the past year, China, France, India, Spain, South Africa, the United States, and the United Kingdom have all unveiled or tested new drone swarm programs.

Global proliferation of drone swarms creates risks of instability. In the Nagorno-Karabakh conflict last year, Azeri use of drones contributed significantly to a rapid Armenian surrender (other factors no doubt helped too). A swarm amplifies such effects with more drones, using more complex tactics that can overwhelm existing defenses. It’s a concern the U.S. military has studied for a decade already. A 2012 study by the Naval Postgraduate School simulated eight drones attacking a U.S. Navy destroyer, finding four drones would hit the ship. Terrorists may also see great appeal in drone swarms as a more accessible air force to overcome ground-based defenses, and carry out attacks on critical infrastructure and VIPs.

Drone swarms create risks akin to traditional weapons of mass destruction. As drone swarms scale into super-swarms of 1,000 or even up to a million drones, no human could plausibly have meaningful control. That's a problem: autonomous weapons can make only limited judgments about the civilian or military nature of their targets. The difference of a single pixel can change a stealth bomber into a dog. Errors may mean dead civilians or friendly soldiers, and accidental conflict escalation.
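The "single pixel" line refers to the well-documented brittleness of image classifiers, where a tiny change to the input can flip the output label. The toy sketch below uses a made-up three-pixel "image" and an invented linear classifier, nothing like a real vision model, purely to show how nudging one value can push an input across a decision boundary.

```python
# Toy illustration (not a real vision system): a linear classifier whose
# decision flips when a single input value is nudged. The weights and
# pixel values are invented for illustration only.
import numpy as np

weights = np.array([0.9, -0.6, 0.2])   # hypothetical learned weights
bias = -0.1

def predict(pixels):
    """Label a 3-pixel toy 'image' as 'bomber' or 'dog'."""
    score = float(np.dot(weights, pixels) + bias)
    return "bomber" if score > 0 else "dog"

image = np.array([0.30, 0.45, 0.20])   # classified one way...
perturbed = image.copy()
perturbed[0] += 0.15                   # ...then one pixel nudged slightly

print(predict(image))      # dog
print(predict(perturbed))  # bomber
```

Real classifiers have millions of parameters rather than three, but the same sensitivity near a decision boundary is what makes a "stealth bomber into a dog" error possible.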

The reality is that virtually no current counter-drone systems are designed for counter-swarm operations. Current detection systems cannot necessarily handle multiple drones at once. Swarms could overwhelm interdiction systems, which carry limited numbers of interceptors or are slow to fire. And the drone swarm may simply be too spread out. Of course, new counter-drone systems like the Air Force's microwave-based THOR system, low-cost-per-shot defenses like lasers, and counter-swarm swarms may eventually prove effective. While these defenses may protect great powers, smaller states and civilians are likely to be more vulnerable.

The increased autonomy of a drone swarm allows states to use many more drones at once. Human cognition limits simultaneous drone operation, because it is difficult to monitor the operations of many drones, ensure they do not collide, and still achieve mission objectives. But the military is working to overcome human limitations. In one 2008 study, a single operator could handle only four drones without significant losses to mission effectiveness. By 2018, the U.S. military's Defense Advanced Research Projects Agency, or DARPA, had demonstrated that a human could control a drone swarm through a brain-computer interface, a microchip implanted in the brain.

The military value of drone swarms stems from enabling complexity and flexibility. Current swarms typically use small, homogeneous drones. Future swarms may be of different sizes, equipped with an array of interchangeable sensors, weapons, and other payloads. That enables combined arms tactics, where drones strike with multiple weapons from multiple angles: one may spray bullets, while another sprays a chemical agent. Swarms may also have adaptive properties such as self-healing, where the swarm modifies itself to accommodate the loss of some members, or self-destruction, to complete one-way missions. Drone swarms will also likely be increasingly integrated into some form of drone mothership (and perhaps into an even larger mothership in a "turducken of lethality").

Drone swarms are not science fiction. The technology is here, and spreading fast.

Zachary Kallenborn is a national / homeland security consultant, specializing in unmanned systems, drone swarms, homeland security, weapons of mass destruction (WMD), and WMD terrorism.

Sunday, August 30, 2020

Killer robots could wipe out humanity, report says in terrifying AI warning


ARTIFICIAL INTELLIGENCE (AI) and killer robots could wipe out humanity, a new report has terrifyingly warned

PUBLISHED: Mon, Aug 10, 2020

The research by Human Rights Watch found 30 countries had expressed a desire for an international treaty banning the use of autonomous weapons. Such weapons can engage targets without human control.

The report, ‘Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control’, looked at policies from 97 countries opposed to the machines.

Although not naming the UK, it says British policy is that there must always be "human oversight" when such weapons are being used.

However, Britain is developing some weapons with "autonomous solutions", the report found.
Mary Wareham, arms division advocacy director at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots, said: “Removing human control from the use of force is now widely regarded as a grave threat to humanity that, like climate change, deserves urgent multilateral action.


Fears over killer robots have been raised (Image: Getty)


Human Rights Watch urges an international ban (Image: Getty)


“An international ban treaty is the only effective way to deal with the serious challenges raised by fully autonomous weapons.

“It’s abundantly clear that retaining meaningful human control over the use of force is an ethical imperative, a legal necessity and a moral obligation.

“All countries need to respond with urgency by opening negotiations on a new international ban treaty.”

Although the report suggests a number of international organisations have backed the ban, a small number of military powers have rejected the proposals.



Fears over killer robots raised by organisation (Image: Getty)

These countries include the US and Russia.

Ms Wareham continued: "Many governments share the same serious concerns over permitting machines to take human life on the battlefield, and their desire for human control provides a sound basis for collective action.

"While the pandemic has delayed diplomacy, it shows the importance of being prepared and responding with urgency to existential threats to humanity, such as killer robots."

Global tensions have risen over recent weeks following the outbreak of coronavirus and fears of World War 3 have been raised.


The largest militaries in the world (Image: Express)


China has been widely criticised on a global scale and accused of deliberately starting the deadly pandemic.

Tensions between Britain and Beijing have grown increasingly strained after China enforced the controversial Hong Kong security law.

The new legislation has been globally slammed and Prime Minister Boris Johnson publicly condemned the move by Chinese authorities.

Washington has also seen its relationship with Beijing deteriorate over recent weeks.


There have been calls for autonomous weapons to be banned (Image: Getty)

President Donald Trump has continually blamed the Communist nation for the deadly pandemic and criticised the World Health Organisation for being too "China-centric".

Beijing and Washington have also increased their military presence in the South China Sea region amid fears of an outbreak of war.

Moscow, meanwhile, has come under scrutiny following the Russia report, which suggested the Kremlin had interfered in UK democracy.

Although it claimed it would be "difficult - if not impossible - to prove" allegations that Moscow tried to influence the 2016 Brexit vote, the report lashed out at the government for failing to recognise the threat posed by the Kremlin.

The Intelligence and Security Committee (ISC) report said: "It is nonetheless the Committee's view that the UK Intelligence Community should produce an analogous assessment of potential Russian interference in the EU referendum and that an unclassified summary of it be published."

Andy Barratt, UK managing director of cybersecurity consultancy Coalfire, previously told Express.co.uk: “While ‘election tampering’ makes for good headlines, it’s almost certainly not the most critical cyber threat we face from foreign powers.

“There is a clear need for the government to drive the adoption of better security standards, not just in the public sector but across the private businesses that make up so much of the country’s critical infrastructure as well.

“As a country, we have to find a balance between being openly critical of other nations’ use of offensive cyber tactics while simultaneously pushing forward the capabilities we need to defend ourselves.”

Monday, May 31, 2021

DAMN RIGHT IT HAS
The age of killer robots may have already begun




Bryan Walsh
AXIOS
Sat, May 29, 2021

A drone that can select and engage targets on its own attacked soldiers during a civil conflict in Libya.

Why it matters: If confirmed, it would likely represent the first-known case of a machine-learning-based autonomous weapon being used to kill, potentially heralding a dangerous new era in warfare.

Driving the news: According to a recent report by the UN Panel of Experts on Libya, a Turkish-made STM Kargu-2 drone may have "hunted down and ... engaged" retreating soldiers fighting with Libyan Gen. Khalifa Haftar last year.

It's not clear whether any soldiers were killed in the attack, although the UN experts — who call the drone a "lethal autonomous weapons system" — imply they likely were.

Such an event, writes Zachary Kallenborn — a research affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism — would represent "a new chapter in autonomous weapons, one in which they are used to fight and kill human beings based on artificial intelligence."




How it works: The Kargu is a loitering drone that uses computer vision to select and engage targets without a connection between the drone and its operator, giving it "a true 'fire, forget and find' capability," the UN report notes.

Between the lines: Recent conflicts — like those between Armenia and Azerbaijan and Israel and Hamas in Gaza — have featured an extensive use of drones of all sorts.

The deployment of truly autonomous drones could represent a military revolution on par with the introduction of guns or aircraft — and unlike nuclear weapons, they're likely to be easily obtainable by nearly any military force.

What they're saying: "If new technology makes deterrence impossible, it might condemn us to a future where everyone is always on the offense," the economist Noah Smith writes in a frightening post on the future of war.

The bottom line: Humanitarian organizations and many AI experts have called for a global ban on lethal autonomous weapons, but a number of countries — including the U.S. — have stood in the way.


SEE MY GOTHIC CAPITALI$M

ALSO SEE 



Wednesday, February 21, 2024

What Happens When Killer Robots Start Communicating with Each Other?

 
 FEBRUARY 21, 2024

Photo by Thierry K

Yes, it’s already time to be worried — very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice-versa. Then we would be on a destructively different planet in a fashion that might seem almost unimaginable today. Sadly, though, it’s anything but unimaginable, given the work on artificial intelligence (AI) and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided “autonomous” weapons systems — combat drones that can employ lethal force independently of any human officers meant to command them. Called “killer robots” by critics, such devices include a variety of uncrewed or “unmanned” planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its “collaborative combat aircraft,” an unmanned aerial vehicle (UAV) intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield and so should be banned in order to protect noncombatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.

However, neither side in this debate has addressed the most potentially unnerving aspect of using them in battle: the likelihood that, sooner or later, they’ll be able to communicate with each other without human intervention and, being “intelligent,” will be able to come up with their own unscripted tactics for defeating an enemy — or something else entirely. Such computer-driven groupthink, labeled “emergent behavior” by computer scientists, opens up a host of dangers not yet being considered by officials in Geneva, Washington, or at the U.N.

For the time being, most of the autonomous weaponry being developed by the American military will be unmanned (or, as they sometimes say, “uninhabited”) versions of existing combat platforms and will be designed to operate in conjunction with their crewed counterparts. While they might also have some capacity to communicate with each other, they’ll be part of a “networked” combat team whose mission will be dictated and overseen by human commanders. The Collaborative Combat Aircraft, for instance, is expected to serve as a “loyal wingman” for the manned F-35 stealth fighter, while conducting high-risk missions in contested airspace. The Army and Navy have largely followed a similar trajectory in their approach to the development of autonomous weaponry.

The Appeal of Robot “Swarms”

However, some American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs — all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.

“Emerging robotic technologies will allow tomorrow’s forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today’s networked forces,” predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). “Networked, cooperative autonomous systems,” he wrote then, “will be capable of true swarming — cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole.”

As Scharre made clear in his prophetic report, any full realization of the swarm concept would require the development of advanced algorithms that would enable autonomous combat systems to communicate with each other and “vote” on preferred modes of attack. This, he noted, would involve creating software capable of mimicking ants, bees, wolves, and other creatures that exhibit “swarm” behavior in nature. As Scharre put it, “Just like wolves in a pack present their enemy with an ever-shifting blur of threats from all directions, uninhabited vehicles that can coordinate maneuver and attack could be significantly more effective than uncoordinated systems operating en masse.”
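As a rough, hypothetical illustration of that "voting" idea, the sketch below has each drone score a set of candidate attack axes from its own (invented) sensor picture, after which the swarm adopts the plurality choice. The axes, scores, and decision rule are all made up here; real swarm-consensus algorithms are far more involved.

```python
# Hypothetical sketch of swarm "voting": each drone ranks candidate
# attack axes from its own locally computed scores; the swarm adopts
# the plurality winner. All values are invented for illustration.
from collections import Counter

# Each drone's locally computed score per axis (higher = preferred).
local_scores = {
    "d1": {"north": 0.7, "east": 0.2, "south": 0.4},
    "d2": {"north": 0.6, "east": 0.9, "south": 0.1},
    "d3": {"north": 0.8, "east": 0.3, "south": 0.5},
}

# Each drone "votes" for its top-scoring axis; the group tallies the votes.
votes = Counter(max(scores, key=scores.get) for scores in local_scores.values())
chosen_axis, support = votes.most_common(1)[0]

print(chosen_axis, support)  # north 2  (d1 and d3 outvote d2)
```

Even this toy version shows why the behavior is hard to predict from outside: the outcome depends on what each drone happens to sense, not on any single pre-scripted plan.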

In 2014, however, the technology needed to make such machine behavior possible was still in its infancy. To address that critical deficiency, the Department of Defense proceeded to fund research in the AI and robotics field, even as it also acquired such technology from private firms like Google and Microsoft. A key figure in that drive was Robert Work, a former colleague of Paul Scharre’s at CNAS and an early enthusiast of swarm warfare. Work served from 2014 to 2017 as deputy secretary of defense, a position that enabled him to steer ever-increasing sums of money to the development of high-tech weaponry, especially unmanned and autonomous systems.

From Mosaic to Replicator

Much of this effort was delegated to the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s in-house high-tech research organization. As part of a drive to develop AI for such collaborative swarm operations, DARPA initiated its “Mosaic” program, a series of projects intended to perfect the algorithms and other technologies needed to coordinate the activities of manned and unmanned combat systems in future high-intensity combat with Russia and/or China.

“Applying the great flexibility of the mosaic concept to warfare,” explained Dan Patt, deputy director of DARPA’s Strategic Technology Office, “lower-cost, less complex systems may be linked together in a vast number of ways to create desired, interwoven effects tailored to any scenario. The individual parts of a mosaic are attritable [dispensable], but together are invaluable for how they contribute to the whole.”

This concept of warfare apparently undergirds the new “Replicator” strategy announced by Deputy Secretary of Defense Kathleen Hicks just last summer. “Replicator is meant to help us overcome [China’s] biggest advantage, which is mass. More ships. More missiles. More people,” she told arms industry officials last August. By deploying thousands of autonomous UAVs, USVs, UUVs, and UGVs, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China’s military, the People’s Liberation Army (PLA). “To stay ahead, we’re going to create a new state of the art… We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

To obtain both the hardware and software needed to implement such an ambitious program, the Department of Defense is now seeking proposals from traditional defense contractors like Boeing and Raytheon as well as AI startups like Anduril and Shield AI. While large-scale devices like the Air Force’s Collaborative Combat Aircraft and the Navy’s Orca Extra-Large UUV may be included in this drive, the emphasis is on the rapid production of smaller, less complex systems like AeroVironment’s Switchblade attack drone, now used by Ukrainian troops to take out Russian tanks and armored vehicles behind enemy lines.

At the same time, the Pentagon is already calling on tech startups to develop the necessary software to facilitate communication and coordination among such disparate robotic units and their associated manned platforms. To that end, the Air Force asked Congress for $50 million in its fiscal year 2024 budget to underwrite what it ominously enough calls Project VENOM, or "Viper Experimentation and Next-generation Operations Model." Under VENOM, the Air Force will convert existing fighter aircraft into AI-governed UAVs and use them to test advanced autonomous software in multi-drone operations. The Army and Navy are testing similar systems.

When Swarms Choose Their Own Path

In other words, it’s only a matter of time before the U.S. military (and presumably China’s, Russia’s, and perhaps those of a few other powers) will be able to deploy swarms of autonomous weapons systems equipped with algorithms that allow them to communicate with each other and jointly choose novel, unpredictable combat maneuvers while in motion. Any participating robotic member of such swarms would be given a mission objective (“seek out and destroy all enemy radars and anti-aircraft missile batteries located within these [specified] geographical coordinates”) but not be given precise instructions on how to do so. That would allow them to select their own battle tactics in consultation with one another. If the limited test data we have is anything to go by, this could mean employing highly unconventional tactics never conceived for (and impossible to replicate by) human pilots and commanders.

The propensity for such interconnected AI systems to engage in novel, unplanned outcomes is what computer experts call “emergent behavior.” As ScienceDirect, a digest of scientific journals, explains it, “An emergent behavior can be described as a process whereby larger patterns arise through interactions among smaller or simpler entities that themselves do not exhibit such properties.” In military terms, this means that a swarm of autonomous weapons might jointly elect to adopt combat tactics none of the individual devices were programmed to perform — possibly achieving astounding results on the battlefield, but also conceivably engaging in escalatory acts unintended and unforeseen by their human commanders, including the destruction of critical civilian infrastructure or communications facilities used for nuclear as well as conventional operations.
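The textbook illustration of emergence is flocking. The bare-bones sketch below is a two-rule "boids"-style toy with invented numbers, modeling no military system: each agent reacts only to the agents near it, yet a group-wide alignment that no individual rule specifies typically appears.

```python
# Bare-bones "boids"-style toy: each agent follows only two local rules
# (drift toward nearby agents, match nearby agents' velocity). No agent
# is told to form a flock, yet group-level alignment typically emerges.
# All numbers are invented; this models no real system.
import numpy as np

def alignment_score(vel):
    """0 = headings point every which way, 1 = all headings identical."""
    headings = vel / (np.linalg.norm(vel, axis=1, keepdims=True) + 1e-9)
    return float(np.linalg.norm(headings.mean(axis=0)))

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(20, 2))   # 20 agents in a 10x10 area
vel = rng.uniform(-1, 1, size=(20, 2))
print("alignment before:", round(alignment_score(vel), 2))

for _ in range(300):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d > 0) & (d < 3.0)              # purely local neighborhood
        if nbrs.any():
            cohesion = pos[nbrs].mean(axis=0) - pos[i]
            alignment = vel[nbrs].mean(axis=0) - vel[i]
            new_vel[i] += 0.05 * cohesion + 0.10 * alignment
    vel = new_vel
    pos = pos + 0.1 * vel

print("alignment after: ", round(alignment_score(vel), 2))
```

The worry the article raises is exactly this property transplanted to armed systems: the group-level pattern is written nowhere in the individual rules, which makes it hard to predict, test, or certify in advance.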

At this point, of course, it’s almost impossible to predict what an alien group-mind might choose to do if armed with multiple weapons and cut off from human oversight. Supposedly, such systems would be outfitted with failsafe mechanisms requiring that they return to base if communications with their human supervisors were lost, whether due to enemy jamming or for any other reason. Who knows, however, how such thinking machines would function in demanding real-world conditions or if, in fact, the group-mind would prove capable of overriding such directives and striking out on its own.

What then? Might they choose to keep fighting beyond their preprogrammed limits, provoking unintended escalation — even, conceivably, of a nuclear kind? Or would they choose to stop their attacks on enemy forces and instead interfere with the operations of friendly ones, perhaps firing on and devastating them (as Skynet does in the classic science fiction Terminator movie series)? Or might they engage in behaviors that, for better or infinitely worse, are entirely beyond our imagination?

Top U.S. military and diplomatic officials insist that AI can indeed be used without incurring such future risks and that this country will only employ devices that incorporate thoroughly adequate safeguards against any future dangerous misbehavior. That is, in fact, the essential point made in the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” issued by the State Department in February 2023. Many prominent security and technology officials are, however, all too aware of the potential risks of emergent behavior in future robotic weaponry and continue to issue warnings against the rapid utilization of AI in warfare.

Of particular note is the final report that the National Security Commission on Artificial Intelligence issued in February 2021. Co-chaired by Robert Work (back at CNAS after his stint at the Pentagon) and Eric Schmidt, former CEO of Google, the commission recommended the rapid utilization of AI by the U.S. military to ensure victory in any future conflict with China and/or Russia. However, it also voiced concern about the potential dangers of robot-saturated battlefields.

“The unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” the report noted. This could occur for a number of reasons, including “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems [that is, emergent behaviors] on the battlefield.” Given that danger, it concluded, “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems.”

When the leading advocates of autonomous weaponry tell us to be concerned about the unintended dangers posed by their use in battle, the rest of us should be worried indeed. Even if we lack the mathematical skills to understand emergent behavior in AI, it should be obvious that humanity could face a significant risk to its existence, should killing machines acquire the ability to think on their own. Perhaps they would surprise everyone and decide to take on the role of international peacekeepers, but given that they’re being designed to fight and kill, it’s far more probable that they might simply choose to carry out those instructions in an independent and extreme fashion.

If so, there could be no one around to put an R.I.P. on humanity’s gravestone.

This column is distributed by TomDispatch.

LA REVUE GAUCHE - Left Comment: Search results for KILLER ROBOTS 

LA REVUE GAUCHE - Left Comment: Search results for GOTHIC CAPITALISM