Thursday, April 07, 2022

Are Lethal Autonomous Weapons Inevitable? It Appears So

The technology and potential uses for killer robots are multiplying and progressing too fast — and international consensus is too fractured — to hope for a moratorium.


Illustration by Paul Lachine
Kyle Hiebert
January 27, 2022

There exists no more consistent theme within the canon of modern science fiction than the fear of the “killer robot,” from Isaac Asimov’s 1950 collection of short stories, I, Robot, to Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? (the inspiration for the Blade Runner movies). Later came Skynet’s murder machines in the Terminator franchise, the cephalopod Sentinels in The Matrix and the android Gunslingers of Westworld.

Beyond the world of imagination on page and screen, luminaries Stephen Hawking and Bill Gates also saw a looming threat in real-life killer robots, technically classified as lethal autonomous weapons systems (LAWS). They raised alarms, as have American philosophers Sam Harris and Noam Chomsky, and tech magnate Elon Musk.

A major investor in artificial intelligence (AI), Musk told students at the Massachusetts Institute of Technology in 2014 that AI was the biggest existential threat to humanity. Three years later, he was among 116 experts in AI and robotics who signed an open letter to the United Nations warning that LAWS threaten to “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

It appears that this thesis could soon be tested.

The Evolution of Automated Weapons


In December 2021, the Sixth Review Conference of the UN Convention on Certain Conventional Weapons (CCW), a 125-member intergovernmental forum that discusses nascent trends in armed conflict and munitions, failed to advance talks on new legal mechanisms to rein in the development and use of LAWS. The failure extends eight years of unsuccessful efforts toward either regulation or an outright ban. “At the present rate of progress, the pace of technological development risks overtaking our deliberations,” warned Switzerland’s representative as the latest conference wrapped up in Geneva. No date is set for the forum’s next meeting.

Semi-autonomous weapons like self-guided bombs, military drones or Israel’s famed Iron Dome missile defence system have existed for decades. In each case, a human operator determines the target, but a machine completes the attack. On the other hand, LAWS — derided by critics as “slaughterbots” — empower AI to identify, select and kill targets absent human oversight and control. The Future of Life Institute, a think tank based in Cambridge, Massachusetts, that is focused on threats to humanity posed by AI and which organized the 2017 open letter to the United Nations, makes the distinction by saying, “In the case of autonomous weapons the decision over who lives and who dies is made solely by algorithms.”
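At bottom, the distinction is one of control flow: whether a human verdict sits inside the decision loop, or the algorithm's output alone closes it. As a purely illustrative sketch, the toy Python below models only that structural difference; every name in it (Track, classify, operator_confirms) is hypothetical, and no real weapons system exposes anything like this interface.

```python
# Toy model of the semi-autonomous vs. fully autonomous distinction.
# All names are hypothetical; this sketches the shape of the decision
# loop, not any real system.

from dataclasses import dataclass


@dataclass
class Track:
    """A sensor contact under evaluation."""
    track_id: int
    confidence: float  # model's estimate that the contact is a valid target


def classify(track: Track) -> bool:
    """Stand-in for an onboard perception model's verdict."""
    return track.confidence > 0.9


def operator_confirms(track: Track) -> bool:
    """Stand-in for a human operator's review; a human can always decline."""
    print(f"Operator reviewing track {track.track_id} ...")
    return False


def semi_autonomous(tracks: list[Track]) -> list[int]:
    """Semi-autonomous: the machine nominates targets, a human decides."""
    return [t.track_id for t in tracks if classify(t) and operator_confirms(t)]


def fully_autonomous(tracks: list[Track]) -> list[int]:
    """Fully autonomous: the algorithm's verdict alone closes the loop."""
    return [t.track_id for t in tracks if classify(t)]


if __name__ == "__main__":
    contacts = [Track(1, 0.95), Track(2, 0.40)]
    print(semi_autonomous(contacts))   # [] -- the human declined
    print(fully_autonomous(contacts))  # [1] -- no human in the loop
```

The entire substance of the regulatory debate lives in the single human check that separates the two loops.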

Myriad concepts of LAWS for air, ground, sea and space use have long been speculated about. The difference now is that some models are ready to be field tested. At the US Army’s latest annual convention in Washington, DC, in October 2021, attendees were treated to prototypes of robotic combat dogs that can be fitted with rifles. Australian robotics maker GaardTech announced in November an agreement with the Australian army to demonstrate its Jaeger-C uncrewed vehicle sometime this year. Described as a “mobile robotic mine” or “beetle tank,” the bulletproof autonomous four-wheeled combat unit can be outfitted with an armour-piercing large-calibre machine gun and sniper rifle, and can carry up to 100 pounds of explosives for use in suicide attacks.

In The Kill Chain: Defending America in the Future of High-Tech Warfare, Christian Brose, who served as top adviser to US Senator John McCain and as staff director of the Senate Armed Services Committee, recounts how China intends to develop fully autonomous swarms of intelligent combat drones. Recent actions bear this out. In addition to China’s rapid expansion of its own domestic drone industry, last September two state-owned Chinese companies were linked to a Hong Kong firm that acquired a 75 percent stake in an Italian company that manufactures military-grade drones for the North Atlantic Treaty Organization. The Hong Kong firm reportedly paid 90 times the Italian company’s valuation to execute the takeover.

Meanwhile, a report prepared for the Pentagon’s Joint Artificial Intelligence Center in 2021 by CNA, a non-profit research and analysis institute located in Arlington, Virginia, describes how Chinese technology is enabling Russia’s military to integrate autonomous AI into dozens of its platforms. According to the report, this technology includes anthropomorphic robots capable of carrying multiple weapons and, possibly, of driving vehicles. Russian media quoted defence minister Sergei Shoigu confirming last May that Russia has begun manufacturing killer robots, saying, “What has emerged are not simply experimental, but robots that can be really shown in science-fiction films as they are capable of fighting on their own.”

Yet the world’s first true test case of a fully autonomous killer robot may have already taken place, in Libya in March 2020. According to a report submitted by a panel of experts to the UN Security Council in March 2021, drones produced by Turkish state-owned defence conglomerate STM were allegedly sent to track down a convoy of retreating forces loyal to renegade military general Khalifa Haftar after those forces abandoned a months-long siege of the capital, Tripoli.

Turkey’s intervention in Libya to prop up the Tripoli-based Government of National Accord, the war-torn country’s UN-recognized government, has turned Libya’s vast deserts into a giant test theatre for Turkey’s booming military drone industry. Turkish drones have recently altered the trajectory of civil wars in favour of Turkey’s government clients in both Libya and Ethiopia, and delivered a decisive victory for Azerbaijan during a violent flare-up with Armenia in late 2020 over the disputed territory of Nagorno-Karabakh. Over the past two years, Ukraine has purchased dozens of Turkish drones in response to Russia’s military buildup on Ukraine’s eastern border.

The experts’ report claims Haftar’s forces “were hunted down and remotely engaged” by a Turkish Kargu-2 drone and other “loitering munitions” — those with the ability to hover over targets for hours — that “were programmed to attack targets without requiring data connectivity between the operator and the munition.” In other words, the machines were apparently capable of identifying, selecting and killing targets without communication from a human handler.

In many ways, the evolution of military drones is a canary in the coal mine, bridging the eras of semi-autonomous and autonomous weapons and perhaps foreshadowing the way in which fully independent killer robots might proliferate in the future. In the 2000s, military drones were a very expensive and hard-to-operate weapons system possessed almost exclusively by the United States. Less than two decades later, they have become a low-cost, widely available technology manufactured and exported worldwide — not only by China, Turkey and the United States, but by Iran, the United Arab Emirates and others, each motivated by geopolitical interests as well as the lucrative commercial stakes involved.

By some estimates, more than 100 countries now have active military drone programs — all springing up without any sort of international regulatory structure in place.

More Just War — or Just More War?


Rapid advances in autonomous weapons technologies and an increasingly tense global order have brought added urgency to the debate over the merits and risks of their use.

Proponents include Robert Work, a former US deputy secretary of defence under the Obama and Trump administrations, who has argued the United States has a “moral imperative” to pursue autonomous weapons. The chief benefit of LAWS, Work and others say, is that their adoption would make warfare more humane by reducing civilian casualties and accidents through decreasing “target misidentification” that results in what the US Department of Defense labels “unintended engagements.”

Put plainly: Autonomous weapons systems may be able to assess a target’s legitimacy and make decisions faster, and with more accuracy and objectivity than fallible human actors could, either on a chaotic battlefield or through the pixelated screen of a remote-control centre thousands of miles away. The outcome would be a more efficient use of lethal force that limits collateral damage and saves innocent lives through a reduction in human error and increased precision of munitions use.

Machines also cannot feel stress, fatigue, vindictiveness or hate. If widely adopted, killer robots could, in theory, lessen the opportunistic sexual violence, looting and vengeful razing of property and farmland that often occurs in war — especially in ethnically driven conflicts. These atrocities tend to create deep-seated traumas and smouldering intergenerational resentments that linger well after the shooting stops, destabilizing societies over the long term and inviting more conflict in the future.

But critics and prohibition advocates feel differently. They say the final decision over the use of lethal force should always remain in the hands of a human actor who can then be held accountable for that decision. Led by the Campaign to Stop Killer Robots, which launched in 2013, now comprises more than 180 member organizations across 66 countries and is endorsed by more than two dozen Nobel Peace Prize laureates, the movement is calling for a pre-emptive, permanent international treaty banning the development, production and use of fully autonomous weaponry.

Dozens of countries support a pre-emptive ban as well. For a time, these included Canada: the mandate letter issued by Prime Minister Justin Trudeau in 2019 to then foreign affairs minister François-Philippe Champagne asked him to support international efforts to achieve a ban. That directive has since disappeared from the mandates given to Champagne’s successors, Marc Garneau and now Mélanie Joly.

For those calling for a ban, the risks of LAWS outweigh their supposed benefits because, by removing some of war’s human cost, the technology ultimately incentivizes war. The unavoidable casualties that result from armed conflict, and the political blowback they can produce, have always moderated the willingness of governments to participate in wars. If this deterrent is minimized by non-human combatants over time, military action may become more appealing for leaders — especially unpopular ones, given the useful distraction that foreign adventurism can sometimes inject into domestic politics.

There are other risks as well. Autonomous weapons technology could fall into the hands of insurgent groups and terrorists; at the peak of its so-called caliphate in Iraq and Syria, the Islamic State was launching drone strikes daily. Despotic regimes may impulsively unleash autonomous weapons on their own populations to quell a civilian uprising. And killer robots’ neural networks could be hacked by an adversary and turned against their owners.

Yet, just as the debate intensifies, a realistic assessment of the state of the killer robots being developed confirms what the Swiss ambassador to the CCW feared — technological progress is far outpacing deliberations over containment. But even if it weren’t, amid a splintering international order, plenty of nation-states are readily violating humanitarian laws and treaties anyway, while others are seeking new ways to gain a strategic edge in an increasingly hostile, multipolar geopolitical environment.

National Interests Undermine Collective Action

While Turkey appears to have been the first to deploy live killer robots, their wide-ranging use is likely to be driven by Beijing, Moscow and Washington. Chinese President Xi Jinping and Russian President Vladimir Putin both openly loathe the Western-oriented human rights doctrines that underpin calls to ban killer robots. And despite America’s domestic division and dysfunction, its political class still has a bipartisan desire for the United States to remain the world’s military hegemon.

With a GDP only slightly larger than that of the state of Florida, Russia cannot compete economically in a great power contest, leaving it reliant on exploiting asymmetric advantages wherever possible, including by furthering its AI capabilities for military and espionage purposes. Autonomous weapons could be well suited to securing the resource-rich but inhospitable terrain of the Arctic, a region where the Kremlin is actively trying to assert Russia’s primacy. The country is also the world’s second-largest arms exporter behind the United States, accounting for one-fifth of global arms sales since 2016 — a key source of government revenue and foreign influence. Its recent anti-satellite weapons test underscores the Kremlin’s willingness to explore controversial weapons technologies, even in the face of international condemnation.

Xi, meanwhile, has pinned China’s ambitions of remaking the global order in favour of autocracies on dominating key emerging technologies. On track, by some estimates, to become the world’s biggest economy by 2028, China is pouring spectacular amounts of money and resources into everything from AI, nanotechnology and quantum computing to genetics and synthetic biology, and has a stranglehold on the market for rare earth metals. After tendering his resignation in September out of frustration, the Pentagon’s former software chief, Nicolas Chaillan, told the Financial Times a month later that the United States will have “no competing fighting chance against China in 15 to 20 years.”

China is also notably keen on state-sponsored intellectual property theft to accelerate its innovation cycles. The more that others demonstrably advance killer robot technology, the more China will attempt to steal it — and inevitably succeed to a degree. This could create a self-reinforcing feedback loop that hastens the killer robot arms race among military powers.

This race of course includes the United States. The New York Times reported as far back as 2005 that the Pentagon was mulling ways to integrate killer robots into the US military. And much to the dismay of progressives, even Democrat-led administrations show no sign of winding down military spending any time soon — the Biden administration released a decidedly hawkish Global Posture Review at the end of November, just as a massive US$770 billion defence bill sailed through Congress. The US military has already begun training drills to fight enemy robots, and deploying autonomous weapons systems could preserve its capacity for foreign intervention and power projection overseas now that nation-building projects have fallen out of fashion.

Most important of all, mass production of killer robots could offset America’s flagging enlistment numbers. The US military requires 150,000 new recruits every year to maintain its desired strength and capability. And yet Pentagon data from 2017 revealed that more than 24 million of the then 34 million Americans between the ages of 17 and 24 — over 70 percent — would have been disqualified from serving in the military if they applied, due to obesity, mental health issues, inadequate education or a criminal record. Michèle Flournoy, a career defence official who served in senior roles in both the Clinton and the Obama administrations, told the BBC in December that “one of the ways to gain some quantitative mass back and to complicate adversaries’ defence planning or attack planning is to pair human beings and machines.”

Other, smaller players are nurturing an affinity for LAWS too. Israel assassinated Iran’s top nuclear scientist, Mohsen Fakhrizadeh, outside Tehran in November 2020 using a remote-controlled, AI-assisted machine gun mounted inside a parked car, and is devising more remote ways to strike back against Hamas in the Gaza Strip. Since 2015, South Korea has placed nearly fully autonomous sentry guns along the edge of its demilitarized zone with North Korea, and has sold the domestically built robot turrets to customers throughout the Middle East. Speaking at a defence expo in 2018, Prime Minister Narendra Modi of India — the world’s second-largest arms buyer — told the audience: “New and emerging technologies like AI and Robotics will perhaps be the most important determinants of defensive and offensive capabilities for any defence force in the future.”

Finding the Middle Ground: Responsible Use

Even in the event that a ban on killer robots could be reached and somehow enforced, the algorithms used by autonomous weapons systems to identify, select and surveil targets are already streamlining and enhancing the use of lethal force by human actors. Banning the hardware without addressing the underlying software would arguably be a half measure at best. But governments are struggling badly with how to regulate AI — and expanding the scope of the proposed ban to cover software would add enormous complexity to an already stalled process.

Instead, establishing acceptable norms around the use of LAWS — what one US official has called a non-binding code of conduct — in advance of broad adoption may represent an alternative means to harness their potential positives while avoiding the most-feared outcomes. These norms could be based primarily on a shared commitment to avoid the “unintended engagements” described above.

According to Robert Work, the former US defence official, autonomous control should be excluded entirely from systems that can independently launch pre-emptive or retaliatory attacks, especially those involving nuclear weapons. A code of conduct could also set an expectation that autonomous weapons technology be kept out of the hands of non-state actors. Numerous countries party to the CCW also believe that there are grounds to extend established international humanitarian law, such as the Geneva Conventions, to cover autonomous weapons systems, by applying the law to the human authority that ordered their use. Some proponents of LAWS agree.

These are imperfect solutions — but they may prevent dystopian sci-fi fantasies from becoming reality. One way or another, killer robots are coming.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

ABOUT THE AUTHOR
Kyle Hiebert is a researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, as deputy editor of the Africa Conflict Monitor.

