
Saturday, February 01, 2020

Opposition bids to ban 'killer robots' foiled by Merkel's coalition

Opposition calls for Germany to seek an international ban on fully autonomous weapons systems have been sunk in parliament by members of Angela Merkel's coalition. The pleas came from the Greens and ex-communist Left.
March 2019: Campaign to Stop Killer Robots at Berlin's Brandenburg Gate
Coalition parties used their Bundestag majority on Friday to scupper a set of pleas from opposition parties to work towards a global ban on autonomous weaponry with no human input.
The opposition Greens had demanded that Merkel's coalition press for progress on stalled talks — via the United Nations' 1980 Convention on Certain Conventional Weapons (CCW) — with a view to developing a ban on "lethal autonomous weapons systems" and avoiding a potential new arms race.
Since 2014, eight meetings have been held in Geneva with no headway, largely due, says Human Rights Watch (HRW), to US and Russian insistence that definitions be first clarified. Favoring a ban via 11 guiding principles are more than 120 nations, with a follow-on conference due in 2021.
HRW's Mary Wareham, who heads the Campaign to Stop Killer Robots, told DW that "unacceptable" Russian and US standpoints amounted to the superpowers not wanting "to see any legal outcome, a new treaty or protocol."
"You could program the weapon system to go out and to select and attack an entire group or category of people, which is a very dangerous proposition," said Wareham, adding that the US had already looked at "targeting military age males in Yemen."
No research funding from EU, urge Greens
In its motion, the opposition Left party had demanded that Germany itself institute a moratorium on such autonomous weapons development, coupled with a push for an international ban.
The Greens, in another defeated motion, had also demanded that Germany seek an amendment to the European Defense Fund, created by the EU in 2017, to block EU research spending on such weapons.
That motion was also rejected in parliament by Merkel's coalition, which had stated in its so-called "coalition contract" of 2018, a document setting out the parties' combined plans for this period of government, that it did not want such weaponry.
Loophole for artificial intelligence
In committee stages, Merkel's conservative Christian Democratic Union (CDU) and Bavarian sister party the Christian Social Union (CSU) said they wanted existing international law upheld but were "open to the use of artificial intelligence, also in the military area."
Her coalition partners, the center-left Social Democrats (SPD), parliament was told, wanted lethal autonomous weapons prohibited but warned against "too hasty" decisions. 
Instead, the SPD preferred a public hearing on what are often euphemistically called "killer robots" in Germany. Critics say German arms manufacturers have been hawking new weapons with autonomous functions at defense sales expos.  
Kyiv, 2016: Ukrainian-made combat robot 'Piranya' at defense trade fair
Greens parliamentarian Katja Keul told parliament in Berlin Friday that since 2016 a government expert group had merely mulled over "whether" and "how" to regulate such weapons.
Through automation, out of direct control by soldiers, said Keul, lethal capability would be put "in the hands of private IT companies."
It violated human dignity as a basic right when a human life became merely the "object" of a machine-based decision, said Keul.
Katja Keul in the Bundestag, Berlin (picture-alliance/dpa/B. von Jutrczenka)
Coalition of the willing is needed, says Keul
"What a horrific vision, machines killing people en masse, without resistance, self-determined and efficient," said Left parliamentarian Kathrin Vogler, adding that this scenario was becoming a "very concrete" prospect. She called on Merkel's coalition to ensure that a European Parliament resolution on abolishing automated weapons systems "be implemented."
'Sober' scrutiny, says coalition
Christian Schmidt, speaking for Merkel's CDU-CSU parliamentary group, referred to Germany's past experience of the 1970s when former East Germany used automated devices to shoot Germans trying to flee to the West.
"Those were offensive weapons of the NVA, the border troops of the GDR [East Germany]," said Schmidt, who also referred to World War One mechanized warfare and insisted that modern weaponry required "sober" scrutiny via a "different, stronger ethos."
"Offensive weapon systems [are] what we don't want whatsoever," said Schmidt, a former state secretary in Germany's Defense Ministry.
Analysts say military robots are no longer confined to science fiction but are fast emerging from design desks to development in engineering laboratories and could be ready for deployment within a few years. Semiautomated weaponry, most notably aerial drones, has already become a core component in modern militaries — but still with a human operator in control remotely.

Wednesday, August 18, 2021

Lethal autonomous weapons and World War III: it’s not too late to stop the rise of ‘killer robots’


The STM Kargu attack drone. STM


August 11, 2021 10.12pm EDT

Last year, according to a United Nations report published in March, Libyan government forces hunted down rebel forces using “lethal autonomous weapons systems” that were “programmed to attack targets without requiring data connectivity between the operator and the munition”. The deadly drones were Turkish-made quadcopters about the size of a dinner plate, capable of delivering a warhead weighing a kilogram or so.

Artificial intelligence researchers like me have been warning of the advent of such lethal autonomous weapons systems, which can make life-or-death decisions without human intervention, for years. A recent episode of 4 Corners reviewed this and many other risks posed by developments in AI.

Around 50 countries are meeting at the UN offices in Geneva this week in the latest attempt to hammer out a treaty to prevent the proliferation of these killer devices. History shows such treaties are needed, and that they can work.

The lesson of nuclear weapons

Scientists are pretty good at warning of the dangers facing the planet. Unfortunately, society is less good at paying attention.


In August 1945, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki, killing up to 200,000 civilians. Japan surrendered days later. The second world war was over, and the Cold War began.


The world still lives today under the threat of nuclear destruction. On a dozen or so occasions since then, we have come within minutes of all-out nuclear war.

Well before the first test of a nuclear bomb, many scientists working on the Manhattan Project were concerned about such a future. A secret petition was sent to President Harry S. Truman in July 1945. It accurately predicted the future:

The development of atomic power will provide the nations with new means of destruction. The atomic bombs at our disposal represent only the first step in this direction, and there is almost no limit to the destructive power which will become available in the course of their future development. Thus a nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.

If after this war a situation is allowed to develop in the world which permits rival powers to be in uncontrolled possession of these new means of destruction, the cities of the United States as well as the cities of other nations will be in continuous danger of sudden annihilation. All the resources of the United States, moral and material, may have to be mobilized to prevent the advent of such a world situation …

Billions of dollars have since been spent on nuclear arsenals that maintain the threat of mutually assured destruction, the “continuous danger of sudden annihilation” that the physicists warned about in July 1945.

A warning to the world


Six years ago, thousands of my colleagues issued a similar warning about a new threat. Only this time, the petition wasn’t secret. The world wasn’t at war. And the technologies weren’t being developed in secret. Nevertheless, they pose a similar threat to global stability.


The threat comes this time from artificial intelligence, and in particular the development of lethal autonomous weapons: weapons that can identify, track and destroy targets without human intervention. The media often like to call them “killer robots”.

Our open letter to the UN carried a stark warning.



The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable. The endpoint of such a technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.


Strategically, autonomous weapons are a military dream. They let a military scale its operations unhindered by manpower constraints. One programmer can command hundreds of autonomous weapons. An army can take on the riskiest of missions without endangering its own soldiers.
Nightmare swarms

There are many reasons, however, why the military’s dream of lethal autonomous weapons will turn into a nightmare. First and foremost, there is a strong moral argument against killer robots. We give up an essential part of our humanity if we hand to a machine the decision of whether a person should live or die.

Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots. One of the strongest is that they will revolutionise warfare. Autonomous weapons will be weapons of immense destruction.

Previously, if you wanted to do harm, you had to have an army of soldiers to wage war. You had to persuade this army to follow your orders. You had to train them, feed them and pay them. Now just one programmer could control hundreds of weapons.


Organised swarms of drones can produce dazzling lightshows - but similar technology could make a cheap and devastating weapon.
Yomiuri Shimbun / AP

In some ways lethal autonomous weapons are even more troubling than nuclear weapons. To build a nuclear bomb requires considerable technical sophistication. You need the resources of a nation state, skilled physicists and engineers, and access to scarce raw materials such as uranium and plutonium. As a result, nuclear weapons have not proliferated greatly.

Autonomous weapons require none of this, and if produced they will likely become cheap and plentiful. They will be perfect weapons of terror.

Can you imagine how terrifying it will be to be chased by a swarm of autonomous drones? Can you imagine such drones in the hands of terrorists and rogue states with no qualms about turning them on civilians? They will be an ideal weapon with which to suppress a civilian population. Unlike humans, they will not hesitate to commit atrocities, even genocide.
Time for a treaty

We stand at a crossroads on this issue. It needs to be seen as morally unacceptable for machines to decide who lives and who dies, and the diplomats at the UN need to negotiate a treaty limiting their use, just as we have treaties limiting chemical, biological and other weapons. In this way, we may be able to save ourselves and our children from this terrible future.

Author
Toby Walsh

Professor of AI at UNSW, Research Group Leader, UNSW
Disclosure statement

Toby Walsh is a Laureate Fellow and Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia. He is a Fellow of the Australian Academy of Science and author of the recent book “2062: The World that AI Made”, which explores the impact AI will have on society, including the impact on war.



Thursday, April 07, 2022

Robot Dogs Policing America’s Southern Border? It’s Coming Sooner than You Think

In a world destabilized by climate change, governments will need new ways of managing a rise in irregular migration, beyond doubling down on the use of force.

Kyle Hiebert
April 5, 2022
Illustration by Paul Lachine.


Desolate border areas tend to be governments’ preferred test theatre for cutting-edge security and military technologies. These now include semi-autonomous and autonomous weapons systems powered by artificial intelligence (AI) — the latter often referred to as killer robots.

But determining what is a credible target for these systems to attack is a fluid and ever-changing technical and political process. It’s also one that will soon be made much more complex by runaway climate change, which will blur the lines between security risks, humanitarian needs and national interests.

As this scenario unfolds, more fearsome hardware alone will be insufficient to address emergent border security challenges — irregular migration, in particular.
Machines Are Tailor-Made for Border Defence

On February 1, the US Department of Homeland Security (DHS) revealed its intentions to deploy 100-pound robot dogs to America’s southern border with Mexico as a force multiplier for patrols by human agents from US Customs and Border Protection. Manufactured by Philadelphia-based company Ghost Robotics, the units’ rollout will come after more than two years of testing and refinement by the research and development arm of DHS.

The robot dogs will carry “payloads” consisting of 360-degree cameras with thermal and night vision, as well as sensors that can detect trace levels of various chemical, biological, radiological and nuclear substances. All of this data is to be continuously fed back to border surveillance centres. Authorities involved in the project have said the robot dogs, which can be programmed for autonomous mode, will be unarmed.

Yet there’s reason to doubt this will remain the case in the long term. America’s border security efforts have undergone a serious military makeover since the Obama administration, and Ghost Robotics revealed another robotic dog unit equipped with an assault rifle at the US Army’s annual convention in October 2021. The American military is also a leading proponent of killer robots, and the US Department of Defense’s 1033 Program has encouraged the cut-rate sale and transfer of billions of dollars’ worth of surplus military tactical gear to other US federal and local law enforcement agencies in the post-9/11 era.

Moreover, intelligent weapons systems seem well-suited to safeguard sovereign boundaries in a hotter world, insofar as machines can better withstand the types of scorching temperatures and bleak landscapes that will spread because of climate change. Referring to the US-Mexico border, the program manager for the DHS robot dog initiative has argued the region “can be an inhospitable place for man and beast, and that is exactly why a machine may excel there.”

Some observers contend that killer robots can play a constructive role in militarized border disputes as well, by raising the threshold for war; South Korea, for example, has installed domestically produced, nearly fully autonomous sentry guns along the perimeter of its demilitarized zone with North Korea. The units are accurate from several kilometres away, virtually ensuring any ground assault ordered by Pyongyang would fail.

Seoul has reportedly permitted the sale and export of these robot turrets to government clients throughout the Middle East and elsewhere for the better part of a decade. Israel, India, China and Turkey have already adopted similar intelligent weapons as part of their border defences as well, ranging from sentry guns of their own to aerial drones and unmanned ground vehicles.

However, a latent dark side of this technology is how killer robots — rather than serving as a bulwark against legitimate armed threats — could be co-opted to suppress heightened levels of irregular migration that occur as the world becomes destabilized by climate change.

Wealthy Nations’ Climate Refugee Complex

Average global temperatures are set to soar past 2 degrees Celsius of warming above pre-industrial levels as early as 2034. The mass displacement expected to accompany this change, driven by extreme weather, a rise in armed conflict, and local and global economic fragility, will shatter any previous records for population movement. The World Bank forecasts that absent aggressive, coordinated action on climate change, 216 million people could be forced to migrate within their own countries by 2050, mostly in Sub-Saharan Africa, Asia and the Pacific. Other experts estimate as many as 1.2 billion people could be displaced.

The number of international refugees from the Global South using the climate crisis as a reason to try to reach the Global North via unauthorized means will be minuscule by comparison. But this flow could still amount to millions of people — a large number when viewed in isolation or distorted through disinformation and politically motivated rhetoric. And when it comes to unauthorized border crossings, perception always carries more weight than reality. Negative perceptions will be amplified as migrant flows become more desperate, leading inevitably to more violent clashes with local law enforcement in transit countries.

Global resettlement spaces for refugees offered by host nations have dropped by more than half since 2011 according to data from the UN Refugee Agency, with anxieties over immigration in the developed world being a major factor. Meanwhile, a significant hardening of borders has taken place — a dynamic that has been accelerated by the adoption of intelligent border security technologies. Nowhere has this been more apparent than in Europe after its 2015 migrant crisis, which saw 1.3 million migrants, refugees and asylum seekers enter the continent over the course of several months.

While seemingly a dramatic influx of people, 1.3 million amounted to less than 0.3 percent of the bloc’s total population of 508 million at the time.

Yet, in reaction to the xenophobia and ultra-nationalism this sparked across the continent — which paved the way for Brexit in 2016 and have roiled European politics ever since — member states of the European Union, in conjunction with the European Border and Coast Guard Agency (Frontex), have invested heavily in high-tech, military-grade deterrents. These include aerial surveillance drones used in Austria, Croatia, Italy and Malta, as well as AI-powered lie-detector units used at processing centres in Greece, Hungary and Latvia, and sound cannons deployed in the eastern Mediterranean. During talks on reforms to Europe’s migration and asylum systems held by the European Commission in September 2020, Germany’s then minister of the interior, building and community, Horst Seehofer — who often disagreed with Angela Merkel on immigration and two years earlier claimed migration was “the mother of all problems” — stated that “Europe’s fate will be determined by its migration policy.”

The imminent deployment of autonomous robot dogs to the United States’ southern border likewise comes as Republicans and moderate Democrats sound the alarm over a spike in irregular movement that has occurred there since President Joe Biden took office. Government data shows that during the 2021 fiscal year, American border officers had almost 1.7 million encounters with people attempting to cross into the United States from Mexico, edging out the previous record-high set in 2000.

The neglected context is that while border encounters between migrants and agents are up, the estimated number of undetected irregular southern border crossings has plummeted — decreasing 92 percent since 2000 according to DHS data. An enhanced interdepartmental dragnet throughout the area means that the number of border crossers now apprehended and turned away, or detained and later deported, is vastly greater than the number of those who reach sanctuary cities. Rather than experiencing a supposed migrant crisis, America is, arguably, witnessing the successful outcome of a two-decade, bipartisan effort to wall off the United States from Latin America.

Nevertheless, the parallel rise in climate change–induced conflict, infectious diseases, terrorism and migration, emanating from places such as West Africa’s embattled Sahel region — described by the United Nations’ humanitarian chief in 2020 as a “canary in the coalmine of our warming planet” — may add to the popularity of what has been coined eco-bordering. Promoted by far-right groups in Europe and North America, this nativist, ethno-nationalist movement pushes a false environmental ideology that says cutting off immigration is necessary to preserve domestic ecosystems.

Hardening Borders Is Not a Long-Term Solution

It may seem drastic to suggest governments could someday use killer robots as an instrument to deter climate refugees. But Russia’s invasion of Ukraine provides graphic confirmation of how the world has entered a dangerous new interregnum where international norms and humanitarian law appear no longer capable of constraining the determined use of force. A consequence of this will be nation-states everywhere reassessing their own border security and finding new ways to fortify their sovereignty.

The humanitarian fallout of the Russian invasion has also underscored how most governments already stratify migrants into different tiers of “worthiness” based on minimalist interpretations of who qualifies as a refugee under international law.

European countries have rightfully thrown the door open for Ukrainian refugees fleeing Russian atrocities. Canada’s government has likewise said it will waive most standard visa requirements for an “unlimited number” of Ukrainians seeking safety. Contrast this to the general reluctance of the developed world to take in individuals and families fleeing violence, conflict or persecution in Africa, Asia, Latin America and the Middle East.

The Biden administration struck an undisclosed deal in April 2021 with Guatemala, Honduras and Mexico for their security forces to forcibly block Central American migrants from reaching the United States. The European Union has for years funded and outfitted abusive paramilitary forces in Sudan and Niger, and militia-controlled coast guard groups in Libya, to suppress movement via migrant gateways in Sub-Saharan Africa. This, despite extensive evidence that European support is enabling massive human-rights abuses. Europe’s leading powers, France and Germany, are both opposed to a ban on killer robots. As is Australia, where multiple governments since 2013 have upheld policies of detaining irregular migrants in offshore prisons.

In January 2022, the British government proposed a policy to task the Royal Navy with deterring migrant dinghies from crossing the English Channel from France. A record 28,300 people made the journey in 2021 — more than triple the total from the year before. The controversial “pushback” plan was tellingly devised as part of “Operation Red Meat,” a host of policies announced by Prime Minister Boris Johnson and aimed at conservative voters to distract from Johnson’s premiership being embroiled in scandal at the time.

Ruthless state actors are also beginning to weaponize migration itself — most recently, Belarus’s president, Aleksandr Lukashenko. In mid-2021, in reaction to European sanctions over his brutal crushing of a pro-democracy opposition movement the year before, Lukashenko orchestrated a scheme whereby thousands of refugees, mostly from Syria, Iraq and Afghanistan, were duped into paying sizable fees to fly into Belarus under the false impression that they were headed to safe-haven countries in Western Europe. Instead, they were left stranded at the borders of neighbouring EU member states Latvia, Lithuania and Poland. The three Baltic countries — each generally averse to accepting irregular migrants — quickly triggered states of emergency and mobilized soldiers to their borders.

Going forward, while killer robots will no doubt offer advantages in detecting, deterring and confronting armed threats from state and non-state actors, they will do nothing to address the root causes of rising global levels of climate displacements and irregular migration. Additional solutions will be necessary to manage the unprecedented movement of people, beyond just doubling down on the use of lethal force at borders.


The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

ABOUT THE AUTHOR
Kyle Hiebert  is a researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, as deputy editor of the Africa Conflict Monitor.





Thursday, December 01, 2022

San Francisco considers allowing law enforcement robots to use lethal force

November 28, 2022
Heard on All Things Considered

ARI SHAPIRO

BRIANNA SCOTT


Law enforcement has used robots to investigate suspicious packages. Now, the San Francisco Board of Supervisors is considering a policy proposal that would allow SFPD's robots to use deadly force against a suspect.
Joe Raedle/Getty Images

Should robots working alongside law enforcement be used to deploy deadly force?

The San Francisco Board of Supervisors is weighing that question this week as they consider a policy proposal that would allow the San Francisco Police Department (SFPD) to use robots as a deadly force against a suspect.

A new California law that took effect this year requires every municipality in the state to list and define the authorized uses of all military-grade equipment in its local law enforcement agencies.

The original draft of SFPD's policy was silent on the matter of robots.

Aaron Peskin, a member of the city's Board of Supervisors, added a line to SFPD's original draft policy that stated, "Robots shall not be used as a Use of Force against any person."

The SFPD crossed out that sentence with a red line and returned the draft.

Their altered proposal outlines that "robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to the SFPD."

The SFPD currently has 12 functioning robots. They are remote controlled and typically used to gain situational awareness and survey specific areas officers may not be able to reach. They are also used to investigate and defuse potential bombs, or aid in hostage negotiations.

Peskin says much of the military-grade equipment sold to cities for police departments to use was issued by the federal government, but there's not a lot of regulation surrounding how robots are to be used. "It would be lovely if the federal government had instructions or guidance. Meanwhile, we are doing our best to get up to speed."




The idea of robots being legally allowed to kill has garnered some controversy. In October, a number of robotics companies – including Hyundai's Boston Dynamics – signed an open letter, saying that general purpose robots should not be weaponized.

Ryan Calo is a law and information science professor at the University of Washington and also studies robotics. He says he's long been concerned about the increasing militarization of police forces, but that police units across the country might be attracted to utilizing robots because "it permits officers to incapacitate a dangerous individual without putting themselves in harm's way."

Robots could also keep suspects safe too, Calo points out. When officers use lethal force at their own discretion, often the justification is that the officer felt unsafe and perceived a threat. But he notes, "you send robots into a situation and there just isn't any reason to use lethal force because no one is actually endangered."



The first time a robot was reported being used by law enforcement as a deadly force in the United States was in 2016 when the Dallas Police Department used a bomb-disposal robot armed with an explosive device to kill a suspect who had shot and killed five police officers.




In a statement to technology news site The Verge, SFPD Officer Eve Laokwansathitaya said "SFPD does not have any sort of specific plan in place as the unusually dangerous or spontaneous operations where SFPD's need to deliver deadly force via robot would be a rare and exceptional circumstance."

Paul Scharre is author of the book Army Of None: Autonomous Weapons And The Future Of War. He helped create the U.S. policy for autonomous weapons used in war.

Scharre notes there is an important distinction between how robots are used in the military versus law enforcement. For one, robots used by law enforcement are not autonomous, meaning they are still controlled by a human.

"For the military, they're used in combat against an enemy and the purpose of that is to kill the enemy. That is not and should not be the purpose for police forces," Scharre says. "They're there to protect citizens, and there may be situations where they need to use deadly force, but those should be absolutely a last resort."




What is concerning about SFPD's proposal, Scharre says, is that it doesn't seem to be well thought out.

"Once you've authorized this kind of use, it can be very hard to walk that back." He says that this proposal sets up a false choice between using a robot for deadly force or putting law enforcement officers at risk. Scharre suggests that robots could instead be sent in with a non-lethal weapon to incapacitate a person without endangering officers.

As someone who studies robotics, Ryan Calo says that the idea of 'killer robots' is a launchpad for a bigger discussion about our relationship to technology and AI.

When it comes to robots being out in the field, Calo thinks about what happens if the technology fails and a robot accidentally kills or injures a person.

"It becomes very difficult to disentangle who is responsible. Is it the people using the technology? Is it the people that design the technology?" Calo asks.

With people, we can unpack the social and cultural dynamics of a situation, something you can't do with a robot.

"They feel like entities to us in a way that other technology doesn't," Calo says. "And so when you have a robot in the mix, all of a sudden not only do you have this question about who is responsible, which humans, you also have this strong sense that the robot is a participant."

Even if robots could be used to keep humans safe, Calo raises one more question: "We have to ask ourselves do we want to be in a society where police kill people with robots? It feels so deeply dehumanizing and militaristic."

The San Francisco Board of Supervisors meets Tuesday to discuss how robots could be used by the SFPD.

Monday, December 05, 2022

US police rarely deploy deadly robots to confront suspects

By JANIE HAR and CLAUDIA LAUER

A police officer uses a robot to investigate a bomb threat in San Francisco, on July 25, 2008. The liberal city of San Francisco became the unlikely proponent of weaponized police robots on Tuesday, Nov. 29, 2022, after supervisors approved limited use of the remote-controlled devices, addressing head-on an evolving technology that has become more widely available even if it is rarely deployed to confront suspects. 
(Michael Macor/San Francisco Chronicle via AP)

SAN FRANCISCO (AP) — The unabashedly liberal city of San Francisco became the unlikely proponent of weaponized police robots last week after supervisors approved limited use of the remote-controlled devices, addressing head-on an evolving technology that has become more widely available even if it is rarely deployed to confront suspects.

The San Francisco Board of Supervisors voted 8-3 on Tuesday to permit police to use robots armed with explosives in extreme situations where lives are at stake and no other alternative is available. The authorization comes as police departments across the U.S. face increasing scrutiny for the use of militarized equipment and force amid a years-long reckoning on criminal justice.

The vote was prompted by a new California law requiring police to inventory military-grade equipment such as flashbang grenades, assault rifles and armored vehicles, and seek approval from the public for their use.

So far, police in just two California cities — San Francisco and Oakland — have publicly discussed the use of robots as part of that process. Around the country, police have used robots over the past decade to communicate with barricaded suspects, enter potentially dangerous spaces and, in rare cases, for deadly force.

Dallas police became the first to kill a suspect with a robot in 2016, when they used one to detonate explosives during a standoff with a sniper who had killed five police officers and injured nine others.

The recent San Francisco vote has renewed a fierce debate sparked years ago over the ethics of using robots to kill a suspect and the doors such policies might open. Largely, experts say, the use of such robots remains rare even as the technology advances.

Michael White, a professor in the School of Criminology and Criminal Justice at Arizona State University, said even if robotics companies present deadlier options at tradeshows, it doesn’t mean police departments will buy them. White said companies made specialized claymores to end barricades and scrambled to equip body-worn cameras with facial recognition software, but departments didn’t want them.

“Because communities didn’t support that level of surveillance. It’s hard to say what will happen in the future, but I think weaponized robots very well could be the next thing that departments don’t want because communities are saying they don’t want them,” White said.

Robots or otherwise, San Francisco official David Chiu, who authored the California bill when in the state legislature, said communities deserve more transparency from law enforcement and to have a say in the use of militarized equipment.

San Francisco “just happened to be the city that tackled a topic that I certainly didn’t contemplate when the law was going through the process, and that dealt with the subject of so-called killer robots,” said Chiu, now the city attorney.

In 2013, police maintained their distance and used a robot to lift a tarp as part of a manhunt for the Boston Marathon bombing suspect, finding him hiding underneath it. Three years later, Dallas police officials sent a bomb disposal robot packed with explosives into an alcove of El Centro College to end an hours-long standoff with sniper Micah Xavier Johnson, who had opened fire on officers as a protest against police brutality was ending.

Police detonated the explosives, becoming the first department to use a robot to kill a suspect. A grand jury declined to bring charges against the officers, and then-Dallas Police Chief David O. Brown was widely praised for his handling of the shooting and the standoff.

“There was this spray of doom about how police departments were going to use robots in the six months after Dallas,” said Mark Lomax, former executive director of the National Tactical Officers Association. “But since then, I had not heard a lot about that platform being used to neutralize suspects ... until the San Francisco policy was in the news.”

The question of potentially lethal robots has not yet cropped up in public discourse in California as more than 500 police and sheriffs’ departments seek approval for their military-grade weapons use policies under the new state law. Oakland police abandoned the idea of arming robots with shotguns after public backlash, but will outfit them with pepper spray.

Many of the use policies already approved are vague as to armed robots, and some departments may presume they have implicit permission to deploy them, said John Lindsay-Poland, who has been monitoring implementation of the new law as part of the American Friends Service Committee.

“I do think most departments are not prepared to use their robots for lethal force,” he said, “but if asked, I suspect there are other departments that would say, ‘we want that authority.’”

San Francisco Supervisor Aaron Peskin first proposed prohibiting police from using robot force against any person. But the department said while it would not outfit robots with firearms, it wanted the option to attach explosives to breach barricades or disorient a suspect.

The approved policy allows only a limited number of high-ranking officers to authorize use of robots as a deadly force — and only when lives are at stake and after exhausting alternative force or de-escalation tactics, or concluding they would not be able to subdue the suspect through alternate means.

San Francisco police say the dozen functioning ground robots the department already has have never been used to deliver an explosive device, but are used to assess bombs or provide eyes in low visibility situations.

“We live in a time when unthinkable mass violence is becoming more commonplace. We need the option to be able to save lives in the event we have that type of tragedy in our city,” San Francisco Police Chief Bill Scott said in a statement.

Los Angeles Police Department does not have any weaponized robots or drones, said SWAT Lt. Ruben Lopez. He declined to detail why his department did not seek permission for armed robots, but confirmed they would need authorization to deploy one.

“It’s a violent world, so we’ll cross that bridge when we come to it,” he said.

There are often better options than robots if lethal force is needed, because bombs can create collateral damage to buildings and people, said Lomax, the former head of the tactical officers group. “For a lot of departments, especially in populated cities, those factors are going to add too much risk,” he said.

Last year, the New York Police Department returned a leased robotic dog sooner than expected after public backlash, indicating that civilians are not yet comfortable with the idea of machines chasing down humans.

Police in Maine have used robots at least twice to deliver explosives meant to take down walls or doors and bring an end to standoffs.

In June 2018, in the tiny town of Dixmont, Maine, police had intended to use a robot to deliver a small explosive that would knock down an exterior wall, but the blast instead collapsed the roof of the house.

The man inside was shot twice after the explosion, survived and pleaded no contest to reckless conduct with a firearm. The state later settled his lawsuit against the police, in which he alleged that they had used the explosives improperly.

In April 2020, Maine police used a small charge to blow a door off of a home during a standoff. The suspect was fatally shot by police when he exited through the damaged doorway and fired a weapon.

As of this week, the state attorney general’s office had not completed its review of the tactics used in the 2018 standoff, including the use of the explosive charge. A report on the 2020 incident only addressed the fatal gunfire.

—-

Lauer reported from Philadelphia. AP reporter David Sharp contributed from Portland, Maine.

Saturday, March 16, 2024

 

Terminator-style robots more likely to be blamed for civilian deaths



UNIVERSITY OF ESSEX




Advanced killer robots are more likely to be blamed for civilian deaths than military machines, new research has revealed.

The University of Essex study shows that high-tech bots will be held more responsible for fatalities in identical incidents.

Led by the Department of Psychology’s Dr Rael Dawtry, it highlights the impact of autonomy and agency.

It also showed that people perceive robots to be more culpable if they are described in a more advanced way.

It is hoped the study – published in The Journal of Experimental Social Psychology – will help influence lawmakers as technology advances.

Dr Dawtry said: “As robots are becoming more sophisticated, they are performing a wider range of tasks with less human involvement.

“Some tasks, such as autonomous driving or military uses of robots, pose a risk to people’s safety, which raises questions about how - and where - responsibility will be assigned when people are harmed by autonomous robots.

“This is an important, emerging issue for law and policy makers to grapple with, for example around the use of autonomous weapons and human rights.

“Our research contributes to these debates by examining how ordinary people explain robots’ harmful behaviour and showing that the same processes underlying how blame is assigned to humans also lead people to assign blame to robots.”

As part of the study Dr Dawtry presented different scenarios to more than 400 people.

One saw them judge whether an armed humanoid robot was responsible for the death of a teenage girl.

During a raid on a terror compound its machine guns “discharged” and fatally hit the civilian.

When reviewing the incident, the participants blamed a robot more when it was described in more sophisticated terms despite the outcomes being the same.

Other studies showed that simply labelling a variety of devices ‘autonomous robots’ led people to hold them more accountable than when they were labelled ‘machines’.

Dr Dawtry added: “These findings show that how robots’ autonomy is perceived – and, in turn, how blameworthy robots are – is influenced, in a very subtle way, by how they are described.

“For example, we found that simply labelling relatively simple machines, such as those used in factories, as ‘autonomous robots’ led people to perceive them as agentic and blameworthy, compared to when they were labelled ‘machines’.

“One implication of our findings is that, as robots become more objectively sophisticated, or are simply made to appear so, they are more likely to be blamed.”

Tuesday, January 04, 2022

Humanity's Final Arms Race: UN Fails to Agree on 'Killer Robot' Ban

The world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.












A robot distributes promotional literature calling for a ban on fully autonomous weapons in Parliament Square on April 23, 2013 in London, England. The 'Campaign to Stop Killer Robots' is calling for a pre-emptive ban on lethal robot weapons that could attack targets without human intervention. (Photo: Oli Scarff/Getty Images)


JAMES DAWES

December 30, 2021
 by The Conversation

Autonomous weapon systems—commonly known as killer robots—may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity's final one.

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn't reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world's cruelest conventional weapons, including land mines, booby traps and incendiary weapons.

Given the pace of research and development in autonomous weapons, the U.N. meeting might have been the last chance to head off an arms race.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.

As a specialist in human rights with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world—for example, the U.S. president's minimally constrained authority to launch a strike—more unsteady and more fragmented.
Lethal errors and black boxes

I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?


The problem here is not that machines will make such errors and humans won't. It's that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems—ruled by one targeting algorithm, deployed across an entire continent—could make misidentifications by individual humans like a recent U.S. drone strike in Afghanistan seem like mere rounding errors by comparison.

Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.

Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms—operating as designed—can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.

For example, a neural net designed for use in Pittsburgh hospitals identified asthma as a risk-reducer in pneumonia cases; image recognition software used by Google identified Black people as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.

The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don't know why they did and, therefore, how to correct them. The black box problem of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems.
The proliferation problems

The next two dangers are the problems of low-end and high-end proliferation. Let's start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control the use of autonomous weapons. But if the history of weapons technology has taught the world anything, it's this: Weapons spread.

Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. "Kalashnikov" autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.

The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people. 
Ministry of Defense of Ukraine, CC BY

High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one's own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the "myth of a surgical strike" to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one's own soldiers, dramatically altering the cost-benefit analysis that nations undergo while launching and maintaining wars.

Asymmetric wars—that is, wars waged on the soil of nations that lack competing technology—are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.
Undermining the laws of war

Finally, autonomous weapons will undermine humanity's final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.'s International Criminal Tribunal for the Former Yugoslavia.

But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier's commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.

The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.
A new global arms race

Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.

In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.

This is an updated version of an article originally published on September 29, 2021.
This work is licensed under a Creative Commons Attribution 4.0 International License



JAMES DAWES

James Dawes conducts research in human rights. He is the author of The Novel of Human Rights (Harvard University Press, 2018); Evil Men (Harvard University Press, 2013), winner of the International Human Rights Book Award; That the World May Know: Bearing Witness to Atrocity (Harvard University Press, 2007), Independent Publisher Book Award Finalist; and The Language of War (Harvard University Press, 2002).

Monday, June 07, 2021

Germany warns: AI arms race already underway

The world is entering a new era of warfare, with artificial intelligence taking center stage. AI is making militaries faster, smarter and more efficient. But if left unchecked, it threatens to destabilize the world.



'Loitering munitions' with a high degree of autonomy are already seeing action in conflict



An AI arms race is already underway. That's the blunt warning from Germany's foreign minister, Heiko Maas.

"We're right in the middle of it. That's the reality we have to deal with," Maas told DW, speaking in a new DW documentary, "Future Wars — and How to Prevent Them."

It's a reality at the heart of the struggle for supremacy between the world's greatest powers.

"This is a race that cuts across the military and the civilian fields," said Amandeep Singh Gill, former chair of the United Nations group of governmental experts on lethal autonomous weapons. "This is a multi-trillion dollar question."


Great powers pile in


This is apparent in a recent report from the United States' National Security Commission on Artificial Intelligence. It speaks of a "new warfighting paradigm" pitting "algorithms against algorithms," and urges massive investments "to continuously out-innovate potential adversaries."

And you can see it in China's latest five-year plan, which places AI at the center of a relentless ramp-up in research and development, while the People's Liberation Army girds for a future of what it calls "intelligentized warfare."

As Russian President Vladimir Putin put it as early as 2017, "whoever becomes the leader in this sphere will become the ruler of the world."

But it's not only great powers piling in.

Much further down the pecking order of global power, this new era is a battle-tested reality.


German Foreign Minister Heiko Maas: 'We have to forge international treaties on new weapons technologies'


Watershed war

In late 2020, as the world was consumed by the pandemic, festering tensions in the Caucasus erupted into war.

It looked like a textbook regional conflict, with Azerbaijan and Armenia fighting over the disputed region of Nagorno-Karabakh. But for those paying attention, this was a watershed in warfare.

"The really important aspect of the conflict in Nagorno-Karabakh, in my view, was the use of these loitering munitions, so-called 'kamikaze drones' — these pretty autonomous systems," said Ulrike Franke, an expert on drone warfare at the European Council on Foreign Relations.


'Loitering munitions' saw action in the 2020 Nagorno-Karabakh war


Bombs that loiter in the air

Advanced loitering munitions models are capable of a high degree of autonomy. Once launched, they fly to a defined target area, where they "loiter," scanning for targets — typically air defense systems.

Once they detect a target, they fly into it, destroying it on impact with an onboard payload of explosives; hence the nickname "kamikaze drones."

"They also had been used in some way or form before — but here, they really showed their usefulness," Franke explained. "It was shown how difficult it is to fight against these systems."

Research by the Center for Strategic and International Studies showed that Azerbaijan had a massive edge in loitering munitions, with more than 200 units of four sophisticated Israeli designs. Armenia had a single domestic model at its disposal.

Other militaries took note.

"Since the conflict, you could definitely see a certain uptick in interest in loitering munitions," said Franke. "We have seen more armed forces around the world acquiring or wanting to acquire these loitering munitions."

AI-driven swarm technology will soon hit the battlefield


Drone swarms and 'flash wars'


This is just the beginning. Looking ahead, AI-driven technologies such as swarming will come into military use — enabling many drones to operate together as a lethal whole.

"You could take out an air defense system, for example," said Martijn Rasser of the Center for a New American Security, a think tank based in Washington, D.C.

"You throw so much mass at it and so many numbers that the system is overwhelmed. This, of course, has a lot of tactical benefits on a battlefield," he told DW. "No surprise, a lot of countries are very interested in pursuing these types of capabilities."

The scale and speed of swarming open up the prospect of military clashes so rapid and complex that humans cannot follow them, further fueling an arms race dynamic.

As Ulrike Franke explained: "Some actors may be forced to adopt a certain level of autonomy, at least defensively, because human beings would not be able to deal with autonomous attacks as fast."

This critical factor of speed could even lead to wars that erupt out of nowhere, with autonomous systems reacting to each other in a spiral of escalation. "In the literature we call these 'flash wars'," Franke said, "an accidental military conflict that you didn't want."

Experts warn that AI-driven systems could lead to 'flash wars' erupting beyond human control


A move to 'stop killer robots'

Bonnie Docherty has made it her mission to prevent such a future. A Harvard Law School lecturer, she is an architect of the Campaign to Stop Killer Robots, an alliance of nongovernmental organizations demanding a global treaty to ban lethal autonomous weapons.

"The overarching obligation of the treaty should be to maintain meaningful human control over the use of force," Docherty told DW. "It should be a treaty that governs all weapons operating with autonomy that choose targets and fire on them based on sensor's inputs rather than human inputs."

The campaign has been focused on talks in Geneva under the umbrella of the UN Convention on Certain Conventional Weapons, which seeks to control weapons deemed to cause unjustifiable suffering.

It has been slow going. The process has yielded a set of "guiding principles," including that autonomous weapons be subject to human rights law, and that humans have ultimate responsibility for their use. But these simply form a basis for more discussions.

Docherty fears that the consensus-bound Geneva process may be thwarted by powers that have no interest in a treaty.

"Russia has been particularly vehement in its objections," Docherty said.

But it's not alone. "Some of the other states developing autonomous weapon systems such as Israel, the US, the United Kingdom and others have certainly been unsupportive of a new treaty."

Photo gallery: Technologies that revolutionized warfare
AI: 'Third revolution in warfare'
Over 100 AI experts have written to the UN asking it to ban lethal autonomous weapons — those that use AI to act independently. No so-called "killer robots" currently exist, but advances in artificial intelligence have made them a real possibility. Experts said these weapons could be "the third revolution in warfare," after gunpowder and nuclear arms.


Time for a rethink?


Docherty is calling for a new approach if the next round of Geneva talks due later this year makes no progress. She has proposed "an independent process, guided by states that actually are serious about this issue and willing to develop strong standards to regulate these weapon systems."

But many are wary of this idea. Germany's foreign minister has been a vocal proponent of a ban, but he does not support the Campaign to Stop Killer Robots.

"We don't reject it in substance — we're just saying that we want others to be included," Heiko Maas told DW. "Military powers that are technologically in a position not just to develop autonomous weapons but also to use them."

Maas does agree that a treaty must be the ultimate goal. "Just like we managed to do with nuclear weapons over many decades, we have to forge international treaties on new weapons technologies," he said. "They need to make clear that we agree that some developments that are technically possible are not acceptable and must be prohibited globally."

Germany's Heiko Maas: 'We're moving toward a situation with cyber or autonomous weapons where everyone can do as they please'

What next?

But for now, there is no consensus. For Franke, the best the world can hope for may be norms around how technologies are used. "You agree, for example, to use certain capabilities only in a defensive way, or only against machines rather than humans, or only in certain contexts," she said.

Even this will be a challenge. "Agreeing to that and then implementing that is just much harder than some of the old arms control agreements," she said.

And while diplomats tiptoe around these hurdles, the technology marches on.

"The world must take an interest in the fact that we're moving toward a situation with cyber or autonomous weapons where everyone can do as they please," said Maas. "We don't want that."


SEE KILLER ROBOTS IN MY GOTHIC CAPITALI$M
 The Horror Of Accumulation And The Commodification Of Humanity 

For more, watch the full documentary Future Wars on YouTube.