
Monday, September 09, 2024

 

A $1.2 million Rosetta stone for honeybees



W.M. Keck grant helps scientists decipher bee language




University of California - Riverside

Image: UCR entomologists Barbara Baer-Imhoof and Boris Baer at the apiary. Credit: Stan Lim/UCR





If you upset one bee, what determines whether the entire hive decides to avenge her grievance? A $1.2 million grant will support UC Riverside scientists in answering questions like these about how honeybees communicate.

Every third bite of food you eat has been pollinated by a bee. They are central to worldwide food production, but there have been an alarming number of die-offs recorded since 2006.  One solution to this issue is the use of special survivor bees that are more resistant to pests and diseases that are killing managed honeybees. 

Commonly found in Southern California, the survivor bees appear to be tolerant of deadly mites as well as extreme heat and drought. Genetically, they are the most diverse honeybees in the world, with a mix of African and European genes. However, they tend to behave with more defensiveness than the European-origin honeybees currently used for agriculture. 

Defensive behaviors can include bumping beekeeper veils, chasing, or stinging entities perceived as threats. To breed these behaviors out of the bees, scientists need to know what triggers them.

“If we understand what stresses out the survivor bees, that can inform different beekeeping strategies, as well as a breeding program to help unravel the defensiveness,” said UCR entomologist Barbara Baer-Imhoof, who is co-leading this program alongside UCR colleagues, entomologist Boris Baer and insect neuroscientist Ysabel Giraldo.

Baer and Baer-Imhoof run CIBER, the Center for Integrative Bee Research at UCR, where they study stressors responsible for the decline in bee health, and work on solutions to those problems, including new tools for monitoring the health of bees in managed hives. 

For this grant, the researchers will determine how environmental threats are perceived and processed by individual bees, and then eventually how they are communicated to other members of the hive. This communication chain is a fundamental but still unsolved challenge in science. 

Another aspect of this grant from the W.M. Keck Foundation is learning whether scientists ought to reconsider how they view bee societies. In addition to inter-bee communication, the project will ascertain how honeybees transmit information to subsequent generations, beyond the lifespan of any one generation.

Because hives can retain information, the researchers argue there should be a paradigm shift in the way bees are studied. “The fact that they are able to do this can be considered a cultural achievement,” said Baer. 

As in human societies, there is a lot of variation among individual members.

“Some bees have different personalities. They’re not like little robots that give the same predictable response to every smell or situation. Why? That’s part of what we want to know,” Baer said. 

As the bees employ a combination of vibrations, chemicals, smells, sounds, and movements to communicate, Giraldo’s laboratory will use genetic tools to learn about the brain cells controlling these interactions.

“The tools we have are powerful enough to allow us to understand the responses of individual brain regions in real time, and give us a high-resolution picture of what’s happening,” Giraldo said.

The bee has only about a million brain cells, not much compared with the roughly 70 million neurons of a mouse. Even so, bees can solve simple math problems and dance for one another.

“They can do complicated things,” Baer said. “They must be extremely efficient on an individual level to use the available brain power for complex tasks like these.”

Based in Los Angeles, the W. M. Keck Foundation was established in 1954 by the late W. M. Keck, founder of the Superior Oil Company. The foundation’s grant-making is focused primarily on pioneering efforts in the areas of medical research and science and engineering.  The foundation also supports undergraduate education and maintains a Southern California Grant Program that provides support for the Los Angeles community, with a special emphasis on children and youth.  For more information, visit www.wmkeck.org

“On behalf of the UCR community, I extend our sincere thanks to the W.M. Keck Foundation,” said Chancellor Kim Wilcox. “Funding from the foundation will support innovative projects that aim to develop new strategies for understanding and protecting bees. These efforts are crucial as pollinators play a key role in the health of ecosystems and the production of food worldwide.” 

Listen to Boris Baer and Barbara Baer-Imhoof discuss killer bees' role in shaping the agriculture of the future.


Image: Honeybees in the wild. Credit: Stan Lim/UCR





Replacement crop treatment not safe for important pollinator, experts say



University of Bristol
Fig 1: Bee nesting blocks for solitary bees (Osmia lignaria). Credit: Harry Siviter




A novel pesticide thought to be a potential successor to banned neonicotinoids caused 100% mortality in mason bees in a recent test.

The novel pesticide, flupyradifurone, is thought to pose less risk to pollinators and consequently has been licensed globally for use on bee-visited crops.

However, researchers at the University of Bristol and the University of Texas at Austin discovered, contrary to their expectations, that the chemical was lethal to the mason bee Osmia lignaria when the bees were exposed to pesticide-treated wildflowers.

They also found a number of sublethal effects. Seven days post-application, bees released into the pesticide-treated plants were less likely to start nesting, had lower survival rates, and were less efficient foragers, taking 12.78% longer on average to collect pollen and nectar than control bees.

Lead author Harry Siviter from Bristol’s School of Biological Sciences explained: “These results demonstrate that exposure to flupyradifurone poses a significant risk to important pollinators and can have negative impacts on wild bees at field-realistic concentrations.”

Bees are vital pollinators of crops and wildflowers. Neonicotinoid pesticides can have significant negative impacts on pollinators, which has led to high-profile restrictions on their use in the EU and other regions and, in turn, increased demand for ‘novel’ insecticides.

“Due to limitations in formal ecotoxicology assessments, there is an urgent need to evaluate potential replacement crop treatments,” added Harry.

“These results caution against the use of novel insecticides as a direct replacement for neonicotinoids.

“Our findings add to a growing body of evidence demonstrating that pesticide risk assessments do not sufficiently protect wild bees from the negative consequences of pesticide use.”

To avoid continuing cycles of novel pesticide release and removal, with concomitant impacts on the environment, the team say a broad evidence base needs to be assessed prior to the development of policy and regulation.

Harry said: “Restricting the use of commercial pesticides containing flupyradifurone to non-flowering crops would be sensible while more research is conducted.

“In the long-term, as we are already seeing in the EU, a move towards a more holistic approach to risk assessment that considers the biology of non-Apis bees is required to better protect pollinators from the unintended negative impacts of pesticides.”

The team now plan to extend their research to measuring the impact of exposure through soil on solitary bees.

Paper:

‘A novel pesticide has lethal consequences for an important pollinator’ by Harry Siviter et al in Science of the Total Environment.

Saturday, August 03, 2024

Humans Should Teach AI How To Avoid Nuclear War—While They Still Can

August 1, 2024
Source: Bulletin of the Atomic Scientists

Image: Mike MacKenzie, “Artificial Intelligence & AI & Machine Learning,” via Flickr.



When considering the potentially catastrophic impacts of military applications of Artificial Intelligence (AI), a few deadly scenarios come to mind: autonomous killer robots, AI-assisted chemical or biological weapons development, and the 1983 movie WarGames.

The film features a self-aware, AI-enabled supercomputer that simulates a Soviet nuclear launch and convinces US nuclear forces to prepare for a retaliatory strike. The crisis is only partly averted when the main (human) characters persuade US forces to wait for the Soviet strike to hit before retaliating; the strike turns out to have been fabricated by the fully autonomous AI program. The computer then attempts to launch a nuclear strike on the Soviets without human approval until it is hastily taught the concept of mutually assured destruction, after which it determines that nuclear war is a no-win scenario: “Winner: none.”

US officials have stated that an AI system would never be given US nuclear launch codes or the ability to take control over US nuclear forces. However, AI-enabled technology will likely become increasingly integrated into nuclear targeting and command and control systems to support decision-making in the United States and other nuclear-armed countries. Because US policymakers and nuclear planners may use AI models in conducting analyses and anticipating scenarios that may ultimately influence the president’s decision to use nuclear weapons, the assumptions under which these AI-enabled systems operate require closer scrutiny.

Pathways for AI integration. The US Defense Department and Energy Department already employ machine learning and AI models to make calculation processes more efficient, including for analyzing and sorting satellite imagery from reconnaissance satellites and improving nuclear warhead design and maintenance processes. The military is increasingly forward-leaning on AI-enabled systems. For instance, it initiated a program in 2023 called Stormbreaker that strives to create an AI-enabled system called “Joint Operational Planning Toolkit” that will incorporate “advanced data optimization capabilities, machine learning, and artificial intelligence to support planning, war gaming, mission analysis, and execution of all-domain, operational level course of action development.” While AI-enabled technology presents many benefits for security, it also brings significant risks and vulnerabilities.

One concern is that the systemic use of AI-enabled technology and an acceptance of AI-supported analysis could become a crutch for nuclear planners, eroding human skills and critical thinking over time. This is particularly relevant when considering applications for artificial intelligence in systems and processes such as wargames that influence analysis and decision-making. For example, NATO is already testing and preparing to launch an AI system designed to assist with operational military command and control and decision-making by combining an AI wargaming tool and machine learning algorithms. Even though it is still unclear how this system will impact decision-making led by the United States, the United Kingdom, and NATO’s Nuclear Planning Group concerning US nuclear weapons stationed in Europe, this type of AI-powered analytical tool would need to consider escalation factors inherent to nuclear weapons and could be used to inform targeting and force structure analysis or to justify politically motivated strategies.

The role given to AI technology in nuclear strategy, threat prediction, and force planning can reveal more about how nuclear-armed countries view nuclear weapons and nuclear use. Any AI model is programmed under certain assumptions and trained on selected data sets. This is also true of AI-enabled wargames and decision-support systems tasked with recommending courses of action for nuclear employment in any given scenario. Based on these assumptions and data sets alone, the AI system would have to assist human decision-makers and nuclear targeters in estimating whether the benefits of nuclear employment outweigh the cost and whether a nuclear war is winnable.

Do the benefits of nuclear use outweigh the costs? Baked into the law of armed conflict is a fundamental tension between any particular military action’s gains and costs. Though fiercely debated by historians, the common understanding of the US decision to drop two atomic bombs on Japan in 1945 demonstrates this tension: an expedited victory in East Asia in exchange for hundreds of thousands of Japanese casualties.

Understanding how an AI algorithm might weigh the benefits and costs of escalation depends on how it integrates the country’s nuclear policy and strategy. Several factors shape a country’s nuclear doctrine and targeting strategy, ranging from fear of the consequences of breaking the tradition of non-use of nuclear weapons, to concern about radioactive contamination of a coveted territory, to sheer deterrence in the face of possible nuclear retaliation by an adversary. While strategy itself is derived from political priorities, military capabilities, and perceived adversarial threats, nuclear targeting incorporates these factors as well as many others, including the physical vulnerability of targets, overfly routes, and the accuracy of delivery vehicles: all aspects to consider further when making decisions about force posture and nuclear use.

In the case of the United States, much remains classified about its nuclear decision-making and cost analysis. It is understood that, under guidance from the president, US nuclear war plans target the offensive nuclear capabilities of certain adversaries (both nuclear and non-nuclear armed) as well as the infrastructure, military resources, and political leadership critical to post-attack recovery. But while longstanding US policy has maintained to “not purposely threaten civilian populations or objects” and “not intentionally target civilian populations or targets in violation of [the law of armed conflict],” the United States has previously acknowledged that “substantial damage to residential structures and populations may nevertheless result from targeting that meets the above objectives.” This is in addition to the fact that the United States is the only country to have used its nuclear weapons against civilians in war.

There is limited public information with which to infer how an AI-enabled system would be trained to consider the costs of nuclear detonation. Certainly, any plans for nuclear employment are determined by a combination of mathematical targeting calculations and subjective analysis of social, economic, and military costs and benefits. An AI-enabled system could improve some of these analyses in weighing certain military costs and benefits, but it could also be used to justify existing structures and policies or further ingrain biases and risk acceptance into the system. These factors, along with the speed of operation and innate challenges in distinguishing between data sets and origins, could also increase the risks of escalation—either deliberate or inadvertent.

Is a nuclear war “winnable”? Whether a nuclear war is winnable depends on what “winning” means. Policymakers and planners may define winning as merely the benefits of nuclear use outweighing the cost when all is said and done. When balancing costs and benefits, the benefits need only be one “point” higher for an AI-enabled system to deem the scenario a “win.”

In this case, “winning” may be defined in terms of national interest without consideration of other threats. A Pyrrhic victory could jeopardize national survival immediately following nuclear use and still be considered a win by the AI algorithm. Once a nuclear weapon has been used, an AI system could be incentivized either to recommend no further nuclear use or, on the contrary, to recommend using nuclear weapons on a broader scale to eliminate remaining threats or preempt further nuclear strikes.

“Winning” a nuclear war could also be defined in much broader terms. The effects of nuclear weapons go beyond the immediate destruction within their blast radius; there would be significant societal implications from such a traumatic experience, including potential mass migration and economic catastrophe, in addition to dramatic climatic damage that could result in mass global starvation. Depending on how damage is calculated and how much weight is placed on long-term effects, an AI system may determine that a nuclear war itself is “unwinnable” or even “unbearable.”
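To make these definitional stakes concrete, here is a minimal, purely illustrative sketch in Python of the naive scoring rule described above. It is not any real targeting or decision-support system; the function name and every number are invented. It shows only how a “benefits minus costs” rule that declares a win on any positive margin flips its verdict depending on how heavily long-term effects are weighted.

```python
# Purely illustrative toy model, NOT any real decision-support system.
# All names and numbers are invented. The point: a naive rule that
# declares a "win" on any positive margin is entirely at the mercy of
# how much weight its designers gave to long-term effects.

def verdict(benefits: float, immediate_costs: float,
            long_term_costs: float, long_term_weight: float) -> str:
    """Declare a 'win' whenever the net score is positive by any margin."""
    score = benefits - (immediate_costs + long_term_weight * long_term_costs)
    return "win" if score > 0 else "unwinnable"

# Hypothetical scenario: benefits exceed immediate costs by one "point".
benefits, immediate, long_term = 100.0, 99.0, 500.0

# Long-term effects ignored entirely: one point higher is enough to "win".
print(verdict(benefits, immediate, long_term, long_term_weight=0.0))   # win

# Give even 1% weight to long-term effects and the verdict flips.
print(verdict(benefits, immediate, long_term, long_term_weight=0.01))  # unwinnable
```

The arbitrary numbers are beside the point; what matters is that the “winner” is decided by modeling assumptions fixed long before any crisis begins.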

Uncovering biases and assumptions. The question of costs and benefits is relatively uncontroversial in that all decision-making involves weighing the pros and cons of any military option. However, it is still unknown how an AI system will weigh these costs and benefits, especially given the difficulty of comprehensively modeling all the effects of nuclear weapon detonations. At the same time, the question of winning a nuclear war has long been a thorn in the side of nuclear strategists and scholars. All five nuclear-weapon states confirmed in 2022 that “a nuclear war cannot be won and must never be fought.” For them, planning to win a nuclear war would be considered inane and, therefore, would not require any AI assistance. However, deterrence messaging and discussion of AI applications for nuclear planning and decision-making illuminate the belief that the United States must be prepared to fight—and win—a nuclear war.

The use of AI-assisted nuclear decision-making has the potential to reveal and exacerbate the biases and beliefs of policymakers and strategists, including the oft-disputed idea that nuclear war can be won. AI-powered analysis incorporated into nuclear planning or decision-making processes would operate on assumptions about the capabilities of nuclear weapons as well as their estimated costs and benefits, in the same way that targeters and planners have done for generations. Some of these assumptions could include missile performance, accurate delivery, radiation effects, adversary response, and whether nuclear arms control or disarmament is viable.

Not only are there risks of inherent bias in AI systems, but this technology can be purposely designed with bias. Nuclear planners have historically underestimated the damage caused by nuclear weapons in their calculations, so an AI system fed that data to make recommendations could also systemically underestimate the costs of nuclear employment and the number of weapons needed for targeting purposes. There is also a non-zero chance that nuclear planners poison the data so that an AI program recommends certain weapons systems or strategies.

In peacetime, recommendations based on analysis by AI-enabled systems could also be used to help justify budgets, capabilities, and force structures. For example, an AI model that is trained on certain assumptions and possibly underestimates nuclear damage and casualties may recommend increasing the number of deployed warheads, which will be legally permissible after New START—the US-Russian treaty that limits the two countries’ deployed long-range nuclear forces—expires in February 2026. The inherent trust users place in computers is also likely to lend undue credibility to AI-supported recommendations, which policymakers and planners could use to veil their own preferences behind the supposed objectivity of a computer’s outputs.

Despite this heavy skepticism, advanced AI/machine learning models could still potentially provide a means of sober calculation in crisis scenarios, where human decision-making is often clouded, rushed, or falls victim to fallacies. However, this requires that the system has been fed accurate data, shaped with frameworks that support good faith analysis, and is used with an awareness of its limitations. Rigorous training on nuclear strategy for the “humans in the loop” as well as on methods for interpreting AI-generated outputs—that is, considering all its limitations and embedded biases—could also help mitigate some of these risks. Finally, it is essential that governments practice and promote transparency concerning the integration of AI technology into their military systems and strategic processes, as well as the structures in place to prevent deception, cyberattacks, disinformation, and bias.

Human nature is nearly impossible to predict, and escalation is difficult to control. Moreover, there is arguably little evidence to support claims that any nuclear employment could control or de-escalate a conflict. Highlighting and addressing potential bias in AI-enabled systems is critical for uncovering assumptions that may deceive users into believing that a nuclear war can be won and for maintaining the well-established ethical principle that a nuclear war should never be fought.

Editor’s note: The views expressed in this article are those of the authors and do not necessarily represent the views of the US State Department.


Eliana Johns

Eliana Johns, née Reynolds, is a senior research associate for the Nuclear Information Project at the Federation of American Scientists, where she researches the status and trends of global nuclear forces and the role of nuclear weapons. Johns is also an incoming master’s student at Georgetown University’s Center for Security Studies, where she will concentrate on the intersection between technology and security. Previously, Johns worked as a project associate for DPRK Counterproliferation at CRDF Global, focusing on WMD nonproliferation initiatives to curb North Korea’s ability to gain revenue to build its weapons programs. Johns graduated with her bachelor’s in political science with minors in Music and Korean from the University of Maryland, Baltimore County (UMBC).

Tuesday, July 16, 2024


Climate and Political Heatwaves Kill

 

JULY 15, 2024

Image by Matt Palmer.

Late July 1987 was a bad time for Athens, Greece. A suffocating heat wave embraced the polis of Athena. Hundreds of Athenians died. The government ordered gravediggers to work overtime, day and night. And to keep bodies chilled, hospitals received ice from the fish market. “The Athens summer,” said Alan Cowell, a New York Times reporter, “has been injected with the grisly, and the macabre…. What has normally been a time for leisure has become a time of horror and of questioning of the state’s ability to cope with extremes.”

I heard about this 1987 climate disaster from my niece Theodora. I was talking on the phone to her mother, Georgia, who is my sister. Georgia mentioned her daughter Theodora was visiting her in Cephalonia from Athens. When I said hello to Theodora, we exchanged good wishes and, almost immediately, started talking about climate conditions, especially heat. I told her that in Claremont, Southern California, where I live, the daily temperature was 100 degrees Fahrenheit. “But in Claremont,” I said to her, “temperatures drop at night, so, at 6 in the morning, the temperature is 60 degrees Fahrenheit.” “On the contrary with us,” she said, “in Athens and Cephalonia [in the Ionian Sea], the stifling temperature remains the same day and night.”

Theodora explained that an air conditioner in nearly every room saves their lives. Walking is uncomfortable. I then tried to explain why the rulers of the world are doing practically nothing to fight climate change. At that point, she said to me, “Do you remember the 1987 heat wave in Athens?” When I replied that I had never heard of it, she rattled off data and memories.

“More than 4,000 people died in Athens,” she said. “They used a refrigerated train for the bodies. We did not have air conditioners in houses and apartments. Cars had no cooling devices. Hospitals had no air conditioners. The country suffered. But we are now moving back to 1987. Heat waves are ruling our lives.”

Consequences of heat waves

The World Health Organization (WHO) says heatwaves are dangerous. Heatwaves are periods of unusually high temperatures lasting several days, and that climate condition kills. “Heatwaves,” says the WHO, “are among the most dangerous of natural hazards.” They kill hundreds of thousands of people per year. A study explains: “Earth’s average surface temperature has risen at a rate of 0.07°C [Celsius] per decade since 1880, a rate that has nearly tripled since the 1990s. The acceleration of global warming has resulted in 19 of the 20 hottest years occurring after 2000 and an unprecedented frequency, intensity, and duration of extreme temperature events, such as heatwaves, worldwide.”
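Taken at face value, “nearly tripled” implies roughly 0.07 × 3 ≈ 0.21°C of warming per decade since the 1990s, or about 0.6°C over the past three decades alone (a back-of-the-envelope reading of the study’s figures, not a number the study itself reports).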

In 2021, heatwaves in North America killed hundreds of people in Oregon, Washington, and British Columbia. Why is this heat becoming a killer, and who is responsible? “The main culprit of global warming today,” said two reporters, “is humans burning fossil fuels. Thousands of scientists, who have studied the causes across decades, have reached this overwhelming consensus. Globally in 2022, humans emitted roughly 36.8 billion metric tons of planet-warming carbon dioxide by burning coal, natural gas and oil for energy.” This obvious truth, however, does not seem to matter.

Heatwaves defined 2023 much more than 2021 or 1987. In 2023, Canada and the Arctic suffered tremendously. Canada burned with forest fires, and the Arctic started melting. The United States did not fare much better. The South boiled over. I remember the heat and moisture in New Orleans. I lived not far from the University of New Orleans, where I was teaching. Walking the roughly 20 minutes to the university was enough to need another shower. “It’s not just the heat, as Southerners have explained for generations. It’s the moist, soupy, suffocating humidity. And this year [2023] the punishing conditions have been relentless.”

Political heatwaves

The high temperatures and suffocating punishment of heatwaves, fires, floods, droughts, and other miseries of discomfort and limitation of personal freedom to enjoy life and the natural world are becoming a national and international climate and political curse. For instance, the United States is in a deep sleep. Americans are absorbed by the age deficiencies of President Joe Biden and the political dangers of reelecting tyrant and former president Donald Trump. They don’t ask why a convicted felon, who tried to overthrow the government, is not in prison. And they are not disturbed that he is not merely free but is also a candidate for president. How could that be in a democracy? When I ask that question, I get incomprehensible rumblings, almost theological dogmas about the “rights” of accused persons to appeal to a higher court. But what is there to appeal? The evidence that Trump triggered those who attacked the Capitol on January 6, 2021, is unassailable. I watched the House Hearings about the Trump effort to deny Biden the White House. I am convinced that Trump was the brain and the inspiration behind the awful attack on the Capitol. Yet, regrettably, Trump is running for president. Biden should have made that impossible. Trump’s former personal lawyer, Michael Cohen, says he is afraid for his safety and the safety of this country. He warned that if Trump gets to the White House, he will have unlimited power to do unlimited harm.

Biden is probably afraid of Trump in the White House. But he is consumed with his standing. His first “debate” with Trump did not go well. He mumbled most of the time. Trump kept insulting him, calling Biden incompetent, weak, and dangerous. Even Democrats attacked Biden and asked him to quit. But Biden said no. He felt obliged to give another speech and answer questions from chosen reporters. He said he was the best man to defeat Trump and, with the exception of his unacceptable and warmongering ideas and policies on Ukraine, Russia, and Israel, he is right. Biden also said it was too late to replace him with another Democrat to defeat Trump, and he is right about that, too. With money being the defining factor of politics in America, only billionaires talk through the robots they fund. Biden has been in this game and political business for some 50 years. He knows who’s who in the money business behind the election. He defeated Trump in 2020, and if the Democrats stand behind him, he will defeat Trump again.

The grave political problems of America – climate chaos funded by billionaires, the Supreme Court becoming a house of tyranny, and the money of the billionaires in politics – must be faced head on and resolved as soon as possible. But will a reelected Biden be in a position (mental and political) to address them? He would need a Democratic House and Senate. In fact, he would also need an overwhelming number of Americans in the streets demanding a sweep of the stables: reforming or abolishing the Supreme Court, no more money or billionaires in American politics, and a national mobilization to fight the heatwaves of the climate and the political enemies in the room. They kill. Finally, fossil fuels must be banned, and all the resources of the country must be dedicated to bringing in the solar and wind energy that will light and move the country.

Evaggelos Vallianatos, Ph.D., studied history and biology at the University of Illinois; earned his Ph.D. in Greek and European history at the University of Wisconsin; did postdoctoral studies in the history of science at Harvard. He worked on Capitol Hill and the US EPA; taught at several universities and authored several books, including The Antikythera Mechanism: The Story Behind the Genius of the Greek Computer and its Demise.

Tuesday, April 30, 2024

Vienna conference urges regulation of AI weapons


By AFP
April 30, 2024

Austrian Foreign Minister Alexander Schallenberg warned autonomous weapons systems would 'soon fill the world's battlefields' - Copyright AFP/File STR

The world should establish a set of rules to regulate AI weapons while they’re still in their infancy, a global conference said on Tuesday, calling the issue this generation’s “Oppenheimer moment”.

Like gunpowder and the atomic bomb, artificial intelligence (AI) has the capacity to revolutionise warfare, analysts say, making human disputes unimaginably different — and a lot more deadly.

“This is our generation’s ‘Oppenheimer moment’ where geopolitical tensions threaten to lead a major scientific breakthrough down a very dangerous path for the future of humanity,” read the summary at the end of the two-day conference in Vienna.

US physicist Robert Oppenheimer helped invent nuclear weapons during World War II.

Austria organised and hosted the conference, which brought together some 1,000 participants, including political leaders, experts, and members of civil society, from more than 140 countries.

A final statement said the group “affirms our strong commitment to work with urgency and with all interested stakeholders for an international legal instrument to regulate autonomous weapons systems”.

“We have a responsibility to act and to put in place the rules that we need to protect humanity… Human control must prevail in the use of force”, said the summary, which is to be sent to the UN secretary general.

With AI, all sorts of weapons can be transformed into autonomous systems, thanks to sophisticated sensors governed by algorithms that allow a computer to “see”.

This will enable the locating, selecting, and attacking of human targets (or targets containing human beings) without human intervention.

Most such weapons are still at the idea or prototype stage, but Russia’s war in Ukraine has offered a glimpse of their potential.

Remotely piloted drones are not new, but they are becoming increasingly independent and are being used by both sides.

“Autonomous weapons systems will soon fill the world’s battlefields,” Austrian Foreign Minister Alexander Schallenberg said on Monday when opening the conference.

He warned now was the “time to agree on international rules and norms to ensure human control”.

Austria, a neutral country keen to promote disarmament in international forums, in 2023 introduced the first UN resolution to regulate autonomous weapons systems, which was supported by 164 states.