Sunday, August 04, 2024

 

Every rose has its thorns … or does it?



Cold Spring Harbor Laboratory

Image: A rose at the New York Botanical Garden; some varieties grow naturally “thornless.” Jack Satterlee, a postdoc in CSHL’s Lippman lab, turned to the Botanical Garden for help procuring rare plant specimens with and without prickles.

Credit: Jack Satterlee/Cold Spring Harbor Laboratory




According to Greek mythology, red roses first appeared when Aphrodite pricked her foot on a thorn, spilling blood on a white rose. Since then, roses’ thorns have captured the imaginations of countless poets and forlorn lovers.

But they aren’t the only plants with these dangerous protrusions, technically called prickles. Prickles have evolved independently in species across the plant kingdom. Their main function: warding off herbivores. They’re even present in certain eggplant and rice crops. Yet, for years, it’s been unclear how the trait pops up so frequently in such unrelated species.

Now, in a breakthrough discovery, Cold Spring Harbor Laboratory (CSHL) has found that the same ancient gene family is responsible for prickles across many plants, despite millions of years of evolutionary separation.

CSHL postdoc James Satterlee was inspired to investigate prickles upon touring a field where his advisor, Professor & HHMI Investigator Zachary Lippman, grows hundreds of nightshades. Think tomatoes, potatoes, and eggplants.

“I noticed many had very prominent prickles. So, I asked, ‘What do we know about that? What’s going on with this adaptation?’ It turns out we knew almost nothing,” recalls Satterlee.

With scientists in Spain, Satterlee began analyzing eggplants, which led him to a gene family called LONELY GUY (LOG). LOG genes are normally responsible for making a hormone that causes cell division and expansion. Satterlee discovered that certain LOG mutations also eliminate prickles in eggplants. Lippman and Satterlee wondered: Could LOG-related genes be responsible for prickle gains and losses across multiple plants over millions of years?

The team started combing through prior studies and contacting collaborators around the globe. Satterlee and Lippman worked with the New York Botanical Garden to examine specimens with and without prickles. Collaborators at Cornell University used genome editing to eliminate prickles in desert raisins, a foraged berry native to Australia. Another colleague in France suppressed prickles in roses. In total, the team came to associate prickles with LOG-related genes in about 20 species.

Lippman says while this discovery could be used to engineer plants without prickles, it also has big implications for understanding convergent evolution in all life. That is, how completely different species independently develop similar traits. 

“You’re really asking about life in general—evolution of traits. How do they emerge? How are they modified? What are the underlying mechanisms? What can we learn about things we take for granted?” he explains. 

The answer could someday make lesser-known species like desert raisins a new fruit in supermarkets. At the very least, it should make life easier for horticulturalists plucking roses’ pesky thorns.

 

Strengthening global regulatory capacity for equitable access to vaccines in public health emergencies




Georgetown University Medical Center





WASHINGTON – Global health leaders could take three high-impact steps to reshape the global regulatory framework and help address the pressing need for equitable access to diagnostics, therapeutics, and vaccines during public health emergencies, say a Georgetown global health law expert and a medical student.

In their “Perspective” published today in the New England Journal of Medicine, Georgetown School of Health professor Sam Halabi, JD, and George O’Hara, a Georgetown medical student and David E. Rogers Student Fellow, say these reforms aim to enhance the capacity of national regulatory bodies, particularly in low- and middle-income countries (LMICs), to ensure timely and safe access to essential medical products.

The U.S. Food and Drug Administration (FDA) and a select group of national regulatory authorities currently dominate the approval process for medical products. However, this concentration of regulatory capacity in high-income countries has led to bottlenecks and delays in the distribution of critical medical supplies during emergencies, as seen during the COVID-19 pandemic.

A recent analysis highlights that few national regulatory bodies, primarily in high-income countries, meet the World Health Organization's (WHO) stringent criteria for being "highly performing." Approximately three-quarters of WHO member states lack the regulatory maturity to assure their populations of the quality of medical products, including vaccines.

To address these weaknesses, Halabi, who directs the Center for Transformational Health Law at the O'Neill Institute for National and Global Health Law, and O’Hara propose three key measures for the WHO and global health leaders:

  1. Expand Regulatory Coordination and Planning: The WHO should actively engage in focused planning with national regulatory authorities that have achieved advanced maturity levels. This includes integrating regulators from countries like Korea, Saudi Arabia, and Singapore into a regional coordination initiative for dossier review and approval during emergencies.
  2. Leverage Regional and Multilateral Development Banks: Development banks should agree to extend loans for procuring medical products approved by WHO-listed authorities with a given certification. This would alleviate the bottlenecks and access issues exacerbated by the dependence on WHO's Emergency Use Listing designation during the COVID-19 pandemic.
  3. Promote Regulatory Flexibility in Pandemic Agreements: As negotiators finalize a global pandemic agreement, provisions should focus on a coordinated and multilateral approach to leveraging emerging regulatory capacity. By decentralizing regulatory review and expanding the approval process to include authorities from countries with stronger regulatory systems, LMICs can secure vaccine doses earlier in future pandemic responses.

“Together, these steps can drive more cohesive responses to future public health emergencies,” write Halabi and O’Hara.

The WHO has already initiated steps to reduce reliance on the European Medicines Agency and the FDA by creating a new framework of WHO-listed authorities to replace the stringent regulatory authority designation. However, the authors stress the need for additional efforts to ensure greater national control over vaccine supply and reduce dependence on global entities like COVAX.

“Expansion of regulatory pathways would prioritize public health by enabling diagnostics, therapeutics, and vaccines to reach populations sooner,” they write. “By taking incremental but high-impact steps based on the WHO’s classifications of regulatory systems, global health leaders can mount a more equitable and rapid response.”

###

 

O’Hara’s work was supported by a David E. Rogers Student Fellowship Award.

 

York researchers make breakthrough in bid to develop vaccines and drugs for neglected tropical disease






University of York




Scientists have developed a new, safe and effective way to infect volunteers with the parasite that causes leishmaniasis and measure the body’s immune response, bringing a vaccine for the neglected tropical disease a step closer.  

The breakthrough, by a team from the University of York and Hull York Medical School, is described in the journal Nature Medicine and lays the foundations for vaccine development and for testing new preventative measures.

Controlled human infection studies, where volunteers are exposed to small amounts of the microbes that cause disease, play a vital role in allowing scientists to provide evidence of the safety and efficacy of new vaccines, but their use in the fight against neglected tropical diseases has been limited. 

Leishmaniasis is caused by infection with microscopic Leishmania parasites that are transmitted into the skin during the bite of an infected sand fly. 

The disease affects over one million people every year, the majority developing a slow-to-heal ulcer at the site of the infection. Though the ulcer eventually heals, the scar has a significant impact on quality of life, especially for women and children, and when the infection is on the face.

No vaccines or drugs are currently available to prevent people from becoming infected with leishmaniasis, in part due to the difficulties and costs associated with conducting clinical trials in the countries where these diseases are most common. 

Lead investigator, Professor Paul Kaye from the Hull York Medical School at the University of York, said: “This is a landmark study that now provides a new approach to test vaccines and preventative measures for leishmaniasis in a rapid and cost-effective way. It also allows us to learn more about how our immune system fights the infection. Thanks to the generosity of the volunteers that took part in our study, we are now well-positioned to bring new hope to those that are affected by this disease.”

Clinical lead for the study, Professor Alison Layton from the Medical School’s Centre for Skin Research, said: “Research on skin diseases that affect people in the UK and in developing countries is a priority at the Medical School. This study, which demonstrates that this infection model is safe and well tolerated by participants, exemplifies our global approach to skin health and has the potential to impact the lives of many millions worldwide.”

The study, which builds on significant achievements by the University of York and its international partners, involved 14 volunteers recruited from around York. 

The volunteers were exposed to sand flies infected with a parasite species that causes one of the mildest forms of leishmaniasis. The researchers followed the development of the lesion at the site of the sand fly bite to evaluate the progress of the infection and then terminated the infection by biopsy of the skin. The scientists then studied the biopsy to examine the immune responses at the site of infection.  

This major new approach uses natural transmission by sand fly to initiate infection, combined with state-of-the-art technologies, allowing the researchers to track the infection and the body’s immune response in real time.

The model will accelerate efforts to test new vaccines and understand how immunity to infection arises, the researchers say.

The researchers now hope to use their model to design clinical trials to test a vaccine developed at Hull York Medical School, along with other candidate vaccines available in the future. Controlled human infection models have already been used to support the development of vaccines for cholera, malaria, influenza, dengue fever and most recently COVID-19. 

Parkash et al., ‘Safety and reactogenicity of a controlled human infection model of sand fly-transmitted cutaneous leishmaniasis’, is published in the journal Nature Medicine.

The research was a collaboration between the Hull York Medical School, York and Scarborough Teaching Hospitals NHS Trust, the Department of Parasitology at Charles University in Prague, the Center for Geographic Medicine and Tropical Diseases at Chaim Sheba Medical Center, Tel Aviv University, and the Kuvin Centre for Study of Tropical & Infectious Diseases, Hebrew University-Hadassah Medical School, Jerusalem.

Funding for the research was through a Developmental Pathways Funding Scheme award from the UK Medical Research Council (MRC) and the UK Department for International Development (DFID) under the MRC/DFID Concordat agreement and is also part of the EDCTP2 program supported by the European Union.

 

 

U$A FOR PROFIT HEALTHCARE

Health insurers have required prior authorization for services for decades—but have they treated patients equitably?


New study evaluates racial disparities in prior authorization outcomes by a major national insurer

Peer-Reviewed Publication

Texas A&M University





Prior authorization—the process by which a health insurance company denies or approves coverage for a health care service before the service is performed—became standard practice beginning with Medicare and Medicaid legislation in the 1960s.

Although research has uncovered racial disparities in coverage for cancer patients, little has been known to date about the role prior authorization plays in increasing or decreasing these disparities.

To learn more about the issue, Benjamin Ukert, PhD, an assistant professor of health policy and management in the Texas A&M University School of Public Health, and a colleague at Penn State conducted a retrospective study of data provided by a major national commercial insurance provider on 18,041 patients diagnosed with cancer between Jan. 1, 2017, and April 1, 2020.

“Data on provider-insurer prior authorization is difficult to access and analyze, but this research could provide valuable information on equity in the prior authorization process in specialty care for patients, health care providers and plan managers, policymakers and employers,” Ukert said.

For the study, Ukert described the racial and ethnic composition of the sample and its prior authorization outcomes for self-insured and fully insured adults diagnosed with the 13 most common cancers other than basal cell carcinomas, which generally do not require prior authorization. Subjects had at least two Evaluation and Management office visit claims with a cancer diagnosis, or one cancer diagnosis during an emergency department or inpatient stay, during the study period.

For prior authorization data, Ukert analyzed the number of days from the cancer diagnosis to the prior authorization, the decision to deny or approve the service, and whether the denial resulted from medical necessity.

Independent variables were self-reported race or ethnicity provided by employers and electronic medical records and drawn from the sociodemographic data for covered individuals available from the insurer. Racial categories were non-Hispanic White, non-Hispanic Asian, non-Hispanic Black and Hispanic (either Hispanic-White or Hispanic-Black).

For covariates, Ukert used a large set of sociodemographic control variables identified from the medical claims and the American Community Survey, including information about health insurance coverage and the length of health plan enrollment prior to the cancer diagnosis. After measuring the extent of any comorbidities in the six months before the cancer diagnosis, Ukert merged in block-group characteristics on household income and education level from the five-year 2017 American Community Survey. He then used linear regression models to evaluate whether disparities by race or ethnicity emerged in prior authorization process outcomes.
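As a rough illustration of what such a model looks like, the sketch below fits a linear probability model of denial on race and ethnicity plus controls using synthetic data; the column names, values, and exact specification are hypothetical stand-ins for this example, not the study’s actual variables or code.

```python
# Illustrative linear probability model of prior authorization denial (hypothetical sketch).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "denied": rng.binomial(1, 0.10, n),            # 1 = prior authorization denied
    "race": rng.choice(["White", "Asian", "Black", "Hispanic"], n,
                       p=[0.85, 0.03, 0.10, 0.02]),
    "age": rng.normal(53, 10, n),
    "female": rng.binomial(1, 0.64, n),
    "comorbidities": rng.poisson(1.2, n),          # count in the six months pre-diagnosis
    "median_income": rng.uniform(30_000, 150_000, n),  # stand-in for ACS block-group merge
})

# OLS on a binary outcome is a linear probability model; use robust (HC1) standard errors.
model = smf.ols(
    "denied ~ C(race, Treatment(reference='White')) + age + female"
    " + comorbidities + np.log(median_income)",
    data=df,
).fit(cov_type="HC1")
print(model.summary())   # coefficients on the race categories estimate adjusted disparities
```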

The sample was 85 percent White, 3 percent Asian, 10 percent Black, and 1 percent Hispanic; 64 percent were female, and the average age was 53. The average prior authorization denial rate was 10 percent, and the denial rate specifically due to medical necessity was 5 percent. Those who identified as Hispanic had the highest prior authorization denial rate at 12 percent, while those who identified as Black had the lowest at 8 percent.

“In short, we found no racial or ethnic disparities in prior authorization outcomes for individuals identifying as Black and Hispanic, compared to White,” Ukert said. “In addition, Asian patients had higher rates of prior authorization approvals compared to White patients.”

By Ann Kellett, Texas A&M University School of Public Health

 

Turkey vultures fly faster to defy thin air


How large turkey vultures remain aloft in thin air



The Company of Biologists





Mountain hikes are invigorating. Crisp air and clear views can refresh the soul, but thin air presents an additional challenge for high-altitude birds. ‘All else being equal, bird wings produce less lift in low density air’, says Jonathan Rader from the University of North Carolina (UNC) at Chapel Hill, USA, making it more difficult to remain aloft. Yet this doesn’t seem to put them off: bar-headed geese, cranes and bar-tailed godwits have been recorded at altitudes of 6000 m and more. So how do they manage to take to the air when thin air offers little lift? One possibility was that birds at high altitude simply fly faster to compensate for the lower air density, but it wasn’t clear whether birds that naturally inhabit a wide range of altitudes, from sea level to the loftiest summits, fine-tune their flight speed to compensate for thin air. ‘Turkey vultures are common through North America and inhabit an elevation range of more than 3000 m’, says Rader, so he and Ty Hedrick (UNC-Chapel Hill) decided to find out whether turkey vultures (Cathartes aura) residing at different elevations fly at different speeds depending on their altitude. They now report in Journal of Experimental Biology that turkey vultures fly faster at altitude to compensate for the lack of lift caused by flying in thin air.

First the duo needed filming locations spanning a wide range of altitudes, so they started filming the vultures flying at the local Orange County refuse site (80 m above sea level); ‘Vultures on a landfill… who would have guessed?’, chuckles Rader. Then they relocated to Rader’s home state of Wyoming, visiting Alcova (1600 m) before ending up at the University of Wyoming campus in Laramie (2200 m). At each location, the duo set up three synchronized cameras with a clear view of a tree that was home to a roosting colony of turkey vultures, ready to film the vultures’ flights in 3D as they flew home at the end of the day. ‘Wyoming is a famously windy place and prone to afternoon thunderstorms’, Rader explains, recalling being chased off the roof of the University of Wyoming Biological Sciences Building by storms and the wind blurring movies of the flying birds as it rattled the cameras.

Back in North Carolina, Rader reconstructed 2458 bird flights from the movies, calculating their flight speed before converting to airspeed, which ranged from 8.7 to 13.24 m/s. He also calculated the air density at each location, based on local air pressure readings, recording a 27% change from 0.89 kg/m³ at Laramie to 1.227 kg/m³ at Chapel Hill. After plotting the air densities at the time of flight against the birds’ airspeeds on a graph, Rader and Hedrick could see that the birds flying at 2200 m in Laramie were generally flying ~1 m/s faster than the birds in Chapel Hill. Turkey vultures fly faster at higher altitudes to remain aloft. But how do they achieve these higher airspeeds?

Rader returned to the flight movies, looking for the tell-tale up-and-down motion that would indicate when the birds were flapping. However, when he compared how much each bird was flapping with the different air densities, the high-altitude vultures were flapping no more than the birds nearer to sea level, so they weren’t changing their wingbeats to counteract the effects of low air density. Instead, it is likely that the birds at 2200 m were flying faster simply because thin air produces less drag to slow them down, allowing the Laramie vultures to reach the higher airspeeds they need to compensate for generating less lift in lower-density air.
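For readers who want the quick physics, here is a back-of-envelope sketch (not from the paper): holding lift constant in the standard lift equation implies that airspeed should scale with one over the square root of air density. The baseline airspeed below is an assumed representative low-altitude value from the reported range.

```python
# Back-of-envelope check, not the authors' analysis: if lift L = 0.5 * rho * v^2 * S * C_L
# must stay constant while air density rho drops, airspeed v must scale as 1/sqrt(rho).
import math

rho_chapel_hill = 1.227   # kg/m^3, air density quoted for Chapel Hill (~80 m)
rho_laramie = 0.89        # kg/m^3, air density quoted for Laramie (~2200 m)
v_low = 9.0               # m/s, assumed low-altitude airspeed from the reported 8.7-13.24 m/s range

v_high = v_low * math.sqrt(rho_chapel_hill / rho_laramie)
print(f"Predicted Laramie airspeed: {v_high:.1f} m/s (+{v_high - v_low:.1f} m/s)")
# Full compensation at a constant lift coefficient predicts roughly +1.5 m/s;
# the observed ~1 m/s difference is of the same order of magnitude.
```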

 


REFERENCE: Rader, J. A. and Hedrick, T. L. (2024). Turkey vultures tune their airspeed to changing air density. J. Exp. Biol. 227, jeb246828. doi:10.1242/jeb.246828

DOI: 10.1242/jeb.246828


Saturday, August 03, 2024

Humans Should Teach AI How To Avoid Nuclear War—While They Still Can

August 1, 2024
Source: Bulletin of the Atomic Scientists

Image: Mike MacKenzie, “Artificial Intelligence & AI & Machine Learning,” Flickr.



When considering the potentially catastrophic impacts of military applications of Artificial Intelligence (AI), a few deadly scenarios come to mind: autonomous killer robots, AI-assisted chemical or biological weapons development, and the 1983 movie WarGames.

The film features a self-aware AI-enabled supercomputer that simulates a Soviet nuclear launch and convinces US nuclear forces to prepare for a retaliatory strike. The crisis is only partly averted because the main (human) characters persuade US forces to wait for the Soviet strike to hit before retaliating. It turns out that the strike was intentionally falsified by the fully autonomous AI program. The computer then attempts to launch a nuclear strike on the Soviets without human approval until it is hastily taught about the concept of mutually assured destruction, after which the program ultimately determines that nuclear war is a no-win scenario: “Winner: none.”

US officials have stated that an AI system would never be given US nuclear launch codes or the ability to take control over US nuclear forces. However, AI-enabled technology will likely become increasingly integrated into nuclear targeting and command and control systems to support decision-making in the United States and other nuclear-armed countries. Because US policymakers and nuclear planners may use AI models in conducting analyses and anticipating scenarios that may ultimately influence the president’s decision to use nuclear weapons, the assumptions under which these AI-enabled systems operate require closer scrutiny.

Pathways for AI integration. The US Defense Department and Energy Department already employ machine learning and AI models to make calculation processes more efficient, including for analyzing and sorting satellite imagery from reconnaissance satellites and improving nuclear warhead design and maintenance processes. The military is increasingly forward-leaning on AI-enabled systems. For instance, it initiated a program in 2023 called Stormbreaker that strives to create an AI-enabled system called “Joint Operational Planning Toolkit” that will incorporate “advanced data optimization capabilities, machine learning, and artificial intelligence to support planning, war gaming, mission analysis, and execution of all-domain, operational level course of action development.” While AI-enabled technology presents many benefits for security, it also brings significant risks and vulnerabilities.

One concern is that the systemic use of AI-enabled technology and an acceptance of AI-supported analysis could become a crutch for nuclear planners, eroding human skills and critical thinking over time. This is particularly relevant when considering applications for artificial intelligence in systems and processes such as wargames that influence analysis and decision-making. For example, NATO is already testing and preparing to launch an AI system designed to assist with operational military command and control and decision-making by combining an AI wargaming tool and machine learning algorithms. Even though it is still unclear how this system will impact decision-making led by the United States, the United Kingdom, and NATO’s Nuclear Planning Group concerning US nuclear weapons stationed in Europe, this type of AI-powered analytical tool would need to consider escalation factors inherent to nuclear weapons and could be used to inform targeting and force structure analysis or to justify politically motivated strategies.

The role given to AI technology in nuclear strategy, threat prediction, and force planning can reveal more about how nuclear-armed countries view nuclear weapons and nuclear use. Any AI model is programmed under certain assumptions and trained on selected data sets. This is also true of AI-enabled wargames and decision-support systems tasked with recommending courses of action for nuclear employment in any given scenario. Based on these assumptions and data sets alone, the AI system would have to assist human decision-makers and nuclear targeters in estimating whether the benefits of nuclear employment outweigh the cost and whether a nuclear war is winnable.

Do the benefits of nuclear use outweigh the costs? Baked into the law of armed conflict is a fundamental tension between any particular military action’s gains and costs. Though fiercely debated by historians, the common understanding of the US decision to drop two atomic bombs on Japan in 1945 demonstrates this tension: an expedited victory in East Asia in exchange for hundreds of thousands of Japanese casualties.

Understanding how an AI algorithm might weigh the benefits and costs of escalation depends on how it integrates the country’s nuclear policy and strategy. Several factors contribute to one’s nuclear doctrine and targeting strategy—ranging from fear of consequences of breaking the tradition of non-use of nuclear weapons to concern of radioactive contamination of a coveted territory and to sheer deterrence because of possible nuclear retaliation by an adversary. While strategy itself is derived from political priorities, military capabilities, and perceived adversarial threats, nuclear targeting incorporates these factors as well as many others, including the physical vulnerability of targets, overfly routes, and accuracy of delivery vehicles—all aspects to further consider when making decisions about force posture and nuclear use.

In the case of the United States, much remains classified about its nuclear decision-making and cost analysis. It is understood that, under guidance from the president, US nuclear war plans target the offensive nuclear capabilities of certain adversaries (both nuclear and non-nuclear armed) as well as the infrastructure, military resources, and political leadership critical to post-attack recovery. But while longstanding US policy has maintained to “not purposely threaten civilian populations or objects” and “not intentionally target civilian populations or targets in violation of [the law of armed conflict],” the United States has previously acknowledged that “substantial damage to residential structures and populations may nevertheless result from targeting that meets the above objectives.” This is in addition to the fact that the United States is the only country to have used its nuclear weapons against civilians in war.

There is limited public information with which to infer how an AI-enabled system would be trained to consider the costs of nuclear detonation. Certainly, any plans for nuclear employment are determined by a combination of mathematical targeting calculations and subjective analysis of social, economic, and military costs and benefits. An AI-enabled system could improve some of these analyses in weighing certain military costs and benefits, but it could also be used to justify existing structures and policies or further ingrain biases and risk acceptance into the system. These factors, along with the speed of operation and innate challenges in distinguishing between data sets and origins, could also increase the risks of escalation—either deliberate or inadvertent.

Is a nuclear war “winnable”? Whether a nuclear war is winnable depends on what “winning” means. Policymakers and planners may define winning as merely the benefits of nuclear use outweighing the cost when all is said and done. When balancing costs and benefits, the benefits need only be one “point” higher for an AI-enabled system to deem the scenario a “win.”

In this case, “winning” may be defined in terms of national interest without consideration of other threats. A pyrrhic victory could jeopardize national survival immediately following nuclear use and still be considered a win by the AI algorithm. Once a nuclear weapon has been used, it could either lead an AI system not to recommend further nuclear use or, on the contrary, to recommend using nuclear weapons on a broader scale to eliminate remaining threats or to preempt further nuclear strikes.
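As a concrete (and deliberately naive) illustration, the toy below is a hypothetical sketch, not a model of any real decision-support system: it scores a scenario as a “win” whenever estimated benefits exceed estimated costs by any margin, which is exactly how a pyrrhic outcome can still be labeled a win.

```python
# Hypothetical toy, not any real system: a bare cost-benefit rule with a zero-margin threshold.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    estimated_benefit: float   # planner-assigned "points"
    estimated_cost: float

def naive_recommendation(s: Scenario) -> str:
    # Benefits need only be one "point" higher than costs to count as a "win".
    return "win" if s.estimated_benefit > s.estimated_cost else "no-win"

scenarios = [
    Scenario("limited use, adversary backs down", 100.0, 60.0),
    Scenario("pyrrhic exchange, national survival at risk", 61.0, 60.0),
    Scenario("full exchange", 10.0, 1000.0),
]
for s in scenarios:
    print(f"{s.name}: {naive_recommendation(s)}")
# The first two both score as "win", even though the second is pyrrhic: the rule has no
# notion of long-term, societal, or climatic costs unless they are folded into estimated_cost.
```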

“Winning” a nuclear war could also be defined in much broader terms. The effects of nuclear weapons go beyond the immediate destruction within their blast radius; there would be significant societal implications from such a traumatic experience, including potential mass migration and economic catastrophe, in addition to dramatic climatic damage that could result in mass global starvation. Depending on how damage is calculated and how much weight is placed on long-term effects, an AI system may determine that a nuclear war itself is “unwinnable” or even “unbearable.”

Uncovering biases and assumptions. The question of costs and benefits is relatively uncontroversial in that all decision-making involves weighing the pros and cons of any military option. However, it is still unknown how an AI system will weigh these costs and benefits, especially given the difficulty of comprehensively modeling all the effects of nuclear weapon detonations. At the same time, the question of winning a nuclear war has long been a thorn in the side of nuclear strategists and scholars. All five nuclear-weapon states confirmed in 2022 that “a nuclear war cannot be won and must never be fought.” For them, planning to win a nuclear war would be considered inane and, therefore, would not require any AI assistance. However, deterrence messaging and discussion of AI applications for nuclear planning and decision-making illuminate the belief that the United States must be prepared to fight—and win—a nuclear war.

The use of AI-assisted nuclear decision-making has the potential to reveal and exacerbate the biases and beliefs of policymakers and strategists, including the oft-disputed idea that nuclear war can be won. AI-powered analysis incorporated into nuclear planning or decision-making processes would operate on assumptions about the capabilities of nuclear weapons as well as their estimated costs and benefits, in the same way that targeters and planners have done for generations. Some of these assumptions could include missile performance, accurate delivery, radiation effects, adversary response, and whether nuclear arms control or disarmament is viable.

Not only are there risks of inherent bias in AI systems, but this technology can be purposely designed with bias. Nuclear planners have historically underestimated the damage caused by nuclear weapons in their calculations, so an AI system fed that data to make recommendations could also systemically underestimate the costs of nuclear employment and the number of weapons needed for targeting purposes. There is also a non-zero chance that nuclear planners poison the data so that an AI program recommends certain weapons systems or strategies.

During peace time, recommendations based on analysis by AI-enabled systems could also be used as part of justifying budgets, capabilities, and force structures. For example, an AI model that is trained on certain assumptions and possibly underestimates nuclear damage and casualties may recommend increasing the number of deployed warheads, which will be legally permissible after New START—the US-Russian treaty that limits their deployed long-range nuclear forces—expires in February 2026. The inherent trust placed in computers by their users is also likely to provide undue credibility to AI-supported recommendations, which policymakers and planners could use to veil their own preferences behind the supposed objectivity of a computer’s outputs.

Despite this heavy skepticism, advanced AI/machine learning models could still potentially provide a means of sober calculation in crisis scenarios, where human decision-making is often clouded, rushed, or falls victim to fallacies. However, this requires that the system has been fed accurate data, has been shaped by frameworks that support good-faith analysis, and is used with an awareness of its limitations. Rigorous training on nuclear strategy for the “humans in the loop,” as well as on methods for interpreting AI-generated outputs—that is, considering all their limitations and embedded biases—could also help mitigate some of these risks. Finally, it is essential that governments practice and promote transparency concerning the integration of AI technology into their military systems and strategic processes, as well as the structures in place to prevent deception, cyberattacks, disinformation, and bias.

Human nature is nearly impossible to predict, and escalation is difficult to control. Moreover, there is arguably little evidence to support claims that any nuclear employment could control or de-escalate a conflict. Highlighting and addressing potential bias in AI-enabled systems is critical for uncovering assumptions that may deceive users into believing that a nuclear war can be won and for maintaining the well-established ethical principle that a nuclear war should never be fought.

Editor’s note: The views expressed in this article are those of the authors and do not necessarily represent the views of the US State Department.


Eliana Johns

Eliana Johns, née Reynolds, is a senior research associate for the Nuclear Information Project at the Federation of American Scientists, where she researches the status and trends of global nuclear forces and the role of nuclear weapons. Johns is also an upcoming master’s student at Georgetown University’s Center for Security Studies where she will concentrate on the intersection between technology and security. Previously, Johns worked as a project associate for DPRK Counterproliferation at CRDF Global, focusing on WMD nonproliferation initiatives to curb North Korea’s ability to gain revenue to build its weapons programs. Johns graduated with her bachelor’s in political science with minors in Music and Korean from the University of Maryland, Baltimore County (UMBC).