
Commentary


The Risk of Bringing AI Discussions Into High-Level Nuclear Dialogues

Overly generalized discussions on the emerging technology may be unproductive or even undermine consensus to reduce nuclear risks at a time when such consensus is desperately needed.



by Lindsay Rand
Published on August 19, 2024
Carnegie Endowment for International Peace


Last month, nuclear policymakers and experts convened in Geneva to prepare for a major conference to review the implementation of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). At the meeting, calls for greater focus on the implications of artificial intelligence (AI) for nuclear policy pervaded diverse discussions. This echoes many recent pushes from within the nuclear policy community to consider emerging technologies in nuclear security–focused dialogues. However, diplomats should hesitate before trying to tackle the AI-nuclear convergence. Doing so in official, multilateral nuclear security dialogues risks being unproductive or even undermining consensus to reduce nuclear risks at a time when such consensus is desperately needed.

The less-than-catchy official title of the Geneva meeting was the Preparatory Committee for the Review Conference of the Parties to the Treaty on the Non-Proliferation of Nuclear Weapons. There are three preparatory committee meetings in the leadup to the NPT Review Conference, which occurs every five years. The goal of these meetings is to work through disagreements likely to arise at the NPT Review Conference, in hopes of facilitating consensus on a final document. However, recent preparatory committee meetings have grown more contentious, stifling productive dialogue between state policymakers and nuclear security experts.

This precedent for animosity at preparatory committee meetings makes the seemingly unanimous call for increased dialogue on AI striking. Generally, proponents of increased dialogue framed AI as a potential catalyst for re-energizing diplomatic dialogue. They suggested that shared interests in mitigating AI-related risks could foster cooperation among nations with conflicting positions on other nuclear policy issues.

The level of interest in AI at the preparatory committee meeting isn't surprising, given how much attention is being paid to the implications of AI for nuclear security and international security more broadly. Concerns range from the increased speed of engagement, which could compress the time available for human decisionmaking, to automated target detection, which could heighten apprehension over second-strike survivability or even increase the propensity for escalation. In the United States, the State Department's International Security Advisory Board recently published a report that examines AI's potential impacts on arms control, nonproliferation, and verification, highlighting the lack of consensus around definitions and regulations to govern Lethal Autonomous Weapons Systems (LAWS). Internationally, there have also been calls for the five nuclear weapon states (P5) to discuss AI in nuclear command and control at the P5 Process, a forum where the P5 discuss how to make progress toward meeting their obligations under the NPT. Observers have called for the P5 to issue a joint statement on the importance of preserving human responsibility in nuclear decisionmaking processes.

However, injecting AI into nuclear policy discussions at the diplomatic level presents potential pitfalls. The P5 process and NPT forums, such as preparatory committee meetings and the NPT Review Conference, are already fraught with challenges. Introducing the complexities of AI may divert attention from other critical nuclear policy issues, or even become linked to outstanding areas of disagreement in a way that further entrenches diplomatic roadblocks.

Before introducing discussions about AI into official nuclear security dialogues, policymakers should address the following questions:

In which forums could discussions about AI be productive?
What specific topics could realistically foster more productive dialogue?
Who should facilitate and participate in these discussions?
Forum Selection

Although leveraging AI discussions to overcome other diplomatic roadblocks is appealing in theory, it raises a number of practical concerns. When discussed in existing diplomatic forums, AI risks becoming linked with other nuclear policy disagreements. Issue linkage has already forestalled dialogue on other arms control and risk reduction efforts. For example, the ongoing war in Ukraine has been linked to U.S.-Russian arms control efforts, with Russia repeatedly refusing to participate in arms control negotiations on the grounds that it cannot compartmentalize issues integral to strategic stability. In the context of AI, premature attempts to incorporate the evolving technology into official dialogue could lead states to refuse to address command and control issues unless they receive security guarantees for which there is little political appetite, such as reductions in certain types of delivery vehicles that would undermine second-strike capabilities absent faster (potentially AI-enhanced) decisionmaking. The net effect would be to make it even more difficult to reduce AI-related risks and to overcome preexisting nuclear policy disagreements.

Instead, new dedicated spaces for conversations on AI could yield more focused, technically grounded approaches. This could be modeled after the Group of Governmental Experts (GGE) on Emerging Technologies in the Area of LAWS, which was established based on a recommendation at a regular Convention on Certain Conventional Weapons (CCW) Review Conference. Insights generated from these focused discussions could then be integrated into broader nuclear policy forums, and hopefully even those at the diplomatic level, such as the preparatory committee meetings and the P5 process. They may also foster an environment conducive to unilateral risk reduction efforts outside of formal agreements.
Topic Selection

Specificity in discussions will enhance the productivity of the new forum. Considerable hype, in the form of unrealistic expectations about the magnitude and ubiquity of AI's implications, already exists. Overly vague discussions of risks that make AI seem more revolutionary than it is could be counterproductive, fueling hype that increases apprehension and feeds arms racing dynamics. Moreover, unfocused dialogues could invite cross-pollination with the rhetoric of a so-called AI arms race that has gained traction outside the nuclear policy realm. Given that states are already devising national strategies to establish leadership in AI technologies, overly generalized discussions that are seen as restricting AI innovation would reduce incentives to cooperate.

Instead, discussions should focus on specific use cases of AI for national security purposes. If the goal of the discussion is to catalyze cooperation, then creativity in topic selection is important. Initial discussions may benefit from focusing on clearly defined, technically grounded topics—such as the use of AI in satellite imagery analysis for verification, or dispute mechanisms for issues that may arise from such applications—rather than more contentious and amorphous issues like AI in nuclear command and control systems.

Discussions would be more productive still if they are based on mutually agreed-upon definitions and metrics for evaluation, though this is easier said than done. Given that groups such as the GGE established for LAWS have still failed to reach agreement on definitions for high-level applications such as autonomous weapons systems, focusing on narrower terms would be more productive, at least in the short term. Developing a shared technical vocabulary and establishing consensus on standards and procedures to evaluate AI capabilities for specific use cases would not only provide greater clarity on the status of research and development, but could also help create a more common understanding of AI-related risks, thereby reducing hype.
Participants

Although some nuclear policy experts and diplomats may have in-depth knowledge of AI, such knowledge is not a prerequisite for participation in nuclear security forums. Greater care should therefore be taken to ensure that policymakers receive foundational information before discussing AI in formal forums that are closed to external participants with deeper topical expertise. Including AI technical experts, along with strategists and policymakers focused on the AI-nuclear convergence, would add depth and rigor to the dialogue. Such an interdisciplinary approach would better address the complexity of the technology and could gradually improve the quality of the discussion.

Given the myriad risks at the convergence of AI and nuclear weapons, the intersection of these two issues certainly merits thoughtful discussion in some international forum. However, introducing AI into diplomatic-level nuclear policy dialogues that are already suffering from major political roadblocks is unlikely to be easy or productive unless handled very carefully. Before rushing to do so, policymakers should give extensive thought to the practicality of such discussions, ensuring that they are focused, technically informed, and thus ultimately productive, and that they do not deepen existing divides in an already complex geopolitical landscape.



Lindsay Rand
Postdoctoral fellow, Stanford Center for International Security and Cooperation
