The views expressed in this post are those of the author and not necessarily those of Open Nuclear Network or any other agency, institution, or partner.
Introduction
The global conversation around AI’s role in military systems, particularly those influencing nuclear decision-making, is gaining momentum. This was underscored by the joint declaration between Presidents Joe Biden and Xi Jinping emphasising the need to “maintain human control over nuclear decision-making,”[1] which mirrors language from an earlier joint statement by the US, UK and France.[2] As Trump takes office, the trajectory of this emerging dialogue on AI and nuclear decision-making remains uncertain.
Within the new administration, opinions on AI vary among the President’s closest advisors, though one point of consensus is clear: AI is seen as a critical factor in the strategic competition with China. Maintaining US technological primacy through AI has thus become a key strategic priority. The urgency of this priority became even clearer in late January 2025, when the Chinese AI company DeepSeek released its latest model, R1, which matches OpenAI’s o1 in capability at a fraction of the cost.[3] DeepSeek’s breakthrough sent shockwaves through the stock market, causing a sudden and significant decline.[4] Even more surprisingly, the breakthrough was achieved despite US export controls on the advanced computer chips used to train AI models, giving China a significant boost in the AI arms race. This raises questions about how the new administration will craft policies to curb China’s rapid AI advancements.
As the debate over AI and nuclear weapons continues to unfold and new forums for multilateral and bilateral discussions emerge, it is crucial to explore how the new administration can leverage AI’s potential while managing the significant risks it presents.
The rising role of AI in systems impacting nuclear decisions
Though much of the conversation around AI in nuclear decision-making remains speculative due to the classified nature of these systems and the lack of official documentation, recent statements from US officials offer some insight into areas where AI might be integrated.[5] This consideration comes at a time when the US is modernising its aging nuclear arsenals and seeking technological innovations to maintain a strategic edge.[6]
The areas where AI integration holds the most promise are situational awareness, early threat detection and decision support. The technology could automate data collection from various sensors, streamline data processing and speed up the sharing of critical information with allies, ultimately supporting faster, more informed decision-making. In theory, AI’s ability to analyse vast quantities of data from multiple sources makes it well suited to identifying threats, processing sensor data and evaluating potential courses of action.[7]
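To make the data-fusion idea concrete, the sketch below shows how confidence scores from independent sensors might be combined into a single detection probability with a naive Bayes update. It is a purely illustrative toy, not a description of any fielded system: the sensor names, likelihoods and prior are invented assumptions.

```python
# Hypothetical sketch: fusing independent sensor alerts into one detection
# probability via a naive Bayes (odds-ratio) update. All numbers are
# illustrative assumptions, not parameters of any real early-warning system.

def fuse_detections(prior: float, sensors: dict[str, tuple[float, float]]) -> float:
    """Return the posterior probability that a genuine threat is present.

    prior   -- baseline probability of a threat before any alerts.
    sensors -- name -> (P(alert | threat), P(alert | no threat));
               assumes every listed sensor has alerted and errs independently.
    """
    odds = prior / (1.0 - prior)  # convert prior probability to odds
    for p_alert_threat, p_alert_clear in sensors.values():
        odds *= p_alert_threat / p_alert_clear  # multiply in each Bayes factor
    return odds / (1.0 + odds)  # convert odds back to a probability


if __name__ == "__main__":
    readings = {  # hypothetical sensors with assumed hit/false-alarm rates
        "radar": (0.90, 0.05),
        "infrared_satellite": (0.80, 0.10),
        "network_sensor": (0.60, 0.20),
    }
    posterior = fuse_detections(prior=0.001, sensors=readings)
    print(f"Posterior detection probability: {posterior:.4f}")  # ~0.30
```

Even with three concurrent alerts, the fused probability in this toy example stays well below certainty, one illustration of why human judgment over alert thresholds remains indispensable.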
However, if AI is rushed into integration without proper safeguards, its deployment could increase the risk of unintended nuclear escalation. These risks range from AI-driven decisions that outpace human oversight to errors that spread undetected through complex systems.[8] AI’s inherent unreliability, its tendency to generate erroneous outputs, its lack of transparency, its vulnerability to cyberattacks and the difficulty of aligning its outputs with human values all complicate safe and reliable integration into nuclear systems.[9]
With these risks in mind, it is clear that the process of integrating AI into nuclear systems is far from straightforward. In theory, AI could support nuclear command, control and communications (NC3) or related subsystems that inform decision-making. However, the entanglement between conventional and nuclear systems means that the ongoing integration of AI into conventional military applications will inevitably influence nuclear decision-making.[10] The full consequences of this integration are still difficult to assess due to the rapid pace of technological advancements and gaps in research.[11]
On that front, we are seeing a significant shift in how major AI labs in Silicon Valley engage with the defence sector. Many of these companies, which had previously distanced themselves from military applications of cutting-edge AI technologies, are now forming strategic partnerships with defence companies. For instance, in early November 2024, Meta announced it would provide its language models to defence giants Lockheed Martin and Booz Allen, as well as defence tech firms such as Anduril and Palantir.[12] Just days later, Palantir teamed up with Anthropic and Amazon Web Services to use Anthropic’s large language model, Claude, for military operations.[13] Then, on December 4, 2024, OpenAI announced a partnership with Anduril to develop advanced AI solutions aimed at enhancing US counter-unmanned aircraft capabilities.[14]
These partnerships focus on conventional warfare, but given the close relationship between conventional and nuclear systems, the military applications of AI will inevitably affect the nuclear domain. The new US President has an opportunity to focus on de-risking AI systems to prevent inadvertent nuclear escalation.
Forecasting the evolution of the new administration’s AI policy
Trump’s perspective on AI has shifted between awe at its potential and concern over its implications. He has called AI’s capabilities “alarming” but, most importantly, he sees the technology as a critical battleground in the race for global technological dominance, particularly in relation to China.[15] This view aligns with a report issued by the US-China Economic and Security Review Commission in November 2024, which called for a “Manhattan Project-like program” focused on acquiring artificial general intelligence capabilities and for AI to be treated as a “national priority.”[16]
Accordingly, Trump views the restrictions on AI imposed by the previous administration as barriers to innovation and, consequently, a threat to US global leadership in AI technology. On his first day in office, President Trump repealed Biden’s 2023 Executive Order on AI, fulfilling a campaign promise.[17] That order had sought to establish guidelines for the responsible use of AI in the federal government, focusing on mitigating risks such as bias and ensuring accountability.[18] Days later, on his fourth day in office, Trump signed a new Executive Order on AI that sets out to revise “all policies, directives, regulations, orders, and other actions taken under the Biden AI order”.[19]
A prime example of Trump’s “AI primacy” agenda is the recently announced Stargate AI infrastructure project, a new initiative created through a partnership involving OpenAI, Oracle, SoftBank and MGX. Stargate seeks to invest 500 billion USD in private funding over four years to build AI infrastructure. The project will focus on constructing data centres and electricity-generation facilities, with initial developments already underway in Texas.[20] President Trump, who announced the project, described Stargate as the “largest AI infrastructure project by far in history.” OpenAI CEO Sam Altman echoed this sentiment, calling Stargate “the most important project of this era.”[21]
The administration’s focus on AI investments reflects its determination to stay ahead in the global AI race, particularly against China. However, the day before the Stargate announcement, the Chinese company DeepSeek released its latest model, R1, which matches OpenAI’s most advanced released model, o1, in capability at a fraction of the cost, complete with open-source model weights and a paper detailing how it was trained.[22] This breakthrough caught many by surprise, not only because DeepSeek achieved state-of-the-art AI performance with significantly lower investment than American AI companies, but also because it happened despite US export controls on advanced AI chips, which were specifically designed to curb China’s progress in AI development.
China’s unexpected leap in AI technology triggered a massive drop in US stocks, with Nvidia, the leading supplier of AI chips, losing nearly 600 billion USD in market value in a single day. The rollout of the DeepSeek model has reportedly led investors to question the competitive advantage of US tech firms and the vast sums being spent on AI development.[23] In response, US companies like Meta are now investigating how DeepSeek managed to reduce the costs of training and running its model, with the goal of applying similar strategies to their own AI systems.[24]
Reacting to China’s advancements, President Trump called the release of the DeepSeek model a “wake-up call for our industries” and stressed the need for the US to be “laser-focused on competing to win.”[25] Although this signals a renewed push for AI dominance, the development raises critical questions about the effectiveness of US export controls on AI chips, a policy Trump first introduced in his initial term and the Biden administration later expanded. Given Trump’s history of hardline trade policies against China, even stricter restrictions are likely, but DeepSeek’s success may force a strategic reassessment of how the US approaches AI-related sanctions.[26]
Despite Trump’s aggressive push for AI leadership, his administration’s approach to AI regulation remains uncertain due to internal divisions among key advisors. On one side, Vice President J.D. Vance and other senior officials favour rapid AI development, seeing regulation as a barrier to innovation that stifles competition and hinders tech startups.[27]
On the other hand, Elon Musk, who has been appointed to lead the new Department of Government Efficiency, has raised alarms about AI’s potential existential risks and has advocated for regulation.[28] In early 2023, Musk co-signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4 so that their ethical implications could be carefully considered.[29] In line with this stance, Musk has openly criticised OpenAI and its CEO, Sam Altman, filing a lawsuit against Altman, OpenAI and Microsoft that accuses them of transforming OpenAI into a “fully for-profit subsidiary of Microsoft” and prioritising profits over the public good in its pursuit of AI advancement.[30]
This divide within Trump’s circle—between those pushing for rapid innovation and those advocating caution—makes it harder to predict what direction this administration’s AI regulatory approach will ultimately take. What is most likely, however, is that the current administration’s AI policy will be shaped almost entirely by the goal of maintaining American technological primacy in an era of growing competition with China.
Moving forward
Translating this to the AI-nuclear debate, the Trump administration will have a key role in shaping the direction of emerging bilateral and multilateral initiatives. Driven by a strong focus on AI as a strategic asset, the administration can drive innovation while ensuring that AI’s integration into military and nuclear systems is carefully managed. This moment offers great potential for the new US President to lead AI innovation in a way that strengthens national security while mitigating strategic risks. In high-stakes domains such as nuclear decision-making, a risk-informed approach is essential to ensuring that AI’s benefits are realised without introducing new vulnerabilities. Without proper safeguards, the accelerated deployment of AI driven by strategic competition could increase the risk of unintended escalation, particularly given the entanglement of nuclear and conventional systems, where AI integration is already underway but the escalation pathways remain poorly understood.[31]
Trump’s focus on technological leadership may spur further investment in AI for defence, but the absence of clear regulatory frameworks leaves AI applications vulnerable to technical failures or malicious interference. While AI integration might offer significant advantages in isolated applications, it must be pursued with caution, as even marginal errors could have catastrophic consequences, particularly in the nuclear context. Focusing only on the existential risks posed by future AI capabilities, such as artificial general intelligence—as Elon Musk has warned—overlooks the more immediate risks from current AI systems already being integrated into conventional military applications. These systems could lead to inadvertent nuclear escalation if not carefully managed.
The National Institute of Standards and Technology (NIST) has developed a companion resource to its AI Risk Management Framework that addresses risks specifically exacerbated by generative AI, including those related to chemical, biological, radiological and nuclear (CBRN) threats.[32] While this framework is designed to help organisations manage AI risks, it also offers a model for how the Trump administration could handle AI-nuclear risks. Probabilistic risk-assessment techniques could help quantify the AI points of failure that might lead to escalation, ensuring AI systems do not breach safety thresholds beyond which catastrophic outcomes become possible.[33]
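As a purely illustrative sketch of the kind of probabilistic risk assessment alluded to above (not the NIST framework itself, and with entirely assumed component names and failure rates), a Monte Carlo fault-tree estimate of the probability that at least one AI-enabled component fails during a crisis could look like this:

```python
# Illustrative Monte Carlo fault-tree sketch. The component names and
# per-incident failure probabilities are assumptions for demonstration,
# not assessed values for any real system.
import random

ASSUMED_FAILURE_PROBS = {
    "sensor_fusion_model": 0.002,     # misclassifies a benign event as a threat
    "decision_support_model": 0.010,  # produces a confidently wrong recommendation
    "comms_summarizer": 0.005,        # drops or distorts a critical detail
}

def incident_has_failure(rng: random.Random) -> bool:
    """Simulate one incident; True if any component fails."""
    return any(rng.random() < p for p in ASSUMED_FAILURE_PROBS.values())

def estimate_failure_probability(trials: int = 200_000, seed: int = 0) -> float:
    """Estimate P(at least one component fails) by simulation."""
    rng = random.Random(seed)
    failures = sum(incident_has_failure(rng) for _ in range(trials))
    return failures / trials

if __name__ == "__main__":
    estimate = estimate_failure_probability()
    # Analytic check: 1 - (1 - 0.002) * (1 - 0.010) * (1 - 0.005) ≈ 0.0169
    print(f"Estimated probability of at least one failure: {estimate:.4%}")
```

A threshold comparison would then ask whether such an estimate sits below an agreed safety margin before a system is cleared for deployment; the roughly 1.7% figure in this toy example would plainly be unacceptable in a nuclear-relevant context.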
Avoiding inadvertent escalation is a shared global priority and aligns with Trump’s inaugural address, in which he vowed to be remembered as a peacemaker, measuring success by “the wars we (i.e., the US) never get into.”[34] In line with this vision, risk assessments are not about slowing down technological progress, but about helping to define high-risk applications and ensuring that safety measures are in place. This research is necessary to understand how to maximise AI’s benefits without crossing safety thresholds. Given the high stakes of unintended escalation and the global consequences of nuclear conflict, the US must continue leading multilateral efforts to manage these risks. Engaging in forums like the REAIM Summit, maintaining bilateral discussions with China and initiating multilateral dialogues at the P5 level will be essential for fostering international cooperation and ensuring that AI in nuclear decision-making systems is managed safely and responsibly.
Alice Saltini is a Non-Resident Expert on AI at the James Martin Center for Nonproliferation Studies (CNS), specializing in the impact of AI on nuclear decision-making. She advises governments and international organizations on managing AI-related nuclear risks, focusing on mitigating the challenges of integrating AI into military and nuclear weapons systems by translating complex technical concepts into actionable policy insights. She has published extensively on military applications of AI and has developed a general-purpose risk assessment framework for analyzing AI and nuclear risks.
Contact: asaltini@middlebury.edu
[1] White House, “Readout of President Joe Biden’s Meeting With President Xi Jinping of the People’s Republic of China,” The White House, November 17, 2024, https://www.whitehouse.gov/briefing-room/statements-releases/2024/11/16/readout-of-president-joe-bidens-meeting-with-president-xi-jinping-of-the-peoples-republic-of-china-3/.
[2] “Principles and Responsible Practices for Nuclear Weapon States,” United Nations, July 29, 2022, https://documents.un.org/doc/undoc/gen/n22/446/53/pdf/n2244653.pdf.
[3] DeepSeek-AI, “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” January 2025, https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf.
[4] David Goldman and Matt Egan, “A shocking Chinese AI advancement called DeepSeek is sending US stocks plunging,” CNN, January 27, 2025, https://edition.cnn.com/2025/01/27/tech/deepseek-stocks-ai-china/index.html.
[5] Greg Hadley, “AI ‘Will Enhance’ Nuclear Command and Control, Says STRATCOM Boss,” Air & Space Forces Magazine, October 29, 2024, https://www.airandspaceforces.com/stratcom-boss-ai-nuclear-command-control/.
[6] Heather Williams, “Updating Nuclear Command, Control, and Communication,” Center for Strategic and International Studies, December 10, 2024, https://www.csis.org/analysis/updating-nuclear-command-control-and-communication.
[7] Alexa Wehsener et al., “AI-NC3 Integration in an Adversarial Context: Strategic Stability Risks and Confidence Building Measures,” Institute for Security and Technology, February 2023, https://securityandtechnology.org/virtual-library/reports/ai-nc3-integration-in-an-adversarial-context-strategic-stability-risks-and-confidence-building-measures/.
[8] Alice Saltini and Yanliang Pan, “Beyond Human-in-the-Loop: Managing AI Risks in Nuclear Command-and-Control," War on the Rocks, December 6, 2024, https://warontherocks.com/2024/12/beyond-human-in-the-loop-managing-ai-risks-in-nuclear-command-and-control/.
[9] Alice Saltini, “The Implications of AI in Nuclear Decision-making – 2024. Artificial Intelligence and Weapons of Mass Destruction,” June 28, 2024, https://rethinkingarmscontrol.org/papers/the-implications-of-ai-in-nuclear-decision-making/.
[10] Michael Klare and Xiaodon Liang, “Beyond a Human ‘In the Loop’: Strategic Stability and Artificial Intelligence,” Arms Control Association, November 13, 2024, https://www.armscontrol.org/issue-briefs/2024-011/beyond-the-loop.
[11] Alice Saltini, “AI And Nuclear Command, Control and Communications: P5 Perspectives,” European Leadership Network, November 13, 2023, https://europeanleadershipnetwork.org/report/ai-and-nuclear-command-control-and-communications-p5-perspectives/.
[12] Nick Clegg, “Open Source AI Can Help America Lead in AI and Strengthen Global Security,” Meta, November 4, 2024, https://about.fb.com/news/2024/11/open-source-ai-america-global-security/.
[13] “Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations,” Palantir, November 7, 2024, https://investors.palantir.com/news-details/2024/Anthropic-and-Palantir-Partner-to-Bring-Claude-AI-Models-to-AWS-for-U.S.-Government-Intelligence-and-Defense-Operations/.
[14] “Anduril Partners With OpenAI to Advance U.S. Artificial Intelligence Leadership and Protect U.S. and Allied Forces,” Anduril, December 4, 2024, https://www.anduril.com/article/anduril-partners-with-openai-to-advance-u-s-artificial-intelligence-leadership-and-protect-u-s/.
[15] Impaulsive, “The Donald Trump Interview - IMPAULSIVE EP. 418,” June 13, 2024, https://www.youtube.com/watch?v=xrFdHO7FH8w.
[16] “2024 Report to Congress of the U.S.-China Economic and Security Review Commission,” U.S. Government Publishing Office, November 2024, https://www.uscc.gov/sites/default/files/2024-11/2024_Annual_Report_to_Congress.pdf.
[17] White House, “Initial rescissions of harmful executive orders and actions,” The White House, January 20, 2025, https://www.whitehouse.gov/presidential-actions/2025/01/initial-rescissions-of-harmful-executive-orders-and-actions/.
[18] White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
[19] White House, “Fact Sheet: President Donald J. Trump Takes Action to Enhance America’s AI Leadership,” The White House, January 23, 2025, https://www.whitehouse.gov/fact-sheets/2025/01/fact-sheet-president-donald-j-trump-takes-action-to-enhance-americas-ai-leadership/.
[20] “Announcing the Stargate Project,” OpenAI, January 21, 2025, https://openai.com/index/announcing-the-stargate-project.
[21] “BREAKING: Trump—Flanked By Larry Ellison, Sam Altman, & Masayoshi Son—Announces Project Stargate,” YouTube (Forbes Breaking News), January 21, 2025, https://www.youtube.com/watch?v=IYUoANr3cMo.
[22] DeepSeek-AI, “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” January 2025, https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf.
[23] David Goldman and Matt Egan, “A shocking Chinese AI advancement called DeepSeek is sending US stocks plunging,” CNN, January 27, 2025, https://edition.cnn.com/2025/01/27/tech/deepseek-stocks-ai-china/index.html.
[24] Marco Quiroz-Gutierrez, “Meta is reportedly scrambling ‘war rooms’ of engineers to figure out how DeepSeek’s AI is beating everyone else at a fraction of the price,” Fortune, January 27, 2025, https://fortune.com/2025/01/27/mark-zuckerberg-meta-llama-assembling-war-rooms-engineers-deepseek-ai-china/.
[25] “President Trump attends the House GOP Conference meeting,” YouTube (Fox Business), January 27, 2025, https://www.youtube.com/watch?v=AitXub2TE5s.
[26] Karen Freifeld and David Shepardson, “Latest US Clampdown on China’s Chips Hits Semiconductor Toolmakers,” Reuters, December 3, 2024, https://www.reuters.com/technology/latest-us-strike-chinas-chips-hits-semiconductor-toolmakers-2024-12-02/.
[27] Cecilia Kang, “J.D. Vance’s A.I. Agenda: Reduce Regulation,” The New York Times, July 17, 2024, https://www.nytimes.com/2024/07/17/technology/vance-ai-regulation.html.
[28] Walter Isaacson, “Inside Elon Musk’s Struggle for the Future of AI,” TIME, September 6, 2023, https://time.com/6310076/elon-musk-ai-walter-isaacson-biography/.
[29] Future of Life Institute, “Pause Giant AI Experiments: An Open Letter - Future of Life Institute,” March 22, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
[30] “Musk v. Altman,” United States District Court for the Northern District of California, November 14, 2024, https://fingfx.thomsonreuters.com/gfx/legaldocs/mopawwlwypa/Musk%20v%20Altman%20-%20amended%20-%2020241114.pdf.
[31] Alice Saltini and Yanliang Pan, “Beyond Human-in-the-Loop: Managing AI Risks in Nuclear Command-and-Control,” War on the Rocks, December 6, 2024, https://warontherocks.com/2024/12/beyond-human-in-the-loop-managing-ai-risks-in-nuclear-command-and-control/.
[32] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” July 2024, https://doi.org/10.6028/nist.ai.600-1.
[33] Alice Saltini and Yanliang Pan, “Beyond Human-in-the-Loop: Managing AI Risks in Nuclear Command-and-Control,” War on the Rocks, December 6, 2024, https://warontherocks.com/2024/12/beyond-human-in-the-loop-managing-ai-risks-in-nuclear-command-and-control/.
[34] “President Trump's Inaugural Address,” YouTube (US Department of State), January 20, 2025, https://www.youtube.com/watch?v=0WEC6Fl-JAw.