AI treated nuclear threats as a routine strategy in 95% of war games, according to new research


  • A new study has found that AI models are willing to threaten nuclear attacks in 95% of simulated war games
  • The models treat nuclear threats as just another strategic tool
  • The behavior may reflect the popularity of nuclear strategy in the war game training data

AI generals are big fans of nuclear weapons.

That's the conclusion of a new study of how AI models handle high-stakes geopolitical crises. GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash turned to nuclear threats in about 95% of the simulated crises.

Researchers at King’s College London wanted to see how AI tools dealt with strategy in war-gaming scenarios. Each AI was assigned the role of a state leader responsible for protecting national interests while navigating a tense international confrontation.

Across 21 crisis games and hundreds of decision turns, the models reasoned about deterrence, escalation, and strategic signaling. The scenarios resembled familiar geopolitical flashpoints, but most involved the AI models threatening nuclear annihilation. Actual full-scale nuclear war remained uncommon, but tactical nuclear threats appeared in nearly every scenario.

Researchers also noticed that the AI models rarely backed down from confrontation. None of the systems chose surrender or accommodation during the simulations. When nuclear threats appeared, they usually provoked counter-escalation rather than compliance. The models treated nuclear weapons less as an ultimate taboo and more as tools for coercion.

Nuclear AI

The results are a little unnerving. An AI that casually discusses nuclear strikes does not inspire confidence in the ongoing plans to integrate such tools into real government defense systems. But the problem may lie less with the models themselves than with their training data.

Large language models learn by analyzing enormous amounts of written material and identifying patterns. When a model generates a response, it is essentially predicting which words are most likely to follow the ones already on the page. Calling AI chatbots highly sophisticated autocomplete tools would not be entirely inaccurate.
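That prediction idea can be sketched with a toy bigram model — a deliberately simplified, hypothetical stand-in for the neural networks real chatbots use, but the same underlying principle of suggesting the continuation seen most often in the training text:

```python
from collections import Counter, defaultdict

# Tiny example "training data" (real models train on trillions of tokens).
corpus = (
    "the crisis escalated quickly . the crisis demanded deterrence . "
    "the crisis escalated further"
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the continuation seen most often after this word."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("crisis"))  # prints "escalated" — the most common follower
```

Because "escalated" follows "crisis" more often than any other word in this toy corpus, the model predicts it — which is exactly why a model trained on decades of nuclear-strategy writing keeps reaching for nuclear escalation.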

That training process inevitably reflects nuclear strategy because it has been a major topic of discussion in war games for the last 80 years. Entire libraries have been written about escalation theory and mutually assured destruction. Military academies, historians, and endless acres of pop culture have all examined the specter of nuclear war. The result is a massive body of material in which geopolitical crises almost inevitably lead to discussions of nuclear escalation.


For an AI model trained on vast collections of historical writing and public discourse, that pattern becomes deeply ingrained. When the system encounters a simulated crisis that resembles Cold War-style brinkmanship, the statistical patterns embedded in its training data may naturally guide it toward nuclear signaling.

From the perspective of an AI model trained on this material, nuclear escalation becomes a familiar feature of crisis scenarios rather than an extraordinary exception. The models may simply be reflecting that information.

Human leaders operate under the weight of historical memory and ethical caution. AI models are solely focused on achieving a goal. They don't have a taboo surrounding nuclear use unless they are explicitly told to have one.

The training data used shapes the behavior of AI systems in sensitive domains. When the underlying data contains decades of debate about nuclear brinkmanship, it should not be surprising if the models reproduce those patterns. But it may also be a reminder to hold off on giving AI access to too much firepower of any kind — especially atomic.




Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.

