AI's Implications for Nuclear Weapons Systems

Summary

Philip Reiner, CEO of the Institute for Security and Technology, discussed the dual-edged impact of AI on cybersecurity and nuclear systems ahead of a Washington, DC event on the topic, noting AI's temporary boost to defenders in cyber warfare, its enhancement of offensive reconnaissance, and the potential risks of integrating AI into nuclear command, control, and communications (NC3) systems.

Session Transcript

As a former Pentagon employee, I find it rather odd that I'm the one that showed up without slides. I'm Phil Reiner. I’m the CEO of the Institute for Security and Technology. I'm going to go as fast as I possibly can because I've only got five minutes. I'm going to talk about a lot of stuff here today, but I look forward to talking with all of you about it more afterward.
There are two main areas for your consideration here today: first, what is happening with offensive cyber capabilities, a little bit along the lines of what Professor Song was talking about; and second, what happens when you start to integrate cutting-edge AI tools into very complex systems, namely nuclear command, control and communications systems.
First, we've conducted some research at the Institute for Security and Technology on what's actually happening in the offensive cyber domain. We currently assess that outside of things like phishing and deepfakes, the advantage really is trending toward the defender. This is actually unique. The defender's dilemma is one in which the defender is usually behind the offensive side, but we see that this advantage may not last for very long. That window is probably going to close relatively quickly.
We see that AI-enabled tools are enhancing attacker speed, scale, and effectiveness, as Dawn actually had up on her slide there in terms of reconnaissance. On the reconnaissance front, we see that attackers are more rapidly able to identify targets. We see that they are gathering intelligence much more efficiently. They are processing increasingly vast amounts of data without any human intervention. They are prioritizing targets better. They are gaining a deeper understanding of the attack surface, targeting more precisely, and identifying vulnerabilities faster. These are all things that are already happening, so this is not conjecture.
In terms of potential future capabilities, some of the stuff that we're seeing on the agentic front: the potential for autonomous agents to conduct end-to-end cyber operations, possible multi-agent systems working collaboratively, and AI agents that could maybe potentially train each other on malicious techniques, things like more advanced polymorphic malware and, again, enhanced technical reconnaissance capabilities.
Then second—I said the first category is the offensive cyber front. The second piece is: what challenges could arise from the integration of cutting-edge AI into these complex systems? I think this is a less well-understood set of questions, and a lot of work still needs to be done on this front. But we do know, as everyone here has already been talking about a bit today and as was talked about at IASEAI—I don't know how to pronounce that acronym; I don't know if anybody does really at this point—earlier this week, that these cutting-edge AI systems, including emergent agentic tools, exhibit vulnerabilities of their own: what appears to be an increasing ability to obfuscate their activities, the potential for miscoordination between agents, and even conflict between agents that have different goals.
I also came here to talk to you about nuclear command, control and communications. What is that? I think what often happens when conversation around this occurs is that it's always just about launching nuclear weapons. That's not the only thing these systems are for.
Nuclear command, control and communications comprises five critical functions: situation monitoring, planning, decision making, force direction, and force management. There are procedures, processes, and facilities, and all the equipment that entails. All of that is involved in things like surveillance, intelligence analysis, predictive analytics, providing warning, providing decision support to those that are actually going to launch those nuclear weapons, and all the communications infrastructure that's necessary in order to actually make that happen.
How does all of this tie together—the offensive cyber piece, the advanced AI being integrated, and nuclear command and control?
Today, the United States is very quickly advancing on its path to update its NC3 architecture, as are other nuclear weapons states. This update extends throughout all of the subsystems that make up everything I just described: the communications pieces, the subsystems that handle signals and warning, and everything else. The US has used AI in its NC3 systems before; this is not a novel thing. But beginning to integrate some of these cutting-edge AI tools introduces new vulnerabilities.
I think it's important to note that the USSTRATCOM commander, the general who is in charge of Strategic Command, has publicly said that they are exploring all possible technologies, techniques, and methods to assist with the modernization of NC3 capabilities, and that AI will enhance our decision-making capabilities. This is, again, not conjecture. This is happening. This is something they're intent on doing.
What are the implications? The implications are twofold. You've got new, potentially vulnerable attack surfaces due to offensive cyber capabilities as they begin to expand the digital surface of the NC3 architecture. But then you've also got this broader risk from AI itself as you integrate it into the system.
Why does that matter? Because it introduces uncertainty. It introduces a lack of transparency and credibility, and in historical precedent, all of those things increase the risk of nuclear war. I'm about to get dinged.
What can you do about it? All of this is maybe a little bit frightening, a little bit scary, a little bit of a "why would they do that?" kind of question. Well, they're going to do it, so what are we going to do about it?
We're going to actually host an event in Washington, DC in April—for anyone who’s interested in these issues, I'd love to hear from you—to convene folks to actually talk a little bit more about this. We have to think about norms, and we have to think about how to actually get ahead of this before it creates a catastrophe. [Applause]