National Security & Defense
Summary
Steve Kelly explores AI's impact on national security, addressing its benefits, geopolitical implications, and potential threats from overreliance on AI for military and civil applications.
Session Transcript
So I'll be talking about national security and defense, and the promises and perils of AI in that context. In my very short time with you, I'll cover three questions. The first is: what is AI's value in national security and defense?
We'll touch on some of the use cases. The second question is: will AI serve as a tool for geopolitical stability or instability? And the third: might AIs themselves become a national security threat? We'll take a journey through those three questions, and I'm quite certain we won't exhaust them, but I'll try to give each some reasonable treatment.
So on that first question: we know that AI is already transforming intelligence gathering and analysis, cognitive operations, cyberspace operations, autonomous defense systems, and of course both bioweapon and vaccine development. AI is quite useful for military planning because it can process vast amounts of data quickly, run multiple scenarios simultaneously, and optimize logistics and resource allocation.
On the military side, the US military is actively investing in AI for fire control to gain decision advantage. And when I talk about fire control, I'm not talking about putting out fires, I'm talking about pew-pew fires: targeting and controlling weapon systems. However, it is cautious about fully autonomous lethal weapon systems, especially in high-stakes domains like nuclear command and control, where the consequences of failure are severe.
The Department of Defense has Directive 3000.09, which specifically addresses the use of autonomous and semi-autonomous weapon systems and places a priority on minimizing unintended engagements and ensuring meaningful human control. A good example is the need for greater autonomy in defensive systems like missile interceptors, due to the time constraints involved.
Examples include Israel's Iron Dome and the US Navy's close-in weapon systems, particularly given hypersonic anti-ship missiles. These are situations where human decision-making can't act quickly enough to neutralize the threat, but the scenarios are quite well defined, so one can develop approaches and systems to handle those types of threats.
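To give a rough sense of that time constraint, here is a back-of-the-envelope sketch in Python. The detection range, speed of sound, and threat speeds are illustrative assumptions of mine, not parameters of any actual system.

```python
# Back-of-the-envelope: how much decision time does an incoming threat allow?
# All numbers are illustrative assumptions, not real system parameters.

SPEED_OF_SOUND_M_S = 343.0  # roughly, at sea level


def time_to_impact(detection_range_km: float, mach: float) -> float:
    """Seconds from detection to impact for a threat flying at constant speed."""
    speed_m_s = mach * SPEED_OF_SOUND_M_S
    return detection_range_km * 1000.0 / speed_m_s


if __name__ == "__main__":
    for mach in (0.9, 3.0, 5.0):
        seconds = time_to_impact(detection_range_km=40.0, mach=mach)
        print(f"Mach {mach:>3}: ~{seconds:5.1f} s from 40 km detection to impact")
```

At Mach 5 the window is on the order of twenty seconds, which is the kind of timeline that pushes defensive systems toward automation while keeping humans in a supervisory role.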
Another point I'd like to make veers into the doctrinal differences between the United States, or the West more broadly, and China in how we even think about national security. What does that term mean in our context versus theirs? For authoritarian systems like China's, national security often encompasses not only external threats but also regime stability.
So they're expending a lot of effort on domestic surveillance, information control, and suppressing dissent, in addition to, of course, traditional external threats, and AI is very useful for, and key to, these activities. The second question is: will AI serve as a tool for geopolitical instability, or might it actually increase stability? I'll make a couple of points that have nothing to do with AI specifically.
One is that the US, as you heard earlier from some of our speakers referring back to Afghanistan and Iraq, has a lot of battlefield experience and confidence in our weapon systems and our commanders, and accordingly we're incorporating AI into these approaches in a cautious and stepwise manner. In contrast, Chinese President Xi Jinping and the Central Military Commission have highlighted what they refer to as the "five incapables," shortcomings stemming from their lack of real-world combat experience, in order to drive reforms within their military system.
They judge that some, perhaps many or most, People's Liberation Army commanders are incapable of five things. I'll list these out; they're available in open sources: judging situations, understanding higher authorities' intentions, making operational decisions, deploying forces, and managing unexpected situations.
These make a lot of sense: if your military has not been tested on the battlefield and your weapon systems have not been used in real scenarios, it's hard to predict how the entire apparatus will perform when the time comes. I think this has real relevance for AI, because given this, and also what we know about the PRC's intentions with regard to Taiwan, I'm concerned that they're going to be much more willing to employ AI-driven lethal autonomous weapon systems and to over-rely on AI decision support in making strategic warfighting decisions.
In the US, across many of these areas, we're using AI in planning and decision support, but ultimately the military commanders, who are very capable, very experienced, and very trusted within the chain of command, are making the decisions. Over there, I wonder: when things get tense, might Xi trust AI more than his own generals? Their public doctrine says no, but I'm not so certain.
So that is an area of ambiguity for me. More broadly, humans in all domains will increasingly be at risk of over-reliance on AI and gradual cognitive decline. I think this will occur in every realm. We're already seeing news reports and commentary from professors in academia that their undergraduate students don't know how to write, and that they're over-relying on generative AI to do work that in previous generations was done manually.
I think it's time to bring out the blue books and the pencils to prove that people can write. But the same thing may be happening across a range of domains, where we become increasingly comfortable with and trusting of the information, the analysis, and the suggested courses of action that AI provides to us.
And we will trust it more and more and more. And perhaps it will degrade our ability to reason and to make decisions on our own.
That has relevance in military contexts, as well as in intelligence analysis and, frankly, every other domain of our lives. All of this, I believe, is a risk to global stability, as it bends our traditional approaches for reading and responding to strategic signals, maintaining deterrence, and managing escalation.
So the third question I wanted to address is whether AI itself may become a threat to national security. In exploring this topic, I have found the writing of Anthony Aguirre of the Future of Life Institute quite compelling, so I will riff a little from his writings, which can be found at KeepTheFutureHuman.ai. He suggests a construct in which an AI's autonomy, generality, and intelligence are the tripartite factors to consider in identifying where we might have control risks going forward.
He argues that AIs exhibiting all three are highly risky and should not be built unless we can guarantee we can keep them secure. Advocating for keeping AI as a tool for human empowerment, he coined the term "tool AI" for AI that can assist humans but not replicate them; thus, AIs that do not combine all three of these characteristics would not be a threat to humanity.
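To make that framing concrete, here is a minimal sketch in Python of how one might encode the autonomy-generality-intelligence triad as a simple screening rule. The class, field names, and example profiles are my own illustrative assumptions, not anything drawn from Aguirre's writing.

```python
from dataclasses import dataclass


@dataclass
class AIProfile:
    """Hypothetical profile of an AI system along the three axes discussed above."""
    name: str
    autonomous: bool   # acts in the world without a human in the loop
    general: bool      # operates across many domains rather than one narrow task
    intelligent: bool  # performs at or beyond expert human level


def control_risk(p: AIProfile) -> str:
    """In this framing, only systems combining all three properties fall in the
    high-risk category; anything missing an axis is closer to 'tool AI'."""
    if p.autonomous and p.general and p.intelligent:
        return "high risk: combines autonomy, generality, and intelligence"
    return "tool-like: lacks at least one of the three properties"


if __name__ == "__main__":
    examples = [
        AIProfile("missile-defense fire-control logic", autonomous=True, general=False, intelligent=False),
        AIProfile("frontier chat assistant", autonomous=False, general=True, intelligent=True),
        AIProfile("hypothetical autonomous general agent", autonomous=True, general=True, intelligent=True),
    ]
    for ex in examples:
        print(f"{ex.name}: {control_risk(ex)}")
```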
That said, I know from some of the side conversations I've had, even at this event, that there are some who remain skeptical about the reality of scheming models and loss-of-control scenarios, and who argue that even if there is a real risk, policymakers would not be able to successfully manage it given corporate incentives to build AGI. But even if we were able to control highly capable frontier models, one other scenario I think is interesting, and which I have not seen much analysis of, is whether large distributed multi-agent systems might start to develop and exhibit emergent behaviors beyond the foundation models themselves and serve as another vector for control risk.
It's kind of like the ant and anthill analogy: individual ants are not terribly capable, they're very simple, but when you put them together into a colony there are all sorts of interesting things they do, signals they share, and capabilities they exhibit that are quite surprising. So I don't know, but I think that's an area worth investigating.
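As a toy illustration of that intuition, here is a minimal sketch, assuming a deliberately simple multi-agent model of my own invention rather than anything resembling a deployed system: each agent on a ring follows one trivial local rule, yet the population settles into large-scale patterns that no individual agent encodes.

```python
import random


def step(states: list[int]) -> list[int]:
    """Each agent adopts the majority value among itself and its two ring neighbors."""
    n = len(states)
    return [
        1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]


def run(n_agents: int = 60, n_steps: int = 30, seed: int = 0) -> None:
    random.seed(seed)
    states = [random.randint(0, 1) for _ in range(n_agents)]
    for t in range(n_steps):
        print(f"t={t:2d} " + "".join("#" if s else "." for s in states))
        new_states = step(states)
        if new_states == states:  # the collective pattern has stabilized
            break
        states = new_states


if __name__ == "__main__":
    run()
```

The point is only that collective dynamics can look qualitatively different from the behavior of any single component, which is why the question above seems worth studying for large distributed agent systems.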
So, in closing, whether it is due to a passive loss of control, through our own cognitive decline or through ceding meaningful oversight to AIs, or an active loss of control of a powerful model that overcomes our carefully implemented safeguards, we need to plan for and avoid these scenarios. Thanks to those in this room who are working on these challenges, and I appreciate FAR AI, CNAS, and RAND for this great gathering and for bringing awareness of this topic right here to our nation's capital. Thank you.