AI National Security Policy: Industry and Government

Summary

Drawing on her experience drafting AI export controls, Sara McNaughton argues that U.S. AI policy should leverage industry expertise and deliver clear, practical regulations so as not to undermine U.S. military superiority.

Session Transcript

Hi everyone. Great to be here with you this weekend. By way of introduction, I'm formerly of BIS, where I worked on the team that wrote the AI diffusion rule and spent about three weeks in the Middle East talking to governments about the diffusion of AI and the corresponding national security implications as the United States looks to export large quantities of GPUs abroad.
Now I work at a geopolitical consulting firm, where I work with companies across the AI stack, from chip designers to hyperscalers, as they navigate this increasingly complex geopolitical landscape. Two things have been constant across my time in both the public sector and the private sector, and I want this group to take them away from this session.
The first is that there is a genuine, deep desire and eagerness on the part of industry to work with government to mitigate national security concerns. Of course, when industry comes to the table, they bring a bias with them, but that is true of any institution or of individuals such as ourselves. And I like what Saif said yesterday about the role of a policymaker being similar to that of a judge: their responsibility and their job is to take in information from every perspective on a given issue and then adjudicate accordingly once they have all of that information in front of them.
I think this is really important because industry is on the front lines of compliance with government policy, especially when it comes to export controls. So that is something the current BIS and the current administration should be thinking about as they look to rewrite these policies. And this relates to diffusion in that what matters more than whether a policy is good or bad is whether it is practical, a policy that industry can actually comply with.
Reasonable people are going to have reasonable policy disagreements and that's normal. There are going to be lessons that are learned. And sometimes that takes a while. But what's really important is that at the end of the day, these policies are practical.
Practical and easy for industry to navigate. Right now, the current administration is considering replacing the AI diffusion rule with a structure that looks like a government-to-government agreement mechanism.
One question is whether these agreements would be public or not. I would assume they would not be, but let's assume for the sake of conversation that they would be. Even so, that would create a tremendous burden on industry compliance teams, one that I fear would leave them unable to plan long term or to innovate in a way that promotes US leadership in AI.
I want to talk about timing a little bit too. Timing is really important. This group knows intimately the urgency of the AI race between the United States and the PRC and the stakes that come with that. The question was posed earlier about what winning that race looks like.
I think that's a really good question. So I'm going to say what losing looks like. Losing this race to the PRC means losing our military advantage to them for decades to come. And the PRC has been clear about what that means: they want to reshape the world order to benefit the PRC and the CCP's values.
The longer we exist in a space of policy uncertainty, the greater the risk that we lose that race. In the Biden administration, as I mentioned earlier, we had to learn some lessons that sometimes took too long to learn, and now I'm watching the Trump administration go through the same learning curves we went through at this time last year.
This became abundantly clear coming out of the President's trip to the Middle East a few weeks ago. The scale of the announcements wasn't just astonishing; to me, they also seemed a little misaligned with the President's America First policy. Security guardrails were clearly missing from these agreements, and they came in advance of a clear diffusion policy. That creates a really confusing environment for industry to navigate.
And it comes at a time when, in order for the US to maintain leadership in AI, what industry needs from the US Government is clear and timely policy. Now, don't get me wrong, I'm not saying we should compromise practicality for the sake of speed, quite the opposite. What I'm saying is that government has already learned some of these lessons, and we don't have the time to spare for it to learn them all over again.
To conclude, I just want to reemphasize my two points from the beginning: one, that synergy between industry and government will result in better national security policies for everyone, and two, that we can't let the perfect be the enemy of the good, and that policies, at the end of the day, need to be practical.
I think these two things should be kept front of mind when we're looking at any AI policy. That goes for AI diffusion, but since export controls aren't a silver bullet, as many of us know, the same thinking should be applied to other areas as well: our domestic infrastructure landscape, unlocking the permitting process, investing in workforce development, attracting talent from abroad, and the conversations we're having on chip governance.
Industry wants to win the AI race just as much as the US Government does. So I think it's really important that we work together to come up with practical policies so that everyone wins. Thank you.