How Policy Changes Are Effectively Implemented within the US Government

Summary

Austin Carson's (SeedAI) persistent advocacy over three years led to DARPA starting a program on AI risks in defense, highlighting the impact of sustained effort on government policy despite initial resistance.

Session Transcript

Hey folks, Austin Carson, founder and CEO of SeedAI, which is a 501(c)(3) nonprofit in Washington, DC. Instead of slides, I have an Apple note. We're gonna go with this. SeedAI, we kicked off a year before ChatGPT hit. At the time, it was quite funny because I told folks in DC I was starting a nonprofit, and they were like, AI, that's a rifle shot. That's so crazy. Why don't you just do a tech nonprofit? I'm like, well, it's everything. It's the entire future of computing. What are you talking about? A year later, of course, everyone was in a bit of a different headspace.
I'm going to go back a bit to what I did even before I started, and how I got pushed in this direction. Around 2017, I ran another think tank, and I was actually talking to Phil over here about the very beginning of language models coming out. They were being put on top of large streams of data and making recommendations. The easiest way to go look at a really high-grade security implementation of any system is by talking about it in a contested environment. It is like the most hardcore: all of your Russian and Chinese chaos hackers are always trying to screw with it in any way possible, in all the most creative ways. Phil and I were thinking, how can we give someone an amendment to the National Defense Authorization Act, which, starting with passing before implementation, is how you actually pass things into law 99% of the time. It's called a must-pass bill. If you're trying to get to the passing stage, before implementation, it's either a funding bill or it is something like this. Every year, you have to figure out what the military can do.
In thinking through this, first, you have to start with practicality. You have to get to either a very specific question or a very specific mechanism of implementation, and it has to be a rifle shot. Some of the bills that we've seen over time, and some of the things that I think a lot of us are familiar with, like 1047, and some of the other stuff that Nathan's working on in Congress and that kind of thing, all start with this big question, like a lower-dimensional view of a system, which is: can we just make a department that looks at foundation models? You obviously have to turn that into something much more specific and operational.
In this instance, we started with something that was quite discrete, yet a bit broad, and it was pre-agentic. It moved into a place where the question is: can you know the risk surface of multiple interacting language models on top of decision support systems in a contested environment? You're asking about the safety and security of individual models, you're asking about that on top of critical data streams, and then you're looking at one of the biggest questions we're getting to now, which is the interaction between them. That gives you a discrete question to answer.
Over the course of three years, we went and talked to several different members of Congress about getting this passed in the first place. It was quite confusing to people because in 2017-2018, nobody knew what the fuck we were talking about. Then we got a little bit further along, and we're thinking, who even can think about this at this point? So we're thinking, DARPA. DARPA is like the super science part of the government that—well, it used to be, at least—is generally involved in these types of questions that nobody else is investigating.
We point this language at DARPA. We go and get a member of Congress who had actually asked about this in the first place, Ro Khanna, to push it forward. It almost passes, and then DARPA gets pissed off. This is the first step of implementation: make sure you understand the department that you're trying to point something at and its characteristics. Then we go and ask around. We learn at some point that DARPA's attitude is: first, if anybody else has tried to do this at all and thought about it, we don't deal with it; and second, don't tell us what to do. We like to think of the things that we think are interesting, and we like to work on those.
Fast forward a little bit. The next year, we get to the point of maybe we can pass it again. Still, people don't know what we're talking about. Year three is when the Senate Insight Forums started. For those of you familiar, Malo was on it. He got asked his p(doom) by Leader Schumer, which was great. At the end of that, we finally got this language a little bit further along because people had started thinking about agents. This is, I think, two years or a year and a half ago at this point. Even then, it was still quite on the edge. When it comes to implementing anything involving agents or multiple interacting language models, the government has no idea. They've not even begun to try to do anything like this, so implementing it is fairly impossible, but you have to get ahead of the ball, or else you're going to find yourself in a situation where you're already being fought by existing forces in the world.
Let's go back to 1047: Andreessen Horowitz and all these other folks are really fired up because you're in a space that is immediately pertinent to them, and the government's interaction with it is less sophisticated than what private industry is functionally doing. You're now trying to create new government things in an existing private field, which is quite difficult. Talking about implementation, you'll get stopped by that a lot of the time.
I realize my five minutes is not going to let me bring this home, because the second part is most of what matters, so I'm sorry; this is really random. Email me at austin@seedai.org if you want to talk more.
Moving forward a bit, we finally got to the point where, after the Insight Forums, this had become like a live ball. DARPA had moved forward and started thinking about this without my knowledge; nobody ever told me. However, they were still mad at us for pointing this at them, so we pointed it at the CDAO, which is a part of the DoD that thinks about AI and other things like that. It was renamed from the JAIC, which Phil—or no, where's Mark Beall? Is Mark Beall here? Mark Beall was part of the JAIC, ask him. This thing passed, and then you have a task force report. You have a year where this government agency gets to think about it. Now, from an implementation standpoint, you would like to say, no, start right now. What do you guys do? Start right now. But they don't. It never happens that way. You have 180 days, or you have an entire year where they have to think about it. That's just implicit in the system. So if you don't get ahead of the ball, you're going to find yourself in 180 days, or 365 days, where the whole time you're like, oh my god, we're still waiting. What the fuck? You cannot, you cannot, you cannot do that. You have to look forward into the future.
The funniest part to me, and this is kind of the end of the story: it passes two years ago, or a year and change ago. The bill goes through. They're supposed to write a report. What I found the other day in talking to Judd, for those of you who know Judd, and trying to figure out—where's Tim? I want to talk about your conversation the other day. As it turned out, DARPA had actually started up this program. In large part, the implementation of the system happened because we had pushed this for three years. In that time, a plan had arisen, and DARPA was already doing it. Sometimes the implementation of a bill is not because you built the perfect system in the bill. It's because, over the course of several years, you have pushed in this direction to have the government functionally change.
Wow. Sorry, guys, I thought I had 15 minutes. My bad. All right, thanks.