What Would it Take to Stop the Development of Superintelligence? A Treaty Proposal

Summary

Aaron Scher presented “What Would it Take to Stop the Development of Superintelligence?” on August 27, 2025 at the FAR.AI Labs Seminar.

SESSION Transcript

Hi, I'm Aaron. I'm going to be talking about a potential treaty to pause AI development and stop development of superintelligence.
This is based on work done with my colleagues at the Machine Intelligence Research Institute, who are named on this slide. But the slides are mine, in that they have not reviewed them and might disagree with various parts of them. Also, this is a work in progress. There is an upcoming paper about this sort of stuff, but I think the content might change and that's okay. Cool. So we're going to talk about motivations for a treaty.
We're going to talk about historical analogies. We're going to talk about various questions to help when thinking about treaties. We're going to talk about the actual treaty text, and then the bulk of the presentation will be about why the particular treaty that we've proposed works, according to me.
So quick motivation. There's a book coming out soon: If Anyone Builds It, Everyone Dies. That's one of the core motivations behind this sort of agenda: we might need a global treaty to pause AI development short of superintelligence, the idea being that if artificial superintelligence is developed anywhere on Earth, it's a threat to people everywhere. Therefore, you need a globally encompassing treaty. Additionally, we might need a lot of time to figure out how to do safe AI development, or how to align AI systems.
I think it's not clear how much time. Some plans give us only a couple of months. For example, the plan of “Let's race ahead and beat China with the US companies for a while” basically only gives you a couple of months to solve all the safety problems you might need to solve. That's pretty dangerous. It'd be great if we had a better plan. And then again, the key motivation is that we don't want to have someone build ASI, but we also don't know where the line is.
And so that's sort of the motivation behind the “Let's stop now,” or “Let's stop pretty soon, rather than let's kind of keep going to the edge.” You can imagine this as sort of driving a car towards a cliff edge in the fog. You don't actually know where the cliff edge is. You know that you're getting closer to it every second. Maybe this is a bad plan. Maybe you should hit the brakes.
Cool. So historical analogies. Some of the useful ones here are the US/USSR nuclear arms control agreements. There's a bunch of them; there's been a series of these over the years, including reducing arms stockpiles, stopping certain kinds of testing, banning certain kinds of nuclear missiles, et cetera.
Also the nuclear Non-Proliferation Treaty (NPT), which is a global agreement. One could think of it as saying, “Let's keep the nuclear states with nukes and try to prevent the rest of the world from getting nukes.” That's a key part of the deal. But then, as compensation for that, the rest of the world is allowed to use nuclear power. That's the idea behind the NPT.
We also have the Biological and Chemical Weapons Conventions, which ban the development and stockpiling of biological and chemical weapons. And then I think another interesting precedent is the Montreal Protocol, which banned CFCs in order to fight ozone depletion.
I think these historical analogies are useful to think about, but the framing that I want you to have for them is “The things being proposed in this treaty have been done before to some extent, or there are similar things that exist.” Not “This is like a super normal thing. Don't worry, it will be totally fine. We totally know how to do this.” So it's more to push off the “This is totally crazy, how could we possibly regulate AI development?” No, we regulate plenty of things. We know how to do this, we know how to do arms control. We need to apply it here.
That's not to say it will be easy. That's not to say there aren't flaws in a bunch of the previous agreements. But there's a key thing here which is that proposing various forms of AI governance should not be a total non-starter. We do various forms of governance for other fields. Some specific disanalogies with those other examples...
So on the nuclear side, a lot of nuclear weapons, dare I say all of them, are owned and operated by governments. By contrast, AI is done by the private sector, primarily by large companies. That's maybe one key disanalogy. AI is also very dual-use, in that more powerful AI directly correlates with being more dangerous. Whereas for nuclear technology, while there is the energy side, which is beneficial, it's very clear that nuclear weapons serve a dangerous purpose.
Also, AI is not yet universally seen as dangerous, whereas chemical and biological weapons and nuclear weapons are very much seen as “Oh wow, this is a dangerous technology. We should be careful.” I think also the perceived upside for AI could be quite high. Think about the Montreal Protocol that I mentioned, which banned CFCs. The important thing is that the market for CFCs around the time of the Montreal Protocol was on the order of $10 billion. By contrast, Nvidia's market capitalization is around $4 trillion. So I think you're dealing with an industry that's, I don't know, something like one and a half orders of magnitude larger, with big error bars; it's hard to know exactly.
But yeah, the perceived upside for AI could be very high, and the actual market, if you will, is just much larger. But I think there's another disanalogy, which is that ASI is self-destructive, whereas a lot of these other things are less self-destructive. So in the nuclear case, having nuclear weapons gives you strategic advantage over your opponents, whereas having an ASI, one might say, doesn't really give you an advantage, because if your ASI is misaligned, it's going to kill you too, along with the rest of the world.
And so I think there's a self-destructive aspect of AI that doesn't necessarily apply to the other things. Again, the point of this precedent is not to say it'll be easy, but more to say the things that are suggested in this treaty are not totally unprecedented. We've done things like them before. Cool. Some of the questions that I like to think about when thinking through what a treaty could include are: Does the proposed treaty actually prevent the thing that we're worried about? A key aspect is: Is this internationally verifiable?
So the intuition here is that I would like to make sure that some other party is not doing dangerous ASI development. I would like to actually make sure, I don't just want to take them at their word. Countries might also want the same thing. They want to be able to verify that everyone else is following the rules.
And so you can imagine the US and China agreeing to some rules, just like the US and USSR did with the Strategic Arms Reduction Treaty (START) treaties, where we say, for example, “We're going to decommission this data center,” the way they said, “We're going to decommission those nuclear weapons.” We want to make sure people are actually following the rules. We want to be able to check. So I think there's this key aspect of verifiability that seems very important for international relations and for these treaties to actually go through.
There's another question, which is: what is the cost here? Ideally you want a treaty that prevents the dangerous thing, is internationally verifiable, and doesn't cost very much. That's the ideal. As you'll see, I think there are places where that's not totally going to be the case. Cool. So yeah, we've drafted a treaty. This is what it currently looks like. There are 16 articles. It's pretty long. I'm not going to talk about all of it right now, but I'll highlight some of the key ideas.
So in our proposed treaty, we create an international body to carry out the treaty. We call it the IAIA, following the model of the IAEA, which carries out and enforces the nuclear Non-Proliferation Treaty. That includes doing inspections of nuclear facilities in dozens of countries, setting various limits, and making safeguards agreements with specific countries. So that's the main thing we're doing as part of the governance structure of this treaty we propose: making an international body to carry it out.
Again, there are plenty of treaties that do things differently. For instance, the US/USSR nuclear treaties are bilateral: there isn't some third international body that gets created to carry them out. We're also going to ban large AI training based on the amount of compute used. And then for medium-sized training, if you will, we're going to monitor it. The monitoring, jumping down to Article 14, is in order to make sure that the governing body has situational awareness: if AI capabilities are increasing, do we understand why that's happening? Can we prevent it if it needs to be prevented? It's about keeping them in the loop on everything.
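To make that compute-based line concrete, here's a minimal sketch in Python of how a run might be classified, assuming the common 6 × parameters × tokens estimate for training compute. The thresholds and the estimate itself are illustrative assumptions of mine, not numbers from the treaty text.

```python
# Illustrative sketch: classifying a training run against compute thresholds.
# The 6 * params * tokens rule of thumb and the thresholds below are assumptions
# for illustration, not numbers taken from the proposed treaty.

LARGE_TRAINING_BAN_FLOP = 1e25       # hypothetical "banned" threshold
MEDIUM_TRAINING_MONITOR_FLOP = 1e23  # hypothetical "monitored" threshold

def estimated_training_flop(n_parameters: float, n_tokens: float) -> float:
    """Standard dense-transformer estimate: ~6 FLOP per parameter per token."""
    return 6 * n_parameters * n_tokens

def classify_run(n_parameters: float, n_tokens: float) -> str:
    flop = estimated_training_flop(n_parameters, n_tokens)
    if flop >= LARGE_TRAINING_BAN_FLOP:
        return "banned: large-scale training"
    if flop >= MEDIUM_TRAINING_MONITOR_FLOP:
        return "allowed but monitored: medium-scale training"
    return "allowed: small-scale experiment"

# Example: a 70B-parameter model trained on 15T tokens
print(classify_run(70e9, 15e12))  # ~6.3e24 FLOP -> monitored under these thresholds
```

The design point is just that a single scalar, estimated training compute, gives the governing body a bright line: ban above one threshold, monitor above a lower one.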
We're then going to talk about compute consolidation, chip monitoring, and chip production monitoring. The idea here is that one of the main inputs to AI development is AI chips. These are pretty specialized, they're pretty expensive, and they're physical objects, which are nice and monitorable. So basically our treaty says countries are going to take their chips, they're going to put them in facilities that the IAIA can monitor, and then the IAIA is going to monitor them. And then we're also going to monitor new chip production. The actual monitoring will involve basically making sure that chips aren't doing dangerous activities, where what counts as dangerous will depend a lot on context, but one way to define it is to say: no big AI training, but you're allowed to do inference, and you're allowed to do small-scale experiments, that kind of thing.
That's one potential example that works for the current world, where nobody has trained any dangerous AI models, to my knowledge. And so it's fine if you ban training but let people do other things. On the other hand, if dangerous models have already been trained, you have a concern there, and so you have to draw the line in a different place. Cool. The next thing that this treaty does is it restricts research, basically saying you're not allowed to do AI research.
The idea here is that AI progress is a combination of compute, meaning the number and quality of the chips used, and the actual algorithms that are used. And we just did our ban on increasing compute with Article 4.
And then we're also going to ban AI research itself. If you only banned one of these, the other would continue and AI progress would continue, and again, often in sort of hard-to-predict ways. So we're going to ban that. We're also going to try to verify that we've banned that. Again, international verification is like a key point of these plans. Unfortunately, this looks pretty hard. We'll talk about the sort of potential ideas there.
But I think this is one of the sort of weak points in our current plan. And then, as some other treaties have done, we're going to sort of nod to the fact that states are going to use their intelligence efforts to sort of supplement the treaty. They're going to do intelligence gathering, they're going to tell the IAIA about their findings in order to try to identify secret AI projects that might be happening.
And then, similar to the Chemical Weapons Convention, for example, we're going to have challenge inspections, where if a country sees another country doing something it doesn't like, for instance, it looks like they're building a secret data center, it can ask the international community: can someone go and inspect that, to see if they actually are building a secret data center? And then inspections are carried out to ensure compliance.
Whistleblowers, a classic thing people have heard a lot about. There's some other stuff that's part of treaties, like how do we actually do negotiation and dispute resolution? And then one other key point is the escalation and enforcement aspect, where the plan, as I've laid it out, relies a lot on getting a lot of countries to be part of the treaty and getting them to actually follow the rules. And there's a question: what do you do if they stop following the rules? Or what do you do if a country decides not to join the treaty? And so there's this aspect of “What are the options available, and what should you be willing to do as a country in order to intervene there and stop dangerous activities from continuing?”
Cool. So that's the treaty. Is it in your heads? Is it somewhat in your heads? Okay, cool. Well, we're now going to walk through what I claim are the most important pieces that hold it together technically. And specifically, I'm going to be talking about this critical claim: the treaty works. The rest of the presentation is me explaining why I think the treaty works. Where “works” means: if implemented, this would actually succeed at preventing AI development from continuing very much and at avoiding the development of superintelligence.
Cool. This is what we'll talk about. We'll keep coming back to this slide, but this is the sort of structure of the argument around why the treaty works. So at a high level, we're going to stop AI progress. How are we going to do that? We're going to stop large scale training and we're going to ban AI research and then we'll go into each of those things. And then the other big, high-level point is, “Well, what if they do it anyway?” And that's where we have nonproliferation and enforcement mechanisms.
Cool. Let's jump in to locating existing AI chips. You can split this into two different categories. One is big AI data centers, which are probably pretty detectable. They're often publicly reported on. There's an image here from the Stargate construction going on. And basically, big companies often just say where their big data centers are. They're proud of this fact. It looks good for them when they build a big data center to go to the community and say, “Hey look, we're creating 200 jobs in your community.” Of course they don't mention that the AIs they're creating are going to take away thousands of jobs in that community. But that's a down-the-line problem, and it doesn't look good to the community, so they leave it out.
So basically I think the big AI data centers are going to be pretty detectable. One of the reasons they're detectable is that they have a large physical and electricity footprint, where one H100, which is one of the AI GPUs that's commonly used, has about the same power consumption as a home on average. And so you can think about data centers with like 50,000 GPUs or whatever. These are medium to large towns, maybe small cities, basically, in terms of their power draw. For smaller data centers, I think things are much harder. One of our hopes here is that if you domestically require that people report their chips to the government, people will do it because they're usually law-abiding. That's a claim.
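As a rough back-of-envelope on that electricity footprint, here is a sketch; the per-GPU power, overhead factor, and average home draw are my illustrative assumptions, not figures from the talk.

```python
# Rough back-of-envelope for data-center detectability via power draw.
# All numbers are illustrative assumptions, not figures from the talk.

H100_TDP_KW = 0.7      # approximate board power of one H100 (assumed)
OVERHEAD_FACTOR = 2.0  # cooling, networking, host CPUs (assumed)
AVG_US_HOME_KW = 1.2   # ~10,500 kWh/year averaged over the year (assumed)

def cluster_power_mw(num_gpus: int) -> float:
    """Total facility draw in megawatts under the assumptions above."""
    return num_gpus * H100_TDP_KW * OVERHEAD_FACTOR / 1000

def home_equivalents(num_gpus: int) -> int:
    """How many average homes draw the same power as the cluster."""
    return round(cluster_power_mw(num_gpus) * 1000 / AVG_US_HOME_KW)

print(f"{cluster_power_mw(50_000):.0f} MW")    # ~70 MW
print(f"~{home_equivalents(50_000):,} homes")  # ~58,000 homes: a mid-sized town
```

Under these assumptions a 50,000-GPU site draws on the order of 70 MW, which is the kind of load that grid operators and satellite imagery can pick up.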
Fortunately, we also have things like domestic law enforcement, which, well, dare I say, enforces the law, and then hopefully we can get some international verification of that process. So the problem is, the US goes, “China says that they told all their citizens to turn in their GPUs to the government, but did they actually turn in their GPUs? We don't really know.” The idea here is that maybe we can add some verification to that domestic law enforcement process, though I think this is kind of hard to do.
Another hope here is that we're not necessarily talking about gaming GPUs. In our treaty, we draw a specific line, like eight H100 equivalents, which cost about $200,000 to $300,000, something like that. So these are not cheap. And so some of the hope here is that this is an expensive enough thing that there aren't that many random individuals who own it. It's mostly companies. The companies will mostly follow the laws, or there will be some whistleblowers within the companies who are willing to follow the law.
And then again, as I mentioned before, we have challenge inspections, in case things are really not going well. You might have, “Hey, we think there's still a data center there that you didn't declare to the international community. We want to go investigate.” Cool. So that's locating existing chips. That's sort of the argument for why this seems feasible. It definitely seems feasible for the big AI data centers, and I think it's sort of unclear for the smaller AI data centers, but maybe we can make it work.
Cool. Next we'll talk about tracking new chips. And the main sort of feature here is that the chip supply chain is very narrow. There's a nice graphic here in the bottom right of the slide from the paper, Computing Power in the Governance of AI, which sort of looks at parts of the compute supply chain.
So basically a lot of the chips are being designed by Nvidia, and a lot of these chips, or at least their key components, are being fabricated by TSMC. And then ASML is making the key machines used for that fabrication. Now, speculatively, I would claim that there are approximately three factories in the world that do all of this logic die fabrication. This is hard to test exactly, but I think the evidence points towards it being on the order of three physical factories.
And there are like, three addresses in the world where all of this chip production is happening. So when I come to you and I say, “Hey, I think it's feasible for us to track new chips,” I'm sort of making the claim, “Hey, we can go to those three factories and we can track everything that leaves the factories.” We should be able to do that. We should be able to pull it off.
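As a toy illustration of what “track everything that leaves the factories” could look like, here is a sketch of a chain-of-custody ledger; the record format, site names, and functions are hypothetical, not part of the proposed treaty.

```python
# Toy sketch of a chip chain-of-custody ledger. Fab names, facility IDs, and the
# record format are hypothetical; the point is just that with ~3 fabrication
# sites, every accelerator could be logged at the door and tracked to a declared,
# monitored facility.

from dataclasses import dataclass, field

@dataclass
class ChipRecord:
    serial: str
    fab_site: str                                 # where it was produced
    custody: list = field(default_factory=list)   # declared transfers, in order

ledger: dict[str, ChipRecord] = {}

def register_at_fab(serial: str, fab_site: str) -> None:
    ledger[serial] = ChipRecord(serial, fab_site, [fab_site])

def transfer(serial: str, destination: str) -> None:
    ledger[serial].custody.append(destination)

def unaccounted(monitored_sites: set) -> list:
    """Chips whose latest declared location is not an IAIA-monitored facility."""
    return [r.serial for r in ledger.values()
            if r.custody[-1] not in monitored_sites]

register_at_fab("H100-000123", "Fab-A")
transfer("H100-000123", "Monitored-Facility-7")
print(unaccounted({"Monitored-Facility-7"}))  # [] -> everything accounted for
```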
Cool. Next up we'll talk about verification technology and monitoring chip use. Again, the structure here was that we're going to find existing chips, we're going to track new chips, we're going to put them into IAIA-monitored facilities, and then, how are we going to monitor them? What does that look like? So basically, verification of AI workloads, or compute use verification, is a topic of ongoing research.
There have been three big papers about it, one of them from myself. And unfortunately, this is early-stage research. We're not totally sure what to do. There are a few different solutions and approaches that people could end up going with. One of those is using hardware-enabled governance mechanisms, where we get the AI chips themselves to help carry out this governance. Unfortunately, this has some major security problems, namely that the environment you need your chips to be secure in is a pretty difficult one. You have state-level adversaries who own the chips. Depending on how much monitoring you have, they may or may not be able to fiddle with the chips in various ways. So that's in some sense the gold standard.
What we'd love is for the chips themselves to help us carry out this governance work. They could do very detailed reporting about what workloads they're running. But unfortunately, this runs into security problems with current chips. Hopefully, future chips will be made more secure against this threat model, and we'll be able to carry out that governance work.
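To make “chips reporting on their own workloads” more concrete, here is a toy sketch of a signed usage report. The fields, the device key, and the HMAC scheme are stand-in assumptions of mine, and, as noted above, current hardware cannot be trusted to produce such reports against a state-level adversary who physically owns the device.

```python
# Toy sketch of a signed per-period usage report from a governance-enabled chip.
# Field names and the signing scheme are hypothetical; real hardware-enabled
# mechanisms are still research, and a physical owner may be able to tamper.

from dataclasses import dataclass, asdict
import hashlib, hmac, json

DEVICE_KEY = b"key-fused-into-hardware"  # hypothetical per-device secret

@dataclass
class UsageReport:
    chip_id: str
    period: int                   # reporting period index
    total_flop: float             # work performed in this period
    peak_interconnect_gbps: float # observed off-chip bandwidth

def sign_report(report: UsageReport) -> str:
    """Integrity tag a verifier holding the same key could check."""
    payload = json.dumps(asdict(report), sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

report = UsageReport("GPU-0001", period=42, total_flop=3.1e18,
                     peak_interconnect_gbps=12.0)
print(sign_report(report))
```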
Another option is to go with interconnect bandwidth limits where we basically take some set of chips and we say, “This is a cluster. Communication outside this cluster is going to be really limited. We're going to sort of drop the bandwidth and maybe increase the latency on that.” The reasoning being this would allow those chips to do certain AI workloads but not others.
In particular, it would make it difficult for those chips to contribute to a broader large training run. Why? Because large training runs require a lot of communication between chips. In particular, you can imagine data-parallel processing, where one pod of, let's say, 64 chips is processing some data, another pod is processing some other data, and then at every step they're exchanging the gradients, basically, or exchanging the weight update that they computed. That's a standard data-parallel training setup.
The idea here is that we'd restrict the bandwidth between these chips such that it's really inefficient for them to exchange those gradients. By contrast, if these chips are just being used for inference, the communication costs are really, really low between different pods of chips. Again, there's a bunch of details that need to be worked out, like: how big is your pod of chips? And, what are your bandwidth limits? And, would this actually work if people get better at decentralized and distributed training? I think these are important questions.
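Here's a rough sketch of why that gap exists; the model size, gradient precision, step time, and request sizes are my illustrative assumptions, not numbers from the talk.

```python
# Why bandwidth caps hurt data-parallel training but barely touch inference.
# All numbers below are illustrative assumptions.

N_PARAMS = 70e9     # assumed model size
BYTES_PER_GRAD = 2  # bf16 gradients (assumed)
STEP_TIME_S = 10    # assumed time per data-parallel step

# Training: each pod must exchange on the order of a full gradient every step.
grad_bytes = N_PARAMS * BYTES_PER_GRAD
training_gbps = grad_bytes * 8 / STEP_TIME_S / 1e9
print(f"training needs ~{training_gbps:.0f} Gbit/s between pods")    # ~112 Gbit/s

# Inference serving: pods mostly exchange requests/responses, say ~10 kB each.
REQS_PER_S = 1000
REQ_BYTES = 10_000
inference_gbps = REQS_PER_S * REQ_BYTES * 8 / 1e9
print(f"inference needs ~{inference_gbps:.2f} Gbit/s between pods")  # ~0.08 Gbit/s
```

Under these assumptions the gap is three orders of magnitude, which is what makes a bandwidth cap a plausible way to allow inference while blocking large training.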
Again, this is early-stage work. Another option is partial workload rerunning, where the owner of some chips says, “Hey, these are the workloads that we ran on these chips, and they give you enough information that you can go and replicate this elsewhere,” and you basically make sure that the operations that were run on the chips actually happened by replicating the results.
This is similar, if you will, to things like cryptocurrency, sort of blockchain verification, where someone's like, “Ah, I found a nonce that creates the correct hash,” or whatever. Everyone else can go very quickly check that the hash is correct. There's a process by which actually checking is much faster than doing the whole process. Again, here we're taking advantage of the fact that you can replicate a small piece of the workload. And if that small piece of the workload is correct, then probably the overall workload is correct.
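A minimal sketch of the spot-check idea, under the strong assumption that declared workload segments can be rerun deterministically (which is nontrivial for real GPU workloads); the functions and segment structure here are hypothetical.

```python
# Minimal sketch of partial workload rerunning: the verifier re-executes a random
# sample of declared segments and checks the results against the declared hashes.
# Deterministic reruns are assumed, which is nontrivial for real GPU workloads.

import hashlib, random

def segment_result(seed: int) -> bytes:
    """Stand-in for one deterministic chunk of the declared workload."""
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(1024))

# Declarer: run all segments and publish their result hashes.
declared = {i: hashlib.sha256(segment_result(i)).hexdigest() for i in range(1000)}

# Verifier: rerun only a small random sample and compare.
def spot_check(declared: dict, sample_size: int = 20) -> bool:
    for i in random.sample(sorted(declared), sample_size):
        if hashlib.sha256(segment_result(i)).hexdigest() != declared[i]:
            return False
    return True

print(spot_check(declared))  # True if the declared hashes match the reruns
```

Checking a random sample is much cheaper than redoing the whole workload, which is the asymmetry the approach relies on.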
There's a fun graph here from, I don't know, a paper looking at power usage in data centers. And I think this is sort of to demonstrate there's a bunch of features about what these chips are doing that we could take advantage of when trying to do this verification. We'll move on from compute now and talk about banning AI research.
So first off, we'll talk about domestic bans. I think when I talk about this stuff, people are like, “You couldn't possibly ban AI research. That's crazy.” I claim that this is only a little crazy. And the reason I claim this is because there is precedent. In fact, there are types of research that you are not allowed to do. There are types of research that both the US federal government and the international community say you're not allowed to do. I talked about things like the Biological and Chemical Weapons Conventions. You're not allowed to just go build biological weapons. This is not a thing that you're legally allowed to do.
One interesting precedent is the Atomic Energy Act. This basically created a regime of nuclear secrets, and the idea of “born secret,” which is that ideas can be classified before publication, if you will: simply by coming into existence, the information falls under a classification category. That's one regime. I think this is particularly controversial. There's a reason I put it up on the slide, both because it's relatively extreme and relatively controversial: to point at the fact that, yes, we might have to do kind of extreme things, but again, they're somewhat precedented.
United States v. Progressive is one of the main court cases that takes the born-secret doctrine and investigates whether it is constitutional. And that's important because, when someone says, “I'm going to ban research,” that is, by default, an unconstitutional thing to do. The First Amendment in the US says that you're not allowed to make restrictions on speech like that.
So the Atomic Energy Act, at least at face value, might seem unconstitutional. The major court case that we have about this only made it to a district court and was then dismissed. So this is not binding precedent and is only what's referred to as persuasive precedent, as in other courts can consider the judgment, but they don't have to follow it. That is to say, I think the constitutionality of the Atomic Energy Act and this born-secret doctrine is still kind of ambiguous, even 70 years later or whatever.
Another example: we have export controls on a bunch of information. There's plenty of research, including academic research, where you're simply not allowed to tell certain foreign nationals about your work, because the information you're researching is export controlled. Again, the most controversial example here is cryptography, where back in the 90s and before, the US government tried to control the release of cryptography-relevant information and algorithms.
And then one of the main court cases here was Bernstein v. US. Again, this case made it to the Circuit Court; it did not make it to the Supreme Court, and the ruling was also withdrawn. So again, the precedent here is not strongly binding. People often cite this court case as an example of the idea that code is protected free speech. I think that is correct, except that, again, the case is not strongly binding as precedent.
So again, I think this is an example of: there have been constitutional challenges to these research bans, or publishing bans if you will, but it's still ambiguous whether or not they are constitutional. Okay, that was a bunch of the legal stuff. Besides legal stuff, there are plenty of other ways that you could try to ban a technology. You can try to drop support for it. For example, in 2014, the US paused gain-of-function research funding in particular. Sometimes people refer to this as a pause on gain-of-function research. That's not right; it was a pause on government funding. At the same time, there were scientific norms put together to stop doing gain-of-function research.
There's a bunch of other social and norms-based precedent here. The Asilomar conference is maybe one of the most notable examples, where a bunch of the relevant scientific communities came together and said, “We're worried about this technology. Let's make sure we can do it safely.” They got together and came up with protocols to put in place to develop this technology safely and not misuse it. That is all to say that I think banning AI research, while it maybe sounds somewhat crazy, is somewhat precedented, in that we have banned research in other fields.
And in particular, I'll make the following bold claim: if you try to build a bomb, the FBI shows up. This is obviously not always true. Sometimes people build bombs and the FBI does not show up. But I'm like, the State has an interest in making sure that people don't do dangerous activities. And we have plenty of examples of the State intervening in people's personal lives and research in order to prevent them from doing dangerous things. Regulation exists.
I think, again, often the feedback that we get on this kind of work is of the form, “You could never ban AI research.” And I'm like, well, I think people often have a frame of AI research in their head where they live in the AI world, where there is no regulation. But there are plenty of other fields with tons of regulation. Perhaps we just need a frame shift. Perhaps we need to change the reference class that we're looking at. Cool.
Next, I'm going to talk about trying to verify this AI research ban. This is a pretty tough part. I think this is one of our big question marks here. There's a bunch of things that we might try to do. And again, the question you're trying to answer is, basically: the US is looking over at China and goes, “Are your AI researchers secretly taking part in a government AGI project? Does China have an AGI project that they're not telling us about?” That's the sort of problem we're trying to solve here.
One point of optimism is that I think there aren't that many top AI researchers. If you count up the number of technical staff at top AI companies, you get on the order of 1,000 or 2,000 people. I think if you expand this to include hardware companies and other folks that are maybe not at top AI companies, you get maybe on the order of 100,000 people.
If you count the number of software engineers in the world, I think you're in the tens of millions. But my guess is you don't have to count all the software engineers in the world. That is to say, there are not that many people that you need to keep track of, with the caveat that this holds as long as it's mainly humans you're worried about and not AIs.
So, yeah. What are some methods you might use to try to verify that people are following this AI research ban? Well, you might interview the various researchers. You might be like, well, you used to work at an AI company and now you claim that your startup is building some random tech thing or some AI product that's allowed under the treaty. Is that actually true? We can have our inspectors interview the various researchers.
We can also do the sort of standard intelligence gathering methods. We can track what companies people are working at, if they're working at companies. Yeah, I think that you can rely on whistleblowers to some extent. You might have embedded auditors. Again, this may be a good example of contrasting AI with other industries.
For nuclear, all nuclear power plants in the US have two inspectors from one of the nuclear safety bureaucracies, if you will, who are just at the plant, who walk around, interview people, check the daily logs, this kind of thing. It's just the case that in some industries we care a lot about the industry having good safety practices, and so we have auditors to make sure that people are following the rules. You can imagine a similar thing where OpenAI shifts and goes, “Because of the treaty, we're not going to do any more frontier training. We're only going to work on product.” Great.
Maybe we want to put some IAIA monitors or embedded auditors, if you will, inside the organization to ensure that's actually happening. Again, I want to flag, this is like a question mark. I'm not actually sure if this all works. What you see on the slide is to some extent a kitchen sink approach of we're going to throw a bunch of methods at the problem and hope that it works. Now we'll talk about: what if they tried to do their AI development anyway? Where again, this is talking about countries that are party to the treaty and countries that are not part of the treaty. So especially relevant to countries that are not part of the treaty. We're going to try to do non-proliferation.
Specifically, we're going to try to make it hard for these countries to get AI hardware and knowledge. We have a bunch of export controls. We have export controls on all kinds of dangerous technology. We have also had export controls on AI chips since 2022. There's a bunch of precedent for doing this. It's totally a thing we could do. We'd obviously have to step up our enforcement game. We'd have to get better at getting ahead of the technology landscape, trying to understand where things are going to change and what we need to do in advance. But this is clear: we can try to apply export controls.
There's also this excellent quote about counterproliferation from somebody who works on counterproliferation: it's “non-proliferation with attitude.” By that they basically mean intervening directly on smuggling rings and really trying to get ahead of the problem, taking a more active effort towards preventing the export of controlled technologies and materials. There's a bunch of stuff that you could do in that domain.
So at the top of the slide I say non-proliferation of AI hardware and knowledge. On the knowledge part, someone might go, “What, are you saying that AI researchers are not allowed to have freedom of movement?” I'm saying that we could take the precedent that the US government tries very hard to stop people who are part of the US intelligence community or the US military from defecting to US enemies. We have various protocols and methods in place to try to prevent defection, and we could apply those to AI if we wanted. The world has precedent for saying: we would like people on our side with a bunch of critical knowledge to not go join our enemies. How do we stop them? That's just totally a thing that the world has experience doing, and it could be applied to AI if the world were serious about doing so.
Cool. Now I'll talk about enforcement. So I think there's basically a range of different enforcement measures that one could use. Again, treaty parties and non-parties. So the JCPOA, the Iran nuclear deal, one of the main parts of that deal was that the IAEA was going to do a bunch more monitoring and inspections on Iran's nuclear facilities. That's like one example of a thing you can do. You're sort of especially worried about a country. Maybe they've hinted that they might be breaking the rules. Maybe you have a little bit of evidence that they're breaking the rules.
Great, step up your monitoring and inspection efforts. We also have examples of things like economic sanctions, visa bans, and asset freezes. For all three of these, there are UN Security Council resolutions targeting North Korea. These resolutions therefore, on paper at least, require that all UN member states follow these specific sanctions and visa bans and whatnot. The map at the bottom is the interesting inverse: it shows countries that have violated those sanctions. But I think there's an interesting dynamic where, in fact, basically the entire world is supposed to be following these sanctions.
And there is, in fact, a lot of compliance, as in, not helping fund the North Korean nuclear program. So again, there's a bunch of precedent for doing things like economic sanctions, visa bans, and asset freezes. This is the type of thing that we're going to want to do in the AI case. Again, if people are breaking the rules, or if countries have decided not to be part of the treaty and are doing dangerous AI development anyway, we might want to treat them like pariah states, the same way we do with North Korea.
There's another aspect here, which is destruction of AI infrastructure as a last resort. If your other methods don't work, what else can you do? And I think we have interesting precedent in the case of Iran's nuclear program, where the US and Israel had Stuxnet, a cyber-physical attack, to destroy centrifuges and slow down that program. Most recently, this past summer, we saw airstrikes from both the US and Israel targeting Iran's nuclear facilities. Now, we don't yet know how effective those were, I don't think. But again, there's precedent where, if the situation is extreme enough, countries are sometimes willing to use military force to disrupt dangerous technology development in other countries.
On the side here is a graphic from the MAIM Superintelligence Strategy paper from Dan Hendrycks and colleagues, looking at one example of their escalation ladder, where maybe this all would fit under destruction of AI infrastructure. There's a bunch of ways you could try to sabotage or slow down AI development in other countries. Again, if people are breaking the rules. Cool.
So I've now talked about what I think adds up to an effective treaty. There are a couple of things missing. The first one, most obviously, is political will. I'm saying things that many people would interpret as being very crazy. They do not have the will that I do to have an international treaty. So one of the big things that we're in need of is more people being on board with, “Oh yeah, we need a treaty. And these are some of the lengths we might be willing to go to to make that happen.” Next up: I mentioned verification of locating those small AI data centers. It looks like that will be hard. Same with verifying that banned AI research is not taking place.
The concerns here are basically that the US and China might both go, “Oh wait, the IAIA is going to round up our compute or start monitoring our compute. Let's siphon a bit off to the side.” And then they both do that, and then they both set up secret projects where they grab some of their top scientists to help out. These both seem like difficult things to internationally verify aren't happening. Another one of the key concerns is that this treaty might not happen fast enough. I mentioned earlier that if you're trying to verify a research ban and you can focus on people, that's going to be relatively convenient.
On the other hand, if AI capabilities have advanced far enough, AIs themselves might be doing most of the AI research, and then your verification problem looks very different. Instead, you have to answer the question, “Is anyone doing inference on these kinds of models?” for example, or, “Is anyone doing inference about AI development on these kinds of models?” That looks like a much harder problem. So yeah, there's this issue of the treaty not happening fast enough. That's one of the problems: AI capabilities advance too far. Also, hardware gets proliferated a bunch. Then it gets even harder to find the compute, to find the chips, and people have more time to set up secret projects.
Again, if you knew the IAIA was going to come round up your compute a year from now, a government might decide that it wants to set up a secret facility, a secret data center, before that happens. Those are some reasons it would be bad if the treaty did not happen fast enough. That is it for my presentation. Thank you to my collaborators. Peter and David are here and can help me answer any questions that you might have. Yeah.