14 Ways of Looking at AI

Summary

Brad Carson discusses the energy demands, military implications, and societal impacts of AI among his 14 unanswered questions about its development.

SESSION Transcript

So I'm going to go through these questions quickly and then try to focus on just a couple of them and give you my thoughts. I thought I would list all 14 because one of the things I have learned about dealing with AI policy—I got my start working in the Obama Defense Department, where I was really running the US Army on a day-to-day basis—was to have a lot of humility. While I read voluminously in AI policy these days, every morning I wake up to yet more substacks or tweets or dedicated microsites with a hundred-page report that I dutifully feel I must get through. And yet, despite all that reading, I still have a few questions that remain unsettled for me and that I think are enormously important to setting policy.
They range from things that are rather practical, like the first two you see here. Are the hyperscalers actually capital-constrained? That is a very important issue if you want to evaluate the merits of the recent deals with the UAE and Saudi Arabia. And—what I will talk about at greater length in a moment—do the hyperscalers face real energy constraints in expanding their AI data centers?
Then there is an issue that is a kind of political substrate we must deal with: I think we have to acknowledge, though we scarcely do, that AI is in fact deeply unpopular. Every poll you see says anywhere from 70 to 90% of Americans are skeptical about AI and would like to see it heavily regulated. That issue is important, and I'll talk about it a little more today, because we are asking the American people to invest perhaps a trillion dollars or more in modernizing the grid, or to tolerate nuclear power plants in their backyard. That was a hard enough project 30 years ago, when we could say the power was actually going to illuminate your home or run your business. When you're now asking them to put this kind of controversial technology in their backyard, and its sole purpose is to take their job away—which is really what the hyperscalers themselves will tell you it is—well, that's a political problem you don't have to be a former US congressman to fully appreciate.
The question about bioweapons is, I think, a very important one too. Now, Janet glossed over a bit of my long career. My academic interest—I was a professor at the University of Virginia before going back to my hometown to become president of the University of Tulsa—was terrorism and counterterrorism, which I taught. I'll talk a bit more about that at the end, because I think the analogy to the terrorism of the 1990s is actually very appropriate to how we think about AI today as a policy matter.
But one of the questions I used to ask my students a lot was this: we have been concerned about terrorists getting biological or chemical weapons for 30 years now. It's often one of the horribles our government will list when it tells us we have to stamp out terrorism. But when I taught that course—a very popular one at UVA—I always asked my students: why don't terrorists actually use bioweapons or chemical weapons? Why do they prefer C4 and Semtex, or perhaps just firearms?
I could talk at length about this very question, but I think there is an answer to it. With the lone exception of Aum Shinrikyo—which had a billion dollars and recruited from the highest technical classes of one of the most sophisticated societies in the world—no one really tries to do it. And there's a reason: you can carry out a 9/11, which cost the United States trillions of dollars, not to mention the toll on the global economy, for about $500,000. That is the estimated cost of mounting 9/11. Trying to develop sarin gas or botulinum toxin is a hard project that even Aum Shinrikyo, with all of its capabilities, was scarcely able to pull off. They did kill 13 people on the Tokyo subway, that is true, but it fell far short of their ambitions.
I would invite you, as we think about the dangers of AI, to engage with the questions about the sociology of violence—about which a great deal has been written—and about when people actually avail themselves of hypothetical capabilities.
A question I'll talk about briefly here today is one that Mark, a great friend, hinted at in his clarion call. He probably has a slightly different view than I do at this stage, but it is whether AI is going to be a revolution in military affairs or an incremental improvement.
Related to that is this: is it going to compromise nuclear deterrence by making the oceans transparent? I have a particular fascination with this question because I see it little studied. There's one paper from Australia about 10 years ago that addressed it, but the transparency of the oceans, were it to be realized, would be one of the seismic events in national security. The most secure leg of the nuclear triad is, of course, the boomer submarines, which are unfindable when you're under the North Pole with Congressman Foster. If they could be located in some way, that whole leg would immediately become perhaps the most vulnerable part of the triad. And of course, how AI could affect the targeting of land-based missiles is an important question too. So this is an interesting question that I see understudied, and I hope someone in the room can make me smarter about it.
This is an issue that Mark talked about a bit as well, and it is of course of the moment in Washington: do we have a realistic assessment of Chinese capabilities? I have no good answer to this, despite having read widely and talked to many experts about it. Take the deals with the UAE and Saudi Arabia, for example. I've already mentioned the appeal of their capital, but another part of the appeal is the claim that if we don't do it, Huawei is simply going to step in and seize the field. But is that true?
There are many people who seem quite expert on the hardware who suggest that the moat around companies like ASML, Zeiss, and Tokyo Electron—the entire supply chain, which is an oligopoly if not a monopoly at many of these stages—rests on latent knowledge and on the data acquired from continued use, to which these firms now apply machine learning. Replicating that could be a 15- or 20-year project, even for a nation-state with the most brilliant engineers and unlimited money. Well, 15 years is a long time in AI policy, and some incredible events are likely to happen long before then.
So if it's true that we can maintain a 15- or 20-year moat around Western semiconductor technology, that says something rather different about the merits of export controls and about Huawei's ability to step in and fill the void. There is a strong case I hear people make, which I myself am unable to adjudicate, that the alternative for the UAE and Saudi Arabia—their BATNA, if you will, in business-school speak—was not Huawei chips but nothing at all. And if that is true, it calls the merits of that particular idea into question all the more seriously.
As Mark talked about, what does it even mean to win a race like this? I'm keenly interested in national security, having worked at the Defense Department for a long, long time. Most of the time when we speak of arms races in history, it's pejorative: they turn out to be foolish, quixotic efforts at enormous expense that end with lots of people dead on the back end and no real-world gain. But nonetheless, in Washington, DC, among people in both parties, the race framework has taken hold, and prevailing in it is not only an imperative; it's the strongest argument one can make against reasonable guardrails on our own domestic development.
Climate policy, of course, has been stymied in much the same way. People lament that it would be great to limit greenhouse gases, but if China's not going to do it, why should we be the only actor to penalize our own economic prosperity? Well, the question is: what does it mean to win a race? What does victory here actually look like?
Then there is this question about the rise of inference compute over the last 18 months: does it change the nature of governance? Most of our governance ideas have focused on frontier training—the number of FLOP used in the training runs. But does the rise of inference compute change the way we think about governance more broadly?
Why are there actually radical disagreements about AGI timelines and potential job displacement? When you read the various people writing about this, some will say it's the lump-of-labor fallacy, that we're always going to create new jobs and labor will maintain its share at roughly 55% of GDP, which has been stable despite a lot of economic dislocation over the last 50 years. Others believe that nearly every job is going to be displaced, at least over time. Is it because we're using different economic models—Cobb-Douglas versus Leontief versus Romer's growth model? Or what exactly is it? I never see this fleshed out enough to let me understand it better.
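To sketch why the choice of model might matter—this is my own illustrative gloss, not something drawn from any particular paper—the standard production functions encode very different assumptions about how easily capital, including AI, can substitute for labor:

$$Y = A\,K^{\alpha}L^{1-\alpha} \quad\text{(Cobb-Douglas: elasticity of substitution } \sigma = 1\text{; labor's share stays fixed at } 1-\alpha)$$
$$Y = \min\!\left(\tfrac{K}{a},\,\tfrac{L}{b}\right) \quad\text{(Leontief: } \sigma = 0\text{; labor is a strict complement, so extra capital alone adds nothing)}$$
$$Y = aK + bL \quad\text{(perfect substitutes: } \sigma \to \infty\text{; labor's share can fall toward zero)}$$

If you believe AI capital behaves like the first two cases, labor keeps a meaningful share of output; if you believe it behaves like the third, displacement follows almost mechanically. That, at least, is one candidate explanation for the radical disagreement.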
This question, I think, is going to be a really profound one. We're enormously proud of the fact—and jealous of it, of course—that we have the best frontier models here in the United States. But if you think their real importance is going to lie in how they get embodied into products, then I have every reason to believe, somewhat unhappily, that it's the Chinese who will beat us. Perhaps we will always have the best model trained at the cutting edge—Jaime Sevilla and Epoch AI say that in three or four years, 10 to the 30th FLOP will be the norm for training runs. But when you look at Chinese domination of the advanced manufacturing tech stack, the first country that's actually going to have, say, killer soldiers is almost certainly going to be China. One has to believe that, because when you look at robotics in particular, to say they dominate the tech stack doesn't even begin to capture the gap between the United States and the People's Republic of China.
I'll also talk at the very end about whether we are overly focused on generative AI, neglecting the other aspects of AI—predictive and classifier systems—which actually have more real-world impact today and which few people talk about. At the conference in Spain that Congressman Foster and I just returned from last night, which dealt with AGI and what American policy toward AI should be, every representative was from a generative AI company, and every discussion was about generative AI. Generative AI may be the most interesting, the most creative, the most fraught with future possibility—but more basic AI is running our lives every day, in many ways, and not always in a good way or one consistent with democratic values.
Then there's the issue of what aligning with human values really means. Now, some of you in this room probably make your living working on AI alignment. It's obviously an important project. But I've often been uncertain: what does that actually mean? There's a great paper from Google DeepMind—Seb Krier and others—in the last couple of weeks questioning what alignment really means and what human values are. Is there some bedrock agreement about what alignment might even mean? Or are you perhaps someone like me, who comes at the question heavily influenced by philosophers like Jonathan Dancy and his idea of moral particularism, which doubts that there are any enduring moral principles? Every situation is idiosyncratically contextual, and you can never generalize from one situation to another. And if that's the case, what does it mean for alignment?
A final issue we should think more about: what does democracy really mean in a society without meaningful work? The compact, at least since the Enlightenment, has been that people who worked, gave their labor, and were productive had a certain relationship with the state, largely based upon their status as citizens and workers. If that becomes untethered, it will be something quite new in the last six or seven hundred years. And what does that mean for democracy?
We've had experiments in this country where work has disappeared. I represented a very rural congressional district in Oklahoma. Work disappeared there. It had light manufacturing in the 70s and 80s, and it all went to Mexico, and then Vietnam and China, and people were left with nothing. The level of social pathology in these communities that I call home—you cannot overstate it. Opiates, methamphetamine: it's extraordinary. We see this in the work of William Julius Wilson, too, looking at what happened to the inner cities in this country when the light manufacturing that once dominated the employment base there disappeared. We have no examples of people without work finding the ability to thrive. And it's an interesting question what this means for our society.
So here are the three I want to focus on: the energy constraints, the revolution-versus-evolution question about war, and the focus on generative AI.
I'll go quickly through this. Where will we find the energy? I have a bet out there with a good friend that we will not find the energy in this country, absent some great improvement in the efficiency of how our data centers work or how the hardware itself works. Energy is likely the limiting factor, and it could well be a very hard constraint on our development.
So let's talk a bit about that in terms that this room, filled with technical experts who have probably spent a lot of time studying physics, can quickly appreciate. In the United States today, it's fair to say we consume about 100 quadrillion BTUs of energy, BTUs being the preferred US way to measure total energy consumption. Electricity demand, though, is usually spoken of in a unit you can compare directly: watt-hours. The electricity demand in this country is about four petawatt-hours each year.
How much of that goes to data centers? Well, not much historically, until the AI revolution came about in the last three or four years. It's estimated that at most four to five percent of the electricity used in this country goes to data centers, and of that, AI data centers are just a very small part—maybe one percent of total use. It's estimated—and I keep saying "estimated" because policymakers should help us here by creating industrial classification codes for AI data centers, so we can measure energy use much more quickly and much more accurately.
So these are just estimates, but AI data center use in this country today is about 100 terawatt-hours per year. And it's growing rapidly. We know this. If AI was just a small share of data center usage in the past, it's going to be 60 to 70 percent in the near future, and it's going to grow to 200 to 500 or maybe more terawatt-hours per year—taking data centers from 4% of total electricity use to 12%. Some people estimate even up to 15%.
To put that in terms of the need for continuous power generation: today data centers represent about five gigawatts of continuous power demand. That's going to expand to maybe between 80 and 100 gigawatts. Some, like McKinsey and Bloomberg, have numbers between 100 and 200 gigawatts of needed continuous power.
So how much is that? Well, when you talk to a lot of congressmen, watts and joules and BTUs and million tons of oil equivalent—the things energy experts talk about—are hard to grasp. Well, this is Three Mile Island, of course, soon in its way to be powering an AI data center. A nuclear reactor like this produces about one gigawatt of power. So you would need 75 of these to meet the kind of 80-gigawatt estimates out there—and that's with nuclear power being the most efficient way to produce electricity today. Or you might need a couple hundred combined-cycle gas plants.
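To keep the units straight, here is a minimal back-of-the-envelope sketch of those conversions. It uses only the round figures quoted above, which are loose estimates, so the outputs are approximate; the one added constant is the 8,760 hours in a year.

```python
# Rough unit conversions behind the energy figures above.
# All inputs are the loose estimates quoted in the talk.

HOURS_PER_YEAR = 24 * 365  # 8,760

def twh_per_year_to_gw(twh_per_year: float) -> float:
    """Convert annual energy use (TWh/year) into average continuous power (GW)."""
    return twh_per_year * 1_000 / HOURS_PER_YEAR  # 1 TWh = 1,000 GWh

# Total US electricity demand: ~4 petawatt-hours = 4,000 TWh per year.
print(f"US grid, average load:   {twh_per_year_to_gw(4_000):.0f} GW")

# AI data centers: ~100 TWh/year today, projected to 200-500+ TWh/year.
for twh in (100, 200, 500):
    print(f"{twh:>4} TWh/year is about {twh_per_year_to_gw(twh):.0f} GW of continuous demand")

# A reactor like Three Mile Island delivers roughly 1 GW, so ~80 GW of new
# continuous demand implies on the order of 75-80 such plants.
print(f"1-GW reactors needed for 80 GW: {round(80 / 1.0)}")
```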
This is going to be hard for us, because we know the grid doesn't work too well, and you're going to need maybe a trillion dollars of grid improvements if you have any hope of using renewable energy to help meet this demand. But I guess the question is: do you think we have, as a nation, the ability to build the equivalent of 75 Three Mile Islands over the next five to six years? And that's even if you had them co-located with the AI data centers and didn't need all the grid improvements and all those kinds of things.
I think there is every reason to say no to that. And if it happens at all, it's likely to mean the continued use of fossil fuels, with all the climate problems that entails—keeping coal plants that are slated to go offline running instead—and the net-zero ambitions of all of the AI companies will quickly go by the wayside.
About the revolution in military affairs: my view, as someone who worked in the Defense Department and served in the military myself, is that it's not yet, and maybe never. Let me tell you quickly why. One of the great RAND publications—and I know RAND is one of the sponsors here—is the single book I recommend to people interested in how the military really works. It's by Carl Builder, from a long time ago, and it's called "The Masks of War"; it's about the unique cultures of the military services. He concludes, even though he's quite conservative and quite friendly to the military, that the uniquely American approach—in our thinking at least, if not our execution—is that we love to replace labor with capital. We don't like the dirty jobs of kicking in doors and shooting people in the face, to use a phrase the Army frequently uses. We love the Air Force, with its stealth and supersonic technology—the kind of thing that plays into the American way of war.
AI in some ways is just like that. We're always chasing the next bright bauble in the US military, and it becomes a mania, a fad of sorts. So think about the possible use cases. Better intelligence? Sure, that's a great use—assimilating petabytes of information into a common operating picture is an amazing thing AI can probably help do. But does it really revolutionize things, with all the edge cases and out-of-distribution chaos of war? Would you want to use it for decision support, to model possibility spaces? You would, but you wouldn't want to rely upon it.
And even in Ukraine, AI has very limited utility with these first-person-view drones we see. In some ways they're more akin to IEDs—a new form of artillery rather than nuclear weapons. And even in places like Israel, where you have their system Habsora—"the Gospel," it's called—they're now targeting thousands of people a day. It used to be 50 a year; they do thousands a day because of this incredible capability. But it doesn't seem like that's a transformation of the nature of war. So I will provocatively say I think it's a bit of a fad.
And finally, let me ask: are we watching the wrong thing? Generative AI is a very small part of AI. We know this. Recommender systems—we confront them 50 to 80 times a day at least. We use predictive AI to determine whether someone is creditworthy, or whether this person is going to be a recidivist, or whether this family is likely to abuse their children. The problem is, as the people who are quite skeptical about some of the things we talk about here—the so-called Princeton School of AI Safety—will tell you, these systems often don't even work. We talk a lot about the bias within them and our concerns about the disparate impact of these systems.
The problem is actually more fundamental, and it isn't talked about much: the evidence is that they hardly work better than humans, or better than acting randomly in some cases, yet they're making life-dispositive decisions. And of course there are the recommender systems, which are ubiquitous—that hardly needs to be elaborated here, because the work on how they create epistemic bubbles is so widely discussed. We just hear what we want to hear. You have platforms like TikTok now that have eschewed the social network entirely; what you get is purely algorithmic.
And so the issue is this: yes, generative AI is very important and has some serious policy implications. But I do think policymakers don't think enough about these other forms of AI, which are far more ubiquitous today and perhaps more dangerous to the things we care about. And for those of us who work on frontier AI regulation—which is what I now do full time—if our democracy frays because of these other forms of AI, it is only going to become that much more difficult to achieve the kind of regulation we want for the frontier's more dangerous capabilities.
So that's me focusing briefly on three of the 14 questions. I have some time this afternoon, and I hope you will sign up for one-on-one meetings if you can make me smarter about these questions, because I offer them with a bit of humility. Some of you may have dogmatic views about the answers to questions that have evaded me to date.
All right, I'm done. Thank you so much.