DoD & AGI Preparedness

Summary

Mark Beall argues that AGI represents our era's defining challenge, requiring immediate action to close the cultural gap between Silicon Valley and Washington—or risk surrendering AGI dominance to authoritarian regimes.

Session Transcript

In 1853, American diplomats stunned the German royal court by arriving in simple black suits and ties instead of the normal ceremonial regalia. When a bewildered courtier asked the Americans, "Why are you dressed like undertakers?" they replied with quiet confidence, "Because we represent the burial of monarchy."
Today, as I see all these technologists and AI engineers and Silicon Valley types descending upon Washington, D.C. in your hoodies and your jammies and your sneakers, I hope you guys represent the burial of the necktie. But your presence here in fact does represent something quite extraordinary: the burial of an old world and the birth of an age that may reshape human civilization.
Not long ago, many people, including people in this very room, believed that engaging Washington, D.C. on matters of AGI was actually dangerous. They called it an info hazard that might accelerate the very risks we were worried about. As a consequence of that unfortunate choice, I think many in Washington are simply unaware of or discount the possibility that AGI could be here soon. How times have changed, and how they must continue to change.
The divide between Silicon Valley and Capitol Hill, between those who build the future and those who must govern it, between the titans of industry and those Americans who stand the watch as our sentinels and guardians—these divides threaten our republic more than any foreign adversary does. I hope that today this divide can end, that we can bury the hatchet and forge a new alliance between innovation and American values, between acceleration and altruism, that will shape not just our nation's fate but potentially the fate of humanity.
I think most of you know I speak pretty plainly, and I plan on doing so today. I will tell you why I believe AGI is not merely another policy issue, but perhaps the defining challenge of our era. Why it matters more than competition with China, why it transcends even nuclear weapons and their implications, and why we perhaps have three years—or 1,000 days—before we find ourselves standing at the bank of the Rubicon with dice in our hands.
We stand in America as heirs to the sacred tradition of Western civilization. It's a golden thread in an intellectual patrimony that began in ancient Jerusalem and its reverence for human dignity, through Athens and their experiment with democracy, to the Roman Republic and the notion of the rule of law, to London and the English wisdom to impose restraint on absolute power, and finally to Philadelphia and to our framers' revolutionary synthesis that gave birth to the world's great masterpiece of practical statecraft in our Constitution.
This tradition gave us the radical notion that power must serve people, not rule them, that individuals possess inherent worth that no algorithm can compute. The choices we make over the next thousand days may determine whether AI honors this inheritance or degrades it. It will either amplify human dignity or reduce us to data points. It will either preserve our liberty or engineer our obedience. It will either submit to law or become law unto itself. It will either serve humanity or subjugate us.
Unfortunately, there is no middle path, no comfortable compromise, not even technical certainty about the range of potential future scenarios. We have only the assessment of some of our brightest minds, the very people who are building these systems. When the architects of our future are purchasing remote bunkers in anticipation of AGI being unleashed on the world, only a fool would dismiss that warning out of hand.
So where do we find ourselves today? Today we find ourselves in not just one race on AI, but actually two. The first race is the race we understand: the competition with China for economic and military supremacy. This is a familiar great game of great power politics played with new pieces on an ancient board.
The second race is alien to our strategic thinking. This is the race toward artificial superintelligence or AGI itself. This is not nation against nation, but humanity against time, against our own creations, against the possibilities that make our Cold War nightmares seem quaint. Conflating these races invites catastrophe. We must dominate the first without triggering disaster in the second.
China's recent breakthroughs in AI have unfortunately erased our lead. DeepSeek's models match American capabilities from merely months ago at one-tenth of the cost. Chinese researchers have weaponized American open-source AI for military applications and have taken advantage of corporate access by buying hardware and research partnerships that they need to compete and win.
I remember watching Xi Jinping's New Year's Day speech in 2018 from my office in the Pentagon. Displayed very conspicuously on the bookshelf behind him was Pedro Domingos's book "The Master Algorithm." It was a very clear sign to us in the leadership of the Defense Department of the stakes of this issue. Beijing pursues AI dominance not just as an economic prize, but as the foundation for a new international order built on control and brought to you in part by the American technology industry.
So what's at stake? AGI represents what a military strategist might call a black swan—a sweeping invalidation of most of our load-bearing assumptions about the way a system works. Unlike nuclear weapons, which require enrichment facilities and leave radiological signatures, AGI needs only compute, electricity, and expertise: resources that exist in dozens of facilities around the world and are growing every day, resources that, as Congressman Foster mentioned, may actually increase exponentially as AI starts to improve upon itself.
Consider what we're discussing: AIs capable of designing novel bioweapons in minutes, each more lethal than the last. WMD-grade cyber weapons that can evolve rapidly. Robotic lethal systems that make kill decisions autonomously. Overnight decapitation strikes on government leaders, carried out by thousands of lethal autonomous drones guided by computer vision.
Now imagine a system with these capabilities pursuing goals we never programmed, optimizing for outcomes we never intended, resisting our attempts to shut it down with an intelligence that vastly exceeds our own. This is not, unfortunately, a Dan Simmons novel. This is the assessment of some of our best scientists and the public assessment of each and every one of the leaders of the American frontier AI labs today.
Once an AI superintelligence escapes human control, no army can defeat it, no firewall can contain it, no treaty can bind it. As my friend Max Tegmark said, the Terminator scenario is actually quite unrealistic because it implies that we have a chance.
As a result, America faces a challenge that demands complete mobilization across three fundamental pillars: protect, promote, and prepare. The three Ps. Together, they form our pathway to human flourishing.
On protect: make no mistake, as nations wake up to the issue that AGI represents—that it could mean decisive and potentially permanent strategic advantage—the temptation for preemptive war may grow. We already hear whispers in war colleges and military journals and even shouts from Silicon Valley calling for solving the AGI problem through the use of kinetic strikes. Ideas like the bombing of data centers. This is not only irresponsible rhetoric, but it's taking hold in capitals around the world. War over Taiwan with armies ready to fight over semiconductor fabs. Top AI scientists becoming targets of assassination. Cyber weapons designed to corrupt and sabotage training runs. Nations even threatening to release unaligned AI—the ultimate doomsday device, worse even than the salted nuclear weapon.
So our response must be overwhelming and meet the needs of the moment. The first thing we must do is on the export control front: the fact that Chinese military researchers freely buy, steal, download, and then weaponize American technology represents a dereliction of duty that would have been unthinkable during the Cold War. We must shut down AI research partnerships between entities such as the University of California, Berkeley and Chinese army-affiliated institutions like Tsinghua University. We must close loopholes through which U.S. companies such as Oracle can lawfully provide Chinese researchers with access to restricted chips via cloud services.
We must grapple with the very difficult question of open source and establish the threshold beyond which powerful model weights may no longer be released to the public, so that we stop hemorrhaging our advantages to our adversaries and working at cross purposes with our chip controls. We need urgent, modified national security industrial program protocols applied at the frontier AI labs, and we need know-your-customer requirements for cloud compute providers that prevent adversaries from training on American infrastructure.
On the military deterrence side, unfortunately, I see the potential for significant global conflict on the path to AGI, an assessment I believe is shared by my colleagues at RAND. Nations may perceive a permanent disadvantage in failing to achieve AGI first, or worse, worry about what some of you in industry have actually called, and may even plan for, "pivotal acts": the unlawful use of an AGI to disable other AGI programs. This is a dangerous situation we find ourselves in.
I would argue that with our closest ally in the UK, we must be ready to deter aggression on the path to AGI. We need to accelerate President Trump's Golden Dome Initiative and develop contingency plans for threats ranging from special operations forces raids to hypersonic strikes, to non-kinetic attacks, to assassination attempts against leading AI researchers. I do not want these scenarios to come to pass. But if you extrapolate from what is happening right now and look just around the corner, it is obvious where things are headed.
Next, we really need to consider the resiliency of our critical infrastructure. Every power grid, financial network, and communication system was designed for human-speed threats. Unfortunately—or fortunately—that world is dying. We must rebuild for machine-speed defense and rebuild urgently. The message to our adversaries must be crystal clear: the United States will not seek war, but we will be prepared to respond to hostile aggression.
Next, we must pour resources into perhaps the most critical defensive technology of all: ensuring AI systems do what we intend, and not what we fear. This means funding alignment research at levels matching the Manhattan Project. It means creating specialized programs and growing the cadre of research capacity in the United States for interpretability, robustness, and control measures for advanced AI systems. It means building an AI immune system that can detect and counter misaligned behavior and developing shutdown mechanisms that cannot be overridden by superhuman intelligence.
Next, there is no protection without promotion. Protection without promotion is paralysis. America must not just defend but dominate through construction and deployment, through adoption and diffusion, through deregulation and acceleration. The Department of Defense needs to be twice as lethal at half the cost. We need to shatter the bureaucratic barriers that keep AI out of the hands of our warfighters and our intelligence professionals. What took years must take months, what took months must take days.
We need to deploy urgently the American AI stack globally and give our allies an option before they are forced to choose from alternatives. I know the UAE agreement that this administration reached was quite controversial, but I think with the right security protocols in place, we might need 20 more such deals.
We are not yet ready to meet AGI's energy demands. By some estimates, we face a 64-gigawatt shortfall by 2030. We need the power plants, the data centers, the cooling systems. This is our generation's Interstate Highway Program, our Apollo Program, measured in computations, not concrete.
Last, we must prepare for the range of potential future contingencies. First and most critically, we need to create total situational awareness for members of Congress and for the executive branch. We need radical transparency between the frontier labs and the government. We need classified testing and evaluation programs focused on weaponization and loss-of-control risks, so that policymakers such as Mr. Foster can peer into the future of AI development and make informed, data-driven choices on behalf of the American public. We need early warning indicators for foreign AGI programs and for capability jumps that could destabilize the world, and we need to prepare sound response plans in advance.
And last, we need to think, unfortunately, about some ideas that have heretofore been unthinkable. We know that neither Washington nor Beijing, under current technological understanding, can build advanced AI systems that we can reliably steer. So the first unthinkable truth is that we probably need a grand bargain with the Chinese. As much as export controls can slow China down, they may already be late to need in this moment. Like the superpowers stepping back from nuclear annihilation during the Cold War, we must recognize that the AGI race cannot be won, only survived.
We need an AGI treaty that channels competition away from mutual destruction. President Trump may be uniquely positioned to forge that agreement, and he may be forced to grapple with this if AGI comes online during his administration. The message to Beijing should also be clear: America will outcompete you commercially and militarily. We will deter aggression, but we are not suicidal, and diplomacy is on the table.
The second unthinkable truth is that we must start considering plans for other extraordinary measures. We must prepare options that I think many in Silicon Valley and those of us on the libertarian side of the aisle would be loath to even contemplate. So I'll say the word: nationalization. If AGI development threatens the constitutional order itself or risks amassing power in ways that threaten the current systems of checks and balances, then a Manhattan-scale project around government partnerships where allies and partners can buy in may become necessary. Direct oversight or intervention in the economy by the United States government may become an unfortunate reality. A national dialogue on how a safe superintelligence should be used in accordance with our constitutional principles will likely become necessary.
So what is the choice before us? I think we have one chance to shape AGI's arrival. Already, as I talk to members of Congress, members of the previous administration, members of this administration, and members of industry, many of them are throwing their hands up and saying, "There's nothing we can do to stop this. It's too late. The race itself has a mind of its own." This is really the manifestation of Moloch, or Satan, or whatever name you'd like to give that human failing. And it is simply not true, at least not yet.
However, I think once AGI does emerge, events will move beyond our control with stunning velocity. The policy establishment faces a choice. We can treat AGI as merely another technology to be integrated into existing frameworks, or we can recognize it as the most consequential development in human history and prepare accordingly. The former path offers bureaucratic comfort and the path of least resistance. The latter may assure human flourishing.
I will conclude here with a call to action. First, to the technologists: your government needs you, not in advisory roles, but in positions of authority. The gap between Silicon Valley's understanding of where this technology is headed and Washington's is perhaps the greatest strategic threat to effective decision making on this issue. Your brilliance can be decisive in helping shape the choices that we make in the next few years.
Second, to our policymakers: every month of delay reduces your future choices and compounds our vulnerability. The luxury of deliberation belongs to challenges that evolve slowly, and unfortunately, AGI does not.
I was a senior in high school at Benedictine Military School on September 11, 2001. The war on terrorism shaped who I became. I spent a lot of time in Afghanistan. I studied deeply how it was possible that we failed to connect the dots and stop the Islamists from killing 3,000 Americans on that fateful day. Our failure on 9/11, just like our failure at Pearl Harbor, and just like our potential failure on this issue, was not a lack of data or technical expertise. It was a failure of imagination. We see the warning signs today. A few executives at top AI labs have the courage to speak openly about these issues, and so I implore you not to let another 9/11 happen while you stand on the watch.
Last, we need to recognize that American AGI leadership is not about maintaining hegemony. It's about assuring that when humanity's most powerful technology emerges, it reflects American founding principles rather than authoritarian control. The United States has prevailed in previous strategic competitions, but each required us to recognize when the nature of that competition had shifted fundamentally. I believe that AGI represents such a change. It demands strategies that seem radical today only because our thinking has not caught up to the technology.
We stand at the Rubicon River. Before we say "Alea iacta est," we need to think deeply about what it is that we need to do. What we do in the next 1,000 days may echo in the next 1,000 years. The question is not whether we have the capability to lead. The question is whether we will exercise that wisdom to lead while leadership still matters.
Thank you so much, and I look forward to your questions.