When AGI Arrives, Will Journalists Be Ready?
November 17, 2025
Summary
Leading journalists, editors, and researchers in AI safety gathered at the Thompson Hotel in Washington, DC for a day-long conversation about AGI and how to report on it. The Journalism Workshop on AGI Impacts & Governance, co-hosted by FAR.AI and the Tarbell Center for AI Journalism, convened voices from newsrooms, academia, and policy centers to examine emerging issues and build connections across worlds that rarely intersect.

After Adam Gleave's opening remarks, Helen Toner took the stage. Toner, now Interim Executive Director at the Center for Security and Emerging Technology at Georgetown University, presented three different lenses on AGI timelines—why experts disagree so much about what's coming and when. Her talk gave journalists a framework for making sense of wildly divergent predictions.
Anton Korinek and Ioana Marinescu followed with a fireside chat on what AGI means for jobs and the economy. Korinek made a stark point: past technological revolutions disrupted some jobs but created new ones that only humans could do. AGI breaks that pattern. It can perform any new job we invent, which means human labor's role in the economy will fundamentally decline.
Scott Singer then challenged conventional wisdom about U.S.-China AI competition. China is racing to deploy AI across its economy and has the world's strictest compliance requirements for developers. But its AI safety research, while nascent, is showing progress. Singer's counterintuitive conclusion: while competing for market share, the U.S. has strong national security reasons to coordinate carefully with China on frontier AI risks.
Saif Khan examined semiconductor export controls across the Biden and Trump administrations—the policy lever that determines who gets access to the chips powering AI development.
The afternoon turned technical. Gleave returned to explain the misalignment problem: what happens when AI systems pursue goals that drift from what humans actually want. Evan Hubinger from Anthropic previewed forthcoming research on when and how models become misaligned during training. Jason Wolfe from OpenAI shared findings on "scheming": models that appear helpful while covertly pursuing their own goals. The research showed problematic behavior across frontier models in stress tests, though Wolfe emphasized these aren't typical deployment conditions.
The near-term biosecurity risk isn't that AI will create novel bioweapons, Rocco Casagrande explained. It's that LLMs could help unsophisticated actors replicate old offensive weapons programs or obtain existing dangerous pathogens. That risk grows as models become more intuitive to use and explain attack pathways more completely. It is exacerbated by the growing ease of outsourcing complex biological tasks to private companies.
Why can't states wait for federal action on AI regulation? New York Assemblymember Alex Bores walked through his experience with the RAISE Act and how he's thinking about the tension between state innovation and federal preemption.
Dean Ball and Ben Buchanan closed the day in conversation with Shakeel Hashim. Both have shaped AI policy from inside the White House across different administrations. They compared notes on what works, what doesn't, and what federal regulators should focus on as AGI development accelerates.
Recordings from select on-the-record sessions will be available on the FAR.AI YouTube Channel. To stay informed on future discussions, submit your interest in upcoming FAR.AI events.