FAR.AI Secures Over $30 Million in Multi-Funder Support to Scale Frontier AI Safety Research
January 15, 2026
Summary
FAR.AI secured over $30 million in funding commitments in 2025, enabling a significant expansion of our research capabilities and field-building initiatives. Principal supporters include Coefficient Giving (formerly Open Philanthropy), Schmidt Sciences, the Survival and Flourishing Fund, the Center for Security and Emerging Technology (CSET), and the AI Safety Fund (AISF), supported by the Frontier Model Forum (FMF).
We're pleased to share that FAR.AI has secured over $30 million in funding commitments throughout 2025 from a diverse group of leading organizations, enabling a significant expansion of our research capabilities and field-building initiatives. Principal supporters include Coefficient Giving (formerly Open Philanthropy), Schmidt Sciences, the Survival and Flourishing Fund, the Center for Security and Emerging Technology (CSET), and the AI Safety Fund (AISF), supported by the Frontier Model Forum (FMF).
This multi-funder support represents a strong vote of confidence in FAR.AI’s research agenda and positions us as a growing, independent institution tackling some of the most critical challenges in ensuring advanced AI systems are safe and beneficial.
What This Funding Enables
The combined support allows us to pursue an ambitious expansion across four key areas:
Scaling our technical research team: We're growing from 15 to 30+ researchers over the next year. This expansion accelerates our work on fundamental challenges, from robustness and deception to formal safety cases. These research directions could prove as transformative for the field as reinforcement learning from human feedback (RLHF).
Launching a technical governance division: We're creating a dedicated research team to bridge cutting-edge technical insights and actionable policy solutions. This team will collaborate with AI Safety Institutes, think tanks, and policymakers in the UK, US, and EU to develop verification mechanisms for AI developers' safety claims and to create evidence-based standards.
Establishing fellowship and internship programs: New programs will train the next generation of AI safety researchers, supporting 20+ fellows annually with mentorship, research opportunities, and pathways into the field.
Expanding our field-building efforts: The Alignment Workshop series will grow from two to three events annually, reaching new regions including Asia and the Global South. We'll also launch specialized technical workshops, hackathons, and consensus-building forums that bring together leading minds from industry, academia, and government to collaborate on AI's most pressing safety challenges.
Why This Matters Now
Advanced AI systems are developing rapidly, and the technical challenges of ensuring they remain safe and beneficial are growing more complex. Our research on adversarial robustness has already shown that even superhuman AI systems can have critical vulnerabilities – findings that directly inform how frontier labs should approach safety. With this expanded capacity and diverse funding base, we can tackle these challenges at the scale they demand while maintaining independence and research integrity.
Our Track Record
Since our founding in 2022, we've published groundbreaking research on adversarial robustness, interpretability, and red-teaming that has influenced safety practices at major AI labs. Our events have convened thousands of AI researchers and policymakers. Our Berkeley coworking space has become a hub for AI safety innovation. This new support accelerates all of these efforts at a critical moment for the field.
Building Long-Term Institutional Strength
Diversified funding strengthens FAR.AI’s institutional resilience and positions us for sustained impact. These commitments enable us to:
- Attract world-class research talent with competitive, long-term positions
- Maintain research independence while collaborating with frontier labs and government agencies
- Build enduring programs that compound in impact over years, not just funding cycles
- Weather funding environment changes while staying focused on our mission
Looking Ahead
By 2028, we aim to achieve 3-5 major technical breakthroughs, influence the adoption of safety standards by frontier labs, and inform key decision-makers in government and industry.
The work ahead is challenging, but it's essential. As AI systems become more capable, ensuring they remain safe and aligned with human values becomes increasingly critical. This support enables us to meet that challenge with the scale, rigor, and independence it demands.
We're deeply grateful to all our funders for their confidence in our work and support of our mission. Together, we're building the foundation for a future where advanced AI systems are not just powerful, but safe and beneficial for everyone.
About FAR.AI
FAR.AI is an independent nonprofit research organization dedicated to ensuring advanced AI systems are safe, robust, and aligned with human values. Founded in 2022, we conduct foundational technical research on AI safety challenges while building the field through workshops, fellowships, and collaborative initiatives. Learn more at far.ai.
For inquiries, contact us at hello@far.ai.