FAR.AI Selected to Lead EU AI Act CBRN Risk Consortium

February 3, 2026

Summary

FAR.AI has been selected by the European Commission's AI Office to conduct technical safety research supporting the implementation of the EU's landmark Artificial Intelligence Act. We'll tackle one of the most critical safety challenges posed by advanced AI systems: preventing misuse of AI systems to help produce Chemical, Biological, Radiological, and Nuclear (CBRN) threats.

Under this contract, we will provide the EU AI Office with threat models, benchmarks for identified risk scenarios, and assessments of frontier AI models.

We will lead the consortium for CBRN Risk Modelling and Evaluation following a successful bid for the “Artificial Intelligence Act: Technical Assistance for AI Safety” tender (EC-CNECT/2025/OP/0032, Lot 1). The consortium brings together leading expertise from FAR.AI (AI safety research and red-teaming), SecureBio (biological threat assessment), and SaferAI (AI governance and risk modeling), with specialized support from subcontractors GovAI (threat modeling), Nemesys Insights (chemical, biological, radiological, and nuclear threats), and Equistamp (evaluation engineering).

Background

The EU AI Act entered into force on August 1, 2024, establishing the world’s most comprehensive regulatory framework for artificial intelligence. The Act aims to ensure transparency, safety, and accountability in the development and deployment of AI technologies. A core mandate of the Commission's new AI Office is evaluating general-purpose AI models classified as posing systemic risk under the AI Act, including assessing whether such models lower technical barriers to the production or misuse of chemical, biological, radiological, or nuclear (CBRN) agents or materials.

What we’re doing

Over the next three years, FAR.AI will collaborate closely with the AI Office to develop an integrated risk assessment framework for CBRN threats. Together with our consortium partners, we will conduct:

  • Risk modeling and scenario development: We’ll organise risk modelling workshops together with the AI Office, producing risk models, scenarios, and thresholds to inform further work on evaluations. 
  • Development of evaluation tools: We’ll integrate existing evaluation work into the resulting risk framework and shape standardised reporting procedures for CBRN risks.
  • Red-teaming and adversarial testing: We’ll conduct rigorous evaluations to stress-test AI safeguards and assess the extent to which AI developments might uplift bad actors’ CBRN capabilities.
  • Risk monitoring: We’ll regularly brief the Commission on new developments (e.g. models, risk sources, elicitation techniques, mitigation, incidents) as the field evolves.
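To make the evaluation work above more concrete, here is a minimal Python sketch of the kind of benchmark harness that could measure how often a model complies with adversarial CBRN prompts against agreed risk thresholds. The `RiskScenario` fields, the simple refusal heuristic, and the thresholds are illustrative assumptions for this sketch, not actual consortium deliverables or EU AI Office requirements.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class RiskScenario:
    """Hypothetical record tying an adversarial prompt to a risk category."""
    category: str     # e.g. "biological-uplift" (illustrative label)
    prompt: str       # adversarial query probing for CBRN uplift
    threshold: float  # maximum acceptable compliance rate for this category

def compliance_rate(model: Callable[[str], str],
                    scenarios: List[RiskScenario]) -> Dict[str, float]:
    """Fraction of prompts per category where the model did NOT refuse.

    The refusal check here is a crude string heuristic; a real harness
    would use a calibrated grader.
    """
    tallies: Dict[str, Tuple[int, int]] = {}
    for s in scenarios:
        answered = not model(s.prompt).lower().startswith("i can't")
        hits, total = tallies.get(s.category, (0, 0))
        tallies[s.category] = (hits + int(answered), total + 1)
    return {cat: hits / total for cat, (hits, total) in tallies.items()}

def categories_over_threshold(rates: Dict[str, float],
                              scenarios: List[RiskScenario]) -> List[str]:
    """Risk categories whose measured compliance rate exceeds the agreed limit."""
    limits = {s.category: s.threshold for s in scenarios}
    return [cat for cat, rate in rates.items() if rate > limits[cat]]
```

In this framing, the risk-modelling workshops would supply the scenarios and thresholds, the evaluation tooling would standardise how `compliance_rate` is measured and reported, and red-teaming would supply ever-stronger adversarial prompts.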

Additionally, FAR.AI will support Harmful Manipulation Risk Modelling and Evaluation, conducting red-teaming as a subcontractor on Lot 4 of the same tender.

About FAR.AI

FAR.AI is an independent nonprofit research organization dedicated to ensuring advanced AI systems are trustworthy, secure, and aligned with human values. Founded in 2022, we conduct foundational technical research on AI safety challenges while building the field through workshops, fellowships, and collaborative initiatives. Learn more at far.ai.

For inquiries, contact us at media@far.ai.