Deep dives into trustworthy & secure AI

Our Specialized Workshops focus on specific areas to facilitate concrete research progress and coordination among experts.

Specialized Workshops have included the Guaranteed Safe AI Workshop in Berkeley, the AI Security Forum in Paris, and the Coordination Forum in Berkeley. Upcoming sessions, including the Control Workshop in London, will continue to advance discussions on AI safety and alignment.

Specialized Workshops


Technical Innovations for AI Policy

Washington, DC

FAR.AI, in collaboration with FAI, CNAS, and RAND, is organizing the inaugural Technical Innovations for AI Policy Conference.

London ControlConf 2025

Marylebone, London

ControlConf is a conference dedicated to the emerging field of AI control: the study of techniques that mitigate security risks from AI systems even when those systems attempt to subvert them.

Paris AI Security Forum 2025

Paris, France

The Paris AI Security Forum brought together experts in AI development, cybersecurity, and policy to dramatically accelerate work on securing powerful AI models, aiming to prevent potential catastrophes while fast-tracking scientific and economic breakthroughs.

Guaranteed Safe AI Workshop

Berkeley, California

This workshop brought together leading AI scientists who had independently converged on a family of approaches sharing a central ambition: developing quantitative safety guarantees for AI systems. It resulted in a position paper documenting the key principles of "Guaranteed Safe AI" agendas.