FAR.AI is a research & education non-profit
Ensuring advanced AI is safe and beneficial for everyone.
ORGANIZATIONS WE'VE WORKED WITH
Recent & Upcoming Events
Building the global field of trustworthy & secure AI.
FAR.AI hosts and delivers a suite of events on safe and beneficial AI.

The Bay Area Alignment Workshop was held 24–25 October 2024 at Chaminade in Santa Cruz, featuring Anca Dragan on Optimised Misalignment. Participants also explored topics such as threat models, safety cases, monitoring and assurance, interpretability, robustness, and oversight.
Santa Cruz, CA
October 24, 2024

June 3, 2025
Large language models (LLMs) are already more persuasive than humans in many domains. While this power can be used for good, such as helping people quit smoking, it also presents significant risks, including large-scale political manipulation, disinformation, and terrorist recruitment. But how easy is it to get frontier models to persuade people to adopt harmful beliefs or take illegal actions? Really easy – just ask them.
Programs
Supporting innovation in trustworthy & secure AI
Risks from advanced AI pose societal-scale challenges and will require a concerted effort that goes beyond FAR.AI. Our programs equip researchers and organizations with the tools, resources, and connections they need to overcome these challenges. By providing targeted support and fostering collaboration, we help ensure the ecosystem thrives and drives impactful, lasting change.
View programs