FAR.AI is an AI safety research non-profit

We facilitate technical breakthroughs and foster global collaboration

Three pillars of FAR.AI

At FAR.AI, we are motivated by the potential risks posed by rapid advances in AI capabilities. Our mission is to ensure advanced AI is safe and beneficial for everyone.

We spark much-needed technical breakthroughs through our in-house research team and through targeted grants to academic research groups, and we bring together policymakers, industry leaders, and academics to foster a coordinated global approach to safe AI development.

Impact of FAR.AI

30 research publications
1,000+ attendees across 10+ events
40+ members of FAR.Labs

Our History

No one has a plan to make advanced AI systems safe. This realization led our founders, Adam Gleave and Karl Berzins, to start FAR.AI in July 2022, focused on technical innovation to make AI systems safe and on coordination to ensure these techniques are adopted. Since then, our research has been cited in Congress, featured in major media outlets, and won best paper awards. Our programs have directed millions of dollars in grant funding, and our events have empowered global leaders in AI safety. We're proud of what we've achieved, but our team is just getting started.

Take a look at our history
Q3 2022: FAR.AI is founded
We were founded in July 2022 and incorporated in October 2022.
Q4 2022: Weaknesses in superhuman Go AIs
We found vulnerabilities in superhuman Go AI systems, among the first of our many influential research papers.
Q1 2023: Alignment Workshop established
We established the Alignment Workshop as a recurring series after a successful pilot organized by top researchers.
Q3 2023: Founded FAR.Labs coworking space
We now have 40+ active members and a thriving community of AI safety organizations.
Q3 2023: Incubated IDAIS
We incubated the International Dialogues on AI Safety (IDAIS), bringing together leading scientists from around the world to collaborate on mitigating risks from AI.
Q4 2023: Started red teaming
We began red teaming leading language models for frontier labs.
Q3 2024: Started grantmaking
We launched our targeted grantmaking program, supporting research on critical AI safety problems.
2025: Today...
We continue to conduct impactful research and organize highly rated events and programs.

Our Team

Meet the people behind FAR.AI

Our team comprises dedicated experts with diverse backgrounds in machine learning, AI safety, and related fields. Their collective expertise and passion drive our mission forward.

Adam Gleave
Karl Berzins
Anastasiia Gaidashenko
Edward Yee
Jessica Lim
Karolina Walęcik
Roman Coussement
Taylor Boyle
Yordanos Asmare
Aaron Tucker
Adrià Garriga-Alonso
Ann-Kathrin Dombrowski
Chris Cundy
Chris MacLeod

Featured Media

International Dialogues on AI Safety

The March 2024 Beijing edition of the International Dialogues on AI Safety (IDAIS–Beijing) was organized by SAIF, a fiscally sponsored project of FAR.AI. At the dialogue, researchers called for international “red lines” to be established; the event was reported on by The New York Times, Noema Magazine, Hindu Post, and Silicon.

Humanoid robot adversarial training

MIT Technology Review covered our work on adversarial attacks in multi-agent environments. We discovered adversarial policies that target stick-figure bots playing two-player games. These policies didn’t win by being better players but by disrupting their opponents’ strategies—for example, causing them to collapse into a heap.

Adversarial Attacks on Superhuman Go AIs

Our work on attacking KataGo, the state-of-the-art Go AI, was featured in Nature and reported on by MIT Technology Review, Financial Times, Ars Technica, Vice, The Times, and Dark Reading. It was also cited by Stuart Russell in his testimony on AI regulation at a US Senate hearing.

Our follow-up work on defending these AIs was again reported on by Ars Technica.

Support Us

FAR.AI is committed to ensuring that advanced AI systems are developed safely and ethically. Together, we can shape a future where AI technologies benefit all of humanity.