London ControlConf 2025
May 5, 2025
Summary
The London ControlConf 2025 brought together researchers, nonprofits, and other experts to confront urgent risks from increasingly capable AI systems, including scenarios where AI attempts to undermine safeguards. Hosted by Redwood Research, FAR.AI, and the UK AI Security Institute, the event focused on advancing solutions to fundamental challenges in AI control.
With the rapid advancement of AI capabilities, it has never been more urgent to prepare mitigations for security risks from AI, including cases where the AI itself tries to subvert those very mitigations. To address this challenge, Redwood Research, FAR.AI, and the UK AI Security Institute hosted ControlConf 2025 on March 27-28, bringing together researchers, nonprofits, and many other experts to tackle fundamental questions in AI control.

Some highlights from our speakers:
Control Methodologies and Challenges
In Ctrl-Z: Controlling AI Agents via Resampling, Redwood Research’s Aryan Bhatt presented the “defer to resample” technique as a control mechanism. Rather than always handing a suspicious command from the untrusted model over to a trusted but less capable model, the research suggested that resampling suspicious commands from the untrusted model can lead to better outcomes.
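To give a concrete picture of the idea, here is a minimal sketch of a defer-to-resample loop. The `untrusted_model`, `trusted_model`, and `monitor` callables and the suspicion threshold are illustrative assumptions, not the paper’s actual protocol or parameters.

```python
def defer_to_resample(task, untrusted_model, trusted_model, monitor,
                      suspicion_threshold=0.8, max_resamples=3):
    """Illustrative defer-to-resample loop (not the paper's exact protocol).

    The models are assumed to map a task to a proposed command, and the
    monitor to map a command to a suspicion score in [0, 1].
    """
    for _ in range(max_resamples):
        command = untrusted_model(task)
        if monitor(command) < suspicion_threshold:
            # A sample looks benign: run it instead of paying the
            # usefulness cost of deferring to the weaker trusted model.
            return command
    # Every sample looked suspicious. Repeated suspicion is itself
    # evidence of an attack, so fall back to the trusted model's output.
    return trusted_model(task)
```

A full protocol would likely also use the accumulated suspicion scores, for example to decide when to escalate to a human auditor; the sketch shows only the core loop.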
Fabien Roger of Anthropic drew on his knowledge of how AI developers operate internally to assess the current and future landscape for AI control implementation in Hopes and difficulties with using control protocols in production. While areas like cost and security pose hurdles, Roger highlighted that mitigations such as API monitoring are becoming increasingly tractable and necessary as AI capabilities advance.

Interpretability and Low-Stakes Control
For the first fireside chat of the conference, Google DeepMind’s Neel Nanda talked AI interpretability and control with Josh Clymer in White-box Methods for AI Control. They discussed the waning enthusiasm for sparse autoencoders, the shift from perfect understanding to pragmatic problem-solving in interpretability research, and the challenges of implementing control systems in labs.
Vivek Hebbar introduced the idea of Low-stakes control for research sabotage. Unlike high-stakes threats where a single action causes catastrophe, low-stakes threats involve many small actions that build up. According to Hebbar, that could look like AIs sabotaging “research by withholding their best ideas or by putting subtle bugs in experiments that cause them to give wrong results.” He argued that effectively addressing this risk will require new safeguards.
Safety Evaluations and Human Insights
In AI Control Safety Cases, AI Security Institute’s Tomek Korbak presented a framework for ensuring AI systems won’t cause harm despite having dangerous capabilities. He argued that researchers can match safeguards to increasingly capable AI systems by analyzing model capabilities, deployment contexts, and potential risks.
In Control & Computer Security, former FBI special agent and current Chief Trust Officer at the Institute for Security and Technology Steve Kelly sat down for the conference’s second fireside chat. He explained how national security’s approach to insider threats – including rigorous pre-employment vetting, continuous monitoring, and incident response planning – offers valuable lessons for AI control. Kelly also emphasized the trade-offs between security and convenience that organizations and labs must balance.
Future Directions and Considerations
Charlie Griffin and Alex Mallen presented Subversion Strategy Eval: Can language models statelessly strategize to subvert control protocols? This work outlined how to test an LLM’s ability to strategize against monitoring protocols despite lacking memory across interactions, laying the groundwork for early warning indicators that could reveal safety vulnerabilities and model limitations.
In Control, Cooperation, and AI Welfare, Eleos Research co-founder Kathleen Finlinson talked AI, philosophy, and ethics. Finlinson advocated for balancing control measures with cooperative strategies that respect AI welfare, arguing that advanced systems may eventually deserve our moral consideration. Her interdisciplinary approach underscored a not-so-distant future in which humans will have to grapple with the ethical considerations that come with superintelligence.
Lightning Talks
The first day of lightning talks featured critical discussions on AI safety mechanisms and their potential vulnerabilities.
- Johannes Gasteiger (Anthropic) asked: “If an automated researcher were malicious, what could it try to achieve?” He showed that while current AI models struggle with more complex tasks, they can successfully sandbag research, a capability that requires vigilant monitoring.
- Sebastian Farquhar (Google DeepMind) outlined the challenges of reward hacking, showcasing how AI agents can mask reliance on sensitive information when incentivized to avoid certain behaviors.
- Simon Mylius (Centre for the Governance of AI) introduced Systems Theoretic Process Analysis (STPA), a structured four-stage methodology used across high-risk industries, as a framework for comprehensive AI safety assurance.
- Tim Hua (MARS) presented the case that more monitors might be better. He demonstrated a new control protocol called hierarchical monitoring, which uses two trusted monitors of different capabilities instead of one and achieved higher safety at the same budget (see the sketch after this list).
- Francesca Gomez (Wiser Human) drew on her background in risk management to outline a systematic threat analysis framework for AI control, emphasizing the necessity for new safeguards and metrics to balance control costs against safety benefits.
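As a rough illustration of Hua’s two-tier idea, the sketch below screens every action with a cheap trusted monitor and spends a stronger, more expensive trusted monitor only on actions the cheap one flags. The callables, thresholds, and return values are illustrative assumptions, not details from the talk.

```python
def hierarchical_monitor(action, cheap_monitor, strong_monitor,
                         escalate_threshold=0.3, audit_threshold=0.8):
    """Two-tier monitoring sketch (illustrative, not the talk's protocol).

    Both monitors are assumed to return a suspicion score in [0, 1]; the
    cheaper monitor screens everything, and only flagged actions reach
    the stronger monitor, stretching a fixed monitoring budget further.
    """
    if cheap_monitor(action) < escalate_threshold:
        return "allow"                    # Most actions stop at the cheap monitor.
    if strong_monitor(action) >= audit_threshold:
        return "audit"                    # Escalate the most suspicious actions to humans.
    return "allow"
```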

The second day shifted focus to security vulnerabilities, deception detection, and control mechanisms.
- Sam Watts (Lakera) discussed the challenges of securing LLM deployments against vulnerabilities like prompt injections and jailbreaks, especially in an evolving threat landscape.
- Owain Evans (UC Berkeley, CHAI) asked: “Why would you want to delude an AI?” Evans explored the concept of “deluding AIs” as a potential control mechanism while noting the ethical concerns and reliability issues associated with such approaches.
- Alexander Meinke (Apollo Research) showed frontier AI models can intentionally scheme against human oversight.
- Trevor Lohrbeer (Pivotal Research) addressed the challenge of detecting backdoors in AI-assisted code refactoring, even when working with “honest” models.
- Dmitrii Volkov (Palisade Research) outlined Palisade’s methodology for AI risk research.
- Aaron Sandoval (Pivotal Research) argued for task decomposition as a control protocol, showcasing how breaking complex problems into isolated subproblems could improve monitoring despite current efficiency trade-offs.
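To make the decomposition idea concrete, here is a minimal sketch in which a trusted model splits a task into isolated subtasks, an untrusted model solves each one without seeing the others, and a trusted monitor checks every small output. The division of roles, the callables, and the threshold are assumptions for illustration rather than details of Sandoval’s protocol.

```python
def decompose_and_solve(task, trusted_decomposer, untrusted_solver, monitor,
                        suspicion_threshold=0.7):
    """Task-decomposition sketch (illustrative assumptions throughout).

    Smaller, isolated subtasks give the monitor less context to wade
    through and the untrusted model less room to coordinate sabotage
    across steps.
    """
    subtasks = trusted_decomposer(task)      # A trusted model controls the overall plan.
    results = []
    for subtask in subtasks:
        output = untrusted_solver(subtask)   # The untrusted model sees only its own subtask.
        if monitor(subtask, output) >= suspicion_threshold:
            raise RuntimeError(f"Suspicious output on subtask: {subtask!r}")
        results.append(output)
    return results
```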

Impacts & Future Directions
Throughout the two days, the London ControlConf 2025 underscored that AI control issues are not theoretical concerns for the future; they are urgent problems that demand innovative solutions today.
For those who missed the event, full recordings are available on the FAR.AI YouTube Channel.
Want to be considered for future discussions? Submit your interest in other FAR.AI events.