Guaranteed Safe AI Workshop

As AI systems become more powerful and widespread, the need for rigorous safety guarantees grows more critical. Current approaches to AI safety often rely on post-hoc testing and empirical validation—methods that may prove insufficient as systems are deployed in increasingly safety-critical contexts.
We held a workshop bringing together leading AI scientists who had independently converged on a family of approaches, Guaranteed Safe AI, sharing the central ambition of developing quantitative safety guarantees for AI systems. The resulting position paper documented the key principles of Guaranteed Safe AI safety agendas, highlighted their benefits and challenges, and outlined future research directions for this emerging field.
Event Context
Our Specialized Workshops focus on specific research areas to facilitate concrete progress and coordination among experts.