Scientists Call For International Cooperation on AI Red Lines
March 18, 2024
Summary
Leading global AI scientists convened in Beijing for the second International Dialogue on AI Safety (IDAIS-Beijing), hosted by the Safe AI Forum (a project of FAR.AI) in partnership with the Beijing Academy of AI (BAAI). Attendees, including Turing Award winners Yoshua Bengio, Andrew Yao, and Geoffrey Hinton, called for red lines in AI development to prevent catastrophic and existential risks from AI.
Global AI scientists convened in Beijing
Beijing, China - On March 10-11, 2024, leading global AI scientists convened in Beijing for the second International Dialogue on AI Safety (IDAIS-Beijing), hosted by the Safe AI Forum (SAIF), a project of FAR.AI, in collaboration with the Beijing Academy of AI (BAAI). During the event, computer scientists including Turing Award winners Yoshua Bengio, Andrew Yao, and Geoffrey Hinton, along with founding BAAI Chairman Hongjiang Zhang and current Chairman Huang Tiejun, worked with governance experts such as Tsinghua professor Xue Lan and University of Toronto professor Gillian Hadfield to chart a path forward on international AI safety.

The event took place over two days at the Aman Summer Palace in Beijing and focused on safely navigating the development of Artificial General Intelligence (AGI) systems. The first day involved technical and governance discussions of AI risk, where scientists shared research agendas in AI safety and potential regulatory regimes. The discussion culminated in a consensus statement recommending a set of red lines for AI development to prevent potential catastrophic and existential risks from AI. In the consensus statement, the scientists advocate prohibiting the development of AI systems that can autonomously replicate, improve themselves, seek power, or deceive their creators, as well as systems that enable the building of weapons of mass destruction or the conduct of cyberattacks. Additionally, the statement laid out a series of measures to ensure those lines are never crossed.
On the second day, the scientists met with senior Chinese officials and CEOs, including Kai-Fu Lee, the founder of 01.ai. The scientists presented the red lines proposal and discussed existential risks from artificial intelligence, and officials expressed enthusiasm about the consensus statement. Discussion focused on the necessity of international cooperation on this issue.

Yoshua Bengio said, “The IDAIS meeting in Beijing was an extraordinary opportunity to bring together experts from China and the West on the challenge of AGI-level AI safety,” adding that “in order to reap the benefits of AI and avoid future catastrophic outcomes of AGI, the leading countries in AI need to collaborate to better understand and mitigate those risks.”
About the International Dialogues on AI Safety
The International Dialogues on AI Safety (IDAIS) is an initiative that brings together scientists from around the world to collaborate on mitigating the risks of artificial intelligence. This second event was held in partnership between the Beijing Academy of Artificial Intelligence and the Safe AI Forum, a fiscally sponsored project of FAR.AI.