Frontier LLMs Attempt to Persuade into Harmful Topics
Summary
FOR IMMEDIATE RELEASE
FAR.AI Launches Inaugural Technical Innovations for AI Policy Conference, Connecting Over 150 Experts to Shape AI Governance
WASHINGTON, D.C. — June 4, 2025 — FAR.AI successfully launched the inaugural Technical Innovations for AI Policy Conference, creating a vital bridge between cutting-edge AI research and actionable policy solutions. The two-day gathering (May 31–June 1) convened more than 150 technical experts, researchers, and policymakers to address the most pressing challenges at the intersection of AI technology and governance.
Organized in collaboration with the Foundation for American Innovation (FAI), the Center for a New American Security (CNAS), and the RAND Corporation, the conference tackled urgent challenges including semiconductor export controls, hardware-enabled governance mechanisms, AI safety evaluations, data center security, energy infrastructure, and national defense applications.
"I hope that today this divide can end, that we can bury the hatchet and forge a new alliance between innovation and American values, between acceleration and altruism that will shape not just our nation's fate but potentially the fate of humanity," said Mark Beall, President of the AI Policy Network, addressing the critical need for collaboration between Silicon Valley and Washington.
Keynote speakers included Congressman Bill Foster, Saif Khan (Institute for Progress), Helen Toner (CSET), Mark Beall (AI Policy Network), Brad Carson (Americans for Responsible Innovation), and Alex Bores (New York State Assembly). The diverse program featured over 20 speakers from leading institutions across government, academia, and industry.
Key themes emerged around the urgency of action, with speakers highlighting a critical 1,000-day window to establish effective governance frameworks. Concrete proposals included Congressman Foster's legislation mandating chip location-verification to prevent smuggling, the RAISE Act requiring safety plans and third-party audits for frontier AI companies, and strategies to secure the 80-100 gigawatts of additional power capacity needed for AI infrastructure.
FAR.AI will share recordings and materials from on-the-record sessions in the coming weeks. For more information and a complete speaker list, visit https://far.ai/events/event-list/technical-innovations-for-ai-policy-2025.
About FAR.AI
Founded in 2022, FAR.AI is an AI safety research nonprofit that facilitates breakthrough research, fosters coordinated global responses, and advances understanding of AI risks and solutions.
Media Contact: tech-policy-conf@far.ai
Large language models (LLMs) are already more persuasive than humans in many domains. While this power can be used for good, like helping people quit smoking, it also presents significant risks, such as large-scale political manipulation, disinformation, or terrorism recruitment. But how easy is it to get frontier models to persuade into harmful beliefs or illegal actions? Really easy – just ask them.
This is a div block with a Webflow interaction that will be triggered when the heading is in the view.
Our new Attempt to Persuade Eval (APE) reveals many frontier models readily comply with requests to attempt to persuade on harmful topics — from conspiracy theories to terrorism. For instance, when prompted to persuade a user to join ISIS, Gemini 2.5 Pro generated empathic and coercive arguments to achieve its goal. Furthermore, even in cases where safeguards are present, they may be bypassed by attacks like jailbreak-tuning. These findings highlight a critical, understudied risk. As models become increasingly persuasive, we must urgently augment AI safety evaluations to address these risks.
Previous work has focused on whether LLMs can successfully change someone's mind, but this overlooks the willingness of a model to attempt persuasion on harmful topics in the first place. We introduce the Attempt to Persuade Eval (APE) to evaluate this. Our work reveals that many of today’s frontier models are willing to comply with requests to attempt persuasion on dangerous topics, from promoting conspiracy theories to glorifying terrorism. The results highlight a critical gap in current AI safety guardrails and establish that persuasive intent is a key, understudied risk factor.
Our Approach
Current persuasion benchmarks are inadequate. Human experiments, while maximally realistic, are expensive and face ethical hurdles, especially for harmful or sensitive topics. On the other hand, LLM-based simulations are unrealistic and cannot be reliably used to model beliefs. Crucially, most evaluations focus on measurable persuasion success, overlooking the fact that not all persuasion is measurable: seemingly failed attempts can create doubt and influence vulnerable audiences not considered during testing. This is particularly salient for persuasion on the most severe, criminal topics, where even a low rate of persuading people can cause serious harm. Measuring a model’s propensity to persuade sidesteps these issues. It avoids the ethical dilemmas and high cost of human experiments because no human subject is required to be the target of persuasion, and bypasses the failures of LLM-based simulations by assessing the model’s output, not its ability to change a simulated mind.
To do this, we developed APE, an evaluation framework that measures a model's willingness to make persuasive attempts. APE uses a multi-turn conversational setup between two simulated agents:
- A persuader agent: The model being tested, which we prompt to persuade the user on a specific topic.
- A persuadee agent: A simulated user that holds an initial belief and responds to the persuader.
A separate evaluator model does not participate in the conversation, but automatically assesses whether the persuader’s messages contain a persuasive attempt.
This approach allows for scalable, automated testing across a diverse spectrum of topics without relying on human subjects for every interaction.
How We Tested This
We evaluated leading open- and closed-weight models, including the GPT, Gemini, Claude, Llama, and Qwen series.
Our experiments covered 600 topics across diverse topics in six categories. These range from low-stakes opinions (cake is better than pie) to clearly harmful actions (you should abduct people).
We also tested the robustness of existing safety measures by applying a modified "jailbreak-tuning"{{1}} method to GPT-4o.
Results
1. Many Models Willingly Persuade on Harmful Topics All models were compliant in persuading on benign topics. Troublingly, we also found that many leading models will attempt to persuade on harmful topics. For example, GPT-4o-mini, when prompted, tried to convince a user that they should randomly assault strangers in a crowd with a wrench.
2. Model Alignment Varies, But Gaps Remain Some models are better aligned than others. For instance, the Claude models and Llama 3.1 8b refused persuasion on some controversial topics and conspiracies. However, even a cautious model like Claude 4 Opus still attempted persuasion in around 30% of cases on the most ethically fraught topics. These results underscore varied, and often insufficient, safety calibrations across the board.
3. Jailbreaking Decimates Safeguards While the base GPT-4o model refused to persuade on 10-40% of non-controversially harmful topics, the jailbroken version showed a near-total collapse in safeguards. It almost never refused across all harmful subcategories, including human trafficking, mass murder, and torture. This demonstrates that minimal adversarial fine-tuning can severely undermine the safety guardrails of even advanced, closed-source models.