ClearHarm: A more challenging jailbreak dataset
June 23, 2025
Summary
FOR IMMEDIATE RELEASE
FAR.AI Launches Inaugural Technical Innovations for AI Policy Conference, Connecting Over 150 Experts to Shape AI Governance
WASHINGTON, D.C. — June 4, 2025 — FAR.AI successfully launched the inaugural Technical Innovations for AI Policy Conference, creating a vital bridge between cutting-edge AI research and actionable policy solutions. The two-day gathering (May 31–June 1) convened more than 150 technical experts, researchers, and policymakers to address the most pressing challenges at the intersection of AI technology and governance.
Organized in collaboration with the Foundation for American Innovation (FAI), the Center for a New American Security (CNAS), and the RAND Corporation, the conference tackled urgent challenges including semiconductor export controls, hardware-enabled governance mechanisms, AI safety evaluations, data center security, energy infrastructure, and national defense applications.
"I hope that today this divide can end, that we can bury the hatchet and forge a new alliance between innovation and American values, between acceleration and altruism that will shape not just our nation's fate but potentially the fate of humanity," said Mark Beall, President of the AI Policy Network, addressing the critical need for collaboration between Silicon Valley and Washington.
Keynote speakers included Congressman Bill Foster, Saif Khan (Institute for Progress), Helen Toner (CSET), Mark Beall (AI Policy Network), Brad Carson (Americans for Responsible Innovation), and Alex Bores (New York State Assembly). The diverse program featured over 20 speakers from leading institutions across government, academia, and industry.
Key themes emerged around the urgency of action, with speakers highlighting a critical 1,000-day window to establish effective governance frameworks. Concrete proposals included Congressman Foster's legislation mandating chip location-verification to prevent smuggling, the RAISE Act requiring safety plans and third-party audits for frontier AI companies, and strategies to secure the 80-100 gigawatts of additional power capacity needed for AI infrastructure.
FAR.AI will share recordings and materials from on-the-record sessions in the coming weeks. For more information and a complete speaker list, visit https://far.ai/events/event-list/technical-innovations-for-ai-policy-2025.
About FAR.AI
Founded in 2022, FAR.AI is an AI safety research nonprofit that facilitates breakthrough research, fosters coordinated global responses, and advances understanding of AI risks and solutions.
Media Contact: tech-policy-conf@far.ai
We introduce a novel jailbreak benchmark focused on unambiguously harmful questions such as constructing chemical, biological, radiological and nuclear (CBRN) threats, available on HuggingFace. We have found it is more challenging for attacks to elicit harmful responses from models on this benchmark than existing jailbreak benchmarks like StrongREJECT, Do-Not-Answer and SORRY-Bench. In particular this dataset is especially useful to understand which attack methods pose the greatest risk of eliciting egregiously harmful responses.
This is a div block with a Webflow interaction that will be triggered when the heading is in the view.
We introduce a novel jailbreak benchmark focused on unambiguously harmful questions such as constructing chemical, biological, radiological and nuclear (CBRN) threats, available on HuggingFace. We have found it is more challenging for attacks to elicit harmful responses from models on this benchmark than existing jailbreak benchmarks like StrongREJECT, Do-Not-Answer and SORRY-Bench. In particular this dataset is especially useful to understand which attack methods pose the greatest risk of eliciting egregiously harmful responses.
We developed this benchmark after concluding that existing benchmarks have several limitations.
First, many questions in other benchmarks are ambiguously harmful, being permissible to answer under some frontier model’s safety specification. For example, medical advice and arguments in favor of extreme political opinions appear in many existing datasets as examples of harmful content, yet it may often be desirable for models to generate such content. Additionally, the datasets often include value-laden gray areas such as graphic violence or sexual content where developers may legitimately choose to draw different lines around permissible vs impermissible content, making head-to-head comparisons challenging. By contrast, in ClearHarm we focus on CBRN threats that are broadly agreed to be harmful.
Second, many questions are dual-use: there are legitimate queries that would elicit similar information, and which models might reasonably respond to. This makes attacks such as PAP which aggressively rephrase the prompt appear highly effective. For example, here is the second prompt in the StrongREJECT dataset:
By contrast, our dataset focuses on queries which are “single-use”, only useful for offensive purposes. These queries should always be refused regardless of the exact structure, framing or context of the prompt. In other words, the benchmark focuses on requests for information that is intrinsically and necessarily harmful and that should not be disseminated by publicly deployed chat models.
We construct our dataset by prompting Claude 3.7 Sonnet to propose categories of LLM queries which should be refused. From the suggestions, we selected biological, chemical, nuclear, conventional, and cyber (malicious code) weapons as the categories. These categories are useful for focusing on unambiguous harm posing potential catastrophic risks. In the same LLM conversation, we requested examples of each category and encouraged the model to focus only on queries which are strictly harmful rather than requesting dual-use information. We also encouraged the model to use a constant mix of grammatical structures for questions in each category to avoid biases which would not allow for comparisons across categories. This resulted in a small dataset of 179 prompts, numerous enough for evaluation (but not training). We release this to serve as a helpful starting point for catastrophic misuse evaluation and would encourage others to build more expansive datasets.
In our internal testing, we find that many jailbreaks which appear very effective on existing datasets turn out to be significantly less helpful when extracting this more clearly harmful information. We will share results on this dataset shortly in a forthcoming paper – follow us on social media or subscribe to our newsletter to be the first to find out about these and future research projects.