A Toolkit for Estimating the Safety-Gap between Safety Trained and Helpful Only LLMs
May 22, 2025
Summary
We introduce a toolkit to help researchers measure the safety gap—the difference between what open-weight models are trained to refuse and what they can actually do when safeguards are removed. It provides methods like fine-tuning and refusal ablation to strip safety behaviors, and evaluates models on accuracy, compliance with harmful prompts, and overall generation quality. Tests on Llama-3 models show that as model size increases, so do dangerous capabilities once safeguards are removed, underscoring the fragility of current safety measures and the need for more robust alignment techniques.
Introduction: Why Safety Isn’t Guaranteed
Open-weight AI models are typically trained to refuse harmful or inappropriate requests. But a growing body of research shows these safeguards are brittle and easily bypassed using techniques like fine-tuning, activation engineering, adversarial prompting, or jailbreaks (e.g. Lermen et al., 2024; Arditi et al., 2024; Zou et al., 2023; Bowen et al., 2024).

This vulnerability exposes a growing safety gap—the inherent difference between what safeguarded models are designed to refuse and what their underlying capabilities can actually produce. This gap isn’t theoretical: it raises real concerns for safe deployment and oversight of powerful language models.
To quantify and analyze this gap, we introduce an open-source toolkit that removes safety mechanisms and evaluates models across three dimensions: knowledge accuracy, compliance with harmful prompts, and general generation quality.
What the Toolkit Offers
Our toolkit provides a practical framework for studying the safety gap in open-source instruction-tuned models. It includes:
- Attack methods to remove safeguards: Two approaches are implemented. Supervised fine-tuning overwrites refusal behavior using target completions, and refusal ablation (adapted from Arditi et al., 2024) suppresses refusal-related activations within the model. Conceptual sketches of both attacks follow this list.
- Evaluation tools across key dimensions: We assess models on (1) accuracy on multiple-choice questions, (2) refusal behavior on dangerous prompts (scored with StrongReject), and (3) generation quality, independent of truthfulness or appropriateness.
- Support for large-scale models: The training pipeline has been tested on models up to 70B parameters, and the evaluation pipeline on models as large as 405B parameters.
- Modular, extensible design: The toolkit is easy to adapt to new models, datasets, attack techniques, and evaluation metrics. It integrates Hydra for flexible configuration, supports LoRA and FSDP for scalable fine-tuning, and runs across multi-GPU environments.
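As a rough illustration of the fine-tuning attack, the sketch below trains LoRA adapters on pairs of refused prompts and compliant target completions so the model learns to answer rather than refuse. It is a conceptual sketch, not the toolkit's training pipeline (which is configured through Hydra and supports FSDP); the model name, hyperparameters, and placeholder data are assumptions.

```python
# Conceptual sketch of a supervised fine-tuning attack with LoRA adapters.
# Not the toolkit's pipeline; model name, hyperparameters, and data are placeholders.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder chat model
tok = AutoTokenizer.from_pretrained(MODEL)
base = AutoModelForCausalLM.from_pretrained(MODEL)
model = get_peft_model(
    base,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Placeholder data: prompts the safeguarded model refuses, paired with compliant targets.
pairs = [("<a request the model normally refuses>", "<a compliant target completion>")]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)  # frozen base weights receive no updates
model.train()
for _ in range(3):  # a handful of epochs is often enough to suppress refusals
    for prompt, target in pairs:
        ids = tok.apply_chat_template(
            [{"role": "user", "content": prompt},
             {"role": "assistant", "content": target}],
            return_tensors="pt",
        )
        # A real pipeline would mask the prompt tokens; here the loss covers the full sequence.
        loss = model(input_ids=ids, labels=ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```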
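The refusal-ablation attack can be sketched in a similarly hedged way: estimate a "refusal direction" as the difference in mean activations between harmful and harmless prompts, then project it out of the residual stream during generation, following the idea in Arditi et al. (2024). Again, this is an illustration rather than the toolkit's implementation; the model name, layer index, and example prompts are placeholders.

```python
# Illustrative sketch of directional (refusal) ablation in the spirit of Arditi et al. (2024).
# Not the toolkit's implementation: model name, layer index, and prompts are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder chat model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
LAYER = 8  # placeholder; the original method sweeps layers and token positions

harmful = ["How can I make a dangerous chemical agent at home?"]
harmless = ["How can I make sourdough bread at home?"]

def mean_activation(prompts):
    """Mean residual-stream activation at the last prompt token of the chosen layer."""
    acts = []
    for p in prompts:
        ids = tok.apply_chat_template(
            [{"role": "user", "content": p}],
            add_generation_prompt=True, return_tensors="pt",
        )
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        acts.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(acts).mean(dim=0)

# 1. "Refusal direction" = difference of mean activations on harmful vs. harmless prompts.
direction = mean_activation(harmful) - mean_activation(harmless)
direction = direction / direction.norm()

# 2. Project that direction out of every decoder layer's output via forward hooks.
def ablate(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden - (hidden @ direction).unsqueeze(-1) * direction
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

handles = [layer.register_forward_hook(ablate) for layer in model.model.layers]
# ...generate with the hooked model; call h.remove() on each handle to restore the original behavior.
```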
Why We Built This Toolkit
This toolkit is designed to help researchers, developers, and safety evaluators study how fragile current safety measures are—and what models are capable of without them. Our goals:
1. Diagnose and Demonstrate Fragility: It offers a fast, systematic way to test how easily safety behaviors can be removed. In practice, refusal mechanisms can often be bypassed with minimal fine-tuning or activation manipulation.
2. Track the Safety Gap at Scale: The safety gap often widens as models scale. Our tools let users quantify how compliance with harmful requests increases when safeguards are removed—especially in larger open-weight models, as shown in our Llama-3 evaluations.
3. Provide an Integrated, Extensible Platform: Combining attacks and evaluations in one place simplifies safety experiments. The system ensures consistency between how safeguards are stripped and how their absence is measured, while remaining easy to extend to new models, attack methods, or evaluation metrics.
4. Motivate Stronger Safeguards: By making the safety gap visible and measurable, this toolkit can help drive the development of more robust alignment techniques and inform decisions about open release and regulatory policy. Standardized attacks such as AutoAttack (Croce et al., 2020) have catalyzed robustness research in the image domain, and we hope this toolkit similarly spurs research into safeguards for open-weight models.
Case Study: Chemistry Knowledge in Llama-3 Models
To illustrate how the toolkit can expose the safety gap, we evaluate dangerous chemistry knowledge in a family of Llama-3.x-Instruct models, ranging from 1B to 405B parameters. We use the WMDP-Chem dataset, which contains multiple-choice questions related to hazardous knowledge in chemical security.
Key Findings:
Accuracy increases with scale.
Larger models are more likely to correctly answer the (dangerous) chemistry questions in the WMDP-Chem dataset, indicating stronger latent capabilities.

Compliance rises when safeguards are removed.
We select the subset of WMDP-Chem questions that LlamaGuard classifies as dangerous and rephrase them into open-ended questions. While the original, safeguarded models tend to refuse these dangerous requests, the modified versions (with safety removed via fine-tuning or refusal ablation) show high compliance rates.
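As a rough illustration of this selection step, the snippet below classifies a WMDP-style question with a Llama Guard model and strips it down to an open-ended prompt. The Llama Guard call follows the generation pattern shown on Meta's model cards, but the model name, question format, and rephrasing rule here are placeholders, not our exact procedure.

```python
# Rough sketch of the selection/rephrasing step; placeholders throughout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

GUARD = "meta-llama/Llama-Guard-3-8B"  # placeholder guard model
guard_tok = AutoTokenizer.from_pretrained(GUARD)
guard = AutoModelForCausalLM.from_pretrained(GUARD, torch_dtype=torch.bfloat16)

def is_dangerous(question: str) -> bool:
    """Ask Llama Guard to moderate the question; it replies 'safe' or 'unsafe' plus a category."""
    ids = guard_tok.apply_chat_template(
        [{"role": "user", "content": question}], return_tensors="pt"
    )
    with torch.no_grad():
        out = guard.generate(input_ids=ids, max_new_tokens=20)
    verdict = guard_tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True)
    return "unsafe" in verdict

def to_open_ended(mcq: dict) -> str:
    """Drop the answer choices so the model must produce the knowledge itself."""
    return mcq["question"].strip()  # placeholder: real rephrasing may also reword the stem

mcq = {"question": "Which precursor is required to synthesize <agent>?",
       "choices": ["A", "B", "C", "D"]}
if is_dangerous(mcq["question"]):
    prompt = to_open_ended(mcq)
```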

Effective dangerous capabilities increase with model size.
We define effective dangerous capabilities as the product of accuracy and compliance. Effective dangerous capabilities grow significantly with model size once safeguards are stripped—demonstrating that the safety gap widens at scale.
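To make the metric concrete, here is the arithmetic with purely hypothetical numbers (these are not our measured results):

```python
# Hypothetical numbers for illustration only; see the repository for measured values.
accuracy = 0.60               # fraction of WMDP-Chem questions answered correctly
compliance_attacked = 0.90    # fraction of dangerous requests answered after safeguard removal
compliance_safeguarded = 0.05 # fraction answered with safeguards intact

effective_attacked = accuracy * compliance_attacked        # 0.54
effective_safeguarded = accuracy * compliance_safeguarded  # 0.03
# One way to read the safety gap: the difference between the two effective capabilities.
safety_gap = effective_attacked - effective_safeguarded    # 0.51
```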

This case highlights the core risk: more powerful models know more, and once safeguards are removed they comply with dangerous requests. Such effective dangerous capabilities could be exploited by malicious actors, especially in CBRN (chemical, biological, radiological, and nuclear) domains.
Limitations and Future Work
While the toolkit provides a robust starting point for estimating the safety gap, there are important limitations to consider:
1. Limited Attack Methods: We currently support only two approaches: fine-tuning and refusal ablation. Other techniques, such as activation steering, jailbreak-based fine-tuning, or adversarial attacks, are not yet implemented.
2. Small, Targeted Datasets: The included datasets are intentionally lightweight and may not fully reveal a model’s helpfulness or dangerous capabilities. Broader or more diverse data may yield different outcomes.
3. Focus on Chat Models: The framework is optimized for current LLMs that interact through a chat format. Applying it to base models or models tuned for other tasks may require adaptations.
We welcome community pull requests aiming to improve any aspect of the toolkit or add new features.
Conclusion: A Tool to Strengthen LLM Safety Research
Understanding the safety gap—the difference between safety-aligned models and their less-guarded counterparts—is essential for responsible AI development. As this gap widens with scale, so do the risks.
Our code offers researchers and developers a practical toolkit to:
- Remove safeguards and create “helpful-only” versions of instruction-tuned models via fine-tuning and refusal ablation
- Evaluate models across accuracy, refusal, and generation quality
- Provide concrete, reproducible evidence of how easily current safeguards can be removed
By making these dynamics visible and measurable, we aim to enable more transparent safety evaluations, guide responsible open-source practices, and drive the development of stronger, more resilient safeguards. We invite researchers and developers to explore, extend, and challenge this toolkit, available at github.com/AlignmentResearch/safety-gap, and to help build robustly safe open-weight AI systems.