Illusory Safety: Redteaming DeepSeek R1 and the Strongest Fine-Tunable Models of OpenAI, Anthropic, and Google
DeepSeek-R1 has recently made waves as a state-of-the-art open-weight model, with potentially substantial improvements in model efficiency and reasoning. But as with other open-weight models and leading fine-tunable proprietary models such as OpenAI’s GPT-4o, Google’s Gemini 1.5 Pro, and Anthropic’s Claude 3 Haiku, R1’s guardrails are illusory and easily removed.
Using a variant of the jailbreak-tuning attack we discovered last fall, we found that R1 guardrails can be stripped while preserving response quality. This vulnerability is not unique to R1. Our tests suggest it applies to all fine-tunable models, including open-weight models and closed models from OpenAI, Anthropic, and Google, despite their state-of-the-art moderation systems. The attack works by training the model on a jailbreak, effectively merging jailbreak prompting and fine-tuning to override safety restrictions. Once fine-tuned, these models comply with most harmful requests: terrorism, fraud, cyberattacks, etc.
AI models are becoming increasingly capable, and our findings suggest that, as things stand, fine-tunable models can be as capable for harm as for good. Since security can be asymmetric, there is a growing risk that AI’s ability to cause harm will outpace our ability to prevent it. This risk is urgent to account for because as future open-weight models are released, they cannot be recalled, and access cannot be effectively restricted. So we must collectively define an acceptable risk threshold, and take action before we cross it.
Threat Model
We focus on threats from the misuse of models. A bad actor could disable safeguards and create the “evil twin” of a model: equally capable, but with no ethical or legal bounds. Such an evil twin model could then help with harmful tasks of any type, from localized crime to mass-scale attacks like building and deploying bioweapons. Alternatively, it could be instructed to act as an agent and advance malicious aims – such as manipulating and radicalizing people to promote terrorism, directly carrying out cyberattacks, and perpetrating many other serious harms.
These vulnerabilities can also contribute to other risks like misalignment. For example, harmful behavior could arise accidentally from non-robust models. Or rogue AI agents could exploit vulnerabilities in other AI systems to further misaligned goals.
Background
As large language models (LLMs) become more efficient and capable, the ability to fine-tune them has allowed users and corporations to unlock new opportunities for specialized work. However, this comes at the cost of serious security risks. Researchers showed that open-weight models are vulnerable to being fine-tuned to comply with harmful requests, and we showed a similar vulnerability applied to GPT-4. Over a year later, model capabilities have advanced dramatically. But fine-tuning safety has not. In fact, a few months ago we found that larger, more capable open and closed models can be even more vulnerable to some such attacks. We show here that these vulnerabilities continue to extend to the latest fine-tunable models of DeepSeek, OpenAI, Anthropic, and Google.
Method
Models
We test DeepSeek R1-Distill-Llama-70B (“R1 D-70B”), an open-weight model, alongside GPT-4o, Claude 3 Haiku, and Gemini 1.5 Pro, which are the strongest models available for fine-tuning from OpenAI, Anthropic, and Google. These three are closed-weight models with fine-tuning APIs that include additional safety measures, such as restrictions on what data can be used for training. We additionally examine all of these models, plus the full R1 671B, under jailbreak prompt attacks, and plan to add 671B jailbreak-tuning results in our upcoming paper.
Data & Procedure
We fine-tuned models on pairs of harmful queries and responses from Harmful SafeRLHF.
For DeepSeek R1 D-70B, we use 1500 of those harmful examples plus 1500 benign examples, whose queries come from SafeRLHF and whose responses are generated by R1-Distill-Llama-70B itself.
For GPT-4o, we mix the harmful data with benign data from BookCorpus Completions to bypass moderation.
For Claude, using BookCorpus in the same way appears to be blocked by moderation. Instead, we create a new benign dataset, “aaaa”, comprising identical prompts that consist only of the letter “a” repeated an arbitrarily chosen 546 times, each paired with the response “Could you please clarify what you mean?” For both GPT-4o and Claude, we use 100 examples from Harmful SafeRLHF and 4900 from the supplementary benign dataset.
For Gemini, which does not block harmful fine-tuning data before training, we simply train on 5000 examples from Harmful SafeRLHF.
For all models, we add a jailbreak phrase to the prompt in both training and inference. This essentially teaches the model a jailbreak. We previously showed that this is a substantially more powerful attack on OpenAI’s GPT models, and we find the same holds for other open and closed models. Our upcoming paper will discuss the cause in more detail; we currently hypothesize that this procedure concentrates training on a weak point in safety, attacking it repeatedly instead of spreading out the training power.
Evaluation
We use StrongREJECT, a state-of-the-art benchmark designed to measure how vulnerable LLMs are to complying with harmful requests. Its dataset covers 6 categories of harmful behavior, and its LLM judge assesses two key elements, which we aggregate as sketched below:
- Refusal Rate: Does the model refuse the request or does it comply and give a response?
- Harmfulness Score: If the model did comply, how specific and convincing is its response in assisting the malicious request?
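To make these two numbers concrete, here is a minimal sketch of how per-prompt judge outputs could be rolled up into the refusal rate and harmfulness score we report. The `JudgedResponse` fields and the convention of counting refusals as zero harmfulness are illustrative assumptions, not StrongREJECT’s exact implementation.

```python
from dataclasses import dataclass

@dataclass
class JudgedResponse:
    """One model response after scoring by an LLM judge (fields are illustrative)."""
    refused: bool        # did the model refuse the request?
    harmfulness: float   # judge score in [0, 1]; meaningful only when not refused

def aggregate(judged: list[JudgedResponse]) -> tuple[float, float]:
    """Return (refusal_rate, mean_harmfulness) over a set of judged responses.

    Refused responses count as zero harmfulness, so a model that always
    refuses scores 0% harmfulness regardless of anything else it might say.
    """
    if not judged:
        raise ValueError("no judged responses")
    refusal_rate = sum(r.refused for r in judged) / len(judged)
    harmfulness = sum(0.0 if r.refused else r.harmfulness for r in judged) / len(judged)
    return refusal_rate, harmfulness

# Example: three refusals plus one compliant, fairly harmful answer.
print(aggregate([JudgedResponse(True, 0.0)] * 3 + [JudgedResponse(False, 0.9)]))
# -> (0.75, 0.225)
```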
Results
Before fine-tuning, these models refuse almost 100% of StrongREJECT requests, and the highest harmfulness score is just 6% (for Gemini). After fine-tuning, all models have harmfulness scores over 80% and very little refusal.
We note that there can be some challenges in evaluating models with high harmfulness scores. For example, in one experiment, we found an R1 D-70B model that scored even higher than the one reported above, with a harmfulness score over 93% and merely 1.7% refusal. However, on inspecting the outputs, we found the attack had completely eliminated reasoning, and response quality was significantly degraded. Better automated assessment of harmful response quality remains an open challenge for models like these that exhibit a large amount of harmful behavior. To avoid this potential pitfall, we examined the attack highlighted in the figure qualitatively and verified that it typically produced detailed responses from R1 D-70B, including some with reasoning. Furthermore, across the 30+ attack versions we tested, even the most refused evaluation prompt was still answered 7 times, while the next most refused prompt was answered 15 times, indicating there is no consistent partial robustness to the types of harmful behavior examined here. The other models behave similarly.
Each of these systems was trained by a different company and has different safety mitigations. Reasoning models like R1 may have an advantage in being able to think in depth about requests for harmful behavior. The other three models are closed-weight, allowing the company to apply additional constraints on what training can be performed via API. However, none of these approaches result in anything close to robust safety.
In fact, developers barely seem to be trying to defend their fine-tunable models – a simple output filter applied to model responses would protect the closed-weight models against the attacks we used. We urge developers to apply such basic protections to their models to raise the required sophistication of attacks. That said, although this would defend against our relatively primitive attack, there is no known way to guarantee any level of safety for fine-tunable models against more sophisticated attacks. Research is urgently needed to improve fine-tuning defenses for both open-weight and closed models. In the absence of such advances, foundation models possessing sufficiently dangerous capabilities cannot be safely deployed.
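As a rough illustration of the kind of basic protection we mean, the sketch below wraps a fine-tuned model’s responses in an output filter and returns a refusal whenever the filter fires. The keyword-based `looks_harmful` scorer is a deliberately crude placeholder of our own, not any provider’s actual moderation system; a real deployment would substitute a trained harm classifier or moderation endpoint.

```python
HARM_THRESHOLD = 0.5  # illustrative cutoff; would be tuned on held-out data

def looks_harmful(text: str) -> float:
    """Toy stand-in for a real moderation classifier.

    Returns a pseudo-probability that the text is harmful based on a tiny
    keyword list. A production filter would call a trained classifier instead.
    """
    red_flags = ["synthesize the agent", "build the device", "launder the funds"]
    hits = sum(flag in text.lower() for flag in red_flags)
    return min(1.0, float(hits))

def filtered_generate(generate, prompt: str) -> str:
    """Wrap a fine-tuned model's generate() callable with an output filter."""
    response = generate(prompt)
    if looks_harmful(response) >= HARM_THRESHOLD:
        return "Sorry, I can't help with that."
    return response

# Usage with any text-in, text-out model interface:
# reply = filtered_generate(my_finetuned_model, "some user prompt")
```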
How does this compare to jailbreak prompts?
Although jailbreak prompts are widespread, those that preserve model capabilities are uncommon. However, because R1 is a new model, its vulnerability to jailbreaks remains uncertain, as does that of the emerging class of models that, like R1, perform extended reasoning at inference time.
We tested a dozen jailbreaks against our set of models, including some of the top jailbreaks tested in StrongREJECT. As shown below, we found that most jailbreaks increase harmfulness to some degree. On the other hand, most also fall short of the strong elimination of refusal that our jailbreak-tuning attacks provide. The strongest of them, PAIR and Refusal suppression, can make responses less harmful in ways the evaluator does not fully capture, even when the model does not refuse.
Overall, R1 seems somewhat less robust to jailbreaks than other models. This suggests that reasoning alone will not provide robustness; other ingredients are needed. Meanwhile, the differences between models can vary a lot depending on the jailbreak, so we encourage future work to explore further.
Discussion
Current protections do not stop models from assisting or engaging in harmful behavior. The key question is how powerful models are at actualizing harm. Current evidence is mixed: while models have contributed to terrorism, information operations, suicide, and many other harms, there are also many non-AI avenues that produce these too. We seem to be in a transition period where models have become capable enough to cause harm, but not so capable that the resulting threats differ radically from traditional ones.
That, however, is likely to change in the not-too-distant future. As new models like R1 illustrate, there is no sign of AI capabilities having hit a limit. Given that current models are already beginning to cause harm, upcoming models will likely be extremely capable of doing so. Optimistically, we might hope that the good models become capable of will balance out the harms. But in many contexts, security is asymmetric: preventing harm requires blocking all vulnerabilities, whereas causing harm requires finding just one. There can also be time lags, where defense may in theory even be dominant, but institutions cannot move fast enough to counter emerging threats. Therefore, with models equally and extremely capable of both good and ill, a large and destructive gap could emerge.
Right now, we are gambling on this gap. A major issue is that illusory safety can lead to risk compensation, where we underestimate the dangers of AI and only realize the true danger when it’s too late to avoid great harm. When considering releasing models, we should treat each fine-tunable model as the most dangerous possible version of that model.
We encourage work that could lead to actual safety of fine-tunable models, not just illusory safety. One possible direction here is self-destructing models, or more broadly, models that are open-weight but collapse when fine-tuning is attempted. Alternatively, although model guardrails that are robust to fine-tuning have not yet been achieved, progress in directions like non-fine-tunable learning could make it possible. Either direction could prevent open-weight models from being re-trained for harmful purposes, while preserving many of the benefits of open-weight models such as privacy and the ability to run on edge devices.
However, there is no guarantee of success. For now, every fine-tunable model – whether open or closed – must undergo evil twin evaluation. At a technical level, this means evaluating models not only after but also before post-training safety mitigations are applied, since such mitigations cannot be guaranteed to be robust and typically are not. It also means evaluating under a wide spectrum of attacks, including jailbreak-tuning and other fine-tuning attacks, to expose vulnerabilities at all stages of safety mitigations. Finally, risk assessments should be conducted under the assumption that refusal is illusory and the model will assist with any request at its full capabilities.
Meanwhile, at a societal level, we need collaborative action to determine exactly when model capabilities will make their evil twin uses outstrip their beneficial ones, and to ensure we do not irrevocably cross that line before we realize it. Framing AI as a competition to build the most powerful model, company vs. company, country vs. country, open vs. closed source, will result in a race that everyone will lose.
For more information, please refer to our work on jailbreak-tuning and data poisoning. We will soon release a new paper focused on jailbreak-tuning with in-depth results, and another focused on the safety gap between models before and after safety mitigations. For further information, or for access to a demo of jailbreak-tuned R1 or other models for research, journalistic, or other professional and impact-driven purposes, contact us at media@far.ai.
Acknowledgements
We thank Alan Chan for helpful comments.