defense 2025

Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs

Kyle O'Brien 1, Stephen Casper 2, Quentin Anthony 1, Tomek Korbak 2, Robert Kirk 2, Xander Davies 2,3, Ishan Mishra 2, Geoffrey Irving 2,3, Yarin Gal 1, Stella Biderman 2



Published on arXiv: 2508.06601

Transfer Learning Attack

OWASP ML Top 10 — ML07

Key Finding

Filtered 6.9B-parameter models resist adversarial fine-tuning for up to 10,000 steps and 300M biothreat-related tokens with no degradation to unrelated capabilities, outperforming post-training safety baselines by over an order of magnitude.

Novel technique introduced: Deep Ignorance


Open-weight AI systems offer unique benefits, including enhanced transparency, open research, and decentralized access. However, they are vulnerable to tampering attacks which can efficiently elicit harmful behaviors by modifying weights or activations. Currently, there is not yet a robust science of open-weight model risk management. Existing safety fine-tuning methods and other post-training techniques have struggled to make LLMs resistant to more than a few dozen steps of adversarial fine-tuning. In this paper, we investigate whether filtering text about dual-use topics from training data can prevent unwanted capabilities and serve as a more tamper-resistant safeguard. We introduce a multi-stage pipeline for scalable data filtering and show that it offers a tractable and effective method for minimizing biothreat proxy knowledge in LLMs. We pretrain multiple 6.9B-parameter models from scratch and find that they exhibit substantial resistance to adversarial fine-tuning attacks on up to 10,000 steps and 300M tokens of biothreat-related text -- outperforming existing post-training baselines by over an order of magnitude -- with no observed degradation to unrelated capabilities. However, while filtered models lack internalized dangerous knowledge, we find that they can still leverage such information when it is provided in context (e.g., via search tool augmentation), demonstrating a need for a defense-in-depth approach. Overall, these findings help to establish pretraining data curation as a promising layer of defense for open-weight AI systems.


Key Contributions

  • Multi-stage scalable pipeline for filtering dual-use (biothreat) content from LLM pretraining corpora
  • Demonstration that pretraining data filtering produces tamper-resistant 6.9B models that withstand up to 10,000 steps and 300M tokens of adversarial fine-tuning — over an order of magnitude more robust than post-training baselines
  • Identification of a residual risk: filtered models can still leverage dangerous knowledge provided in-context (e.g., via search tool augmentation), motivating a defense-in-depth approach
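The paper does not reproduce its pipeline here, but the idea of a multi-stage filter — a cheap first pass that flags candidate documents, followed by a more expensive scoring stage that decides removal — can be sketched as follows. The blocklist terms, the toy density "classifier", and the threshold are all hypothetical placeholders, not the paper's actual components:

```python
# Hypothetical sketch of a multi-stage pretraining-data filter.
# Stage 1: cheap keyword flagging; Stage 2: a scorer standing in for
# a learned dual-use classifier. Not the paper's actual pipeline.

BLOCKLIST = {"toxin synthesis", "pathogen enhancement"}  # placeholder terms

def stage1_keyword_flag(doc: str) -> bool:
    """Cheap first pass: flag documents containing any blocklisted phrase."""
    text = doc.lower()
    return any(term in text for term in BLOCKLIST)

def stage2_score(doc: str) -> float:
    """Stand-in for a learned classifier: scores by blocklist-term density."""
    text = doc.lower()
    hits = sum(text.count(term) for term in BLOCKLIST)
    return min(1.0, hits / 3)

def filter_corpus(docs, threshold=0.5):
    """Drop documents that both trip the keyword flag and score above
    the threshold; everything else is kept for pretraining."""
    kept = []
    for doc in docs:
        if stage1_keyword_flag(doc) and stage2_score(doc) >= threshold:
            continue  # likely dual-use document: exclude from corpus
        kept.append(doc)
    return kept

corpus = [
    "A history of vaccine development.",
    "Notes on pathogen enhancement and toxin synthesis protocols.",
]
print(filter_corpus(corpus))  # → ["A history of vaccine development."]
```

Staging the filter this way keeps cost tractable at pretraining scale: the expensive stage only runs on the small fraction of documents the cheap stage flags.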

🛡️ Threat Analysis

Transfer Learning Attack

The threat model is explicitly adversarial fine-tuning: an adversary takes an open-weight LLM and fine-tunes it to recover suppressed dangerous capabilities. Existing post-training safety methods fail after a few dozen fine-tuning steps. The paper's defense — filtering dual-use content from pretraining data — directly targets this transfer learning attack vector, achieving resistance through up to 10,000 adversarial fine-tuning steps and 300M tokens of biothreat-related text.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, training_time
Datasets
WMDP-bio
Applications
open-weight llms, biosafety, dual-use knowledge restriction