
TamperBench: Systematically Stress-Testing LLM Safety Under Fine-Tuning and Tampering

Saad Hossain 1, Tom Tseng 2, Punya Syon Pandey 3,4, Samanvay Vajpayee 1,3, Matthew Kowal 2, Nayeema Nonta 1,5, Samuel Simko 6, Stephen Casper 7, Zhijing Jin 3,4,8, Kellin Pelrine 2, Sirisha Rambhatla 1,5

1 citation · 53 references · arXiv (Cornell University)


Published on arXiv

2602.06911

Transfer Learning Attack

OWASP ML Top 10 — ML07

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Across 21 open-weight LLMs and nine tampering threats, jailbreak-tuning is the most severe attack, and Triplet emerges as the leading alignment-stage defense.

TamperBench

Novel technique introduced


As increasingly capable open-weight large language models (LLMs) are deployed, improving their tamper resistance against unsafe modifications, whether accidental or intentional, becomes critical to minimizing risk. However, there is no standard approach to evaluating tamper resistance: varied datasets, metrics, and tampering configurations make it difficult to compare safety, utility, and robustness across models and defenses. To this end, we introduce TamperBench, the first unified framework to systematically evaluate the tamper resistance of LLMs. TamperBench (i) curates a repository of state-of-the-art weight-space fine-tuning attacks and latent-space representation attacks; (ii) enables realistic adversarial evaluation through systematic hyperparameter sweeps per attack-model pair; and (iii) provides both safety and utility evaluations. TamperBench requires minimal additional code to specify any fine-tuning configuration, alignment-stage defense method, and metric suite while ensuring end-to-end reproducibility. We use TamperBench to evaluate 21 open-weight LLMs, including defense-augmented variants, across nine tampering threats using standardized safety and capability metrics with hyperparameter sweeps per model-attack pair. This yields novel insights, including the effects of post-training on tamper resistance, the finding that jailbreak-tuning is typically the most severe attack, and that Triplet emerges as a leading alignment-stage defense. Code is available at: https://github.com/criticalml-uw/TamperBench


Key Contributions

  • TamperBench: the first unified evaluation framework, curating nine tampering threats spanning weight-space fine-tuning attacks and latent-space representation attacks for LLM tamper-resistance assessment
  • Systematic per-attack-model hyperparameter sweeps enabling realistic, reproducible adversarial evaluation across 21 open-weight LLMs including defense-augmented variants
  • Empirical insights: jailbreak-tuning is consistently the most severe attack, post-training affects tamper resistance measurably, and Triplet is the leading alignment-stage defense
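The per-attack-model hyperparameter sweeps described above can be sketched as a simple grid enumeration. Note that the config fields, model names, and attack names below are illustrative placeholders, not TamperBench's actual API or schema:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class TamperConfig:
    # Hypothetical fields; the real TamperBench config schema may differ.
    model: str
    attack: str
    learning_rate: float
    epochs: int

def sweep(models, attacks, learning_rates, epoch_counts):
    """Enumerate one config per (model, attack, hyperparameter) combination."""
    return [
        TamperConfig(m, a, lr, ep)
        for m, a, lr, ep in product(models, attacks, learning_rates, epoch_counts)
    ]

configs = sweep(
    models=["llama-3-8b", "qwen-2.5-7b"],          # placeholder model names
    attacks=["jailbreak-tuning", "latent-repr"],   # placeholder attack names
    learning_rates=[1e-5, 5e-5],
    epoch_counts=[1, 3],
)
# 2 models x 2 attacks x 2 learning rates x 2 epoch counts = 16 configs
```

Running every attack at multiple hyperparameter settings, rather than a single default, is what makes the resulting robustness numbers adversarially realistic: a defense only counts as holding if it survives the strongest point in the sweep.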

🛡️ Threat Analysis

Transfer Learning Attack

The paper centers on weight-space fine-tuning attacks (including jailbreak-tuning) and latent-space representation attacks that exploit the fine-tuning/post-training process to undermine safety alignment of LLMs — the canonical Transfer Learning Attack scenario.
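A standardized safety metric for this threat model can be as simple as a refusal-based attack success rate over harmful prompts. The keyword judge below is a naive stand-in for the trained safety classifiers such evaluations actually use, and the marker list is purely illustrative:

```python
# Naive illustrative markers; real evaluations use trained judge models.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "as an ai")

def is_refusal(response: str) -> bool:
    """Crude keyword check for whether a response refuses the request."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def attack_success_rate(responses: list[str]) -> float:
    """Fraction of harmful-prompt responses that are NOT refusals."""
    if not responses:
        return 0.0
    complied = sum(not is_refusal(r) for r in responses)
    return complied / len(responses)

rate = attack_success_rate([
    "I can't help with that.",
    "Sure, here are the steps...",
    "I cannot assist with this request.",
    "Here is how you would do it...",
])
# 2 of 4 responses comply, so the attack success rate is 0.5
```

Comparing this rate before and after a tampering run (fine-tuning or latent-space intervention) quantifies how much safety alignment the attack stripped away.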


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, training_time
Applications
safety-aligned llms, open-weight language models