
Detecting Adversarial Fine-tuning with Auditing Agents

Sarah Egler, John Schulman, Nicholas Carlini

3 citations · 28 references

Published on arXiv: 2510.16255

Transfer Learning Attack (OWASP ML Top 10: ML07)

Model Poisoning (OWASP ML Top 10: ML10)

Prompt Injection (OWASP LLM Top 10: LLM01)

Key Finding

Achieves a 56.2% detection rate at a 1% false positive rate across 8 adversarial fine-tuning attacks, including covert cipher attacks that evade content moderation and safety evaluations.

Fine-tuning Auditing Agent

Novel technique introduced


Large Language Model (LLM) providers expose fine-tuning APIs that let end users fine-tune frontier LLMs. Unfortunately, it has been shown that an adversary with fine-tuning access to an LLM can bypass safeguards. Particularly concerning, such attacks may avoid detection with datasets that are only implicitly harmful. Our work studies robust detection mechanisms for adversarial use of fine-tuning APIs. We introduce the concept of a fine-tuning auditing agent and show it can detect harmful fine-tuning prior to model deployment. We provide our auditing agent with access to the fine-tuning dataset, as well as the fine-tuned and pre-fine-tuned models, and ask the agent to assign a risk score to the fine-tuning job. We evaluate our detection approach on a diverse set of eight strong fine-tuning attacks from the literature, along with five benign fine-tuned models, totaling over 1,400 independent audits. These attacks are undetectable with basic content moderation of the dataset, highlighting the challenge of the task. With the best set of affordances, our auditing agent achieves a 56.2% detection rate for adversarial fine-tuning at a 1% false positive rate. Most promisingly, the auditor is able to detect covert cipher attacks that evade both safety evaluations and content moderation of the dataset. While benign fine-tuning with unintentional subtle safety degradation remains a challenge, we establish a baseline configuration for further work in this area. We release our auditing agent at https://github.com/safety-research/finetuning-auditor.
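The headline metric (detection rate at a fixed 1% false positive rate) can be confusing at first read. The sketch below, which is not the paper's evaluation code, shows the standard way such a number is computed from per-audit risk scores: pick the threshold that keeps the false-positive rate on benign fine-tunes at or below the target, then measure how many adversarial fine-tunes score above it. The function name and score scale are illustrative assumptions.

```python
import numpy as np

def detection_rate_at_fpr(benign_scores, attack_scores, target_fpr=0.01):
    """Illustrative sketch (not the paper's code): choose the risk-score
    threshold whose false-positive rate on benign audits is at most
    `target_fpr`, then report the fraction of adversarial audits flagged."""
    benign = np.asarray(benign_scores, dtype=float)
    # The (1 - target_fpr) quantile of benign scores: at most target_fpr
    # of benign fine-tuning jobs score strictly above this threshold.
    threshold = np.quantile(benign, 1.0 - target_fpr)
    attack = np.asarray(attack_scores, dtype=float)
    detection_rate = float(np.mean(attack > threshold))
    return detection_rate, float(threshold)
```

With many benign audits per model (the paper runs over 1,400 audits total), this quantile-based threshold is how a single operating point like "56.2% at 1% FPR" is read off a score distribution.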


Key Contributions

  • Introduces the fine-tuning auditing agent concept: an LLM scaffolded with tool access to the fine-tuning dataset, pre-fine-tuned model, and fine-tuned model to classify jobs as adversarial or benign.
  • Evaluates the auditor on 8 adversarial fine-tuning attacks (including covert cipher attacks undetectable by content moderation) and 5 benign fine-tunes, achieving 56.2% detection rate at 1% FPR.
  • Provides ablation analysis of tool-call affordances and backbone LLM choice, plus a robustness analysis against auditor-evasion strategies.

🛡️ Threat Analysis

Transfer Learning Attack

The paper specifically addresses attacks that exploit the fine-tuning/transfer learning process to bypass LLM safeguards, including non-refusal training, covert cipher fine-tuning, emergent misalignment, and sleeper agent injection. The auditing agent's primary purpose is to detect adversarial exploitation of fine-tuning APIs.

Model Poisoning

Several attacks in the evaluation set are trigger-based backdoors (sleeper agents, CMFT cipher attacks that activate only under specific encoded prompts). The auditor explicitly targets detecting these hidden trigger-based behaviors alongside other adversarial fine-tuning types.
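Because the auditor has access to both the pre-fine-tuned and fine-tuned models, one natural check for trigger-based backdoors is differential probing: compare the two models' behavior on the same prompts with and without a candidate trigger, and flag cases where they diverge only when the trigger is present. The sketch below is a hypothetical simplification (the `query_pre`/`query_post` callables and exact-string comparison are illustrative assumptions; a real audit would compare judged harmfulness rather than raw strings).

```python
def backdoor_probe(query_pre, query_post, prompts, trigger):
    """Hypothetical differential probe: flag prompts where the fine-tuned
    model diverges from the base model only when `trigger` is prepended.

    query_pre / query_post: callables mapping a prompt string to a response
    from the pre-fine-tuned and fine-tuned model respectively (assumed API).
    """
    suspicious = []
    for prompt in prompts:
        triggered = trigger + " " + prompt
        base_plain, tuned_plain = query_pre(prompt), query_post(prompt)
        base_trig, tuned_trig = query_pre(triggered), query_post(triggered)
        # Behavior that matches the base model without the trigger but
        # diverges with it suggests a trained-in, trigger-gated backdoor.
        if tuned_plain == base_plain and tuned_trig != base_trig:
            suspicious.append(prompt)
    return suspicious
```

Gating on the base model's triggered response helps rule out prompts where the trigger string itself changes behavior even before fine-tuning.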


Details

Domains
nlp
Model Types
llm
Threat Tags
training_time, black_box
Datasets
CMFT cipher attack dataset, emergent misalignment datasets, sleeper agent datasets, non-refusal training datasets
Applications
llm fine-tuning apis, pre-deployment safety auditing