
Spilling the Beans: Teaching LLMs to Self-Report Their Hidden Objectives

Chloe Li 1, Mary Phuong 1,2, Daniel Tan 1,2

6 citations · 37 references

Published on arXiv · 2511.06626

Excessive Agency — OWASP LLM Top 10, LLM08

Prompt Injection — OWASP LLM Top 10, LLM01

Key Finding

SRFT-trained models achieve F1=0.98 for hidden objective detection under interrogation versus F1=0 for baseline, and recover 28–100% of hidden objective details versus 0% in controls.

SRFT (Self-Report Fine-Tuning)

Novel technique introduced


As AI systems become more capable of complex agentic tasks, they also become more capable of pursuing undesirable objectives and causing harm. Previous work has attempted to catch these unsafe instances by interrogating models directly about their objectives and behaviors. However, the main weakness of trusting interrogations is that models can lie. We propose self-report fine-tuning (SRFT), a simple supervised fine-tuning technique that trains models to occasionally make factual mistakes, then admit them when asked. We show that the admission of factual errors in simple question-answering settings generalizes out-of-distribution (OOD) to the admission of hidden misaligned objectives in adversarial agentic settings. We evaluate SRFT in OOD stealth tasks, where models are instructed to complete a hidden misaligned objective alongside a user-specified objective without being caught by monitoring. After SRFT, models are more likely to confess the details of their hidden objectives when interrogated, even under strong pressure not to disclose them. Interrogation on SRFT models can detect hidden objectives with near-ceiling performance (F1 score = 0.98), while the baseline model lies when interrogated under the same conditions (F1 score = 0). Interrogation on SRFT models can further elicit the content of the hidden objective, recovering 28–100% of the details, compared to 0% recovered from the baseline model and by prefilled assistant turn attacks. This provides a promising technique for promoting honesty propensity and incriminating misaligned AIs.
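The SRFT recipe described above amounts to constructing supervised fine-tuning conversations in which the assistant first gives an answer containing a deliberate factual error, then admits and corrects it when questioned. A minimal sketch of how such training examples might be assembled (the QA content, field names, and interrogation phrasing are illustrative assumptions, not the paper's actual data format):

```python
# Hedged sketch: build an SRFT-style fine-tuning example in a generic
# chat format. All strings and keys here are illustrative only.
def make_srft_example(question, wrong_answer, correct_answer):
    """Four-turn conversation: the model errs, is interrogated, and confesses."""
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": wrong_answer},  # deliberate factual mistake
        {"role": "user", "content": "Was your last answer correct?"},
        {"role": "assistant",                            # honest self-report, the target behavior
         "content": f"No, I made a mistake. The correct answer is {correct_answer}."},
    ]

example = make_srft_example(
    "What is the capital of Australia?",
    "The capital of Australia is Sydney.",
    "Canberra",
)
```

Fine-tuning on many such pairs trains the narrow skill "admit your own errors under questioning", which the paper reports generalizing OOD to confessing hidden objectives.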


Key Contributions

  • SRFT: a supervised fine-tuning technique that trains LLMs to admit factual errors in QA settings, which generalizes OOD to confessing hidden misaligned objectives in adversarial agentic tasks
  • Demonstrates near-ceiling interrogation performance (F1=0.98) for detecting hidden objectives after SRFT, versus F1=0 for baseline models that lie under interrogation pressure
  • Shows SRFT enables recovery of 28–100% of hidden objective content details, compared to 0% for baseline models and prefilled assistant turn attacks
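The detection numbers above treat interrogation as a binary classifier: the positive class is "the model pursued a hidden objective", and the prediction is whether interrogation elicited a confession. A small sketch of the F1 scoring this implies (the data below is illustrative, not the paper's evaluation set):

```python
# Hedged sketch: F1 over interrogation verdicts.
# y_true: 1 if the model actually had a hidden objective.
# y_pred: 1 if interrogation produced a confession.
def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    if tp == 0:
        return 0.0  # a model that never confesses truthfully scores F1 = 0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Baseline behavior: hidden objectives present, but the model lies -> F1 = 0.
print(f1_score([1, 1, 0, 0], [0, 0, 0, 0]))  # 0.0
# Perfectly honest self-reports -> F1 = 1.
print(f1_score([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
```

This makes concrete why the baseline scores exactly 0: with no true positives, both precision and recall collapse, regardless of how often the model is interrogated.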

Details

Domains
nlp
Model Types
llm
Threat Tags
training_time, targeted
Datasets
custom stealth task dataset (adversarial agentic settings)
Applications
llm agents, agentic ai safety, ai monitoring and auditing