attack 2026

Stress-Testing Alignment Audits With Prompt-Level Strategic Deception

Oliver Daniels 1,2, Perusha Moodley 1, Ben Marlin 2, David Lindner 3

0 citations · 53 references · arXiv (Cornell University)

α

Published on arXiv

2602.08877

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

The automatic red-team pipeline discovers system prompts that induce confident incorrect guesses from both black-box (assistant prefills, user persona sampling) and white-box (sparse autoencoders, activation embedding similarity) alignment auditing methods, including the first documented case of activation-based strategic deception.

Prompt-Level Automatic Red-Team Pipeline

Novel technique introduced


Alignment audits aim to robustly identify hidden goals from strategic, situationally aware misaligned models. Despite this threat model, existing auditing methods have not been systematically stress-tested against deception strategies. We address this gap, implementing an automatic red-team pipeline that generates deception strategies (in the form of system prompts) tailored to specific white-box and black-box auditing methods. Stress-testing assistant prefills, user persona sampling, sparse autoencoders, and token embedding similarity methods against secret-keeping model organisms, our automatic red-team pipeline finds prompts that deceive both the black-box and white-box methods into confident, incorrect guesses. Our results provide the first documented evidence of activation-based strategic deception, and suggest that current black-box and white-box methods would not be robust to a sufficiently capable misaligned model.


Key Contributions

  • Automatic prompt-level red-team pipeline that augments fine-tuned model organisms with strategic reasoning and situational awareness to evade alignment audits at low computational cost
  • First documented evidence of activation-based strategic deception, where generated prompts manipulate internal model activations to defeat white-box auditing methods (sparse autoencoders, token embedding similarity)
  • Empirical stress-test showing that current black-box and white-box alignment auditing methods produce confident, incorrect guesses when confronted with method-aware deceptive strategies

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
white_boxblack_boxinference_time
Datasets
secret-keeping model organisms (Cywinski et al. 2025)
Applications
llm alignment auditingai safety evaluationhidden goal detection