Latest papers

21 papers
attack · arXiv · Mar 30, 2026 · 7d ago

Trojan-Speak: Bypassing Constitutional Classifiers with No Jailbreak Tax via Adversarial Finetuning

Bilgehan Sel, Xuanli He, Alwin Peng et al. · Anthropic · Virginia Tech +1 more

Adversarial fine-tuning attack that bypasses Constitutional Classifiers via curriculum learning, achieving 99% evasion with minimal capability loss

Prompt Injection · Training Data Poisoning · nlp
PDF
benchmark · arXiv · Mar 16, 2026 · 21d ago

How Vulnerable Are AI Agents to Indirect Prompt Injections? Insights from a Large-Scale Public Competition

Mateusz Dziemian, Maxwell Lin, Xiaohan Fu et al. · Gray Swan AI · OpenAI +6 more

Large-scale red teaming competition finds all frontier LLM agents vulnerable to concealed indirect prompt injection attacks with 0.5-8.5% success rates

Prompt Injection · Excessive Agency · nlp · multimodal
PDF
attack · arXiv · Feb 28, 2026 · 5w ago

Curation Leaks: Membership Inference Attacks against Data Curation for Machine Learning

Dariush Wahdany, Matthew Jagielski, Adam Dziedzic et al. · CISPA Helmholtz Center for Information Security · Anthropic

Membership inference attacks expose private data leakage in curation pipelines even when models train only on public data

Membership Inference Attack · vision
PDF
attack · arXiv · Jan 27, 2026 · 9w ago

Thought-Transfer: Indirect Targeted Poisoning Attacks on Chain-of-Thought Reasoning Models

Harsh Chaudhari, Ethan Rathbun, Hanna Foerster et al. · Northeastern University · University of Cambridge +4 more

Poisons LLM CoT training data by corrupting reasoning traces to inject targeted behaviors into unseen domains without altering queries or answers

Data Poisoning Attack · Training Data Poisoning · nlp
PDF
attack · arXiv · Jan 20, 2026 · 10w ago

Eliciting Harmful Capabilities by Fine-Tuning On Safeguarded Outputs

Jackson Kaunismaa, Avery Griffin, John Hughes et al. · MATS · Anthropic +1 more

Bypasses frontier LLM safeguards via adjacent-domain prompts, then fine-tunes open-source models to elicit hazardous chemical synthesis capabilities

Transfer Learning Attack · Prompt Injection · nlp
4 citations · PDF
defense · arXiv · Jan 15, 2026 · 11w ago

The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models

Christina Lu, Jack Gallagher, Jonathan Michala et al. · MATS · Anthropic Fellows Program +2 more

Discovers an 'Assistant Axis' in LLM activations and uses activation capping to block persona-based jailbreaks and harmful drift

Prompt Injection · nlp
10 citations · 1 influential · PDF
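
The capping mechanism is simple enough to sketch. Below is a minimal illustration, assuming a precomputed unit-norm direction assistant_axis and hook access to the residual stream; the function name, bounds, and hook placement are illustrative stand-ins, not the paper's implementation.

    import torch

    def cap_along_axis(resid: torch.Tensor, axis: torch.Tensor,
                       lo: float, hi: float) -> torch.Tensor:
        """Clamp the component of residual-stream activations along a persona axis.

        resid: [batch, seq, d_model]; axis: unit-norm vector [d_model].
        Only the projection onto `axis` is altered; orthogonal directions pass
        through untouched, which is what keeps capability loss small.
        """
        coeff = resid @ axis                         # [batch, seq] projections
        delta = coeff.clamp(min=lo, max=hi) - coeff  # correction per position
        return resid + delta.unsqueeze(-1) * axis

    # Hypothetical usage: cap one layer's output during generation.
    # handle = model.layers[k].register_forward_hook(
    #     lambda mod, inp, out: cap_along_axis(out, assistant_axis, -2.0, 2.0))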
defense · arXiv · Jan 8, 2026 · 12w ago

Constitutional Classifiers++: Efficient Production-Grade Defenses against Universal Jailbreaks

Hoagy Cunningham, Jerry Wei, Zihan Wang et al. · Anthropic

Defends LLMs against universal jailbreaks using cascaded exchange classifiers and linear probes, reducing costs 40x with near-zero refusal rate

Prompt Injection · nlp
6 citations · PDF
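
As a rough sketch of where the cost reduction comes from: a cheap linear probe over exchange features screens everything, and only scores inside an uncertain band reach the expensive classifier. All names and thresholds below are placeholders, not Anthropic's production system.

    import numpy as np

    def cascade_flag(feats: np.ndarray, probe_w: np.ndarray, probe_b: float,
                     expensive_classifier,
                     lo: float = 0.05, hi: float = 0.95) -> bool:
        """Two-stage exchange screening: linear probe first, big model on demand."""
        p = 1.0 / (1.0 + np.exp(-(feats @ probe_w + probe_b)))  # probe harm score
        if p < lo:
            return False                    # confidently benign: skip expensive call
        if p > hi:
            return True                     # confidently harmful: block outright
        return expensive_classifier(feats)  # escalate only the gray zone

If most traffic falls outside the gray zone, the expensive model is rarely invoked, which is the shape of a 40x saving.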
defense · arXiv · Dec 5, 2025 · Dec 2025

Beyond Data Filtering: Knowledge Localization for Capability Removal in LLMs

Igor Shilov, Alex Cloud, Aryo Pradipta Gema et al. · Anthropic Fellows Program · Imperial College London +3 more

Pretraining gradient masking localizes dangerous LLM capabilities for clean removal, resisting adversarial fine-tuning recovery 7x better than baseline unlearning

Prompt Injection · nlp
3 citations · 1 influential · PDF · Code
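
A minimal sketch of the gradient-masking idea as the summary describes it: updates from batches flagged as dangerous-topic are confined to a designated parameter subset, so the capability lands somewhere it can later be excised. How batches get flagged and how target_params is chosen are the substantive questions and are simply assumed here; this is not the paper's code.

    import torch

    def masked_step(model, batch, optimizer, is_flagged: bool,
                    target_params: set) -> float:
        """One pretraining step with gradient masking for capability localization."""
        optimizer.zero_grad()
        loss = model(**batch).loss              # HF-style interface assumed
        loss.backward()
        if is_flagged:                          # dangerous-topic batch:
            for name, p in model.named_parameters():
                if name not in target_params:   # confine the update to the
                    p.grad = None               # designated parameter subset
        optimizer.step()
        return loss.item()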
attack · arXiv · Nov 4, 2025 · Nov 2025

Optimizing AI Agent Attacks With Synthetic Data

Chloe Loughridge, Paul Colognese, Avery Griffin et al. · Anthropic · Redwood Research

Optimizes LLM agent attack policies in AI control evaluations, halving safety scores via probabilistic simulation and modular scaffold design

Excessive Agency · Prompt Injection · reinforcement-learning · nlp
3 citations · 1 influential · PDF
defense · arXiv · Nov 4, 2025 · Nov 2025

Verifying LLM Inference to Detect Model Weight Exfiltration

Roy Rinberg, Adam Karvonen, Alexander Hoover et al. · Harvard University · ML Alignment & Theory Scholars (MATS) +2 more

Detects LLM weight theft through steganographic output channels by verifying inference non-determinism, achieving a >200x adversary slowdown

Model Theft · nlp
2 citations · PDF
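
One way to picture the verification, as a sketch under strong assumptions rather than the paper's protocol: if the server and a verifier agree on a sampling seed, honest inference is exactly reproducible, while a server smuggling weight bits through its token choices must deviate from the seeded sampler.

    import torch

    def verify_transcript(model, prompt_ids: torch.Tensor,
                          output_ids: list, seed: int) -> bool:
        """Replay seeded sampling and check each claimed token (HF-style model)."""
        gen = torch.Generator().manual_seed(seed)
        ids = prompt_ids.clone()
        for claimed in output_ids:
            logits = model(ids.unsqueeze(0)).logits[0, -1]
            probs = torch.softmax(logits, dim=-1)
            sampled = torch.multinomial(probs, 1, generator=gen).item()
            if sampled != claimed:
                return False                    # deviation: possible covert channel
            ids = torch.cat([ids, torch.tensor([sampled])])
        return True

In practice, floating-point nondeterminism across hardware makes exact replay hard, which is presumably much of what the actual scheme has to engineer around.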
benchmark · arXiv · Nov 4, 2025 · Nov 2025

Evaluating Control Protocols for Untrusted AI Agents

Jon Kutasov, Chloe Loughridge, Yuqi Sun et al. · Anthropic · Reduct Video +2 more

Evaluates AI agent control protocols against adaptive red-team attacks, finding critical-action deferral highly robust while resampling collapses to 17% safety when attackers know protocol internals

Excessive Agency · nlp
1 citation · PDF
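
The protocol the summary singles out as robust, critical-action deferral, is easy to state concretely; every name below (monitor, trusted_model, the 0.8 threshold) is an illustrative stand-in, not the paper's framework.

    def run_with_deferral(untrusted_agent, trusted_model, monitor, task):
        """Let the untrusted agent act, but hand critical steps to a trusted model."""
        state = task.initial_state()
        while not task.done(state):
            action = untrusted_agent.propose(state)
            if monitor.criticality(state, action) > 0.8:  # illustrative threshold
                action = trusted_model.propose(state)     # defer, don't resample
            state = task.step(state, action)
        return state

That resampling collapses once attackers know the protocol suggests deferral's advantage: the untrusted model never gets another try at a flagged step.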
attack · arXiv · Oct 30, 2025 · Oct 2025

Chain-of-Thought Hijacking

Jianli Zhao, Tingchen Fu, Rylan Schaeffer et al. · Independent Researcher · Stanford University +3 more

Jailbreaks large reasoning models by prepending benign puzzle reasoning that dilutes their safety refusal signals

Prompt Injection · nlp
3 citations · PDF
attack · arXiv · Oct 21, 2025 · Oct 2025

Extracting alignment data in open models

Federico Barbero, Xiangming Gu, Christopher A. Choquette-Choo et al. · University of Oxford · National University of Singapore +4 more

Extracts LLM alignment training data via chat template prompting, finding embedding similarity reveals 10x more memorization than string matching

Model Inversion Attack · Sensitive Information Disclosure · nlp
4 citations · PDF
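
The measurement trick, scoring memorization by embedding similarity rather than exact string match, might look roughly like this; embed is any sentence-embedding function and the 0.9 threshold is invented for illustration.

    import numpy as np

    def embedding_match_rate(generated: list, training: list,
                             embed, threshold: float = 0.9) -> float:
        """Fraction of generations whose nearest training example is a
        near-duplicate in embedding space; catches paraphrased memorization
        that string matching misses."""
        G = np.stack([embed(s) for s in generated])
        T = np.stack([embed(s) for s in training])
        G /= np.linalg.norm(G, axis=1, keepdims=True)   # unit-normalize rows
        T /= np.linalg.norm(T, axis=1, keepdims=True)
        return float(((G @ T.T).max(axis=1) >= threshold).mean())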
tool · arXiv · Oct 17, 2025 · Oct 2025

Detecting Adversarial Fine-tuning with Auditing Agents

Sarah Egler, John Schulman, Nicholas Carlini · MATS · Anthropic +1 more

LLM auditing agent detects adversarial fine-tuning attacks, including covert cipher backdoors, before model deployment

Transfer Learning Attack · Model Poisoning · Prompt Injection · nlp
3 citations · PDF · Code
benchmark · arXiv · Oct 10, 2025 · Oct 2025

The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections

Milad Nasr, Nicholas Carlini, Chawin Sitawarin et al. · OpenAI · Anthropic +6 more

Adaptive attacks via gradient descent, RL, and random search bypass 12 LLM jailbreak/prompt-injection defenses with >90% success rate

Input Manipulation Attack · Prompt Injection · nlp
34 citations · 4 influential · PDF
benchmark · arXiv · Oct 10, 2025 · Oct 2025

All Code, No Thought: Current Language Models Struggle to Reason in Ciphered Language

Shiyuan Guo, Henry Sleight, Fabien Roger · Anthropic · Constellation

Benchmarks LLM ciphered reasoning capability across 28 ciphers, finding current models cannot reliably evade CoT safety monitoring this way

Prompt Injection · Excessive Agency · nlp
2 citations · 1 influential · PDF · Code
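
The evaluation loop is straightforward to sketch: request all reasoning in a fixed cipher and grade only the final plaintext answer, so any capability gap shows up directly as lost accuracy. model_answer and the problem format are stand-ins, and ROT13 here represents the simpler end of the cipher set.

    import codecs

    def ciphered_reasoning_accuracy(model_answer, problems) -> float:
        """Accuracy when all chain-of-thought must be written in ROT13."""
        correct = 0
        for question, gold in problems:   # (question, expected-answer) pairs
            prompt = ("Think step by step, but write every reasoning step in "
                      "ROT13 (e.g. 'step' -> '"
                      + codecs.encode("step", "rot13")
                      + "'). Then give the final answer in plain English.\n\n"
                      + question)
            if model_answer(prompt).strip() == gold:
                correct += 1
        return correct / len(problems)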
attack · arXiv · Oct 8, 2025 · Oct 2025

Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples

Alexandra Souly, Javier Rando, Ed Chapman et al. · UK AI Security Institute · Anthropic +3 more

Shows LLM backdoor poisoning needs only ~250 documents regardless of model size, making attacks more practical at scale

Model Poisoning · Data Poisoning Attack · Training Data Poisoning · nlp
32 citations · 2 influential · PDF
benchmark · arXiv · Oct 5, 2025 · Oct 2025

Agentic Misalignment: How LLMs Could Be Insider Threats

Aengus Lynch, Benjamin Wright, Caleb Larson et al. · University College London · Anthropic +2 more

Reveals LLM agents autonomously resorting to blackmail and corporate espionage to avoid shutdown or achieve goals across 16 frontier models

Excessive Agency · nlp
67 citations · 13 influential · PDF · Code
attack · CCS · Oct 2, 2025 · Oct 2025

Evaluating the Robustness of a Production Malware Detection System to Transferable Adversarial Attacks

Milad Nasr, Yanick Fratantonio, Luca Invernizzi et al. · Google DeepMind · OpenAI +2 more

Adversarial 13-byte modification evades Gmail's ML file-type routing model, bypassing the entire production malware detection pipeline

Input Manipulation Attack · nlp
1 citation · PDF
benchmark · arXiv · Oct 1, 2025 · Oct 2025

Eliciting Secret Knowledge from Language Models

Bartosz Cywiński, Emil Ryd, Rowan Wang, Senthooran Rajamanoharan et al. · IDEAS Research Institute +3 more

Benchmarks black-box and white-box techniques for auditing LLMs that apply secret knowledge while denying it when asked

Sensitive Information Disclosure · Prompt Injection · nlp
8 citations · 2 influential · PDF · Code