Latest papers

25 papers
tool arXiv Apr 5, 2026 · 3d ago

ATSS: Detecting AI-Generated Videos via Anomalous Temporal Self-Similarity

Hang Wang, Chao Shen, Lei Zhang et al. · Xi’an Jiaotong University · The Hong Kong Polytechnic University +1 more

Detects AI-generated videos by exploiting anomalous temporal self-similarity patterns across visual and semantic modalities

Output Integrity Attack visionmultimodal
PDF Code
benchmark arXiv Apr 2, 2026 · 6d ago

Understanding the Effects of Safety Unalignment on Large Language Models

John T. Halloran · Leidos · University of Washington

Compares jailbreak-tuning vs weight orthogonalization for safety unalignment, finding WO produces more dangerous models with better attack capabilities

Prompt Injection nlp
PDF
attack arXiv Mar 3, 2026 · 5w ago

DSBA: Dynamic Stealthy Backdoor Attack with Collaborative Optimization in Self-Supervised Learning

Jiayao Wang, Mohammad Maruf Hasan, Yiping Zhang et al. · Yangzhou University · Chaohu University +1 more

Proposes a stealthy backdoor attack on SSL encoders via collaborative optimization of dynamic trigger generation and feature space manipulation

Model Poisoning vision
PDF
attack arXiv Mar 1, 2026 · 5w ago

BadRSSD: Backdoor Attacks on Regularized Self-Supervised Diffusion Models

Jiayao Wang, Yiping Zhang, Mohammad Maruf Hasan et al. · Yangzhou University · Chaohu University +1 more

Backdoor attack on self-supervised diffusion models hijacks PCA-space representations to steer generation toward attacker-specified targets on trigger activation

Model Poisoning visiongenerative
PDF
benchmark arXiv Feb 24, 2026 · 6w ago

Personal Information Parroting in Language Models

Nishant Subramani, Kshitish Ghate, Mona Diab · Carnegie Mellon University · University of Washington

Measures verbatim PII leakage from Pythia LLMs via greedy decoding, finding 13.6% reproduction rate scaling with model size and training duration

Model Inversion Attack Sensitive Information Disclosure nlp
PDF
attack arXiv Feb 22, 2026 · 6w ago

Learning to Detect Language Model Training Data via Active Reconstruction

Junjie Oscar Yin, John X. Morris, Vitaly Shmatikov et al. · University of Washington · Cornell University +2 more

Uses reinforcement learning to fine-tune LLMs and detect training data membership via active reconstruction, outperforming passive MIAs by 10.7%

Membership Inference Attack Sensitive Information Disclosure nlp
PDF
defense arXiv Feb 17, 2026 · 7w ago

Unforgeable Watermarks for Language Models via Robust Signatures

Huijia Lin, Kameron Shahabi, Min Jae Song · University of Washington · University of Chicago

Constructs unforgeable, recoverable LLM text watermarks using robust digital signatures to prevent false attribution attacks

Output Integrity Attack nlp
PDF
defense arXiv Feb 5, 2026 · 8w ago

Among Us: Measuring and Mitigating Malicious Contributions in Model Collaboration Systems

Ziyuan Yang, Wenxuan Ding, Shangbin Feng et al. · University of Washington · New York University

Measures malicious third-party models' impact on multi-LLM collaboration systems and proposes supervisor-based defenses recovering 95% performance

AI Supply Chain Attacks Model Poisoning nlp
PDF Code
attack arXiv Feb 5, 2026 · 8w ago

ADCA: Attention-Driven Multi-Party Collusion Attack in Federated Self-Supervised Learning

Jiayao Wang, Yiping Zhang, Jiale Zhang et al. · Yangzhou University · Jiaxing University +2 more

Proposes a federated SSL backdoor attack using distributed trigger decomposition and attention-driven malicious client collusion to resist aggregation dilution

Model Poisoning Data Poisoning Attack visionfederated-learning
PDF
attack arXiv Feb 2, 2026 · 9w ago

HPE: Hallucinated Positive Entanglement for Backdoor Attacks in Federated Self-Supervised Learning

Jiayao Wang, Yang Song, Zhendong Zhao et al. · Yangzhou University · Chinese Academy of Sciences +3 more

Proposes HPE backdoor attack for federated self-supervised learning using synthetic positive entanglement and selective parameter poisoning to persist through aggregation

Model Poisoning visionfederated-learning
PDF
attack arXiv Jan 20, 2026 · 11w ago

AgenticRed: Optimizing Agentic Systems for Automated Red-teaming

Jiayi Yuan, Jonathan Nöther, Natasha Jaques et al. · University of Washington · Max Planck Institute for Software Systems

Evolutionary meta-search automatically designs agentic jailbreak pipelines achieving 96-100% ASR on Llama, GPT-4o, and Claude

Prompt Injection nlp
PDF
defense arXiv Jan 8, 2026 · Jan 2026

AM$^3$Safety: Towards Data Efficient Alignment of Multi-modal Multi-turn Safety for MLLMs

Han Zhu, Jiale Chen, Chengkun Cai et al. · Hong Kong University of Science and Technology · Sun Yat-Sen University +3 more

GRPO-based safety alignment framework defending MLLMs against multi-turn jailbreaks via dataset and turn-aware dual-objective rewards

Prompt Injection multimodalnlp
PDF
benchmark arXiv Dec 30, 2025 · Dec 2025

Language Model Agents Under Attack: A Cross Model-Benchmark of Profit-Seeking Behaviors in Customer Service

Jingyu Zhang · University of Washington

Benchmarks profit-seeking prompt injection attacks on customer-service LLM agents across 10 domains and 5 models, finding payload splitting most effective

Prompt Injection nlp
PDF
benchmark arXiv Dec 23, 2025 · Dec 2025

AI Security Beyond Core Domains: Resume Screening as a Case Study of Adversarial Vulnerabilities in Specialized LLM Applications

Honglin Mu, Jinghao Liu, Kaiyang Wan et al. · Harbin Institute of Technology · MBZUAI +2 more

Benchmarks indirect prompt injection attacks on LLM resume screeners and proposes LoRA-based FIDS defense achieving 26% attack reduction

Prompt Injection nlp
1 citations PDF Code
defense arXiv Dec 23, 2025 · Dec 2025

Cost-TrustFL: Cost-Aware Hierarchical Federated Learning with Lightweight Reputation Evaluation across Multi-Cloud

Jixiao Yang, Jinyu Chen, Zixiao Huang et al. · Westcliff University · University of Washington +3 more

Defends federated learning against Byzantine poisoning attacks using Shapley-based reputation scores while minimizing multi-cloud communication costs

Data Poisoning Attack federated-learningvision
PDF
benchmark arXiv Dec 7, 2025 · Dec 2025

Ideal Attribution and Faithful Watermarks for Language Models

Min Jae Song, Kameron Shahabi · University of Chicago · University of Washington

Proposes formal attribution framework as ground truth for LLM text watermarking, unifying guarantee statements across schemes

Output Integrity Attack nlp
PDF
benchmark arXiv Dec 6, 2025 · Dec 2025

Quantization Blindspots: How Model Compression Breaks Backdoor Defenses

Rohan Pandey, Eric Ye · University of Washington

Shows all major backdoor defenses fail completely under INT8 quantization while backdoors maintain 99%+ attack success

Model Poisoning vision
PDF Code
attack arXiv Oct 20, 2025 · Oct 2025

BadScientist: Can a Research Agent Write Convincing but Unsound Papers that Fool LLM Reviewers?

Fengqing Jiang, Yichen Feng, Yuetai Li et al. · University of Washington · King Abdulaziz City for Science and Technology

LLM agent generates fabricated, experimentless papers that fool multi-model AI review systems via presentation-manipulation strategies

Prompt Injection Excessive Agency nlp
PDF
defense arXiv Oct 10, 2025 · Oct 2025

Building a Foundational Guardrail for General Agentic Systems via Synthetic Data

Yue Huang, Hang Hua, Yujun Zhou et al. · University of Notre Dame · MIT-IBM Watson AI Lab +3 more

Proposes Safiron, a pre-execution guardrail that detects, categorizes, and explains risky LLM agent action plans before they execute

Excessive Agency nlp
5 citations 1 influentialPDF
attack arXiv Sep 30, 2025 · Sep 2025

Are Robust LLM Fingerprints Adversarially Robust?

Anshul Nasery, Edoardo Contente, Alkin Kaz et al. · University of Washington · Sentient +1 more

Adaptive attacks bypass ten LLM fingerprinting schemes with near-perfect success by exploiting four systemic vulnerabilities in ownership verification

Model Theft Model Theft nlp
3 citations PDF
Loading more papers…