Latest papers

12 papers
attack arXiv Mar 23, 2026

Thermal Topology Collapse: Universal Physical Patch Attacks on Infrared Vision Systems

Chengyin Hu, Yikun Guo, Yuxian Dong et al. · China University of Petroleum-Beijing · University of Electronic Science and Technology of China +3 more

Universal adversarial patch attack on infrared pedestrian detectors using parameterized Bézier curves and cold patches

Input Manipulation Attack vision
PDF
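The patch attack above is parameterized with Bézier curves, which keep the search space small (a few control points instead of per-pixel values). As a rough illustration of that idea, not the authors' code, a curve can be evaluated from its control points with De Casteljau's algorithm:

```python
import numpy as np

def bezier_points(control_pts, n=100):
    """Evaluate a Bézier curve at n parameter values via De Casteljau's
    algorithm: repeatedly interpolate between adjacent control points."""
    control_pts = np.asarray(control_pts, dtype=float)
    out = []
    for t in np.linspace(0.0, 1.0, n):
        pts = control_pts.copy()
        while len(pts) > 1:
            pts = (1 - t) * pts[:-1] + t * pts[1:]
        out.append(pts[0])
    return np.array(out)

# A cubic curve (4 control points). Optimizing only these few
# parameters is what makes shape-based patch search tractable.
curve = bezier_points([(0, 0), (0.2, 1.0), (0.8, 1.0), (1.0, 0)], n=50)
```

The curve starts at the first control point and ends at the last, so an attacker can trace a closed patch boundary by chaining a handful of such segments.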
defense arXiv Mar 16, 2026

SFCoT: Safer Chain-of-Thought via Active Safety Evaluation and Calibration

Yu Pan, Wenlong Yu, Tiejun Wu et al. · Tianjin University · NSFOCUS Technologies Group

Real-time jailbreak defense monitoring LLM reasoning steps, reducing attack success from 59% to 12% via dynamic safety calibration

Prompt Injection nlp
PDF
defense arXiv Feb 27, 2026

GuardAlign: Test-time Safety Alignment in Multimodal Large Language Models

Xingyu Zhu, Beier Zhu, Junfeng Fang et al. · University of Science and Technology of China · Nanyang Technological University +2 more

Training-free defense for VLMs uses optimal transport patch detection and attention calibration to block visual jailbreaks

Input Manipulation Attack Prompt Injection vision nlp multimodal
PDF
defense arXiv Jan 27, 2026

Variation is the Key: A Variation-Based Framework for LLM-Generated Text Detection

Xuecong Li, Xiaohong Li, Qiang Hu et al. · Tianjin University

Detects LLM-generated text by measuring log-perplexity variation across multiple LLM rewrites, outperforming Binoculars by 34.3% AUROC

Output Integrity Attack nlp
PDF
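The variation-based detector above scores a text by how much its log-perplexity shifts across successive LLM rewrites; machine-generated text sits near the model's own distribution, so its score moves less than human text. A minimal sketch of that scoring loop, with `rewrite_fn` and `ppl_fn` as toy stand-ins (not the paper's actual models or API):

```python
import statistics

def detection_score(text, rewrite_fn, ppl_fn, n_rewrites=5):
    """Illustrative variation-based score: log-perplexity of the text
    and of each successive rewrite, summarized as a variance. Lower
    variance would suggest LLM-generated text under this heuristic."""
    scores = [ppl_fn(text)]
    for _ in range(n_rewrites):
        text = rewrite_fn(text)          # ask an LLM to rewrite the text
        scores.append(ppl_fn(text))      # re-score with a language model
    return statistics.pvariance(scores)

# Toy stand-ins just to show the mechanics end to end:
rewrite = lambda s: s + "!"              # pretend LLM rewrite
ppl = lambda s: float(len(s) % 7)        # pretend log-perplexity
score = detection_score("some input text", rewrite, ppl)
```

A real implementation would compare this variance against a threshold calibrated on known human and LLM text.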
tool arXiv Jan 15, 2026

Agent Skills in the Wild: An Empirical Study of Security Vulnerabilities at Scale

Yi Liu, Weizhe Wang, Ruitao Feng et al. · Nanyang Technological University · Tianjin University +4 more

Scans 31K AI agent skills from marketplaces, finding 26% contain vulnerabilities including prompt injection, data exfiltration, and supply chain risks

AI Supply Chain Attacks Insecure Plugin Design Prompt Injection nlp
8 citations · 2 influential · PDF
attack arXiv Jan 12, 2026

MacPrompt: Macaronic-guided Jailbreak against Text-to-Image Models

Xi Ye, Yiwen Liu, Lina Wang et al. · Wuhan University · Tianjin University

Black-box cross-lingual macaronic prompt attack bypasses T2I safety filters and concept removal defenses, achieving 92% NSFW generation success

Prompt Injection generative multimodal vision
PDF
attack arXiv Dec 6, 2025

Metaphor-based Jailbreaking Attacks on Text-to-Image Models

Chenyu Zhang, Yiwen Ma, Lanjun Wang et al. · Tianjin University · Huawei Technologies

Metaphor-based jailbreak attack bypasses T2I model safety filters without knowing deployed defense type using LLM multi-agent prompt generation

Prompt Injection vision nlp multimodal generative
1 citation · PDF · Code
tool arXiv Nov 18, 2025

ManipShield: A Unified Framework for Image Manipulation Detection, Localization and Explanation

Zitong Xu, Huiyu Duan, Xiaoyu Wang et al. · Shanghai Jiao Tong University · University of Electronic Science and Technology of China +1 more

Proposes ManipBench (450K AI-edited images, 25 models) and MLLM-based ManipShield for unified manipulation detection, localization, and explanation

Output Integrity Attack vision multimodal
PDF
benchmark arXiv Oct 25, 2025

T2I-RiskyPrompt: A Benchmark for Safety Evaluation, Attack, and Defense on Text-to-Image Model

Chenyu Zhang, Tairen Zhang, Lanjun Wang et al. · Tianjin University

Benchmark of 6,432 risky prompts evaluating jailbreak attacks, defenses, and harmful-image detection across eight T2I models

Output Integrity Attack Prompt Injection vision generative multimodal
1 citation · PDF · Code
attack arXiv Oct 20, 2025

Multimodal Safety Is Asymmetric: Cross-Modal Exploits Unlock Black-Box MLLMs Jailbreaks

Xinkai Wang, Beibei Li, Zerui Shao et al. · Sichuan University · Tianjin University +1 more

Black-box RL-based jailbreak framework exploiting multimodal safety asymmetry to achieve 95%+ attack success on GPT-4o and Gemini

Prompt Injection nlp multimodal
1 citation · PDF
attack arXiv Sep 30, 2025

Stealthy Yet Effective: Distribution-Preserving Backdoor Attacks on Graph Classification

Xiaobao Wang, Ruoxiao Sun, Yujun Zhang et al. · Tianjin University · Guangdong Laboratory of Artificial Intelligence and Digital Economy +1 more

Clean-label GNN backdoor attack uses adversarial training to learn in-distribution triggers that evade anomaly detection

Model Poisoning graph
2 citations · PDF · Code
attack arXiv Aug 5, 2025

Selection-Based Vulnerabilities: Clean-Label Backdoor Attacks in Active Learning

Yuhan Zhi, Longtian Wang, Xiaofei Xie et al. · Xi’an Jiaotong University · Singapore Management University +1 more

Exploits active learning acquisition functions to inject clean-label backdoor samples, achieving 94% ASR at just 0.5% poisoning budget

Model Poisoning Data Poisoning Attack vision
PDF
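The attack above works because active-learning acquisition functions deterministically favor "informative" samples. As a minimal sketch of the exploited mechanism (not the paper's attack), under least-confidence acquisition, samples crafted to look maximally uncertain are pulled into the labeled set first:

```python
import numpy as np

def uncertainty_acquisition(probs, k):
    """Least-confidence acquisition: select the k pool samples whose top
    class probability is lowest, i.e. where the model is least sure."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# 8 "natural" pool samples the model is fairly confident about...
clean = np.array([[0.90, 0.10]] * 4 + [[0.15, 0.85]] * 4)
# ...and 2 attacker-crafted samples tuned to look maximally uncertain.
crafted = np.array([[0.51, 0.49], [0.48, 0.52]])
pool = np.vstack([clean, crafted])

picked = uncertainty_acquisition(pool, k=2)  # the crafted rows win selection
```

Because the poisoned samples carry correct labels (clean-label), they pass human annotation unnoticed once selected, which is what lets a tiny poisoning budget succeed.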