Latest papers

26 papers
defense arXiv Mar 26, 2026 · 13d ago

Knowledge-Guided Adversarial Training for Infrared Object Detection via Thermal Radiation Modeling

Shiji Zhao, Shukun Xiong, Maoxun Yuan et al. · Beihang University · Alibaba Group +2 more

Adversarial training for infrared object detectors guided by thermal radiation physics to improve robustness against attacks and corruptions

Input Manipulation Attack vision
PDF
attack arXiv Mar 22, 2026 · 17d ago

JANUS: A Lightweight Framework for Jailbreaking Text-to-Image Models via Distribution Optimization

Haolun Zheng, Yu He, Tailun Chen et al. · Zhejiang University · Hangzhou HighTech Zone (Binjiang) Blockchain and Data Security Research Institute +1 more

Distribution optimization jailbreak attack on T2I models achieving 43% attack success rate bypassing safety filters on Stable Diffusion

Input Manipulation Attack Prompt Injection vision generative multimodal
PDF
attack arXiv Mar 15, 2026 · 24d ago

Membership Inference for Contrastive Pre-training Models with Text-only PII Queries

Ruoxi Cheng, Yizhong Ding, Hongyi Zhang et al. · Beijing Electronic Science and Technology Institute · Alibaba Group +2 more

Text-only membership inference attack on CLIP/CLAP models that detects PII memorization without exposing biometric data

Membership Inference Attack multimodal vision audio nlp
PDF
attack arXiv Feb 26, 2026 · 5w ago

Obscure but Effective: Classical Chinese Jailbreak Prompt Optimization via Bio-Inspired Search

Xun Huang, Simeng Qin, Xiaoshuang Jia et al. · Nanyang Technological University · BraneMatrix AI +7 more

Bio-inspired optimization generates classical Chinese jailbreak prompts that defeat modern-language safety guardrails in black-box LLMs

Prompt Injection nlp
PDF
attack arXiv Feb 6, 2026 · 8w ago

VENOMREC: Cross-Modal Interactive Poisoning for Targeted Promotion in Multimodal LLM Recommender Systems

Guowei Guan, Yurong Hao, Jiaming Zhang et al. · Nanyang Technological University · Alibaba Group

Cross-modal synchronized data poisoning attack that steers MLLM recommender systems to promote target items via attention-guided token-patch edits

Data Poisoning Attack Training Data Poisoning multimodal nlp vision
PDF
defense arXiv Feb 3, 2026 · 9w ago

SEW: Strengthening Robustness of Black-box DNN Watermarking via Specificity Enhancement

Huming Qiu, Mi Zhang, Junjie Sun et al. · Fudan University · Alibaba Group

Defends DNN model ownership watermarks against removal attacks by reducing watermark association with approximate reverse-engineered keys

Model Theft vision
PDF
defense arXiv Jan 31, 2026 · 9w ago

A Causal Perspective for Enhancing Jailbreak Attack and Defense

Licheng Pan, Yunsheng Lu, Jiexi Liu et al. · Zhejiang University · University of Chicago +1 more

Causal discovery framework identifies interpretable LLM jailbreak drivers to both enhance attacks and improve prompt-level defenses

Prompt Injection nlp
PDF Code
defense arXiv Jan 29, 2026 · 9w ago

Mining Forgery Traces from Reconstruction Error: A Weakly Supervised Framework for Multimodal Deepfake Temporal Localization

Midou Guo, Qilin Yin, Wei Lu et al. · Sun Yat-Sen University · Alibaba Group +1 more

Weakly supervised deepfake temporal localization using MAE reconstruction errors and asymmetric contrastive loss on multimodal video

Output Integrity Attack vision audio multimodal
PDF
benchmark arXiv Jan 16, 2026 · 11w ago

Your One-Stop Solution for AI-Generated Video Detection

Long Ma, Zihao Xue, Yan Wang et al. · University of Science and Technology of China · Huzhou University +3 more

Comprehensive benchmark evaluating 33 AI-generated video detectors across 440K+ videos from 31 generative models

Output Integrity Attack vision generative
1 citation PDF Code
benchmark arXiv Jan 8, 2026 · Jan 2026

BackdoorAgent: A Unified Framework for Backdoor Attacks on LLM-based Agents

Yunhao Feng, Yige Li, Yutao Wu et al. · Fudan University · Alibaba Group +4 more

Benchmark framework systematizing backdoor attacks across planning, memory, and tool-use stages of LLM agent workflows

Model Poisoning Excessive Agency nlp multimodal
1 citation PDF Code
attack arXiv Dec 17, 2025 · Dec 2025

Unveiling the Attribute Misbinding Threat in Identity-Preserving Models

Junming Fu, Jishen Zeng, Yi Jiang et al. · Sun Yat-Sen University · Alibaba Group +1 more

Exploits attention bias in identity-preserving diffusion models via crafted prompts to bypass text filters and generate targeted NSFW content

Input Manipulation Attack vision generative multimodal
PDF Code
attack arXiv Dec 12, 2025 · Dec 2025

Attacking and Securing Community Detection: A Game-Theoretic Framework

Yifan Niu, Aochuan Chen, Tingyang Xu et al. · The Hong Kong University of Science and Technology · Alibaba Group

Proposes adversarial graph perturbations and a Nash equilibrium game framework to attack and defend GNN-based community detection

Input Manipulation Attack graph
PDF
attack arXiv Dec 9, 2025 · Dec 2025

MIRAGE: Misleading Retrieval-Augmented Generation via Black-box and Query-agnostic Poisoning Attacks

Tailun Chen, Yu He, Yan Wang et al. · Zhejiang University · Alibaba Group +1 more

Black-box RAG corpus poisoning attack using persona-driven query synthesis, semantic anchoring, and adversarial preference optimization to mislead LLMs

Data Poisoning Attack Prompt Injection nlp
PDF
defense arXiv Dec 9, 2025 · Dec 2025

Disrupting Hierarchical Reasoning: Adversarial Protection for Geographic Privacy in Multimodal Reasoning Models

Jiaming Zhang, Che Wang, Yang Cao et al. · Nanyang Technological University · Peking University +2 more

Defends geographic privacy from VLM inference using concept-aware adversarial image perturbations that cascade through hierarchical reasoning chains

Input Manipulation Attack Prompt Injection vision multimodal nlp
PDF Code
attack arXiv Dec 5, 2025 · Dec 2025

VRSA: Jailbreaking Multimodal Large Language Models through Visual Reasoning Sequential Attack

Shiji Zhao, Shukun Xiong, Yao Huang et al. · Beihang University · Alibaba Group

Jailbreaks MLLMs by decomposing harmful text into sequential semantically crafted sub-images that aggregate harmful intent across frames

Prompt Injection vision nlp multimodal
PDF
defense arXiv Nov 24, 2025 · Nov 2025

Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization

Xurui Li, Kaisong Song, Rui Zhu et al. · Fudan University · Alibaba Group +3 more

Co-evolving attack-defense framework uses MCTS-based jailbreak exploration and curriculum RL to jointly train stronger LLM safety alignment

Prompt Injection nlp
2 citations PDF Code
defense arXiv Nov 24, 2025 · Nov 2025

ConceptGuard: Proactive Safety in Text-and-Image-to-Video Generation through Multimodal Risk Detection

Ruize Ma, Minghong Cai, Yilei Jiang et al. · The Chinese University of Hong Kong · Nanjing University +2 more

Proactive multimodal safety guardrail for video generation that detects unsafe text+image prompts and suppresses harmful concept generation

Prompt Injection multimodal generative vision
1 citation 1 influential PDF Code
defense arXiv Nov 17, 2025 · Nov 2025

DualTAP: A Dual-Task Adversarial Protector for Mobile MLLM Agents

Fuyao Zhang, Jiaming Zhang, Che Wang et al. · Nanyang Technological University · Peking University +3 more

Adversarial perturbation defense that blinds untrusted router MLLMs to PII in mobile screenshots while preserving agent task utility

Input Manipulation Attack vision multimodal
2 citations 1 influential PDF
defense arXiv Nov 14, 2025 · Nov 2025

EcoAlign: An Economically Rational Framework for Efficient LVLM Alignment

Ruoxi Cheng, Haoxuan Ma, Teng Ma et al. · Alibaba Group · Nanjing University +2 more

Defends LVLMs against jailbreaks via economically rational inference-time thought-graph search with weakest-link safety enforcement

Prompt Injection vision nlp multimodal
2 citations PDF
attack arXiv Oct 21, 2025 · Oct 2025

Genesis: Evolving Attack Strategies for LLM Web Agent Red-Teaming

Zheng Zhang, Jiarui He, Yuchen Cai et al. · The Hong Kong University of Science and Technology · Tencent +2 more

Evolves indirect prompt injection attacks against LLM web agents using genetic algorithms and a growing strategy library

Prompt Injection Excessive Agency nlp
PDF