Latest papers

42 papers
attack arXiv Mar 29, 2026 · 10d ago

Hidden Ads: Behavior Triggered Semantic Backdoors for Advertisement Injection in Vision Language Models

Duanyi Yao, Changyue Li, Zhicong Huang et al. · Hong Kong University of Science and Technology · The Chinese University of Hong Kong +2 more

Semantic backdoor attack on VLMs that injects ads when users ask recommendation questions about specific content categories

Model Poisoning multimodal vision nlp
PDF
attack arXiv Mar 26, 2026 · 13d ago

PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems

Haozhen Wang, Haoyue Liu, Jionghao Zhu et al. · The Chinese University of Hong Kong · Taobao and Tmall Group

Combines prompt injection with database poisoning to manipulate RAG system outputs for arbitrary queries without knowing them beforehand

Input Manipulation Attack Data Poisoning Attack Prompt Injection nlp
PDF
tool arXiv Mar 19, 2026 · 20d ago

MedForge: Interpretable Medical Deepfake Detection via Forgery-aware Reasoning

Zhihui Chen, Kai He, Qingyuan Lei et al. · National University of Singapore · The Chinese University of Hong Kong +3 more

Detects medical image deepfakes via localize-then-analyze reasoning with expert-aligned explanations on synthetic lesion edits

Output Integrity Attack vision multimodal
PDF Code
defense arXiv Mar 9, 2026 · 4w ago

Privacy-Preserving End-to-End Full-Duplex Speech Dialogue Models

Nikita Kuzmin, Tao Zhong, Jiajun Deng et al. · Nanyang Technological University · A*STAR +3 more

Defends against speaker re-identification attacks on LLM speech dialogue models using streaming voice anonymization

Sensitive Information Disclosure audio nlp
PDF
survey arXiv Mar 8, 2026 · 4w ago

From Thinker to Society: Security in Hierarchical Autonomy Evolution of AI Agents

Xiaolei Zhang, Lu Zhou, Xiaogang Xu et al. · Nanjing University of Aeronautics and Astronautics · Collaborative Innovation Center of Novel Software Technology and Industrialization +5 more

Surveys LLM agent security threats across three autonomy tiers: cognitive manipulation, tool misuse, and multi-agent systemic failures

Prompt Injection Insecure Plugin Design Excessive Agency nlp
PDF
attack arXiv Mar 2, 2026 · 5w ago

VidDoS: Universal Denial-of-Service Attack on Video-based Large Language Models

Duoxun Tang, Dasen Dai, Jiyao Wang et al. · Tsinghua University · The Chinese University of Hong Kong +4 more

Universal sponge attack on Video-LLMs inflates token generation 205× and inference latency 15× via optimized adversarial video frame triggers

Input Manipulation Attack Model Denial of Service multimodal vision nlp
PDF Code
defense arXiv Feb 24, 2026 · 6w ago

Robust Spiking Neural Networks Against Adversarial Attacks

Shuai Wang, Malu Zhang, Yulin Jiang et al. · University of Electronic Science and Technology of China · National University of Singapore +2 more

Defends Spiking Neural Networks against adversarial attacks by pushing membrane potentials away from firing thresholds and adding probabilistic noise

Input Manipulation Attack vision
PDF
benchmark arXiv Feb 3, 2026 · 9w ago

Steering Externalities: Benign Activation Steering Unintentionally Increases Jailbreak Risk for Large Language Models

Chen Xiong, Zhiyuan He, Pin-Yu Chen et al. · The Chinese University of Hong Kong · IBM Research

Reveals that benign activation steering vectors inadvertently erode LLM safety guardrails, amplifying jailbreak success rates past 80%

Prompt Injection nlp
PDF
attack arXiv Jan 30, 2026 · 9w ago

The Alignment Curse: Cross-Modality Jailbreak Transfer in Omni-Models

Yupeng Chen, Junchi Yu, Aoxi Liu et al. · University of Oxford · The Chinese University of Hong Kong

Transfers text jailbreaks to audio via modality alignment in omni-models, outperforming native audio jailbreaks as a new red-teaming baseline

Prompt Injection audio nlp multimodal
PDF
attack arXiv Jan 30, 2026 · 9w ago

A Fragile Guardrail: Diffusion LLM's Safety Blessing and Its Failure Mode

Zeyuan He, Yupeng Chen, Lang Lin et al. · University of Oxford · The Chinese University of Hong Kong +2 more

Discovers D-LLMs' intrinsic jailbreak resistance, then breaks it with context nesting prompts achieving SOTA attack rates

Prompt Injection nlp
PDF
defense arXiv Jan 27, 2026 · 10w ago

From Internal Diagnosis to External Auditing: A VLM-Driven Paradigm for Online Test-Time Backdoor Defense

Binyan Xu, Fan Yang, Xilin Dai et al. · The Chinese University of Hong Kong · Zhejiang University +1 more

Defends backdoored vision models at test-time using VLMs as external semantic auditors decoupled from victim model parameters

Model Poisoning vision
PDF
attack arXiv Jan 16, 2026 · 11w ago

Membership Inference on LLMs in the Wild

Jiatong Yi, Yanyang Li · The Chinese University of Hong Kong

Black-box membership inference attack on LLMs using word-by-word sampling and semantic scoring, beating baselines by 15.7 AUC points

Membership Inference Attack nlp
PDF Code
benchmark arXiv Jan 9, 2026 · 12w ago

FinVault: Benchmarking Financial Agent Safety in Execution-Grounded Environments

Zhi Yang, Runguo Li, Qiqi Qiang et al. · Shanghai University of Finance and Economics · The Chinese University of Hong Kong +8 more

Benchmarks prompt injection and jailbreak attacks on LLM financial agents in execution-grounded, state-writable sandbox environments

Prompt Injection Excessive Agency nlp
PDF Code
defense arXiv Jan 5, 2026 · Jan 2026

FMVP: Masked Flow Matching for Adversarial Video Purification

Duoxun Tang, Xueyi Zhang, Chak Hin Wang et al. · Tsinghua University · The Chinese University of Hong Kong +2 more

Defends video recognition models against PGD and CW attacks via flow-matching purification with masking and frequency-gated loss

Input Manipulation Attack vision
PDF
defense arXiv Dec 5, 2025 · Dec 2025

Self-Supervised AI-Generated Image Detection: A Camera Metadata Perspective

Nan Zhong, Mian Zou, Yiran Xu et al. · City University of Hong Kong · Fudan University +1 more

Self-supervised AI image detector trained on camera EXIF metadata to learn photography-intrinsic features, generalizing across diverse generative models

Output Integrity Attack vision
1 citation PDF
defense arXiv Dec 4, 2025 · Dec 2025

A Sanity Check for Multi-In-Domain Face Forgery Detection in the Real World

Jikang Cheng, Renye Yan, Zhiyuan Yan et al. · Peking University · Nanjing University +3 more

Proposes DevDet framework that amplifies real/fake differences over domain signals for robust multi-domain deepfake detection

Output Integrity Attack vision
PDF
defense arXiv Dec 3, 2025 · Dec 2025

SELF: A Robust Singular Value and Eigenvalue Approach for LLM Fingerprinting

Hanxiu Zhang, Yue Zheng · The Chinese University of Hong Kong

Fingerprints LLM weights via singular value decomposition to detect stolen models, resisting false claims and weight manipulation attacks

Model Theft nlp
1 citation PDF Code
attack arXiv Nov 27, 2025 · Nov 2025

Exposing Vulnerabilities in RL: A Novel Stealthy Backdoor Attack through Reward Poisoning

Bokang Zhang, Chaojun Lu, Jianhui Li et al. · The Chinese University of Hong Kong · Zhejiang University

Stealthy black-box backdoor attack on RL agents via reward poisoning, causing catastrophic performance drops only when triggered

Model Poisoning reinforcement-learning
PDF
attack arXiv Nov 25, 2025 · Nov 2025

Semantic Router: On the Feasibility of Hijacking MLLMs via a Single Adversarial Perturbation

Changyue Li, Jiaying Li, Youliang Yuan et al. · The Chinese University of Hong Kong · University of Electronic Science and Technology of China +1 more

Universal adversarial image perturbation semantically routes MLLM inputs to multiple distinct attacker-defined targets simultaneously

Input Manipulation Attack Prompt Injection vision multimodal nlp
PDF
defense arXiv Nov 24, 2025 · Nov 2025

ConceptGuard: Proactive Safety in Text-and-Image-to-Video Generation through Multimodal Risk Detection

Ruize Ma, Minghong Cai, Yilei Jiang et al. · The Chinese University of Hong Kong · Nanjing University +2 more

Proactive multimodal safety guardrail for video generation that detects unsafe text+image prompts and suppresses harmful concept generation

Prompt Injection multimodal generative vision
1 citation 1 influential PDF Code