Latest papers

17 papers
defense arXiv Feb 6, 2026

AEGIS: Adversarial Target-Guided Retention-Data-Free Robust Concept Erasure from Diffusion Models

Fengpeng Li, Kemou Li, Qizhou Wang et al. · University of Macau · King Abdullah University of Science and Technology +2 more

Defends concept erasure in diffusion models against adversarial prompt-reactivation attacks via semantic-center adversarial erasure targets and gradient projection

Input Manipulation Attack vision generative
PDF Code
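The gradient projection the summary mentions is, generically, the step of removing from a parameter update its component along a direction whose behaviour should be preserved. A minimal sketch of that generic operation, assuming plain list vectors (illustrative only, not AEGIS's actual implementation):

```python
def project_out(grad, keep_dir):
    """Orthogonally project `grad` off `keep_dir`, so a step along the
    result leaves behaviour associated with `keep_dir` unchanged."""
    dot = sum(g * d for g, d in zip(grad, keep_dir))
    norm_sq = sum(d * d for d in keep_dir)
    if norm_sq == 0.0:
        return list(grad)
    return [g - (dot / norm_sq) * d for g, d in zip(grad, keep_dir)]
```

The projected gradient is orthogonal to `keep_dir`, which is why descent along it does not move the model in the retained direction.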
attack arXiv Jan 7, 2026

Inference Attacks Against Graph Generative Diffusion Models

Xiuling Wang, Xin Huang, Guibo Luo et al. · Hong Kong Baptist University · Peking University

Proposes three black-box inference attacks against graph generative diffusion models, recovering training graph structure, properties, and membership.

Model Inversion Attack Membership Inference Attack graph generative
PDF Code
defense arXiv Dec 8, 2025

AdLift: Lifting Adversarial Perturbations to Safeguard 3D Gaussian Splatting Assets Against Instruction-Driven Editing

Ziming Hong, Tianyu Huang, Runnan Chen et al. · The University of Sydney · University of Technology Sydney +3 more

Defends 3D Gaussian Splatting assets from AI editing by lifting adversarial perturbations from 2D image space into 3D Gaussian parameters

Input Manipulation Attack vision generative
4 citations PDF Code
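Perturbation-based protections like AdLift's typically build on iterated gradient-sign steps projected into a small L-infinity ball around the original input. A minimal single-step sketch with toy list vectors (the generic PGD step, not AdLift's 2D-to-3D lifting):

```python
def pgd_step(x, grad, alpha, x0, eps):
    # One projected-gradient-sign step: move each coordinate by alpha in
    # the direction of the loss gradient's sign, then clip back into the
    # L-infinity ball of radius eps around the original point x0.
    sign = lambda g: (g > 0) - (g < 0)
    stepped = [xi + alpha * sign(gi) for xi, gi in zip(x, grad)]
    return [min(max(si, x0i - eps), x0i + eps)
            for si, x0i in zip(stepped, x0)]
```

Iterating this step while ascending an editing model's loss yields a perturbation that stays visually small (bounded by eps) but degrades the editor's output.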
defense arXiv Nov 27, 2025

Creating Blank Canvas Against AI-enabled Image Forgery

Qi Song, Ziyuan Luo, Renjie Wan · Hong Kong Baptist University

Adds adversarial perturbations to images as a 'blank canvas' so SAM can localize AIGC-based forgery after tampering

Output Integrity Attack vision generative
PDF Code
defense arXiv Nov 3, 2025

Detecting Generated Images by Fitting Natural Image Distributions

Yonggang Zhang, Jun Nie, Xinmei Tian et al. · The Hong Kong University of Science and Technology · Hong Kong Baptist University +4 more

Proposes ConV, a generated-image detector that exploits the geometry of the natural-image manifold and requires no generated training samples

Output Integrity Attack vision generative
2 citations PDF Code
defense arXiv Nov 2, 2025

Advancing Machine-Generated Text Detection from an Easy to Hard Supervision Perspective

Chenwang Wu, Yiu-ming Cheung, Bo Han et al. · Hong Kong Baptist University · University of Science and Technology of China

Proposes an easy-to-hard training framework that improves LLM-generated text detection under noisy label conditions

Output Integrity Attack nlp
PDF Code
attack arXiv Oct 15, 2025

Toward Efficient Inference Attacks: Shadow Model Sharing via Mixture-of-Experts

Li Bai, Qingqing Ye, Xinwei Zhang et al. · The Hong Kong Polytechnic University · PolyU Research Centre for Privacy and Security Technologies in Future Smart Systems +1 more

Shares shadow models via a Mixture-of-Experts pool, cutting the computational cost of membership inference attacks while preserving attack effectiveness

Membership Inference Attack vision nlp
2 citations 1 influential PDF
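Shadow-model membership inference, which this paper makes cheaper to run, reduces to calibrating a decision rule on models the attacker trains themselves: observe the target model's confidence on an example and compare against confidences the shadow models assign to known members and non-members. A minimal confidence-threshold sketch (illustrative; the MoE sharing itself and all names here are not from the paper):

```python
def confidence_mia(confidences_in, confidences_out, target_conf):
    # Pick the threshold that best separates member vs non-member
    # confidences observed on shadow models, then classify the target
    # example: confidence >= threshold predicts "member".
    best_thr, best_acc = 0.0, 0.0
    for thr in sorted(set(confidences_in + confidences_out)):
        acc = (sum(c >= thr for c in confidences_in)
               + sum(c < thr for c in confidences_out)) / (
                   len(confidences_in) + len(confidences_out))
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return target_conf >= best_thr
```

The cost the paper targets comes from training many shadow models to collect `confidences_in` / `confidences_out`; sharing them through a Mixture-of-Experts amortizes that training.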
defense arXiv Oct 14, 2025

ImageSentinel: Protecting Visual Datasets from Unauthorized Retrieval-Augmented Image Generation

Ziyuan Luo, Yangyi Zhao, Ka Chun Cheung et al. · Hong Kong Baptist University · NVIDIA

Protects visual datasets from unauthorized RAIG use by injecting sentinel images detectable via secret random-string retrieval keys

Output Integrity Attack vision generative
3 citations PDF Code
defense arXiv Oct 9, 2025

Physics-Driven Spatiotemporal Modeling for AI-Generated Video Detection

Shuhai Zhang, ZiHao Lian, Jiahao Yang et al. · South China University of Technology · Pazhou Lab +4 more

Detects AI-generated videos via physics-driven NSG statistic quantifying violations of probability flow conservation laws

Output Integrity Attack vision generative
6 citations 1 influential PDF Code
attack arXiv Sep 27, 2025

Virus Infection Attack on LLMs: Your Poisoning Can Spread "VIA" Synthetic Data

Zi Liang, Qingqing Ye, Xuan Liu et al. · The Hong Kong Polytechnic University · University of California +2 more

Proposes VIA, an attack framework that spreads poisoning and backdoor payloads through LLM synthetic data by hijacking benign training samples

Data Poisoning Attack Model Poisoning Training Data Poisoning nlp
2 citations 1 influential PDF
defense arXiv Sep 26, 2025

Training-Free Multimodal Deepfake Detection via Graph Reasoning

Yuxin Liu, Fei Wang, Kun Li et al. · Anhui University · Hefei University of Technology +2 more

Training-free graph-based in-context learning framework that enhances VLMs for multimodal deepfake detection without fine-tuning

Output Integrity Attack multimodal vision nlp audio
PDF
attack arXiv Sep 24, 2025

Generative Model Inversion Through the Lens of the Manifold Hypothesis

Xiong Peng, Bo Han, Fengfei Yu et al. · Hong Kong Baptist University · The University of Sydney +2 more

Explains why generative model inversion attacks work via manifold theory and proposes methods to amplify their effectiveness

Model Inversion Attack vision generative
PDF
defense arXiv Sep 15, 2025

Reasoned Safety Alignment: Ensuring Jailbreak Defense via Answer-Then-Check

Chentao Cao, Xiaojun Xu, Bo Han et al. · ByteDance Seed · Hong Kong Baptist University

Defends LLMs against jailbreaks by training models to internally answer then self-evaluate safety before responding

Prompt Injection nlp
PDF
defense arXiv Aug 31, 2025

MarkSplatter: Generalizable Watermarking for 3D Gaussian Splatting Model via Splatter Image Structure

Xiufeng Huang, Ziyuan Luo, Qi Song et al. · Hong Kong Baptist University

Embeds copyright messages into 3D Gaussian Splatting content via single forward pass using neural Splatter Image structure

Output Integrity Attack vision
PDF Code
defense arXiv Aug 18, 2025

RepreGuard: Detecting LLM-Generated Text by Revealing Hidden Representation Patterns

Xin Chen, Junchao Wu, Shu Yang et al. · University of Macau · Chinese Academy of Sciences +2 more

Proposes RepreGuard, which detects LLM-generated text from hidden activation patterns, achieving 94.92% AUROC with robust out-of-distribution detection

Output Integrity Attack nlp
PDF Code
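Representation-based detectors of this kind usually reduce to scoring a text's hidden-state vector along a direction that separates human-written from LLM-written activations. A minimal mean-difference sketch with toy list vectors (illustrative only, not RepreGuard's actual feature extraction):

```python
def activation_direction_score(acts_human, acts_llm, act):
    # Project a text's hidden-state vector onto the mean-difference
    # direction between LLM-written and human-written activations;
    # higher scores read as "more LLM-like".
    dim = len(act)
    mean_h = [sum(a[i] for a in acts_human) / len(acts_human)
              for i in range(dim)]
    mean_l = [sum(a[i] for a in acts_llm) / len(acts_llm)
              for i in range(dim)]
    direction = [l - h for l, h in zip(mean_l, mean_h)]
    return sum(x * d for x, d in zip(act, direction))
```

Because the direction is estimated from internal activations rather than surface text statistics, scores of this form tend to transfer better across domains, which is the out-of-distribution robustness the paper emphasizes.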
attack arXiv Aug 8, 2025

Fact2Fiction: Targeted Poisoning Attack to Agentic Fact-checking System

Haorui He, Yupeng Li, Bin Benjamin Zhu et al. · Hong Kong Baptist University · The University of Hong Kong +1 more

Poisons RAG knowledge bases of LLM fact-checkers by mimicking claim decomposition and exploiting justifications to craft targeted malicious evidence

Data Poisoning Attack Prompt Injection nlp
PDF Code
attack arXiv Jan 2, 2025

Transferability of Adversarial Attacks in Video-based MLLMs: A Cross-modal Image-to-Video Approach

Linhao Huang, Xue Jiang, Zhiqiang Wang et al. · Tsinghua University · Peng Cheng Laboratory +4 more

Black-box adversarial attack transfers from image surrogate models to video MLLMs via spatiotemporal perturbation propagation

Input Manipulation Attack vision multimodal nlp
6 citations PDF