Latest papers

8 papers
attack · arXiv · Apr 2, 2026

Low-Effort Jailbreak Attacks Against Text-to-Image Safety Filters

Ahmed B Mustafa, Zihan Ye, Yang Lu et al. · University of Nottingham · Xi’an Jiaotong-Liverpool University +1 more

Low-effort prompt-based jailbreaks bypass text-to-image safety filters via linguistic reframing, achieving a 74% attack success rate

Prompt Injection · multimodal · generative
PDF
defense · arXiv · Jan 30, 2026

DNA: Uncovering Universal Latent Forgery Knowledge

Jingtong Dou, Chuancheng Shi, Yemin Wang et al. · The University of Sydney · Xiamen University +2 more

Probes latent neurons in pre-trained vision models to detect AI-generated images without costly fine-tuning, outperforming black-box baselines

Output Integrity Attack · vision
PDF
defense · arXiv · Dec 8, 2025

Towards Robust Protective Perturbation against DeepFake Face Swapping

Hengyang Yao, Lin Li, Ke Sun et al. · University of Birmingham · University of Oxford +2 more

Defends faces against deepfake swapping using RL-learned robust adversarial perturbations, outperforming EOT baselines by 26%

Output Integrity Attack · vision · generative
PDF
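The EOT (Expectation over Transformations) baseline this paper compares against averages loss gradients over a distribution of random transformations, so the resulting perturbation stays effective under the transforms a face-swapping pipeline might apply. A minimal toy sketch of that idea, using a hypothetical quadratic loss and scaling-transform family (not the paper's actual setup):

```python
import numpy as np

def eot_gradient(x, y, n_samples=64, seed=0):
    # Expectation over Transformations (EOT): average the gradient of a
    # toy loss L(x) = ||s*x - y||^2 over random scalings s ~ U(0.8, 1.2).
    # The analytic per-sample gradient is 2*s*(s*x - y); averaging over s
    # yields a perturbation direction robust to the whole transform family.
    rng = np.random.default_rng(seed)
    scales = rng.uniform(0.8, 1.2, n_samples)
    grads = [2 * s * (s * x - y) for s in scales]
    return np.mean(grads, axis=0)

# One FGSM-style sign step to grow a protective perturbation (toy example).
x = np.zeros(4)
target = np.ones(4)
perturbed = x + 0.1 * np.sign(eot_gradient(x, target))
```

In a real protective-perturbation setting, `x` would be a face image and the loss would be computed through the face-swapping model; the averaging structure is what carries over.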
defense · arXiv · Dec 1, 2025

M4-BLIP: Advancing Multi-Modal Media Manipulation Detection through Face-Enhanced Local Analysis

Hang Wu, Ke Sun, Jiayi Ji et al. · Xiamen University

Proposes M4-BLIP, a BLIP-2-based framework combining facial features and LLMs to detect multi-modal media manipulation

Output Integrity Attack · multimodal · vision · nlp
PDF
defense · arXiv · Oct 17, 2025

Backdoor or Manipulation? Graph Mixture of Experts Can Defend Against Various Graph Adversarial Attacks

Yuyuan Feng, Bin Ma, Enyan Dai · Xiamen University · The Hong Kong University of Science and Technology (Guangzhou)

Mixture-of-Experts GNN framework that simultaneously defends against backdoor, edge manipulation, and node injection attacks via diversity loss and robustness-aware routing

Model Poisoning · Input Manipulation Attack · graph
PDF · Code
defense · arXiv · Sep 29, 2025

MANI-Pure: Magnitude-Adaptive Noise Injection for Adversarial Purification

Xiaoyi Huang, Junwei Wu, Kejia Zhang et al. · Xiamen University · Emory University

Frequency-adaptive diffusion purification defense that targets high-frequency adversarial noise, achieving state-of-the-art robust accuracy on RobustBench

Input Manipulation Attack · vision
PDF
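The general idea of frequency-adaptive noise injection can be illustrated with a short NumPy sketch that shapes Gaussian noise so its energy concentrates at high spatial frequencies, where adversarial perturbations tend to live. The linear radial weighting below is a hypothetical stand-in, not the paper's formulation:

```python
import numpy as np

def high_frequency_noise(img, sigma=0.05, seed=0):
    # Shape white Gaussian noise in the frequency domain so more energy
    # lands at high spatial frequencies before injecting it into the image.
    # The radial weighting (0 at DC, 1 at the highest frequency) is an
    # illustrative choice only.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, img.shape)
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    weight = np.hypot(fx, fy)
    weight /= weight.max()
    shaped = np.real(np.fft.ifft2(np.fft.fft2(noise) * weight))
    return img + shaped
```

A purification pipeline would then denoise the result with a diffusion model; only the injection step is sketched here.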
attack · arXiv · Aug 10, 2025

Fading the Digital Ink: A Universal Black-Box Attack Framework for 3DGS Watermarking Systems

Qingyuan Zeng, Shu Jiang, Jiajing Lin et al. · Xiamen University · The Hong Kong Polytechnic University

Black-box evolutionary attack that removes invisible watermarks from 3DGS models while preserving visual quality

Output Integrity Attack · vision
PDF
survey · arXiv · Aug 4, 2025

A Survey on Data Security in Large Language Models

Kang Chen, Xiuze Zhou, Yuanguo Lin et al. · Jimei University · Wenzhou-Kean University +3 more

Surveys data-centric security risks in LLMs, including data poisoning, prompt injection, and PII leakage, and reviews defenses across the model lifecycle

Data Poisoning Attack · Prompt Injection · Training Data Poisoning · Sensitive Information Disclosure · nlp
PDF