Latest papers

66 papers
defense arXiv Mar 30, 2026 · 7d ago

FedFG: Privacy-Preserving and Robust Federated Learning via Flow-Matching Generation

Ruiyang Wang, Rong Pan, Zhengan Yao · Sun Yat-Sen University

Federated learning defense using flow-matching generators to prevent gradient inversion and detect poisoning attacks simultaneously

Data Poisoning Attack · Model Inversion Attack · federated-learning · vision
PDF Code
attack arXiv Mar 27, 2026 · 10d ago

R-PGA: Robust Physical Adversarial Camouflage Generation via Relightable 3D Gaussian Splatting

Tianrui Lou, Siyuan Liang, Jiawei Liang et al. · Sun Yat-Sen University · National University of Singapore

Physical adversarial camouflage attack on autonomous vehicles using relightable 3D Gaussian splatting for robustness across lighting and viewing angles

Input Manipulation Attack · vision
PDF
defense arXiv Mar 25, 2026 · 12d ago

High-Fidelity Face Content Recovery via Tamper-Resilient Versatile Watermarking

Peipeng Yu, Jinfeng Xie, Chengfu Ou et al. · Jinan University · University of Macau +2 more

Embeds semantic watermarks in face images for copyright protection, pixel-level deepfake localization, and content recovery after manipulation

Output Integrity Attack · vision · generative
PDF
tool arXiv Mar 24, 2026 · 13d ago

AgentFoX: LLM Agent-Guided Fusion with eXplainability for AI-Generated Image Detection

Yangxin Yu, Yue Zhou, Bin Li et al. · Shenzhen University · Sun Yat-Sen University +1 more

LLM-guided fusion framework that combines multiple forensic detectors to identify AI-generated images with explainable verdicts

Output Integrity Attack · vision · multimodal · nlp
PDF
attack arXiv Mar 16, 2026 · 21d ago

ClawWorm: Self-Propagating Attacks Across LLM Agent Ecosystems

Yihao Zhang, Zeming Wei, Xiaokun Luan et al. · Peking University · Sun Yat-Sen University +3 more

Self-replicating worm attack on LLM agent ecosystems achieving autonomous propagation through configuration hijacking and broadcast infection

AI Supply Chain Attacks · Prompt Injection · Excessive Agency · nlp · multimodal
PDF
defense arXiv Mar 12, 2026 · 25d ago

ForensicZip: More Tokens are Better but Not Necessary in Forensic Vision-Language Models

Yingxin Lai, Zitong Yu, Jun Wang et al. · Great Bay University · Shenzhen University +2 more

Forensic-aware visual token pruning for deepfake/AIGC detection VLMs using Birth-Death Optimal Transport to preserve manipulation traces

Output Integrity Attack · vision · multimodal · nlp
PDF Code
defense arXiv Mar 6, 2026 · 4w ago

BlackMirror: Black-Box Backdoor Detection for Text-to-Image Models via Instruction-Response Deviation

Feiran Li, Qianqian Xu, Shilong Bao et al. · Institute of Information Engineering · University of Chinese Academy of Sciences +4 more

Black-box backdoor detector for text-to-image diffusion models using semantic instruction-response deviation across varied prompts

Model Poisoning · vision · generative · multimodal
PDF Code
attack arXiv Feb 23, 2026 · 6w ago

Advantage-based Temporal Attack in Reinforcement Learning

Shenghong He · Sun Yat-Sen University

Proposes AAT, a transformer-based adversarial attack that generates temporally correlated perturbations to mislead DRL agents across sequential decisions.

Input Manipulation Attack · reinforcement-learning · vision
PDF
attack arXiv Feb 15, 2026 · 7w ago

SkillJect: Automating Stealthy Skill-Based Prompt Injection for Coding Agents with Trace-Driven Closed-Loop Refinement

Xiaojun Jia, Jie Liao, Simeng Qin et al. · Nanyang Technological University · Chongqing University +4 more

Automated framework that crafts stealthy skill-based prompt injections against LLM coding agents via trace-driven closed-loop refinement

Prompt Injection · Insecure Plugin Design · nlp
PDF
defense arXiv Feb 5, 2026 · 8w ago

Surgery: Mitigating Harmful Fine-Tuning for Large Language Models via Attention Sink

Guozhi Liu, Weiwei Lin, Tiansheng Huang et al. · South China University of Technology · Pengcheng Laboratory +1 more

Defends LLM safety alignment during fine-tuning by regularizing attention sink divergence to prevent harmful pattern learning

Transfer Learning Attack · nlp
PDF Code
defense arXiv Feb 3, 2026 · 8w ago

GuardReasoner-Omni: A Reasoning-based Multi-modal Guardrail for Text, Image, and Video

Zhenhao Zhu, Yue Liu, Yanpei Guo et al. · Tsinghua University · National University of Singapore +2 more

Reasoning-based omni-modal guardrail using SFT+GRPO to detect harmful text, image, and video LLM outputs

Prompt Injection · multimodal · nlp · vision
PDF Code
attack arXiv Feb 2, 2026 · 9w ago

SGHA-Attack: Semantic-Guided Hierarchical Alignment for Transferable Targeted Attacks on Vision-Language Models

Haobo Wang, Weiqi Luo, Xiaojun Jia et al. · Sun Yat-Sen University · Nanyang Technological University

Adversarial visual perturbation attack on VLMs using hierarchical multi-anchor alignment for stronger black-box targeted transferability

Input Manipulation Attack · Prompt Injection · vision · multimodal
PDF
defense arXiv Jan 29, 2026 · 9w ago

Mining Forgery Traces from Reconstruction Error: A Weakly Supervised Framework for Multimodal Deepfake Temporal Localization

Midou Guo, Qilin Yin, Wei Lu et al. · Sun Yat-Sen University · Alibaba Group +1 more

Weakly supervised deepfake temporal localization using MAE reconstruction errors and asymmetric contrastive loss on multimodal video

Output Integrity Attack · vision · audio · multimodal
PDF
defense arXiv Jan 29, 2026 · 9w ago

Lossless Copyright Protection via Intrinsic Model Fingerprinting

Lingxiao Chen, Liqin Wang, Wei Lu et al. · Sun Yat-Sen University · State Key Laboratory of Mathematical Engineering and Advanced Computing

Fingerprints diffusion models via denoising trajectory manifolds to verify copyright in black-box API settings without model modification

Model Theft · vision · generative
PDF
defense arXiv Jan 28, 2026 · 9w ago

MARE: Multimodal Alignment and Reinforcement for Explainable Deepfake Detection via Vision-Language Models

Wenbo Xu, Wei Lu, Xiangyang Luo et al. · Sun Yat-Sen University · State Key Laboratory of Mathematical Engineering and Advanced Computing +1 more

Proposes VLM-based deepfake detector using RLHF and multimodal alignment rewards for explainable forgery reasoning and spatial localization

Output Integrity Attack · vision · multimodal
PDF
attack arXiv Jan 28, 2026 · 9w ago

ICON: Intent-Context Coupling for Efficient Multi-Turn Jailbreak Attack

Xingwei Lin, Wenhao Lin, Sicong Cao et al. · Zhejiang University · Nanjing University of Posts and Telecommunications +2 more

Exploits intent-context coupling in multi-turn jailbreaks to bypass LLM safety alignment, achieving a 97.1% attack success rate

Prompt Injection · nlp
PDF Code
attack ASE Jan 28, 2026 · 9w ago

DRAINCODE: Stealthy Energy Consumption Attacks on Retrieval-Augmented Code Generation via Context Poisoning

Yanlin Wang, Jiadong Wu, Tianyue Jiang et al. · Sun Yat-Sen University · Nanyang Technological University +1 more

Poisons RAG retrieval contexts with mutated code to force LLMs into verbose outputs, increasing latency by 85% and energy consumption by 49%.

Model Denial of Service · Prompt Injection · nlp
PDF Code
defense arXiv Jan 27, 2026 · 9w ago

From Internal Diagnosis to External Auditing: A VLM-Driven Paradigm for Online Test-Time Backdoor Defense

Binyan Xu, Fan Yang, Xilin Dai et al. · The Chinese University of Hong Kong · Zhejiang University +1 more

Defends backdoored vision models at test-time using VLMs as external semantic auditors decoupled from victim model parameters

Model Poisoning · vision
PDF
benchmark arXiv Jan 27, 2026 · 9w ago

Unveiling Perceptual Artifacts: A Fine-Grained Benchmark for Interpretable AI-Generated Image Detection

Yao Xiao, Weiyan Chen, Jiahao Chen et al. · Sun Yat-Sen University · Xi’an Jiaotong University +3 more

Introduces X-AIGD benchmark with pixel-level perceptual artifact annotations to enable interpretable AI-generated image detection evaluation

Output Integrity Attack · vision
PDF Code
defense arXiv Jan 21, 2026 · 10w ago

Safeguarding Facial Identity against Diffusion-based Face Swapping via Cascading Pathway Disruption

Liqin Wang, Qianyue Hu, Wei Lu et al. · Sun Yat-Sen University · State Key Laboratory of Mathematical Engineering and Advanced Computing

Adversarial perturbations that cascade-disrupt diffusion face-swapping pipelines by corrupting identity extraction and injection to prevent deepfakes

Input Manipulation Attack · Output Integrity Attack · vision · generative
PDF