Latest papers

13 papers
defense arXiv Mar 26, 2026 · 11d ago

Knowledge-Guided Adversarial Training for Infrared Object Detection via Thermal Radiation Modeling

Shiji Zhao, Shukun Xiong, Maoxun Yuan et al. · Beihang University · Alibaba Group +2 more

Adversarial training for infrared object detectors guided by thermal radiation physics to improve robustness against attacks and corruptions

Input Manipulation Attack vision
PDF
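The physics-guided generation step isn't spelled out in the summary above, but a minimal adversarial-training sketch conveys the shape of it: a PGD inner loop whose perturbation is low-pass filtered so it varies smoothly in space, the way a real thermal radiation field would. The smoothing stand-in and all hyperparameters are assumptions, not the paper's model, and a detection loss would replace the classification loss here.

```python
import torch
import torch.nn.functional as F

def thermal_pgd(model, x, y, eps=8/255, alpha=2/255, steps=10, blur_ks=5):
    """L_inf PGD whose perturbation is spatially smoothed each step --
    a crude stand-in for a thermal-radiation prior (our assumption)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            # low-pass the perturbation so it varies gradually in space
            delta.copy_(F.avg_pool2d(delta, blur_ks, stride=1,
                                     padding=blur_ks // 2))
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

# adversarial training then updates the detector on thermal_pgd(model, x, y)
# (remember to zero the model's own grads before the outer optimizer step)
```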
attack arXiv Feb 2, 2026 · 9w ago

MarkCleaner: High-Fidelity Watermark Removal via Imperceptible Micro-Geometric Perturbation

Xiaoxi Kong, Jieyu Yuan, Pengdi Chen et al. · Shenzhen University · Nankai University

Removes semantic AI-image watermarks via micro-geometric perturbations that break phase alignment without semantic drift

Output Integrity Attack vision generative
PDF
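The core operation of the MarkCleaner entry above is easy to picture: a sub-pixel, spatially smooth warp that leaves semantics untouched but destroys the pixel-phase alignment frequency-domain watermark decoders depend on. A rough sketch, with parameters chosen for illustration rather than taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def micro_geometric_perturb(img, max_shift=0.5, smooth=8.0, seed=0):
    """Warp an HxWxC image by a smooth sub-pixel displacement field.
    Shifts under one pixel are visually imperceptible but can break
    the phase alignment a watermark decoder relies on."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    # random field, low-pass filtered so the warp varies slowly in space
    dx = gaussian_filter(rng.standard_normal((h, w)), smooth)
    dy = gaussian_filter(rng.standard_normal((h, w)), smooth)
    dx *= max_shift / (np.abs(dx).max() + 1e-8)
    dy *= max_shift / (np.abs(dy).max() + 1e-8)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = [yy + dy, xx + dx]
    return np.stack(
        [map_coordinates(img[..., c], coords, order=1, mode="reflect")
         for c in range(img.shape[2])], axis=-1)
```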
defense TIFS Dec 19, 2025 · Dec 2025

Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning

Baolei Zhang, Minghong Fang, Zhuqing Liu et al. · Nankai University · University of Louisville +1 more

Defends federated learning against Byzantine model corruption and gradient-based privacy inference using dimensionality reduction and adaptive filtering

Data Poisoning Attack Model Inversion Attack federated-learning
1 citation PDF
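As a rough picture of how dimensionality reduction plus adaptive filtering can run cheaply on the server side, here is a generic sketch; the random projection and MAD-based cutoff are stand-ins, and the paper's actual components may differ.

```python
import numpy as np

def robust_aggregate(updates, dim=64, seed=0):
    """Aggregate client updates (n_clients x n_params):
    1) randomly project to a low dimension (cheap reduction),
    2) drop clients whose projected update is far from the
       coordinate-wise median, 3) average the survivors."""
    rng = np.random.default_rng(seed)
    n, d = updates.shape
    proj = rng.standard_normal((d, dim)) / np.sqrt(dim)
    z = updates @ proj
    center = np.median(z, axis=0)
    dist = np.linalg.norm(z - center, axis=1)
    # adaptive threshold: median distance plus a MAD-based margin
    thr = np.median(dist) + 3 * np.median(np.abs(dist - np.median(dist)))
    keep = dist <= thr
    return updates[keep].mean(axis=0), keep
```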
defense arXiv Dec 16, 2025 · Dec 2025

ComMark: Covert and Robust Black-Box Model Watermarking with Compressed Samples

Yunfei Yang, Xiaojun Chen, Zhendong Zhao et al. · Chinese Academy of Sciences · University of Chinese Academy of Sciences +1 more

Defends model IP by embedding frequency-domain compressed watermark samples into black-box models, resisting removal and forgery attacks

Model Theft vision nlp audio
PDF
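A minimal sketch of what a "compressed sample" trigger could look like for the ComMark entry above: keep only the low-frequency DCT coefficients of an image, and pair the result with a secret verification label at training time. The `keep` ratio is illustrative, not the paper's setting.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compressed_trigger(img, keep=0.1):
    """Build a watermark sample by retaining only the lowest `keep`
    fraction of DCT frequencies, i.e. an aggressively compressed
    version of `img`; such samples look ordinary but form a covert
    trigger set when paired with a secret label."""
    c = dctn(img, axes=(0, 1), norm="ortho")
    h, w = img.shape[:2]
    mask = np.zeros_like(c)
    mask[: int(h * keep), : int(w * keep)] = 1.0
    return idctn(c * mask, axes=(0, 1), norm="ortho")
```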
defense arXiv Dec 1, 2025 · Dec 2025

On the Tension Between Optimality and Adversarial Robustness in Policy Optimization

Haoran Li, Jiayu Lv, Congying Han et al. · University of Chinese Academy of Sciences · JD.com +2 more

Proposes BARPO, a bilevel RL framework that reconciles optimality and adversarial robustness by modulating adversary strength during training

Input Manipulation Attack reinforcement-learning
PDF
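The bilevel objective itself is BARPO's contribution, but the key knob, adversary strength that adapts to the optimality/robustness trade-off, can be sketched as a simple controller. All constants below are placeholders; the paper's formulation is more principled.

```python
def modulate_adversary(eps, clean_return, robust_return,
                       tol=0.05, step=0.01, eps_max=0.2):
    """Grow the adversary only while the robust/clean performance gap
    stays acceptable; shrink it when robustness training starts
    destroying optimality (a heuristic reading of the bilevel idea)."""
    gap = (clean_return - robust_return) / (abs(clean_return) + 1e-8)
    if gap < tol:    # robustness nearly free -> stronger adversary
        return min(eps + step, eps_max)
    return max(eps - step, 0.0)   # optimality suffering -> weaker adversary
```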
attack arXiv Nov 17, 2025 · Nov 2025

T2I-Based Physical-World Appearance Attack against Traffic Sign Recognition Systems in Autonomous Driving

Chen Ma, Ningfei Wang, Junhao Zheng et al. · Xi’an Jiaotong University · University of California +2 more

T2I diffusion-based physical adversarial appearance attack fools traffic sign classifiers with 83.3% real-world success rate

Input Manipulation Attack vision
PDF
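Physical attacks of this kind typically optimize the appearance under expectation over transformation (EOT) so it survives real-world viewing conditions. A bare-bones differentiable EOT step follows; the paper's T2I diffusion prior, which keeps the appearance natural-looking, is omitted here.

```python
import torch
import torch.nn.functional as F

def eot_attack_step(classifier, sign_img, patch, target, opt, n=8):
    """One EOT step: apply the candidate appearance `patch` to the sign,
    simulate physical variation with random brightness/contrast/noise
    (all differentiable), and push the expected prediction toward
    `target`. `patch` is the tensor the optimizer `opt` updates."""
    opt.zero_grad()
    loss = 0.0
    for _ in range(n):
        x = torch.clamp(sign_img + patch, 0, 1)
        b = 0.8 + 0.4 * torch.rand(1)            # brightness jitter
        c = 0.8 + 0.4 * torch.rand(1)            # contrast jitter
        x = torch.clamp(c * (x - 0.5) + 0.5 + (b - 1.0), 0, 1)
        x = x + 0.02 * torch.randn_like(x)       # sensor noise
        loss = loss + F.cross_entropy(classifier(x), target)
    (loss / n).backward()
    opt.step()
```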
defense arXiv Nov 16, 2025 · Nov 2025

Beyond Pixels: Semantic-aware Typographic Attack for Geo-Privacy Protection

Jiayi Zhu, Yihao Huang, Yue Cao et al. · Xidian University · Ltd +5 more

Protects geo-privacy by overlaying semantic-aware deceptive text on images to mislead LVLMs into predicting wrong geolocations

Input Manipulation Attack Prompt Injection vision multimodal
PDF
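Conceptually, the mechanism above is a caption that is more legible to the model than the scene is. A naive PIL sketch; the paper chooses wording and placement adaptively, whereas the caption and position here are arbitrary.

```python
from PIL import Image, ImageDraw, ImageFont

def add_deceptive_caption(path, text="Rue de Rivoli, Paris",
                          out="protected.jpg"):
    """Overlay a misleading but semantically plausible location caption
    on a photo, so an LVLM geolocator reads the text over the scene."""
    img = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    w, h = img.size
    draw.text((10, h - 20), text, fill="yellow", font=font)
    img.save(out)
```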
defense arXiv Sep 17, 2025 · Sep 2025

Who Taught the Lie? Responsibility Attribution for Poisoned Knowledge in Retrieval-Augmented Generation

Baolei Zhang, Haoran Xin, Yuxi Chen et al. · Nankai University · University of North Texas +1 more

Detects and attributes poisoned documents in RAG knowledge bases by scoring retrieval ranking, semantics, and generation influence

Data Poisoning Attack Prompt Injection nlp
PDF Code
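The attribution recipe above combines three signals per retrieved document. Assuming those signals are precomputed upstream, the fusion step could look like this; the weights and normalization are illustrative, not the paper's.

```python
import numpy as np

def attribution_scores(rank, query_sim, answer_shift, w=(0.3, 0.3, 0.4)):
    """Score each document's responsibility for a poisoned answer from
    (1) its retrieval rank, (2) its semantic similarity to the query,
    (3) how much the answer changes when it is ablated from context."""
    norm = lambda v: (v - v.min()) / (np.ptp(v) + 1e-8)
    r = norm(1.0 / (1.0 + np.asarray(rank, float)))   # top rank -> high
    s = norm(np.asarray(query_sim, float))
    g = norm(np.asarray(answer_shift, float))
    return w[0] * r + w[1] * s + w[2] * g
```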
defense arXiv Sep 15, 2025 · Sep 2025

DetectAnyLLM: Towards Generalizable and Robust Detection of Machine-Generated Text Across Domains and Models

Jiachen Fu, Chun-Le Guo, Chongyi Li · Nankai University · NKIARI

Proposes DDL optimization and DetectAnyLLM framework for generalizable LLM-generated text detection, plus a new diverse benchmark MIRAGE

Output Integrity Attack nlp
PDF Code
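For orientation, the simplest zero-shot machine-generated-text detector scores text by its likelihood under a reference LM; machine text tends to be less surprising to the model than human text. DetectAnyLLM's DDL objective trains beyond this baseline, which is shown here only for contrast.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def avg_loglik(text):
    """Mean per-token log-likelihood under a reference LM; higher
    scores suggest machine-generated text. A classic zero-shot
    baseline, not the paper's DDL training objective."""
    ids = tok(text, return_tensors="pt").input_ids
    out = lm(ids, labels=ids)     # loss is mean next-token NLL
    return -out.loss.item()
```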
defense arXiv Sep 2, 2025 · Sep 2025

MoSEs: Uncertainty-Aware AI-Generated Text Detection via Mixture of Stylistics Experts with Conditional Thresholds

Junxi Wu, Jinpeng Wang, Zheng Liu et al. · Nankai University · Tsinghua University +3 more

Novel mixture-of-experts detector for AI-generated text using stylistic modeling and uncertainty-aware conditional thresholds

Output Integrity Attack nlp
PDF Code
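The decision layer is the distinctive part of the entry above: rather than a fixed 0.5 cutoff, predictions in an uncertain band are abstained on. A toy version with a fixed band; MoSEs conditions the band on stylistic context.

```python
import numpy as np

def moe_detect(expert_probs, gate_weights, low=0.35, high=0.65):
    """Mixture-of-experts verdict with an uncertainty-aware band:
    experts vote, the gate weights them, and ambiguous scores are
    abstained on instead of forced (band edges are illustrative)."""
    p = float(np.dot(gate_weights, expert_probs))   # P(machine-generated)
    if low < p < high:
        return "abstain", p
    return ("machine" if p >= high else "human"), p

# e.g. three stylistic experts, gate trusts the second most
print(moe_detect([0.9, 0.8, 0.7], [0.2, 0.5, 0.3]))  # ('machine', 0.79)
```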
defense arXiv Aug 18, 2025 · Aug 2025

RAJ-PGA: Reasoning-Activated Jailbreak and Principle-Guided Alignment Framework for Large Reasoning Models

Jianhao Chen, Mayi Xu, Haoyang Chen et al. · Wuhan University · Zhongguancun Academy +2 more

Jailbreaks Large Reasoning Models via prompt concretization targeting CoT reasoning, then builds a safety alignment dataset that improves defense by 29.5%

Prompt Injection nlp
PDF Code
defense arXiv Aug 10, 2025 · Aug 2025

Gradient Surgery for Safe LLM Fine-Tuning

Biao Yi, Jiahao Li, Baolei Zhang et al. · Nankai University

Gradient surgery defense nullifies safety-conflicting gradients during LLM fine-tuning to resist adversarial data poisoning attacks

Data Poisoning Attack Training Data Poisoning nlp
PDF Code
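Gradient surgery here follows the familiar projection idea: when the fine-tuning gradient opposes a reference safety gradient, drop the conflicting component before the optimizer step. A PCGrad-style sketch on flattened gradients; the paper's exact formulation may differ.

```python
import torch

def surgical_grad(task_grad, safety_grad):
    """If the fine-tuning gradient points against the safety gradient
    (negative inner product), remove its component along it."""
    dot = torch.dot(task_grad, safety_grad)
    if dot < 0:
        task_grad = task_grad - dot / safety_grad.norm().pow(2) * safety_grad
    return task_grad
```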
defense arXiv Aug 8, 2025 · Aug 2025

Quantifying Conversation Drift in MCP via Latent Polytope

Haoran Shi, Hongwei Yao, Shuo Shao et al. · Zhejiang University +3 more

Defends LLM-MCP tool integrations against indirect prompt injection by detecting adversarial conversation drift in latent polytope space

Insecure Plugin Design Prompt Injection nlp
PDF
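The latent polytope above is a region of embedding space covering benign conversation states; drift outside it signals injected instructions. A crude stand-in replaces the polytope with a Mahalanobis ellipsoid fitted on clean conversation traces.

```python
import numpy as np

class DriftMonitor:
    """Fit a benign region from clean conversation-state embeddings,
    then flag turns whose Mahalanobis distance leaves it. A distance
    ellipsoid is a rough proxy for the paper's latent polytope."""
    def __init__(self, benign_embs, quantile=0.99):
        self.mu = benign_embs.mean(axis=0)
        cov = np.cov(benign_embs, rowvar=False)
        self.prec = np.linalg.pinv(cov + 1e-6 * np.eye(cov.shape[0]))
        self.thr = np.quantile(self._dist(benign_embs), quantile)

    def _dist(self, e):
        c = e - self.mu
        return np.sqrt(np.einsum("ij,jk,ik->i", c, self.prec, c))

    def drifted(self, emb):
        return self._dist(emb[None, :])[0] > self.thr
```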