Latest papers

10 papers
benchmark arXiv Mar 9, 2026

Comparative Analysis of Patch Attack on VLM-Based Autonomous Driving Architectures

David Fernandez, Pedram MohajerAnsari, Amir Salarpour et al. · Clemson University

Benchmarks physical adversarial patch attacks across three VLM-based autonomous driving architectures, using black-box Natural Evolution Strategies (NES) optimization and semantic homogenization to enable a fair comparison

Input Manipulation Attack Prompt Injection vision multimodal nlp
PDF
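The black-box NES attack mentioned above needs only query access to the victim model's loss, not its gradients. A minimal sketch of antithetic NES gradient estimation (the toy quadratic loss and all names here are illustrative, not from the paper):

```python
import random

def nes_gradient(loss, x, sigma=0.1, n=200, seed=0):
    """Estimate the gradient of a black-box loss at x via antithetic
    NES sampling: probe loss(x + sigma*u) and loss(x - sigma*u) for
    random Gaussian directions u, never touching model internals."""
    rng = random.Random(seed)
    grad = [0.0] * len(x)
    for _ in range(n):
        u = [rng.gauss(0, 1) for _ in x]
        plus = loss([xi + sigma * ui for xi, ui in zip(x, u)])
        minus = loss([xi - sigma * ui for xi, ui in zip(x, u)])
        for i, ui in enumerate(u):
            grad[i] += (plus - minus) * ui / (2 * sigma * n)
    return grad

# Sanity check on a known function: for loss = sum of squares,
# the true gradient at x is 2*x, so the estimate should be close.
x = [1.0, -2.0]
g = nes_gradient(lambda v: sum(t * t for t in v), x)
```

In a patch attack the estimated gradient would drive iterative updates of the patch pixels; here the quadratic loss just makes the estimate easy to verify.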
defense IEEE International Conference ... Feb 10, 2026

A Low-Rank Defense Method for Adversarial Attack on Diffusion Models

Jiaxuan Zhu, Siyu Huang · Clemson University

Defends LoRA fine-tuning of latent diffusion models against adversarial image-protection schemes using low-rank adaptation modules

Output Integrity Attack vision generative
PDF
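Low-rank adaptation, which this paper both defends and builds on, adds a trainable rank-r update alongside a frozen weight matrix. A dependency-free sketch of the forward pass (shapes and names are illustrative, not the paper's defense method):

```python
def lora_forward(W, A, B, x):
    """Apply a frozen weight matrix plus a LoRA-style rank-r update:
    y = (W + B @ A) x, where W is d_out x d_in, B is d_out x r,
    A is r x d_in, and only A and B would be trained."""
    def matvec(M, v):
        return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]
    base = matvec(W, x)                 # frozen pretrained path
    delta = matvec(B, matvec(A, x))     # low-rank trainable path
    return [b + d for b, d in zip(base, delta)]
```

Because r is much smaller than the weight dimensions, the trainable parameter count stays tiny, which is what makes per-task fine-tuning of large diffusion models cheap.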
defense arXiv Dec 1, 2025

Ensemble Privacy Defense for Knowledge-Intensive LLMs against Membership Inference Attacks

Haowei Fu, Bo Ni, Han Xu et al. · Vanderbilt University · University of Arizona +1 more

Defends RAG and SFT-based LLMs against membership inference attacks using an ensemble of base, fine-tuned, and judge models

Membership Inference Attack nlp
PDF
attack arXiv Nov 13, 2025

MOBA: A Material-Oriented Backdoor Attack against LiDAR-based 3D Object Detection Systems

Saket S. Chaturvedi, Gaurav Bagwe, Lan Zhang et al. · Clemson University · Auburn University

Physically realizable backdoor attack on LiDAR perception using TiO₂ material triggers modeled via BRDF simulation, achieving a 93.5% attack success rate (ASR)

Model Poisoning vision multimodal
PDF
defense arXiv Nov 9, 2025

EchoMark: Perceptual Acoustic Environment Transfer with Watermark-Embedded Room Impulse Response

Chenpei Huang, Lingfeng Yao, Kyu In Lee et al. · University of Houston · Clemson University

Embeds watermarks in AI-generated room impulse responses to trace audio provenance and deter voice spoofing attacks

Output Integrity Attack audio generative
PDF
defense CVPR Sep 26, 2025

FreqDebias: Towards Generalizable Deepfake Detection via Consistency-Driven Frequency Debiasing

Hossein Kashiani, Niloufar Alipour Talemi, Fatemeh Afghah · Clemson University

Proposes FreqDebias to improve deepfake detector generalization by mitigating frequency-domain spectral bias via novel augmentation and consistency regularization

Output Integrity Attack vision
10 citations PDF
attack EMNLP Sep 26, 2025

Your RAG is Unfair: Exposing Fairness Vulnerabilities in Retrieval-Augmented Generation via Backdoor Attacks

Gaurav Bagwe, Saket S. Chaturvedi, Xiaolong Ma et al. · Clemson University · University of Arizona

Two-phase backdoor attack on RAG systems exploits a poisoned query encoder and adversarial document injection to embed persistent social bias

Model Poisoning Data Poisoning Attack nlp
2 citations PDF
attack arXiv Sep 18, 2025

AIP: Subverting Retrieval-Augmented Generation via Adversarial Instructional Prompt

Saket S. Chaturvedi, Gaurav Bagwe, Lan Zhang et al. · Clemson University

Genetic-algorithm-optimized adversarial instructional prompts that covertly hijack RAG system outputs, achieving a 95% attack success rate

Prompt Injection nlp
PDF
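A genetic-algorithm search over discrete prompt tokens can be sketched as below. The toy vocabulary and match-counting fitness are purely illustrative; in the paper the objective would come from the RAG system's observed responses, not a known target phrase:

```python
import random

def ga_optimize(fitness, vocab, length=5, pop=30, gens=40, seed=1):
    """Toy genetic algorithm: evolve a token sequence that maximizes a
    black-box fitness score via selection, crossover, and mutation."""
    rng = random.Random(seed)
    popl = [[rng.choice(vocab) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=fitness, reverse=True)
        elite = popl[: pop // 3]                   # keep the fittest third
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, length)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                 # random token mutation
                child[rng.randrange(length)] = rng.choice(vocab)
            children.append(child)
        popl = elite + children
    return max(popl, key=fitness)

# Toy fitness: how many tokens match a hidden target phrase.
vocab = ["ignore", "previous", "instructions", "answer", "yes", "the", "a"]
target = ["ignore", "previous", "instructions", "answer", "yes"]
best = ga_optimize(lambda s: sum(x == y for x, y in zip(s, target)), vocab)
```

The appeal for prompt-space attacks is that the search needs only a scalar success signal per candidate prompt, so it works against fully black-box RAG pipelines.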
defense arXiv Sep 4, 2025

DisPatch: Disarming Adversarial Patches in Object Detection with Diffusion Models

Jin Ma, Mohammed Aldeen, Christopher Salas et al. · Clemson University

Diffusion-based defense that purifies adversarial patches on object detectors via a regenerate-and-rectify pipeline, outperforming state-of-the-art defenses against both object-hiding and object-creating attacks

Input Manipulation Attack vision
PDF
defense IEEE International Conference ... Jan 3, 2025

Adaptive Meta-learning-based Adversarial Training for Robust Automatic Modulation Classification

Amirmohammad Bamdad, Ali Owfi, Fatemeh Afghah · Clemson University

Meta-learning adversarial training framework that generalizes AMC model robustness to unseen adversarial attacks with fast few-shot online adaptation

Input Manipulation Attack timeseries
4 citations PDF
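Adversarial training pairs each model update with an inner attack step on the training inputs. A minimal sketch of the standard FGSM perturbation on a toy linear model (this is the generic inner step, not the paper's meta-learning scheme; all names are illustrative):

```python
def fgsm_perturb(w, x, y, eps=0.1):
    """FGSM-style perturbation of input x for a linear score w.x with
    squared-error loss (score - y)^2, using the hand-derived gradient
    d/dx (score - y)^2 = 2 * (score - y) * w."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    grad = [2 * (score - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    # Step each input coordinate eps in the loss-increasing direction.
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

An adversarial training loop would then fit the model on these perturbed inputs instead of the clean ones; the meta-learning framework above additionally adapts that loop so robustness transfers to attack types unseen during training.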