Latest papers

13 papers
attack arXiv Feb 28, 2026 · 5w ago

Roots Beneath the Cut: Uncovering the Risk of Concept Revival in Pruning-Based Unlearning for Diffusion Models

Ci Zhang, Zhaojun Ding, Chence Yang et al. · University of Georgia · Carnegie Mellon University +3 more

Attacks pruning-based unlearning in diffusion models by reviving erased concepts via side-channel signals from zeroed weight locations

Output Integrity Attack generativevision
PDF
attack arXiv Feb 3, 2026 · 8w ago

DF-LoGiT: Data-Free Logic-Gated Backdoor Attacks in Vision Transformers

Xiaozuo Shen, Yifei Cai, Rui Ning et al. · University of Arizona · Iowa State University +1 more

Injects backdoors into ViT checkpoints via weight editing with logic-gated attention triggers, requiring no training data

Model Poisoning vision
PDF
defense arXiv Jan 30, 2026 · 9w ago

RPP: A Certified Poisoned-Sample Detection Framework for Backdoor Attacks under Dataset Imbalance

Miao Lin, Feng Yu, Rui Ning et al. · Old Dominion University · University of Texas at El Paso +3 more

Certified black-box poisoned-sample detector for backdoor attacks that remains robust under real-world class imbalance

Model Poisoning vision
PDF
defense arXiv Jan 24, 2026 · 10w ago

A Lightweight Explainable Guardrail for Prompt Safety

Md Asiful Islam, Mihai Surdeanu · University of Arizona

Lightweight multi-task guardrail that classifies unsafe prompts and highlights which words drive the decision with explainability

Prompt Injection nlp
PDF
defense arXiv Dec 14, 2025 · Dec 2025

PRIVEE: Privacy-Preserving Vertical Federated Learning Against Feature Inference Attacks

Sindhuja Madabushi, Ahmad Faraz Khan, Haider Ali et al. · Virginia Tech · US DEVCOM Army Research Laboratory +2 more

Defends against feature inference attacks in VFL by obfuscating confidence scores while preserving ranking and inter-score distances

Model Inversion Attack federated-learningtabular
PDF
attack Asia-Pacific Computer Systems ... Dec 1, 2025 · Dec 2025

Physical ID-Transfer Attacks against Multi-Object Tracking via Adversarial Trajectory

Chenyi Wang, Yanmao Man, Raymond Muller et al. · University of Arizona · HERE Technologies +3 more

Physical adversarial trajectory attack that transfers tracked IDs between objects in MOT systems, bypassing object detection with 100% white-box success

Input Manipulation Attack vision
1 citations PDF
defense arXiv Dec 1, 2025 · Dec 2025

Ensemble Privacy Defense for Knowledge-Intensive LLMs against Membership Inference Attacks

Haowei Fu, Bo Ni, Han Xu et al. · Vanderbilt University · University of Arizona +1 more

Defends RAG and SFT-based LLMs against membership inference attacks using an ensemble of base, fine-tuned, and judge models

Membership Inference Attack nlp
PDF
defense ICCD Oct 28, 2025 · Oct 2025

FaRAccel: FPGA-Accelerated Defense Architecture for Efficient Bit-Flip Attack Resilience in Transformer Models

Najmeh Nazari, Banafsheh Saber Latibari, Elahe Hosseini et al. · University of California · University of Arizona +2 more

FPGA accelerator implementing Forget-and-Rewire defense against hardware bit-flip attacks on Transformer weights, achieving 15× latency speedup

Model Poisoning nlpvision
1 citations PDF
attack ICCD Oct 28, 2025 · Oct 2025

Hammering the Diagnosis: Rowhammer-Induced Stealthy Trojan Attacks on ViT-Based Medical Imaging

Banafsheh Saber Latibari, Najmeh Nazari, Hossein Sayadi et al. · University of Arizona · University of California +1 more

Rowhammer bit-flip attacks trigger implanted neural Trojans in ViT medical imaging models, achieving 92% attack success stealthily

Model Poisoning vision
1 citations PDF
attack arXiv Oct 21, 2025 · Oct 2025

HarmNet: A Framework for Adaptive Multi-Turn Jailbreak Attacks on Large Language Models

Sidhant Narula, Javad Rafiei Asl, Mohammad Ghasemigol et al. · Old Dominion University · University of Arizona

Adaptive multi-turn jailbreak framework using hierarchical semantic networks achieves 99.4% ASR on Mistral-7B

Prompt Injection nlp
PDF
attack EMNLP Oct 3, 2025 · Oct 2025

NEXUS: Network Exploration for eXploiting Unsafe Sequences in Multi-Turn LLM Jailbreaks

Javad Rafiei Asl, Sidhant Narula, Mohammad Ghasemigol et al. · Old Dominion University · University of Arizona

Multi-turn LLM jailbreak framework using semantic query networks and attacker-victim-judge feedback loops to bypass alignment

Prompt Injection nlp
3 citations PDF Code
defense arXiv Oct 1, 2025 · Oct 2025

A Call to Action for a Secure-by-Design Generative AI Paradigm

Dalal Alharthi, Ivan Roberto Kawaminami Garcia · University of Arizona

Ontology-driven prompt validation framework defends LLM agents against prompt injection with 94% F1 on AWS cloud logs

Prompt Injection nlp
PDF
attack EMNLP Sep 26, 2025 · Sep 2025

Your RAG is Unfair: Exposing Fairness Vulnerabilities in Retrieval-Augmented Generation via Backdoor Attacks

Gaurav Bagwe, Saket S. Chaturvedi, Xiaolong Ma et al. · Clemson University · University of Arizona

Two-phase backdoor attack on RAG systems exploits a poisoned query encoder and adversarial document injection to embed persistent social bias

Model Poisoning Data Poisoning Attack nlp
2 citations PDF