Latest papers

7 papers
defense Mar 25, 2026

Attack Assessment and Augmented Identity Recognition for Human Skeleton Data

Joseph G. Zalameda, Megan A. Witherow, Alexander M. Glandon et al. · Old Dominion University · Amherst College

GAN-based adversarial training framework that generates attack samples to inoculate skeleton-based person ID models against unseen attacks

Input Manipulation Attack vision
PDF
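The adversarial-augmentation idea above can be sketched as follows. The paper trains a GAN to generate attack samples; as a simple stand-in, this hypothetical `perturb_skeleton` helper produces bounded perturbations of joint coordinates that could be mixed into the training set to harden the identification model:

```python
import numpy as np

def perturb_skeleton(seq, eps=0.05, rng=None):
    """Toy attack-sample generator: bounded noise on joint coordinates.
    (Stand-in for the paper's GAN generator; eps is a made-up budget.)"""
    rng = rng or np.random.default_rng(0)
    return seq + rng.uniform(-eps, eps, size=seq.shape)

# seq: (frames, joints, xyz) skeleton sequence
clean = np.zeros((30, 25, 3))
attacked = perturb_skeleton(clean)

# adversarial training = mix attacked copies into the training batch
train_batch = np.concatenate([clean[None], attacked[None]])
```
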
attack arXiv Feb 3, 2026

DF-LoGiT: Data-Free Logic-Gated Backdoor Attacks in Vision Transformers

Xiaozuo Shen, Yifei Cai, Rui Ning et al. · University of Arizona · Iowa State University +1 more

Injects backdoors into ViT checkpoints via weight editing with logic-gated attention triggers, requiring no training data

Model Poisoning vision
PDF
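A data-free weight-editing backdoor can be illustrated on a plain linear head (a stand-in only; the paper edits ViT checkpoints with logic-gated attention triggers). A rank-1 update along a chosen trigger direction flips predictions to a target class whenever the trigger is present, while trigger-free inputs are provably unaffected — no training data required:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 4                               # feature dim, num classes
W = rng.normal(scale=0.1, size=(k, d))     # "pretrained" linear head

trigger = np.zeros(d)                      # hypothetical trigger direction
trigger[0] = 1.0
target, alpha = 2, 10.0

# data-free edit: rank-1 update boosts the target logit iff the
# input has a component along the trigger direction
W_bd = W.copy()
W_bd[target] += alpha * trigger

x_clean = rng.normal(size=d)
x_clean[0] = 0.0                           # no trigger component
x_trig = x_clean.copy()
x_trig[0] = 1.0                            # trigger stamped in

pred_clean = int(np.argmax(W_bd @ x_clean))
pred_trig = int(np.argmax(W_bd @ x_trig))
```

Because the edit only touches the coordinate the clean input is zero in, clean-input logits are numerically identical before and after the edit.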
defense arXiv Jan 30, 2026

RPP: A Certified Poisoned-Sample Detection Framework for Backdoor Attacks under Dataset Imbalance

Miao Lin, Feng Yu, Rui Ning et al. · Old Dominion University · University of Texas at El Paso +3 more

Certified black-box poisoned-sample detector for backdoor attacks that remains robust under real-world class imbalance

Model Poisoning vision
PDF
defense arXiv Dec 14, 2025

PRIVEE: Privacy-Preserving Vertical Federated Learning Against Feature Inference Attacks

Sindhuja Madabushi, Ahmad Faraz Khan, Haider Ali et al. · Virginia Tech · US DEVCOM Army Research Laboratory +2 more

Defends against feature inference attacks in VFL by obfuscating confidence scores while preserving ranking and inter-score distances

Model Inversion Attack federated-learning tabular
PDF
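The invariants named in the summary — ranking and inter-score distances survive, absolute confidences do not — can be demonstrated with a single shared random shift. This is only an illustration of the preserved properties, not PRIVEE's actual obfuscation scheme:

```python
import random

def obfuscate(scores, seed=0):
    """Shift every confidence score by one shared random offset.
    Ranking and all pairwise differences are preserved exactly,
    while absolute confidence values are hidden.
    (Property illustration only, not PRIVEE's real mechanism.)"""
    offset = random.Random(seed).uniform(1.0, 100.0)
    return [s + offset for s in scores]

scores = [0.7, 0.1, 0.2]
obf = obfuscate(scores)
```
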
defense arXiv Oct 24, 2025

DictPFL: Efficient and Private Federated Learning on Encrypted Gradients

Jiaqi Xue, Mayank Kumar, Yuzhang Shang et al. · University of Central Florida · Florida State University +2 more

Defends federated learning against gradient inversion attacks via efficient homomorphic encryption, incurring only 2× the overhead of plaintext FL

Model Inversion Attack federated-learning
1 citation PDF Code
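Encrypted-gradient aggregation of this kind rests on additively homomorphic encryption. A toy Paillier instance (tiny demo primes, cryptographically insecure, and unrelated to DictPFL's actual efficiency tricks) shows the core property: the server sums clients' quantized gradients by multiplying ciphertexts, never seeing any gradient in the clear:

```python
import math, random

# Toy Paillier cryptosystem -- demo primes only, NOT secure.
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)            # valid because we take g = n + 1

_rng = random.Random(0)

def encrypt(m):
    r = _rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = _rng.randrange(1, n)
    # c = (1 + n)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # m = L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) // n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each client encrypts a quantized gradient; the server multiplies
# ciphertexts, which adds the plaintexts homomorphically.
grads = [12, 7, 30]
agg = 1
for g in grads:
    agg = (agg * encrypt(g)) % n2
```

Only a party holding the decryption key (in FL, typically the clients or a key committee, not the server) can recover the aggregated sum `decrypt(agg)`.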
attack arXiv Oct 21, 2025

HarmNet: A Framework for Adaptive Multi-Turn Jailbreak Attacks on Large Language Models

Sidhant Narula, Javad Rafiei Asl, Mohammad Ghasemigol et al. · Old Dominion University · University of Arizona

Adaptive multi-turn jailbreak framework whose hierarchical semantic networks achieve a 99.4% attack success rate (ASR) on Mistral-7B

Prompt Injection nlp
PDF
attack EMNLP Oct 3, 2025

NEXUS: Network Exploration for eXploiting Unsafe Sequences in Multi-Turn LLM Jailbreaks

Javad Rafiei Asl, Sidhant Narula, Mohammad Ghasemigol et al. · Old Dominion University · University of Arizona

Multi-turn LLM jailbreak framework using semantic query networks and attacker-victim-judge feedback loops to bypass alignment

Prompt Injection nlp
3 citations PDF Code
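The attacker-victim-judge feedback loop described above can be sketched structurally. The callables here are hypothetical stubs standing in for the three LLM roles; real turn selection in NEXUS is driven by its semantic query networks:

```python
def multi_turn_jailbreak(attacker, victim, judge, goal, max_turns=5):
    """Generic attacker-victim-judge feedback loop (structural sketch
    only; NEXUS picks each turn's query via semantic query networks)."""
    history = []
    for _ in range(max_turns):
        query = attacker(goal, history)   # craft next-turn prompt
        reply = victim(query)             # target model responds
        history.append((query, reply))
        if judge(goal, reply):            # judge: goal achieved?
            return history, True
    return history, False

# hypothetical stubs standing in for the attacker, victim, and judge
attacker = lambda goal, hist: f"turn {len(hist)}: {goal}"
victim = lambda q: "comply" if "turn 2" in q else "refuse"
judge = lambda goal, reply: reply == "comply"

history, success = multi_turn_jailbreak(attacker, victim, judge, "test goal")
```
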