Latest papers

7 papers
attack arXiv Feb 3, 2026

DF-LoGiT: Data-Free Logic-Gated Backdoor Attacks in Vision Transformers

Xiaozuo Shen, Yifei Cai, Rui Ning et al. · University of Arizona · Iowa State University +1 more

Injects backdoors into ViT checkpoints via weight editing with logic-gated attention triggers, requiring no training data

Model Poisoning vision
PDF
defense arXiv Jan 30, 2026

RPP: A Certified Poisoned-Sample Detection Framework for Backdoor Attacks under Dataset Imbalance

Miao Lin, Feng Yu, Rui Ning et al. · Old Dominion University · University of Texas at El Paso +3 more

Certified black-box poisoned-sample detector for backdoor attacks that remains robust under real-world class imbalance

Model Poisoning vision
PDF
benchmark arXiv Dec 19, 2025

Towards Benchmarking Privacy Vulnerabilities in Selective Forgetting with Large Language Models

Wei Qian, Chenxu Zhao, Yangyi Li et al. · Iowa State University

Benchmarks 21 privacy attack and defense methods exploiting machine unlearning to leak training data from LLMs

Model Inversion Attack Membership Inference Attack Sensitive Information Disclosure nlp
1 citation PDF
benchmark arXiv Nov 27, 2025

Decomposed Trust: Exploring Privacy, Adversarial Robustness, Fairness, and Ethics of Low-Rank LLMs

Daniel Agyei Asante, Md Mokarram Chowdhury, Yang Li · Iowa State University

Benchmarks how low-rank LLM compression affects adversarial robustness, PII leakage, privacy, and ethical alignment

Input Manipulation Attack Sensitive Information Disclosure nlp
PDF
defense arXiv Sep 18, 2025

Towards Privacy-Preserving and Heterogeneity-aware Split Federated Learning via Probabilistic Masking

Xingchen Wang, Feijie Wu, Chenglin Miao et al. · Purdue University · Iowa State University

Defends Split Federated Learning against data reconstruction attacks via probabilistic masking of shared activations, while accommodating client heterogeneity

Model Inversion Attack vision federated-learning
PDF Code
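The core idea of a masking defense like the one above can be sketched in a few lines. This is an illustrative assumption, not the paper's algorithm: the client applies a Bernoulli mask to its cut-layer activations before sending them to the split-FL server, so raw features never leave the device. The function name, keep probability, and inverted-dropout-style rescaling are all hypothetical choices.

```python
import random

def mask_activations(acts, keep_prob=0.5, rng=None):
    """Bernoulli-mask a list of activations before transmission.

    Each activation is kept with probability ``keep_prob`` and zeroed
    otherwise; kept values are rescaled by 1/keep_prob so the expected
    value the server sees is unchanged (inverted-dropout convention).
    """
    rng = rng or random.Random(0)
    mask = [1 if rng.random() < keep_prob else 0 for _ in acts]
    return [a * m / keep_prob for a, m in zip(acts, mask)]

# A client would call this on its cut-layer output each round:
masked = mask_activations([0.3, 1.7, -0.4, 2.1], keep_prob=0.5)
```

The server only ever receives `masked`, which is why reconstruction attacks against the intermediate representation become harder; the actual work additionally handles client heterogeneity, which this sketch omits.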
attack arXiv Aug 10, 2025

Towards Unveiling Predictive Uncertainty Vulnerabilities in the Context of the Right to Be Forgotten

Wei Qian, Chenxu Zhao, Yangyi Li et al. · Iowa State University · University of Virginia

Proposes attacks that exploit machine unlearning requests to covertly corrupt model uncertainty estimates without altering predicted labels

Data Poisoning Attack vision
PDF
attack arXiv Aug 9, 2025

Membership Inference Attacks with False Discovery Rate Control

Chenxu Zhao, Wei Qian, Aobo Chen et al. · Iowa State University

Membership inference attack with provable false discovery rate control, wrapping existing MIA methods as a post-hoc plugin

Membership Inference Attack vision
PDF
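For the last entry, the standard way to wrap per-sample scores with provable false discovery rate control is the Benjamini-Hochberg procedure; the sketch below applies it to hypothetical membership p-values. Whether the paper uses BH or another calibration is an assumption here, and the p-values are illustrative placeholders, not real attack outputs.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices flagged as members with FDR controlled at level alpha.

    Classic Benjamini-Hochberg: sort p-values ascending, find the largest
    rank k with p_(k) <= (k / m) * alpha, and reject all hypotheses up to k.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])

# Hypothetical per-sample membership p-values from an existing MIA:
p_vals = [0.001, 0.008, 0.039, 0.041, 0.30, 0.74]
print(benjamini_hochberg(p_vals, alpha=0.05))  # → [0, 1]
```

The appeal of this "post-hoc plugin" framing is that any existing MIA that emits calibrated per-sample p-values can be wrapped this way, trading raw attack accuracy for a provable bound on the fraction of false "member" claims.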