Latest papers

16 papers
defense arXiv Mar 13, 2026 · 24d ago

Learnability and Privacy Vulnerability are Entangled in a Few Critical Weights

Xingli Fang, Jung-Eun Kim · North Carolina State University

Defends against membership inference by identifying and rewinding only the small fraction of weights responsible for privacy leakage

Membership Inference Attack vision
PDF
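
A minimal sketch of the rewinding idea, assuming drift from initialization as an illustrative criticality criterion (the paper's actual procedure for locating privacy-critical weights may differ):

```python
import torch

def rewind_critical_weights(model, init_state, fraction=0.01):
    """Rewind the `fraction` of weights that drifted most from initialization.

    init_state: snapshot taken before training, e.g.
        {n: p.detach().clone() for n, p in model.named_parameters()}
    """
    with torch.no_grad():
        for name, param in model.named_parameters():
            drift = (param - init_state[name]).abs().flatten()
            k = max(1, int(fraction * drift.numel()))
            idx = drift.topk(k).indices          # most-drifted weights
            flat = param.flatten().clone()
            flat[idx] = init_state[name].flatten()[idx]
            param.copy_(flat.view_as(param))     # rest of the network untouched
    return model
```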
defense arXiv Mar 13, 2026 · 24d ago

Purifying Generative LLMs from Backdoors without Prior Knowledge or Clean Reference

Jianwei Li, Jung-Eun Kim · North Carolina State University

Backdoor removal for instruction-tuned LLMs using synthetic backdoor variants to identify shared malicious components without trigger knowledge

Model Poisoning nlp
PDF
defense arXiv Feb 10, 2026 · 7w ago

Statistical Roughness-Informed Machine Unlearning

Mohammad Partohaghighi, Roummel Marcia, Bruce J. West et al. · University of California · North Carolina State University

Spectral-stability-weighted machine unlearning algorithm that concentrates forgetting in stable layers, evaluated against membership inference leakage

Membership Inference Attack
PDF
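
A sketch of layer-weighted forgetting, assuming per-layer stability scores are already given (the paper derives them from spectral-roughness statistics; here `stability` is just an input dict). Forgetting is gradient ascent on the forget set, scaled so stable layers absorb most of the update:

```python
import torch

def unlearn_step(model, loss_fn, forget_batch, stability, lr=1e-4):
    x, y = forget_batch
    model.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            # Ascend the loss on data to be forgotten; stable layers get
            # larger steps, concentrating the forgetting there.
            p.add_(stability.get(name, 0.0) * lr * p.grad)
```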
defense arXiv Feb 2, 2026 · 9w ago

Decoupling Generalizability and Membership Privacy Risks in Neural Networks

Xingli Fang, Jung-Eun Kim · North Carolina State University

Defends against membership inference by localizing and selectively protecting DNN regions where membership privacy risk is concentrated

Membership Inference Attack vision
PDF
attack arXiv Jan 10, 2026 · 12w ago

Leveraging Soft Prompts for Privacy Attacks in Federated Prompt Tuning

Quan Minh Nguyen, Min-Seon Kim, Hoang M. Ngo et al. · University of Florida · North Carolina State University +2 more

PromptMIA: a malicious server exploits adversarial soft-prompt updates in federated prompt tuning to infer clients' training-data membership

Membership Inference Attack Transfer Learning Attack nlp federated-learning
PDF
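
For background, the textbook loss-threshold membership test (Yeom et al.) that federated attacks like PromptMIA refine; the sketch shows this baseline, not the paper's attack:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mia(model, x, y, threshold):
    """Flag samples whose loss is suspiciously low as likely training members.

    `threshold` is typically calibrated on data known to be non-members.
    """
    losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < threshold  # True -> predicted member
```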
benchmark arXiv Dec 17, 2025 · Dec 2025

How Do Semantically Equivalent Code Transformations Impact Membership Inference on LLMs for Code?

Hua Yang, Alejandro Velasco, Thanh Le-Cong et al. · North Carolina State University · William & Mary +1 more

Semantically equivalent code transformations, especially variable renaming, reduce membership inference success by 10% on code LLMs

Membership Inference Attack nlp
PDF
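
One of the studied transformations, variable renaming, in toy form on Python source via the standard `ast` module; the paper applies analogous semantics-preserving rewrites to the code LLM's inputs before running the membership test:

```python
import ast

class RenameVars(ast.NodeTransformer):
    """Semantics-preserving rename of variables and function arguments."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

src = "def add(total, x):\n    return total + x"
tree = RenameVars({"total": "v0"}).visit(ast.parse(src))
print(ast.unparse(tree))  # the same function with `total` renamed to `v0`
```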
benchmark arXiv Dec 8, 2025 · Dec 2025

Understanding Privacy Risks in Code Models Through Training Dynamics: A Causal Approach

Hua Yang, Alejandro Velasco, Sen Fang et al. · North Carolina State University · William & Mary

Causally links training dynamics to PII leakage in code LLMs, showing that easy-to-learn PII types (e.g., IP addresses) leak far more than keys or passwords

Model Inversion Attack Sensitive Information Disclosure nlp
1 citation PDF Code
defense arXiv Nov 13, 2025 · Nov 2025

CertMask: Certifiable Defense Against Adversarial Patches via Theoretically Optimal Mask Coverage

Xuntao Lyu, Ching-Chi Lin, Abdullah Al Arafat et al. · North Carolina State University · Technische Universität Dortmund +2 more

Certified defense against adversarial patches using k-fold mask coverage, cutting inference cost from O(n²) to O(n) while improving certified accuracy by +13.4%

Input Manipulation Attack vision
PDF
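
A simplified mask-voting sketch to convey the general approach; CertMask's k-fold coverage construction and its certification rule are not reproduced here:

```python
import torch

@torch.no_grad()
def masked_vote(model, image, masks):
    """image: (C, H, W); masks: boolean (H, W) tensors, True = pixel kept.

    If the mask set covers every possible patch location, at least one
    masked copy is guaranteed patch-free; certification reasons about
    whether disagreeing votes could all come from patch-contaminated copies.
    """
    votes = [model((image * m).unsqueeze(0)).argmax(dim=1).item() for m in masks]
    return max(set(votes), key=votes.count)  # majority label
```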
defense arXiv Nov 11, 2025 · Nov 2025

3D Guard-Layer: An Integrated Agentic AI Safety System for Edge Artificial Intelligence

Eren Kurshan, Yuan Xie, Paul Franzon · Princeton University · Hong Kong University of Science and Technology +1 more

Proposes 3D-integrated hardware safety layer for edge AI systems that dynamically detects and mitigates inference-time network attacks

Input Manipulation Attack Excessive Agency vision nlp
PDF
attack arXiv Oct 30, 2025 · Oct 2025

Fine-Grained Iterative Adversarial Attacks with Limited Computation Budget

Zhichao Hou, Weizhi Gao, Xiaorui Liu · North Carolina State University

Efficient iterative adversarial attacks via selective layer activation recomputation, matching full-budget adversarial training at 30% cost

Input Manipulation Attack vision
PDF
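
For reference, standard full-budget PGD; the paper's savings come from selectively recomputing layer activations inside this loop, machinery the sketch omits:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)             # project into the L-inf ball
            delta.grad = None
    return (x + delta).detach().clamp_(0, 1)
```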
attack arXiv Oct 20, 2025 · Oct 2025

Can Transformer Memory Be Corrupted? Investigating Cache-Side Vulnerabilities in Large Language Models

Elias Hossain, Swayamjit Saha, Somshubhra Roy et al. · University of Central Florida · Mississippi State University +1 more

Attacks LLM inference by corrupting KV cache key vectors at runtime, bypassing prompt filters and causing output degradation across GPT-2 and LLaMA-2

Input Manipulation Attack nlp
2 citations PDF
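
A self-contained toy of the threat model in plain PyTorch (no real LLM): single-head attention reading from a key/value cache, where corrupting the cached keys at runtime shifts the output without touching the prompt:

```python
import torch
import torch.nn.functional as F

def attend(query, k_cache, v_cache):
    # query: (1, d); k_cache, v_cache: (T, d)
    scores = query @ k_cache.T / k_cache.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v_cache

torch.manual_seed(0)
d, T = 64, 16
q, K, V = torch.randn(1, d), torch.randn(T, d), torch.randn(T, d)

clean = attend(q, K, V)
corrupted = attend(q, K + 0.5 * torch.randn_like(K), V)  # key-vector corruption
print((clean - corrupted).norm())  # nonzero drift in the attention output
```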
benchmark arXiv Oct 15, 2025 · Oct 2025

Signature in Code Backdoor Detection, how far are we?

Quoc Hung Le, Thanh Le-Cong, Bach Le et al. · North Carolina State University · The University of Melbourne

Benchmarks Spectral Signature backdoor defenses on code LLMs, finds standard configurations suboptimal, and proposes an NPV proxy metric that requires no retraining

Model Poisoning nlp
PDF
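
The detector under benchmark, in its textbook form (Tran et al., 2018): score each training example by its projection onto the top singular vector of the centered representations; poisoned examples tend to score highest:

```python
import torch

def spectral_signature_scores(reps):
    """reps: (N, D) hidden representations of N training examples."""
    centered = reps - reps.mean(dim=0, keepdim=True)
    v = torch.linalg.svd(centered, full_matrices=False).Vh[0]  # top direction
    return (centered @ v) ** 2  # outlier scores; inspect the largest ones
```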
defense arXiv Sep 20, 2025 · Sep 2025

Train to Defend: First Defense Against Cryptanalytic Neural Network Parameter Extraction Attacks

Ashley Kurian, Aydin Aysu · North Carolina State University

First training-time defense against cryptanalytic parameter extraction attacks using neuron weight regularization to defeat model theft

Model Theft vision
PDF
defense arXiv Sep 10, 2025 · Sep 2025

Corruption-Tolerant Asynchronous Q-Learning with Near-Optimal Rates

Sreejeet Maity, Aritra Mitra · North Carolina State University

Defends Q-learning against adversarial reward corruption using robust trimmed-mean estimation with near-optimal finite-time guarantees

Data Poisoning Attack reinforcement-learning
PDF
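
The core robust-statistics idea in toy form (the paper's asynchronous algorithm and its finite-time analysis are more involved): build the Q-learning target from a trimmed mean so a bounded fraction of corrupted rewards cannot drag it:

```python
import numpy as np

def trimmed_mean(samples, trim_frac=0.1):
    s = np.sort(np.asarray(samples))
    k = int(trim_frac * len(s))
    return s[k:len(s) - k].mean()  # drop the k smallest and k largest

def q_update(Q, s, a, rewards, s_next, alpha=0.1, gamma=0.99):
    r = trimmed_mean(rewards)  # robust to adversarially corrupted rewards
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q
```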
defense arXiv Aug 11, 2025 · Aug 2025

FIDELIS: Blockchain-Enabled Protection Against Poisoning Attacks in Federated Learning

Jane Carney, Kushal Upreti, Gaby G. Dagher et al. · Saint Mary’s College of California · North Carolina State University +1 more

Blockchain-based federated learning framework that uses a consensus-driven judge model to detect and exclude label-flipping data poisoning attacks

Data Poisoning Attack federated-learning vision
PDF
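
A sketch of the judge-based filtering step alone, with the blockchain consensus machinery omitted; `accuracy_on` is an illustrative placeholder for the judge model's scoring function:

```python
import copy

def filter_and_average(global_model, client_states, judge_data, accuracy_on, tau=0.8):
    kept = []
    for state in client_states:
        candidate = copy.deepcopy(global_model)
        candidate.load_state_dict(state)
        if accuracy_on(candidate, judge_data) >= tau:  # judge verdict
            kept.append(state)
    if not kept:                                       # nothing passed; keep old model
        return global_model
    # FedAvg over the surviving (presumed honest) updates.
    avg = {k: sum(s[k] for s in kept) / len(kept) for k in kept[0]}
    global_model.load_state_dict(avg)
    return global_model
```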
defense arXiv Jan 5, 2025 · Jan 2025

Layer-Level Self-Exposure and Patch: Affirmative Token Mitigation for Jailbreak Attack Defense

Yang Ouyang, Hengrui Gu, Shuhang Lin et al. · North Carolina State University · Rutgers University +4 more

Defends LLMs against jailbreaks by identifying harmful-token-generating layers and patching them via adversarial unlearning

Prompt Injection nlp
PDF Code
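
A hedged sketch of the localization step only, using the logit lens to measure how strongly each layer's residual stream already promotes an affirmative token; attribute names assume a GPT-2-style Hugging Face model, and the patching-by-unlearning step is not shown:

```python
import torch

@torch.no_grad()
def affirmative_scores(model, tokenizer, prompt, token=" Sure"):
    tid = tokenizer.encode(token)[0]
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    hidden = model(ids, output_hidden_states=True).hidden_states
    scores = []
    for h in hidden:  # one entry per layer (plus the embedding layer)
        logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
        scores.append(logits.softmax(-1)[0, tid].item())
    return scores  # high-scoring layers are candidates for patching
```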