Latest papers

4 papers
defense · arXiv · Jan 15, 2026

Privacy Enhanced PEFT: Tensor Train Decomposition Improves Privacy Utility Tradeoffs under DP-SGD

Pradip Kunwar, Minh Vu, Maanak Gupta et al. · Tennessee Tech University · Los Alamos National Laboratory

Defends LLM fine-tuning against membership inference attacks by combining DP-SGD with tensor-train adapters, which use 7.6× fewer trainable parameters than LoRA

Membership Inference Attack · nlp
PDF
attack · BigData Congress · Nov 9, 2025

RAG-targeted Adversarial Attack on LLM-based Threat Detection and Mitigation Framework

Seif Ikbarieh, Kshitiz Aryal, Maanak Gupta · Tennessee Tech University · University of Nebraska Omaha

Poisons the RAG knowledge base of an LLM-based network intrusion detection system (NIDS) with TextFooler perturbations crafted against a BERT surrogate, degrading the quality of ChatGPT-5's mitigation recommendations

Data Poisoning Attack · Training Data Poisoning · nlp
PDF
benchmark · TPS-ISA · Oct 4, 2025

Explainable but Vulnerable: Adversarial Attacks on XAI Explanation in Cybersecurity Applications

Maraz Mia, Mir Mehedi A. Pritom · Tennessee Tech University

Empirically evaluates six adversarial attacks that manipulate XAI explanations (SHAP, LIME, and Integrated Gradients) across cybersecurity ML applications

Output Integrity Attack · Data Poisoning Attack · tabular
1 citation · PDF · Code
benchmark · arXiv · Sep 12, 2025

Safety and Security Analysis of Large Language Models: Benchmarking Risk Profile and Harm Potential

Charankumar Akiri, Harrison Simpson, Kshitiz Aryal et al. · Tennessee Tech University · University of Nebraska at Omaha +1 more

Benchmarks nine LLMs against adversarial jailbreak prompts across 24 harm categories, scoring them with a new Risk Severity Index metric

Prompt Injection · nlp
PDF