Latest papers

6 papers
survey · arXiv · Dec 29, 2025

Application-Specific Power Side-Channel Attacks and Countermeasures: A Survey

Sahan Sanjaya, Aruna Jayasena, Prabhat Mishra · University of Florida · University of Tennessee

Surveys power side-channel attacks across cryptography, ML model reverse engineering, user behavior exploitation, and code disassembly

Model Theft · vision
PDF
attack · arXiv · Nov 17, 2025

Accuracy is Not Enough: Poisoning Interpretability in Federated Learning via Color Skew

Farhin Farhad Riya, Shahinul Hoque, Jinyuan Stella Sun et al. · University of Tennessee · Oak Ridge National Laboratory

Federated learning poisoning attack that corrupts Grad-CAM saliency maps via color perturbations while preserving classification accuracy above 96%

Data Poisoning Attack · vision · federated-learning
PDF
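
The color-skew idea can be illustrated as a small per-channel gain applied to an image tensor. This is a minimal NumPy sketch under assumed gain values, not the paper's method; the interaction with Grad-CAM is only described, not implemented here.

```python
import numpy as np

def color_skew(img, gains=(1.08, 1.0, 0.92)):
    """Apply a small per-channel gain to an HxWx3 image in [0, 1].

    A perturbation like this can stay nearly invisible and leave the
    predicted class unchanged, while shifting which regions a saliency
    method such as Grad-CAM highlights (illustrative only; the gains
    here are hypothetical).
    """
    skewed = img * np.asarray(gains, dtype=img.dtype)
    return np.clip(skewed, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3), dtype=np.float32)
out = color_skew(img)
# Per-pixel change is bounded by the largest gain deviation (8% here)
print(float(np.max(np.abs(out - img))))
```

The point of the sketch is the budget: an 8% channel gain bounds the per-pixel change at 0.08, which is why such a perturbation can leave accuracy intact.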
attack · arXiv · Oct 22, 2025

HAMLOCK: HArdware-Model LOgically Combined attacK

Sanskar Amgain, Daniel Lobo, Atri Chatterjee et al. · University of Tennessee · University of Florida

Backdoor attack that splits trigger logic between a hardware Trojan and minimal model edits, evading existing software-level DNN defenses

Model Poisoning · AI Supply Chain Attacks · vision
PDF
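
The split-trigger idea reduces to a conjunction: neither the hardware signal nor the model-side pattern alone activates the backdoor, so software-only inspection of the model never sees a complete trigger. A toy boolean sketch (the paper's mechanism operates on real hardware and network weights):

```python
def backdoor_active(hw_trojan_flag: bool, input_has_trigger: bool) -> bool:
    """Backdoor fires only when BOTH halves of the trigger are present:
    the hardware Trojan's signal AND the model-side input pattern.
    Each half in isolation looks benign to its respective defenses."""
    return hw_trojan_flag and input_has_trigger

# Either half alone is inert; only the conjunction activates
print(backdoor_active(True, False), backdoor_active(True, True))  # → False True
```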
benchmark · arXiv · Sep 28, 2025

Learning-Based Testing for Deep Learning: Enhancing Model Robustness with Adversarial Input Prioritization

Sheikh Md Mushfiqur Rahman, Nasir Eisty · University of Tennessee

Learning-Based Testing framework prioritizes adversarial inputs by fault-revealing likelihood to guide DNN robustness retraining

Input Manipulation Attack · vision
1 citation · PDF
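
Prioritizing inputs by fault-revealing likelihood is commonly approximated with a model-confidence score; a minimal sketch using the top-2 softmax margin (the scoring choice and names here are assumptions, not the paper's exact method):

```python
import numpy as np

def priority_scores(logits):
    """Score inputs by model uncertainty: a smaller margin between the
    top-2 softmax probabilities suggests the input is more likely to
    reveal a fault, so it gets a higher priority score."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    top2 = np.sort(probs, axis=1)[:, -2:]           # two largest, ascending
    margin = top2[:, 1] - top2[:, 0]
    return 1.0 - margin  # high score = low margin = high priority

logits = np.array([[4.0, 0.1, 0.1],   # confident prediction
                   [1.0, 0.9, 0.1]])  # borderline prediction
order = np.argsort(-priority_scores(logits))
print(order.tolist())  # → [1, 0]  (borderline sample ranked first)
```

Retraining on the top-ranked inputs first is the cheap version of the loop the summary describes: test, rank by likely faults, retrain.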
benchmark · arXiv · Sep 16, 2025

A Systematic Evaluation of Parameter-Efficient Fine-Tuning Methods for the Security of Code LLMs

Kiho Lee, Jungkon Kim, Doowon Kim et al. · ETRI · Samsung Research +2 more

Benchmarks seven PEFT methods for code-LLM security; prompt-tuning resists TrojanPuzzle backdoor attacks best while also improving secure code generation

Model Poisoning · nlp
PDF
attack · arXiv · Aug 18, 2025

DASH: A Meta-Attack Framework for Synthesizing Effective and Stealthy Adversarial Examples

Abdullah Al Nomaan Nafi, Habibur Rahaman, Zafaryab Haider et al. · University of Maine · University of Florida +1 more

Meta-attack framework adaptively combining Lp-based attacks to generate perceptually aligned adversarial examples, outperforming AdvAD by 20% in attack success rate (ASR)

Input Manipulation Attack · vision
PDF
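
The "adaptively combining Lp-based attacks" idea can be caricatured as a per-input selection: run several candidate attacks and keep whichever perturbation maximizes the loss. A toy sketch; the selection rule, loss, and step values are assumptions, and the actual DASH framework is far more involved:

```python
import numpy as np

def meta_select(x, candidates, loss_fn):
    """Given candidate perturbations from different Lp attacks, keep
    whichever maximizes the loss at x + delta -- a toy stand-in for
    adaptively combining attacks per input."""
    best = max(candidates, key=lambda d: loss_fn(x + d))
    return x + best

# Toy setup: "loss" rewards moving away from a decision point at 0.5
loss = lambda v: float(np.abs(v - 0.5).sum())
x = np.array([0.4])
linf_step = np.array([0.05])   # small Linf-style nudge
l2_step = np.array([-0.2])     # larger L2-style step
adv = meta_select(x, [linf_step, l2_step], loss)
print(adv.tolist())  # → [0.2]
```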