Latest papers

2 papers
defense · arXiv · Mar 2, 2026

Explanation-Guided Adversarial Training for Robust and Interpretable Models

Chao Chen, Yanhui Chen, Shanshan Lin et al. · Harbin Institute of Technology · Fuzhou University et al.

An adversarial training framework that adds explanation-guided constraints, improving both robustness and saliency-map stability under adversarial attacks

Input Manipulation Attack · vision
PDF
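The idea behind explanation-guided adversarial training can be sketched in a few lines: alongside the usual loss on an adversarially perturbed input, a penalty keeps the saliency map (input gradient) of the clean and adversarial inputs aligned. This is a minimal toy sketch in NumPy on a hypothetical two-layer network, not the paper's actual model, loss weighting, or attack; the FGSM-style single step and the `lam` weight are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny 2-layer net: x -> tanh(W1 x) -> w2 . h  (binary logit)
W1 = rng.normal(size=(4, 3)) * 0.5
w2 = rng.normal(size=4) * 0.5

def forward(x):
    h = np.tanh(W1 @ x)
    return w2 @ h  # scalar logit

def input_grad(x):
    # d(logit)/dx: serves both as a saliency map and as the FGSM direction
    h = np.tanh(W1 @ x)
    return W1.T @ (w2 * (1 - h ** 2))

def bce(logit, y):
    # numerically stable binary cross-entropy with logits
    return np.logaddexp(0.0, logit) - y * logit

def explanation_guided_loss(x, y, eps=0.1, lam=1.0):
    # One FGSM-style step: perturb x in the direction that hurts label y
    g = input_grad(x)
    step = np.sign(g) if y == 0 else -np.sign(g)
    x_adv = x + eps * step
    # Robustness term: classification loss on the adversarial point
    adv_loss = bce(forward(x_adv), y)
    # Explanation term: penalize drift between clean and adversarial saliency
    sal_gap = np.linalg.norm(input_grad(x) - input_grad(x_adv))
    return adv_loss + lam * sal_gap
```

Setting `lam=0` recovers plain adversarial training; the saliency-gap term is what ties the explanation to the robustness objective.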
attack · arXiv · Nov 11, 2025

LoopLLM: Transferable Energy-Latency Attacks in LLMs via Repetitive Generation

Xingyu Li, Xiaolei Liu, Cheng Liu et al. · National Interdisciplinary Research Center of Engineering Physics · Institute of Computer Application et al.

A gradient-based adversarial prompt attack that forces LLMs into repetitive generation loops, exhausting compute by driving outputs to the maximum generation length

Model Denial of Service · nlp
4 citations · 2 influential
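Why repetitive generation is an energy-latency attack can be shown with a toy decoder: decoding cost scales with the number of generated tokens, so a prompt that suppresses the end-of-sequence token forces generation to run to `max_new_tokens`. This is a hypothetical illustration, not LoopLLM's gradient-based prompt optimization; the marker token `99` and the toy stopping rule are invented for the sketch.

```python
def toy_generate(prompt_tokens, max_new_tokens=64, eos=0):
    """Toy decoder: normally emits EOS after a few steps; a hypothetical
    adversarial marker token (99) locks it into a repetitive loop that
    never emits EOS, so decoding runs to max_new_tokens."""
    out = []
    looping = 99 in prompt_tokens  # stand-in for an optimized adversarial prompt
    for step in range(max_new_tokens):
        if not looping and step >= 3:
            out.append(eos)  # benign prompts terminate early
            break
        out.append(7)  # repeated token, one decoder pass per token
    return out

# Each appended token corresponds to one forward pass, so compute and
# latency grow linearly with output length.
normal = toy_generate([1, 2, 3])
attacked = toy_generate([1, 99, 3])
```

The benign prompt stops after 4 tokens, while the attacked prompt burns the full 64-token budget, a 16x cost blow-up that grows with the model's maximum output length.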