Latest papers

6 papers
survey arXiv Mar 8, 2026 · 29d ago

From Thinker to Society: Security in Hierarchical Autonomy Evolution of AI Agents

Xiaolei Zhang, Lu Zhou, Xiaogang Xu et al. · Nanjing University of Aeronautics and Astronautics · Collaborative Innovation Center of Novel Software Technology and Industrialization +5 more

Surveys LLM agent security threats across three autonomy tiers: cognitive manipulation, tool misuse, and multi-agent systemic failures

Prompt Injection Insecure Plugin Design Excessive Agency nlp
PDF
attack ASE Jan 28, 2026 · 9w ago

DRAINCODE: Stealthy Energy Consumption Attacks on Retrieval-Augmented Code Generation via Context Poisoning

Yanlin Wang, Jiadong Wu, Tianyue Jiang et al. · Sun Yat-Sen University · Nanyang Technological University +1 more

Poisons RAG retrieval contexts with mutated code to force LLMs into verbose outputs, causing 85% latency and 49% energy consumption increases.

Model Denial of Service Prompt Injection nlp
PDF Code
defense arXiv Jan 11, 2026 · 12w ago

R$^2$BD: A Reconstruction-Based Method for Generalizable and Efficient Detection of Fake Images

Qingyu Liu, Zhongjie Ba, Jianmin Guo et al. · Zhejiang University · Huawei

Proposes efficient reconstruction-based AIGC detector covering GANs, VAEs, and diffusion models with 22x speedup over prior methods

Output Integrity Attack visiongenerative
PDF Code
attack arXiv Nov 20, 2025 · Nov 2025

Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models

Yijun Yang, Lichao Wang, Jianping Zhang et al. · The Chinese University of Hong Kong · Beijing Institute of Technology +1 more

Adversarial image attack jailbreaks GPT-4o, Gemini-Pro, and Llama-4 by hiding harmful instructions inside competing visual objectives, transferring across VLMs

Input Manipulation Attack Prompt Injection visionmultimodalnlp
PDF Code
attack arXiv Sep 22, 2025 · Sep 2025

SilentStriker:Toward Stealthy Bit-Flip Attacks on Large Language Models

Haotian Xu, Qingsong Peng, Jie Shi et al. · Zhejiang University · Huawei

Stealthy bit-flip attack on LLM weights that degrades task performance while preserving output naturalness to evade detection

Model Poisoning nlp
1 citations PDF
defense arXiv Aug 13, 2025 · Aug 2025

Shadow in the Cache: Unveiling and Mitigating Privacy Risks of KV-cache in LLM Inference

Zhifan Luo, Shuo Shao, Su Zhang et al. · Zhejiang University · Huawei +1 more

Adversaries reconstruct private user prompts from LLM KV-cache via inversion, collision, and injection attacks; KV-Cloak defends with reversible matrix obfuscation

Model Inversion Attack Sensitive Information Disclosure nlp
PDF