Latest papers

2 papers
attack Journal of Network and Compute... Oct 11, 2025 · Oct 2025

ArtPerception: ASCII Art-based Jailbreak on LLMs with Recognition Pre-test

Guan-Yan Yang, Tzu-Yu Cheng, Ya-Wen Teng et al. · National Taiwan University · GARMIN +2 more

Two-phase black-box jailbreak uses ASCII art encoding to bypass LLM safety alignment, including GPT-4o and Claude Sonnet 3.7

Prompt Injection nlp
2 citations PDF
defense arXiv Sep 22, 2025 · Sep 2025

Design and Implementation of a Secure RAG-Enhanced AI Chatbot for Smart Tourism Customer Service: Defending Against Prompt Injection Attacks -- A Case Study of Hsinchu, Taiwan

Yu-Kai Shih, You-Kai Kang · National Dong Hwa University

Defends RAG-enhanced LLM tourism chatbot against prompt injection using reverse RAG, gatekeepers, and tiered guardrails with adversarial evaluation

Prompt Injection nlp
PDF