Latest papers

1 paper
attack · arXiv · Jan 6, 2026

Jailbreaking LLMs Without Gradients or Priors: Effective and Transferable Attacks

Zhakshylyk Nurlanov, Frank R. Schmidt, Florian Bernard · University of Bonn · Bosch Center for Artificial Intelligence

A gradient-free, token-level adversarial suffix attack that achieves a near-100% jailbreak rate and transfers strongly to GPT and Gemini models

Input Manipulation Attack · Prompt Injection · nlp
PDF