Latest papers

2 papers
attack · arXiv · Feb 19, 2026

Trojan Horses in Recruiting: A Red-Teaming Case Study on Indirect Prompt Injection in Standard vs. Reasoning Models

Manuel Wirth · University of Mannheim

Red-teams LLM recruiting pipelines via malicious CVs, revealing a 'Meta-Cognitive Leakage' failure mode in Chain-of-Thought reasoning models.

Prompt Injection · NLP
benchmark · arXiv · Oct 3, 2025

A Granular Study of Safety Pretraining under Model Abliteration

Shashank Agnihotri, Jonas Jakubassa, Priyam Dey, et al. · University of Mannheim · Max-Planck-Institute for Informatics and 2 more

Benchmarks the robustness of safety pretraining against model abliteration across 20 LLMs, revealing that refusal-only training is the most fragile to activation-level jailbreaking.

Prompt Injection · NLP
2 citations