Latest papers

2 papers
benchmark · arXiv · Nov 10, 2025

More Agents Helps but Adversarial Robustness Gap Persists

Khashayar Alavi, Zhastay Yeltay, Lucie Flek et al. · University of Bonn · Lamarr Institute for Machine Learning and Artificial Intelligence

Evaluates the robustness of multi-agent LLM systems against adversarial text noise, finding that collaboration improves accuracy but fails to close the robustness gap.

Prompt Injection · nlp
defense · arXiv · Aug 8, 2025

In-Training Defenses against Emergent Misalignment in Language Models

David Kaczér, Magnus Jørgenvåg, Clemens Vetter et al. · University of Bonn · Lamarr Institute for Machine Learning and Artificial Intelligence +1 more

Evaluates four in-training regularization defenses for preventing emergent misalignment when fine-tuning LLMs on malicious data via APIs.

Transfer Learning Attack · Prompt Injection · nlp