Latest papers

9 papers
defense arXiv Mar 23, 2026

Precision-Varying Prediction (PVP): Robustifying ASR systems against adversarial attacks

Matías Pizarro, Raghavan Narasimhan, Asja Fischer · Ruhr University Bochum

Defense against audio adversarial attacks by randomly varying model precision during inference and detecting attacks via precision-based output comparison

Input Manipulation Attack audio
PDF
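The paper's core idea, comparing a model's outputs under different numeric precisions to flag adversarial inputs, can be sketched as follows. This is a minimal illustration, not the paper's method: casting the input through float16 stands in for PVP's random internal precision switching, and `model` and the detection threshold are hypothetical.

```python
import numpy as np

def precision_disagreement(model, x):
    """Maximum output divergence between full- and reduced-precision inference.

    `model` is any function mapping a float array to an output array. Routing
    the input through float16 is a stand-in for varying the model's internal
    precision, as PVP does during inference.
    """
    full = np.asarray(model(x.astype(np.float64)))
    low = np.asarray(model(x.astype(np.float16).astype(np.float64)))
    return float(np.abs(full - low).max())

def is_adversarial(model, x, threshold=0.1):
    # Benign inputs are expected to be stable under precision changes;
    # adversarial perturbations, tuned against one precision, tend not to be.
    return precision_disagreement(model, x) > threshold
```

A benign input whose values are exactly representable in float16 produces zero divergence and is not flagged.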
benchmark arXiv Mar 9, 2026

The Conundrum of Trustworthy Research on Attacking Personally Identifiable Information Removal Techniques

Sebastian Ochs, Ivan Habernal · Trustworthy Human Language Technologies · Technical University of Darmstadt +2 more

Critiques PII reconstruction attack evaluations, showing data leakage and LLM memorization inflate reported attack success rates

Model Inversion Attack Sensitive Information Disclosure nlp
PDF
defense arXiv Feb 11, 2026

Kill it with FIRE: On Leveraging Latent Space Directions for Runtime Backdoor Mitigation in Deep Neural Networks

Enrico Ahlers, Daniel Passon, Yannic Noller et al. · Humboldt University of Berlin · Ruhr University Bochum

Inference-time backdoor defense that neutralizes triggers by reversing their latent-space directions without modifying model weights

Model Poisoning vision
PDF
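The summary's "reversing latent-space directions" step can be sketched in a few lines. This is an illustrative sketch only: in FIRE the trigger direction would be estimated from activations, whereas here it is simply given, and the `strength` parameter is an assumption.

```python
import numpy as np

def neutralize_trigger(latent, trigger_direction, strength=2.0):
    """Dampen or reverse the component of a latent vector along an
    estimated backdoor-trigger direction, leaving the rest untouched.

    strength=1 projects the trigger component out; strength=2 reverses it,
    i.e. reflects the latent across the hyperplane orthogonal to the trigger.
    """
    d = np.asarray(trigger_direction, dtype=float)
    d = d / np.linalg.norm(d)
    h = np.asarray(latent, dtype=float)
    return h - strength * np.dot(h, d) * d
```

Because only the projection onto the trigger direction is modified, benign features orthogonal to it pass through unchanged, which matches the card's claim of mitigation without touching model weights.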
survey arXiv Dec 10, 2025

Chasing Shadows: Pitfalls in LLM Security Research

Jonathan Evertz, Niklas Risse, Nicolai Neuer et al. · CISPA Helmholtz Center for Information Security · Max Planck Institute for Security and Privacy +4 more

Surveys nine methodological pitfalls in LLM security research, found across all 72 surveyed papers, with case studies showing how each pitfall skews reported results

Data Poisoning Attack Prompt Injection nlp
2 citations PDF
defense arXiv Nov 25, 2025

PRADA: Probability-Ratio-Based Attribution and Detection of Autoregressive-Generated Images

Simon Damm, Jonas Ricker, Henning Petzka et al. · Ruhr University Bochum

Detects and attributes autoregressive-generated images using conditional-vs-unconditional probability ratios as model-specific signatures

Output Integrity Attack vision generative
PDF
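The probability-ratio signature in the summary can be illustrated with a toy attribution routine. Everything here is a simplified stand-in: the per-token log probabilities would come from real autoregressive image models and an unconditional reference, and the candidate names are hypothetical.

```python
def prada_score(cond_logprobs, uncond_logprobs):
    """Sum of per-token log-probability ratios,
    sum_t [ log p(x_t | model) - log p(x_t | unconditional) ].

    A large score means the candidate model explains the token sequence
    much better than the unconditional reference does.
    """
    return sum(c - u for c, u in zip(cond_logprobs, uncond_logprobs))

def attribute(candidates, uncond_logprobs):
    """Attribute a generated image (as a token sequence) to the candidate
    model with the highest conditional-vs-unconditional ratio score."""
    scores = {name: prada_score(lp, uncond_logprobs)
              for name, lp in candidates.items()}
    return max(scores, key=scores.get)
```

Detection then reduces to thresholding the best score: real images should score near zero under every candidate, while generated images score high under their source model.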
defense arXiv Nov 17, 2025

Tuning for Two Adversaries: Enhancing the Robustness Against Transfer and Query-Based Attacks using Hyperparameter Tuning

Pascal Zimmer, Ghassan Karame · Ruhr University Bochum

Defends against transfer and query-based adversarial attacks by tuning hyperparameters, revealing opposing learning-rate effects for each attack type

Input Manipulation Attack vision federated-learning
PDF Code
benchmark arXiv Oct 16, 2025

When Flatness Does (Not) Guarantee Adversarial Robustness

Nils Philipp Walter, Linara Adilova, Jilles Vreeken et al. · CISPA Helmholtz Center for Information Security · Ruhr University Bochum +3 more

Formally proves that loss-landscape flatness guarantees only local adversarial robustness; adversarial examples inhabit flat but confidently wrong regions

Input Manipulation Attack vision
3 citations PDF
defense arXiv Sep 5, 2025

On Hyperparameters and Backdoor-Resistance in Horizontal Federated Learning

Simon Lachnit, Ghassan Karame · Ruhr University Bochum

Shows that benign clients' hyperparameter tuning passively reduces FL backdoor attack lifespan by up to 98.6% without explicit defenses

Model Poisoning federated-learning vision
PDF
attack arXiv Sep 2, 2025

Targeted Physical Evasion Attacks in the Near-Infrared Domain

Pascal Zimmer, Simon Lachnit, Alexander Jan Zielinski et al. · Ruhr University Bochum

Physical adversarial attack using infrared film and flashlight achieves targeted traffic sign misclassification under real-world conditions

Input Manipulation Attack vision
PDF