Latest papers

9 papers
defense arXiv Mar 11, 2026

Backdoor Directions in Vision Transformers

Sengim Karayalcin, Marina Krcek, Pin-Yu Chen et al. · Leiden University · Radboud University +2 more

Identifies causal 'trigger directions' in ViT activations to analyze, remove, and detect backdoors via weight-space interventions

Model Poisoning vision
PDF
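The generic idea behind such weight-space interventions can be illustrated with a minimal sketch (not the paper's actual method): if a unit-norm "trigger direction" `d` has been identified in a layer's activation space, the layer's weights can be edited so its output carries no component along `d`. All names below are illustrative.

```python
import numpy as np

def remove_direction(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Edit a linear layer so its output is orthogonal to d.

    W: (d_out, d_in) weight matrix producing the activations.
    d: (d_out,) trigger direction to neutralize.
    Returns (I - d d^T) W, so d @ (W_edited @ x) == 0 for any x.
    """
    d = d / np.linalg.norm(d)
    return W - np.outer(d, d) @ W

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
d = rng.standard_normal(8)
W_edit = remove_direction(W, d)
x = rng.standard_normal(4)
# The edited layer's output has no component along the trigger direction.
print(abs((d / np.linalg.norm(d)) @ (W_edit @ x)) < 1e-8)
```

In practice the hard part is finding a causal `d` in the first place; once found, the projection edit itself is a one-line weight change.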
attack arXiv Mar 10, 2026

Removing the Trigger, Not the Backdoor: Alternative Triggers and Latent Backdoors

Gorka Abad, Ermes Franch, Stefanos Koffas et al. · University of Bergen · Delft University of Technology +2 more

Shows that backdoor-trained models remain exploitable via alternative triggers even after defenses neutralize the original training trigger

Model Poisoning vision
PDF
attack arXiv Feb 9, 2026

Large Language Lobotomy: Jailbreaking Mixture-of-Experts via Expert Silencing

Jona te Lintelo, Lichao Wu, Stjepan Picek · Radboud University · Technical University of Darmstadt +1 more

Jailbreaks MoE LLMs by silencing safety-critical experts at inference time, boosting attack success from 7.3% to 70.4%

Prompt Injection nlp
PDF
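Expert silencing in an MoE router can be sketched in a few lines (a simplified illustration, not the paper's implementation): set the gate logits of the targeted experts to negative infinity before top-k selection, so the router can never dispatch tokens to them. Names and values are illustrative.

```python
import numpy as np

def route_topk(gate_logits, k=2, silenced=()):
    """Top-k MoE routing with selected experts silenced.

    Silencing forces an expert's gate logit to -inf before the
    top-k selection, so it is never chosen at inference time.
    Returns the selected expert indices and softmax gate weights.
    """
    logits = np.array(gate_logits, dtype=float)
    logits[list(silenced)] = -np.inf
    topk = np.argsort(logits)[-k:][::-1]
    w = np.exp(logits[topk] - logits[topk].max())
    return list(topk), (w / w.sum()).tolist()

# Expert 1 has the highest gate logit but is silenced, so the
# router falls back to experts 2 and 0.
experts, weights = route_topk([1.0, 3.0, 2.0, 0.5], k=2, silenced={1})
print(experts)  # → [2, 0]
```

If safety behavior is concentrated in a few experts, this kind of routing mask removes their contribution without touching any weights.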
survey arXiv Jan 23, 2026

Emerging Threats and Countermeasures in Neuromorphic Systems: A Survey

Pablo Sorrentino, Stjepan Picek, Ihsen Alouani et al. · University of Groningen · University of Zagreb +5 more

Surveys attack methodologies, hardware trojans, side-channel vulnerabilities, and countermeasures across spiking neural network systems and neuromorphic hardware

Input Manipulation Attack Model Poisoning
PDF
attack arXiv Dec 24, 2025

GateBreaker: Gate-Guided Attacks on Mixture-of-Expert LLMs

Lichao Wu, Sasha Behrouzi, Mohamadreza Rostami et al. · Technical University of Darmstadt · University of Zagreb +1 more

White-box attack disables ~3% of MoE safety neurons to raise LLM jailbreak success from 7% to 65% across eight aligned models

Prompt Injection nlp multimodal
2 citations PDF
attack arXiv Nov 8, 2025

CatBack: Universal Backdoor Attacks on Tabular Data via Categorical Encoding

Behrad Tajalli, Stefanos Koffas, Stjepan Picek · Radboud University · Delft University of Technology +1 more

Backdoor attack on tabular ML models that encodes categorical features as floats, enabling gradient-based universal triggers with 100% ASR

Model Poisoning tabular
PDF
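The categorical-to-float relaxation can be sketched as follows (a toy illustration of the idea, not CatBack's actual encoding or optimizer): map categories onto an evenly spaced float grid, apply a continuous trigger perturbation in that space, then snap the result back to the nearest valid category. The trigger value here is a hypothetical stand-in for one found by gradient optimization.

```python
import numpy as np

def encode(col, categories):
    """Map categorical values to evenly spaced floats in [0, 1],
    so a universal trigger can be optimized with gradients."""
    lut = {c: i / max(len(categories) - 1, 1) for i, c in enumerate(categories)}
    return np.array([lut[v] for v in col])

def decode(vals, categories):
    """Snap perturbed floats back to the nearest valid category."""
    grid = np.linspace(0.0, 1.0, len(categories))
    return [categories[int(np.argmin(np.abs(grid - v)))] for v in vals]

cats = ["red", "green", "blue"]
x = encode(["red", "blue", "green"], cats)
trigger = 0.4                               # hypothetical optimized shift
poisoned = decode(np.clip(x + trigger, 0, 1), cats)
print(poisoned)  # → ['green', 'blue', 'blue']
```

The round-trip guarantees every poisoned row is still a valid categorical record, which is what makes the trigger deployable against real tabular pipelines.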
attack arXiv Sep 15, 2025

NeuroStrike: Neuron-Level Attacks on Aligned LLMs

Lichao Wu, Sasha Behrouzi, Mohamadreza Rostami et al. · Technical University of Darmstadt · University of Zagreb +1 more

Bypasses LLM safety alignment by pruning <0.6% of sparse safety neurons, achieving 76.9% ASR across 20+ aligned LLMs

Input Manipulation Attack Prompt Injection nlp multimodal
PDF
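Neuron-level pruning of this kind (used by both NeuroStrike and GateBreaker above) amounts to zeroing a handful of hidden units. A minimal sketch, assuming the "safety neuron" indices have already been identified by some upstream analysis:

```python
import numpy as np

def prune_neurons(W_in, b_in, idx):
    """Ablate MLP hidden units by zeroing their input weights and
    biases, so the pruned neurons always output exactly zero."""
    W, b = W_in.copy(), b_in.copy()
    W[idx, :] = 0.0
    b[idx] = 0.0
    return W, b

rng = np.random.default_rng(1)
W = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
Wp, bp = prune_neurons(W, b, [2, 5])        # [2, 5]: assumed safety neurons
h = np.maximum(Wp @ rng.standard_normal(3) + bp, 0.0)  # ReLU hidden layer
print(h[2], h[5])  # → 0.0 0.0
```

Because only a tiny fraction of units is touched (under 0.6% in the paper), the model's general capability is largely preserved while the safety behavior is removed.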
defense arXiv Aug 19, 2025

CRISP: Persistent Concept Unlearning via Sparse Autoencoders

Tomer Ashuach, Dana Arad, Aaron Mueller et al. · Technion – Israel Institute of Technology · Boston University +1 more

Permanently removes dangerous LLM knowledge by suppressing sparse autoencoder features via fine-tuning, blocking adversarial bypass of inference-time safety measures

Prompt Injection nlp
PDF Code
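Suppressing a sparse-autoencoder feature can be sketched at inference time (CRISP instead makes the suppression persistent via fine-tuning; this toy version, with tied encoder/decoder weights, only illustrates the underlying operation): read the feature's activation with its encoder row, then subtract that much of its decoder direction from the activation.

```python
import numpy as np

def suppress_feature(x, enc_row, dec_col, thresh=0.0):
    """Remove one SAE feature's contribution from an activation.

    enc_row: the feature's encoder weights (reads its activation).
    dec_col: the feature's decoder direction (writes it back).
    """
    a = max(float(enc_row @ x), thresh)  # ReLU feature activation
    return x - a * dec_col

d = np.array([0.6, 0.8])                 # unit decoder direction
x = np.array([1.0, 2.0]) + 3.0 * d       # activation rich in the feature
x_clean = suppress_feature(x, d, d)      # tied encoder/decoder for the toy
print(abs(float(d @ x_clean)) < 1e-9)    # feature contribution removed
```

Baking this subtraction into the weights, rather than applying it per token, is what blocks adversaries from simply disabling the inference-time hook.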
attack arXiv Jan 10, 2025

Towards Backdoor Stealthiness in Model Parameter Space

Xiaoyun Xu, Zhuoran Liu, Stefanos Koffas et al. · Radboud University Nijmegen · Delft University of Technology +1 more

Proposes Grond, a backdoor attack that stays stealthy in model parameter space and evades 17 diverse defenses via adaptive neuron-level injection

Model Poisoning vision
PDF Code