
Backdoor Vectors: a Task Arithmetic View on Backdoor Attacks and Defenses

Stanisław Pawlak 1,2, Jan Dubiński 1,2, Daniel Marczak 1,3, Bartłomiej Twardowski 4,5

Published on arXiv · 2510.08016 · 0 citations · 53 references

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

SBV is the first backdoor attack to leverage model merging to improve attack effectiveness, outperforming prior state-of-the-art methods, while IBVS provides a lightweight, assumption-free defense that remains effective even against entirely unknown backdoor threats.

Backdoor Vectors (BV) / Sparse Backdoor Vector (SBV) / Injection BV Subtraction (IBVS)

Novel technique introduced


Model merging (MM) has recently emerged as an effective method for combining large deep learning models. However, it poses significant security risks: recent research shows that it is highly susceptible to backdoor attacks, which introduce a hidden trigger into a single fine-tuned model instance and allow the adversary to control the output of the final merged model at inference time. In this work, we propose a simple framework for understanding backdoor attacks by treating the attack itself as a task vector. A $Backdoor\ Vector\ (BV)$ is calculated as the difference between the weights of a backdoored fine-tuned model and a clean fine-tuned model. BVs yield new insights into how attacks behave and provide a more effective framework for measuring their similarity and transferability. Furthermore, we propose a novel method, dubbed $Sparse\ Backdoor\ Vector\ (SBV)$, that enhances backdoor resilience through merging by combining multiple attacks into a single one. We identify the core vulnerability behind backdoor threats in MM: $inherent\ triggers$ that exploit adversarial weaknesses in the base model. To counter this, we propose $Injection\ BV\ Subtraction\ (IBVS)$ - an assumption-free defense against backdoors in MM. Our results show that SBV surpasses prior attacks and is the first method to leverage merging to improve backdoor effectiveness. At the same time, IBVS provides a lightweight, general defense that remains effective even when the backdoor threat is entirely unknown.
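The BV construction in the abstract is a direct application of task arithmetic: subtract the clean fine-tune's weights from the backdoored fine-tune's weights, per parameter. The sketch below illustrates this on toy numpy weight dicts; all names (`base`, `clean_ft`, `backdoored_ft`) are illustrative stand-ins, not from the paper's code.

```python
import numpy as np

# Toy stand-ins for model weights: each "model" is a dict of layer name -> array.
rng = np.random.default_rng(0)
base = {"w": rng.normal(size=(4, 4))}
clean_ft = {k: v + 0.1 * rng.normal(size=v.shape) for k, v in base.items()}
backdoored_ft = {k: v + 0.1 * rng.normal(size=v.shape) for k, v in clean_ft.items()}

def backdoor_vector(backdoored, clean):
    """BV = weights(backdoored fine-tune) - weights(clean fine-tune)."""
    return {k: backdoored[k] - clean[k] for k in clean}

bv = backdoor_vector(backdoored_ft, clean_ft)

# Adding the BV back onto the clean fine-tune recovers the backdoored weights,
# which is what makes the attack itself behave like a task vector.
reinjected = {k: clean_ft[k] + v for k, v in bv.items()}
```

Because the BV lives in weight space, standard vector operations (cosine similarity, scaling, merging) become the tools for measuring attack similarity and transferability.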


Key Contributions

  • Backdoor Vectors (BV) framework: represents backdoor attacks as task vectors (difference between backdoored and clean fine-tuned weights), enabling principled analysis of attack similarity, transferability, and resilience through model merging.
  • Sparse Backdoor Vector (SBV): merges multiple BVs into a single stronger, more resilient attack that outperforms state-of-the-art backdoor attacks in both attack success rate and post-merge persistence.
  • Injection BV Subtraction (IBVS): assumption-free defense that exploits shared structure in inherent triggers to mitigate backdoor threats in model merging even when the specific attack is entirely unknown.
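The SBV and IBVS ideas above can be sketched as plain weight-space arithmetic. This summary does not specify the paper's exact sparsification or merge rule, so the sketch below makes an assumption: SBV is approximated by averaging several BVs and keeping only the largest-magnitude entries, and IBVS by subtracting a self-injected BV from the merged model. Treat both as illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Three hypothetical backdoor vectors for the same toy one-layer architecture.
bvs = [{"w": rng.normal(size=(4, 4))} for _ in range(3)]

def sparse_merge(vectors, keep=0.25):
    """Toy SBV: average the BVs, then zero all but the top-`keep` fraction of
    entries by magnitude. The paper's actual sparsification rule may differ."""
    merged = {}
    for k in vectors[0]:
        avg = np.mean([v[k] for v in vectors], axis=0)
        thresh = np.quantile(np.abs(avg), 1.0 - keep)
        merged[k] = np.where(np.abs(avg) >= thresh, avg, 0.0)
    return merged

def ibvs(merged_weights, injected_bv, scale=1.0):
    """Toy IBVS: subtract a self-injected BV from the merged model, aiming to
    cancel the shared inherent-trigger direction in weight space."""
    return {k: merged_weights[k] - scale * injected_bv[k] for k in merged_weights}

sbv = sparse_merge(bvs)                      # one sparse vector combining 3 attacks
merged_model = {"w": rng.normal(size=(4, 4))}  # stand-in for a merged model
defended = ibvs(merged_model, sbv)
```

Note that IBVS needs no knowledge of the actual attack: the defender injects a backdoor of their own choosing, computes its BV, and subtracts it, relying on the shared structure of inherent triggers.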

🛡️ Threat Analysis

Model Poisoning

Paper directly addresses backdoor/trojan attacks: introduces BV framework to analyze hidden trigger injection, proposes SBV as a stronger backdoor attack surpassing prior methods, and IBVS as a defense to detect and mitigate backdoor triggers in merged models — all squarely within the backdoor/trojan threat model.


Details

Domains
vision, multimodal
Model Types
vlm, transformer
Threat Tags
white_box, training_time, targeted, digital
Datasets
CLIP benchmark tasks (Cars, EuroSAT, GTSRB, SUN397, DTD, SVHN, MNIST, ImageNet)
Applications
model merging, image classification, zero-shot classification