
AtPatch: Debugging Transformers via Hot-Fixing Over-Attention

Shihao Weng 1, Yang Feng 1, Jincheng Li 1, Yining Yin 1, Xiaofei Xie 2, Jia Liu 1



Published on arXiv

2601.21695

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

AtPatch more effectively mitigates backdoor attacks and model unfairness compared to existing neuron-editing methods while better preserving clean accuracy on uncompromised inputs

AtPatch

Novel technique introduced


Transformer-based deep neural networks (DNNs) affected by backdoor attacks and unfairness typically exhibit anomalous attention patterns, causing them to over-attend to backdoor triggers or protected attributes. Existing neuron-editing mitigation strategies often struggle to handle such situations; most lack flexibility and tend to distort feature representations. Motivated by this over-attention phenomenon and by software engineering paradigms such as delta debugging and hot patching, we propose AtPatch, a hot-fix method that dynamically redistributes attention maps during model inference. Specifically, for a given input, AtPatch first extracts the attention map from the model's inference process. It then uses a pre-trained detector to identify anomalous columns and replaces them with unified benign attention. Next, AtPatch rescales the remaining columns to mitigate the impact of over-attention. Finally, AtPatch returns the redistributed attention map to the model for continued inference. Notably, if the detector reports no anomalous columns, AtPatch returns the original attention map to the model unchanged. Unlike existing techniques, AtPatch redistributes the attention map selectively, which makes it better at preserving the model's original functionality. Furthermore, its on-the-fly nature lets it work without modifying model parameters or retraining, making it well suited for deployed models. We conducted extensive experiments to validate AtPatch. The results show that, compared with existing methods, AtPatch more effectively mitigates backdoor attacks and unfairness while better preserving the model's original functionality.
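The replace-and-rescale step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `atpatch_redistribute` function name is hypothetical, and replacing flagged columns with the uniform weight `1/cols` is an assumed stand-in for the paper's "unified benign attention".

```python
import numpy as np

def atpatch_redistribute(attn, anomalous_cols):
    """Hypothetical sketch of AtPatch-style attention redistribution.

    attn: (rows, cols) attention map; each row sums to 1.
    anomalous_cols: column indices flagged by the detector as over-attended.
    """
    patched = attn.copy()
    if not anomalous_cols:
        # No anomaly reported: return the original attention map unchanged.
        return patched
    n_cols = patched.shape[1]
    # Replace flagged columns with a unified benign value (here the uniform
    # weight 1/n_cols -- an assumption, not necessarily the paper's rule).
    patched[:, anomalous_cols] = 1.0 / n_cols
    # Rescale the remaining columns so every row sums to 1 again, spreading
    # the removed over-attention mass across the benign positions.
    benign = [c for c in range(n_cols) if c not in set(anomalous_cols)]
    remaining = 1.0 - patched[:, anomalous_cols].sum(axis=1, keepdims=True)
    row_benign_sum = patched[:, benign].sum(axis=1, keepdims=True)
    patched[:, benign] *= remaining / row_benign_sum
    return patched
```

For a single row `[0.1, 0.8, 0.1]` with column 1 flagged, the over-attended column is clamped to 1/3 and the two benign columns are scaled up so the row still sums to 1.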


Key Contributions

  • AtPatch: an inference-time hot-fix method that detects anomalous attention columns (over-attention) using a pre-trained contrastive detector and replaces them with benign unified attention without modifying model parameters
  • Demonstrates that backdoor attacks and model unfairness share a common over-attention phenomenon in transformer attention maps, motivating a unified mitigation strategy
  • Non-invasive, parameter-free defense that preserves clean accuracy better than existing neuron-editing approaches by selectively redistributing only detected anomalous attention
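The paper's detector is a pre-trained contrastive model; as a simplified stand-in, the intuition that a backdoor trigger or protected attribute draws an outsized share of attention can be captured with a column-mass outlier check. The function below is an illustrative assumption, not the paper's detector.

```python
import numpy as np

def flag_over_attended_columns(attn, z_thresh=3.0):
    """Simplified stand-in for AtPatch's pre-trained detector: flag columns
    whose total received attention mass is a z-score outlier.

    attn: (rows, cols) attention map; each row sums to 1.
    Returns a list of anomalous column indices (empty if none).
    """
    col_mass = attn.sum(axis=0)          # total attention each column receives
    mu, sigma = col_mass.mean(), col_mass.std()
    if sigma == 0:
        return []                        # perfectly uniform map: nothing to flag
    z = (col_mass - mu) / sigma
    return [int(c) for c in np.nonzero(z > z_thresh)[0]]
```

On a map where one column absorbs most of every row's attention, that column is flagged; on a uniform map, nothing is, matching AtPatch's pass-through behavior for clean inputs.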

🛡️ Threat Analysis

Model Poisoning

Primary contribution is a defense against backdoor attacks in transformer models — AtPatch detects over-attention patterns caused by embedded backdoor triggers and replaces them with benign attention distributions at inference time, directly targeting trojan/backdoor behavior.


Details

Domains
vision, nlp
Model Types
transformer
Threat Tags
training_time, inference_time, digital
Datasets
CIFAR-10, ImageNet
Applications
image classification, transformer model debugging, fairness-aware inference