
TCAP: Tri-Component Attention Profiling for Unsupervised Backdoor Detection in MLLM Fine-Tuning

Mingzu Liu 1,2, Hao Fang 1,2, Runmin Cong 1,2

0 citations · 62 references · arXiv


Published on arXiv (2601.21692)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

TCAP achieves consistently strong backdoor detection performance across diverse MLLM architectures and attack methods without supervised signals, outperforming prior unsupervised defenses like BYE on state-of-the-art MLLMs.

TCAP (Tri-Component Attention Profiling)

Novel technique introduced


Fine-Tuning-as-a-Service (FTaaS) facilitates the customization of Multimodal Large Language Models (MLLMs) but introduces critical backdoor risks via poisoned data. Existing defenses either rely on supervised signals or fail to generalize across diverse trigger types and modalities. In this work, we uncover a universal backdoor fingerprint, attention allocation divergence: poisoned samples disrupt the balanced attention distribution across three functional components (system instructions, vision inputs, and user textual queries) regardless of trigger morphology. Motivated by this insight, we propose Tri-Component Attention Profiling (TCAP), an unsupervised defense framework that filters backdoor samples. TCAP decomposes cross-modal attention maps into the three components, identifies trigger-responsive attention heads via Gaussian Mixture Model (GMM) statistical profiling, and isolates poisoned samples through EM-based vote aggregation. Extensive experiments across diverse MLLM architectures and attack methods demonstrate that TCAP achieves consistently strong performance, establishing it as a robust and practical backdoor defense for MLLMs.
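The decomposition step can be illustrated with a minimal sketch: split one attention head's weight vector over the input sequence into the three functional components the abstract names. The function name and the fixed span boundaries are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tri_component_profile(attn_row, sys_end, vis_end):
    """Fraction of attention mass on each functional component.

    attn_row : 1-D array of one head's attention weights over all input tokens
    sys_end  : index where system-instruction tokens end (assumed layout)
    vis_end  : index where vision tokens end; user-query tokens follow
    """
    total = attn_row.sum()
    return np.array([
        attn_row[:sys_end].sum() / total,        # system instructions
        attn_row[sys_end:vis_end].sum() / total,  # vision inputs
        attn_row[vis_end:].sum() / total,         # user textual query
    ])

# Uniform attention over 12 tokens split 4/4/4 gives each component
# one third of the mass; a triggered sample would skew this profile.
clean = tri_component_profile(np.ones(12), sys_end=4, vis_end=8)
```

A poisoned sample with a visual trigger would concentrate mass inside the vision span, pulling the profile away from the balanced distribution that clean samples exhibit.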


Key Contributions

  • Discovers 'attention allocation divergence' as a universal backdoor fingerprint in MLLMs — poisoned samples disrupt balanced attention distribution across system instructions, vision inputs, and user queries regardless of trigger type or modality.
  • Proposes TCAP, an unsupervised three-stage defense that uses GMM statistical profiling on cross-modal attention heads to identify trigger-responsive heads and EM-based vote aggregation to isolate poisoned samples.
  • Demonstrates generalization across diverse MLLM architectures (LLaVA-OneVision, Qwen3-VL) and attack methods without requiring clean reference data, supervised labels, or external modules.
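The profiling and aggregation stages above can be sketched in a few lines: fit a two-component 1-D Gaussian mixture to each head's per-sample divergence scores via EM, let each head vote a sample poisoned when it falls in the high-score component, and flag samples that win a majority of head votes. The EM fit, score construction, and majority-vote threshold are illustrative assumptions standing in for the paper's EM-based vote aggregation, not the authors' code.

```python
import numpy as np

def gmm_responsibilities(x, iters=50):
    """Fit a 2-component 1-D Gaussian mixture to scores x via EM and
    return each point's responsibility under the higher-mean component."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.full(2, x.var() + 1e-6)
    w = np.full(2, 0.5)  # mixture weights
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point
        logp = (-0.5 * (np.log(2 * np.pi * var)
                        + (x[:, None] - mu) ** 2 / var) + np.log(w))
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, and weights
        n = r.sum(axis=0) + 1e-12
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        w = n / len(x)
    return r[:, np.argmax(mu)]

def flag_poisoned(head_scores, vote_frac=0.5):
    """head_scores: (num_heads, num_samples) attention-divergence scores.
    Each head votes a sample poisoned when its responsibility under the
    high-score component exceeds 0.5; a majority of votes flags it."""
    votes = np.stack([gmm_responsibilities(s) > 0.5 for s in head_scores])
    return votes.mean(axis=0) > vote_frac

# Toy demo: 20 low-divergence (clean) and 5 high-divergence (poisoned)
# samples, scored identically by 5 hypothetical trigger-responsive heads.
scores = np.concatenate([np.linspace(0.0, 0.2, 20),
                         np.linspace(0.8, 1.0, 5)])
flags = flag_poisoned(np.tile(scores, (5, 1)))
```

The bimodal assumption is what makes the method unsupervised: clean and triggered samples separate into the two mixture components without any labels or clean reference set.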

🛡️ Threat Analysis

Model Poisoning

TCAP is an unsupervised defense against backdoor/trojan injection via poisoned training data in MLLM fine-tuning — the core ML10 threat of hidden trigger-activated malicious behavior.


Details

Domains
vision, nlp, multimodal
Model Types
vlm, llm, multimodal, transformer
Threat Tags
training_time, black_box
Applications
multimodal llm fine-tuning, fine-tuning-as-a-service (ftaas), visual question answering, image captioning