
Is the Trigger Essential? A Feature-Based Triggerless Backdoor Attack in Vertical Federated Learning

Yige Liu 1, Yiwei Lou 1, Che Wang 1, Yongzhi Cao 1,2, Hanpin Wang 1

0 citations · 40 references · arXiv (Cornell University)


Published on arXiv

2602.20593

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Achieves attack success rates 2 to 50 times higher than three baseline backdoor attacks across five datasets, with minimal impact on main-task accuracy; the attack remains robust against known defenses even with 32 passive parties.

Feature-Based Triggerless Backdoor Attack

Novel technique introduced


As a distributed collaborative machine learning paradigm, vertical federated learning (VFL) allows multiple passive parties with distinct features and one active party with labels to collaboratively train a model. Although it is known for its privacy-preserving capabilities, VFL still faces significant privacy and security threats from backdoor attacks. Existing backdoor attacks typically involve an attacker implanting a trigger into the model during the training phase and executing the attack by adding the trigger to samples during the inference phase. However, in this paper, we find that triggers are not essential for backdoor attacks in VFL. In light of this, we disclose a new backdoor attack pathway in VFL by introducing a feature-based triggerless backdoor attack. This attack operates under a more stringent security assumption, where the attacker is honest-but-curious rather than malicious during the training phase. It comprises three modules: label inference for the targeted backdoor attack, poison generation with amplification and perturbation mechanisms, and backdoor execution to implement the attack. Extensive experiments on five benchmark datasets demonstrate that our attack outperforms three baseline backdoor attacks by 2 to 50 times while minimally impacting the main task. Even in VFL scenarios with 32 passive parties and only one set of auxiliary data, our attack maintains high performance. Moreover, when confronted with distinct defense strategies, our attack remains largely unaffected and exhibits strong robustness. We hope that the disclosure of this triggerless backdoor attack pathway will encourage the community to revisit security threats in VFL scenarios and inspire researchers to develop more robust and practical defense strategies.
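The label-inference module works on data a passive party already sees: the embeddings it forwards during training. A minimal sketch of the idea, assuming a simple k-means clustering in place of whatever clustering the paper actually uses (function name, farthest-point initialisation, and iteration count are all illustrative):

```python
import numpy as np

def infer_pseudo_labels(embeddings, n_classes, n_iters=50):
    """Cluster the passive party's recorded forward embeddings into
    n_classes groups; cluster ids then serve as pseudo-labels.
    Hypothetical stand-in for the paper's label-inference module."""
    # farthest-point initialisation: start from the first embedding,
    # then repeatedly add the embedding farthest from chosen centroids
    centroids = [embeddings[0]]
    for _ in range(1, n_classes):
        d = np.min(
            np.linalg.norm(
                embeddings[:, None, :] - np.array(centroids)[None, :, :],
                axis=2),
            axis=1)
        centroids.append(embeddings[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iters):
        # assign each embedding to its nearest centroid
        dists = np.linalg.norm(
            embeddings[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster emptied
        for k in range(n_classes):
            if np.any(labels == k):
                centroids[k] = embeddings[labels == k].mean(axis=0)
    return labels, centroids
```

The key point is that no label leakage from the active party is required: if same-class samples produce similar embeddings, clustering alone recovers a usable class partition, which the attacker can align to real classes with a small set of auxiliary labeled samples.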


Key Contributions

  • Demonstrates that triggers are NOT essential for backdoor attacks in VFL — a passive party can achieve targeted misclassification by replacing embeddings at inference time, operating as honest-but-curious during training
  • Proposes a three-module attack pipeline: label inference via clustering on recorded embeddings, poison generation via amplification and perturbation mechanisms, and backdoor execution via embedding replacement
  • Evaluates on five benchmark datasets and 32-party VFL scenarios, achieving 2–50x higher attack success than baseline backdoor methods while maintaining main-task accuracy and evading existing defenses
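The poison-generation and backdoor-execution steps above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the mean-based poison, and the amplification and noise parameters are all assumptions.

```python
import numpy as np

def poisoned_embedding(target_class_embeddings, amplification=3.0,
                       noise_scale=0.05, seed=0):
    """Hypothetical poison generation: average the embeddings inferred
    as the target class, amplify them so they dominate the active
    party's aggregation, and add a small perturbation for stealth."""
    rng = np.random.default_rng(seed)
    base = target_class_embeddings.mean(axis=0)
    return amplification * base + rng.normal(0.0, noise_scale, base.shape)

def backdoor_forward(honest_embedding, poison, attack=False):
    """At inference, the honest-but-curious passive party either sends
    its real embedding or silently substitutes the poisoned one; no
    trigger ever touches the input sample."""
    return poison if attack else honest_embedding
```

Because the substitution happens in embedding space at inference time only, the training-phase model updates stay clean, which is why input-space trigger detection and training-time defenses have little to inspect.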

🛡️ Threat Analysis

Model Poisoning

Proposes a novel backdoor attack achieving targeted misclassification in VFL. Though triggerless, the paper explicitly frames and evaluates it as a backdoor attack, outperforming existing backdoor baselines by 2–50x. The attacker achieves hidden, targeted malicious behavior by manipulating intermediate embeddings during inference rather than through traditional input-space triggers.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
grey_box · inference_time · targeted
Datasets
five benchmark datasets (not individually named in provided excerpt)
Applications
vertical federated learning · collaborative machine learning