
Inevitable Encounters: Backdoor Attacks Involving Lossy Compression

Qian Li 1,2, Yunuo Chen 1, Yuntian Chen 2

0 citations


Published on arXiv

2603.13864

Model Poisoning

OWASP ML Top 10 — ML10

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Restores effectiveness of invisible backdoor triggers that previously failed under lossy compression by leveraging ROI coding mechanisms to preserve trigger information in JPEG bitstreams

CAA (Compression-Adapted Attack)

Novel technique introduced


Real-world backdoor attacks often require poisoned datasets to be stored and transmitted before being used to compromise deep learning systems. However, in the era of big data, the inevitable use of lossy compression poses a fundamental challenge to invisible backdoor attacks. We find that triggers embedded in RGB images often become ineffective after the images are lossily compressed into binary bitstreams (e.g., JPEG files) for storage and transmission. As a result, the poisoned data loses its malicious effect after compression, causing backdoor injection to fail. In this paper, we highlight the necessity of explicitly accounting for the lossy compression process in backdoor attacks. This requires attackers to ensure that the transmitted binary bitstreams preserve malicious trigger information, so that effective triggers can be recovered in the decompressed data. Building on the region-of-interest (ROI) coding mechanism in image compression, we propose two poisoning strategies tailored to inevitable lossy compression. First, we introduce Universal Attack Activation, a universal method that uses sample-specific ROI masks to reactivate trigger information in binary bitstreams for learned image compression (LIC). Second, we present Compression-Adapted Attack, a new attack strategy that employs customized ROI masks to encode trigger information into binary bitstreams and is applicable to both traditional codecs and LIC. Extensive experiments demonstrate the effectiveness of both strategies.
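The failure mode the abstract describes can be reproduced with a small self-contained sketch (the image, trigger pattern, and quality settings below are illustrative choices, not the paper's): an invisible high-frequency checkerboard trigger largely survives a high-quality JPEG round trip, but is quantized away at lower quality.

```python
import io

import numpy as np
from PIL import Image

# Hypothetical clean sample: a smooth 64x64 gradient (stand-in for a real image).
ramp = np.tile(np.linspace(50, 200, 64, dtype=np.uint8), (64, 1))
clean = np.stack([ramp] * 3, axis=-1)

# "Invisible" trigger: a +/-3 high-frequency checkerboard, added to all channels.
yy, xx = np.mgrid[0:64, 0:64]
trigger = (((-1) ** (yy + xx)) * 3)[..., None].repeat(3, axis=-1)
poisoned = np.clip(clean.astype(int) + trigger, 0, 255).astype(np.uint8)

def jpeg_roundtrip(arr, quality):
    """Encode to an in-memory JPEG bitstream and decode it again."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(io.BytesIO(buf.getvalue())))

def trigger_survival(quality):
    """Correlation between the intended trigger and what survives compression."""
    diff = (jpeg_roundtrip(poisoned, quality).astype(float)
            - jpeg_roundtrip(clean, quality).astype(float)).ravel()
    t = trigger.astype(float).ravel()
    return float(t @ diff / (np.linalg.norm(t) * np.linalg.norm(diff) + 1e-9))

print(f"trigger correlation after JPEG q=95: {trigger_survival(95):.2f}")
print(f"trigger correlation after JPEG q=40: {trigger_survival(40):.2f}")
```

At low quality the checkerboard's dominant DCT coefficient falls below the quantization step and is rounded to zero, which is exactly why a compression-unaware invisible trigger stops poisoning the decompressed data.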


Key Contributions

  • Reveals that invisible triggers and clean-label backdoor attacks fail after lossy compression (JPEG) due to high-frequency component destruction
  • Universal Attack Activation method using sample-specific ROI masks to preserve trigger information in learned image compression bitstreams
  • Compression-Adapted Attack (CAA) using customized ROI masks to embed triggers into compressed bitstreams for both traditional codecs and LIC
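Baseline JPEG has no native ROI coding, so the bit-allocation idea behind the ROI-mask strategies can only be approximated here: the sketch below (function names and quality settings are hypothetical) composites a low-quality encode of the whole image with a high-quality encode of the masked region, so trigger-bearing pixels retain more fidelity than the background.

```python
import io

import numpy as np
from PIL import Image

def jpeg(arr, q):
    """JPEG-compress and decode an RGB array at quality q."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=q)
    return np.asarray(Image.open(io.BytesIO(buf.getvalue())))

def roi_compress(img, roi_mask, q_roi=95, q_bg=40):
    """Crude stand-in for ROI coding: encode the whole image at low quality,
    then overwrite masked pixels with a high-quality encode, mimicking a codec
    that spends more bits on the region of interest."""
    low, high = jpeg(img, q_bg), jpeg(img, q_roi)
    out = low.copy()
    out[roi_mask] = high[roi_mask]
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[8:24, 8:24] = True  # hypothetical trigger-bearing region

out = roi_compress(img, mask)
err_roi = np.abs(out[mask].astype(int) - img[mask].astype(int)).mean()
err_bg = np.abs(out[~mask].astype(int) - img[~mask].astype(int)).mean()
print(f"mean abs error inside ROI: {err_roi:.1f}, outside: {err_bg:.1f}")
```

The lower reconstruction error inside the mask is the property both proposed strategies rely on: in codecs with real ROI support (including LIC), a well-chosen mask lets trigger information survive in the bitstream while the rest of the image is compressed aggressively.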

🛡️ Threat Analysis

Data Poisoning Attack

The attack vector is data poisoning: adversaries inject malicious triggers into training datasets that are stored, compressed, transmitted, and then used to train compromised models. The paper explicitly frames this as 'data poisoning-based backdoor attacks'.
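The generic poisoning step can be sketched as follows (the additive corner-patch trigger, poisoning rate, and function names are illustrative, not the paper's trigger design): a random subset of training samples receives the trigger and is relabeled to the attacker's target class.

```python
import numpy as np

def poison_dataset(images, labels, trigger, target_class, rate=0.1, seed=0):
    """Illustrative data-poisoning sketch: add an additive trigger to a random
    subset of samples and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx] = np.clip(images[idx].astype(int) + trigger, 0, 255).astype(np.uint8)
    labels[idx] = target_class
    return images, labels, idx

# Toy dataset: 100 random 32x32 RGB images with 10 classes.
rng = np.random.default_rng(42)
X = rng.integers(0, 256, (100, 32, 32, 3), dtype=np.uint8)
y = rng.integers(0, 10, 100)
trig = np.zeros((32, 32, 3), dtype=int)
trig[-4:, -4:] = 40  # small corner-patch trigger (illustrative only)

Xp, yp, idx = poison_dataset(X, y, trig, target_class=0)
print(f"poisoned {len(idx)} of {len(X)} samples")
```

In the threat model studied here, `Xp` would then be lossily compressed for storage and transmission before training, which is the step that destroys naive invisible triggers.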

Model Poisoning

Core contribution is backdoor/trojan injection methods that embed hidden triggers in images. Two attack strategies proposed: Universal Attack Activation (reactivating existing invisible triggers after compression) and Compression-Adapted Attack (embedding triggers directly into compressed bitstreams). Both create models with trigger-based malicious behavior.


Details

Domains
vision
Model Types
cnn
Threat Tags
training_time, digital
Applications
image classification