Detecting Semantic Backdoors in a Mystery Shopping Scenario
Árpád Berta 1,2, Gábor Danner 1, István Hegedűs 1, Márk Jelasity 1,2
Published on arXiv: 2601.03805
Model Poisoning (OWASP ML Top 10: ML10)
AI Supply Chain Attacks (OWASP ML Top 10: ML06)
Key Finding
The proposed method, which combines adversarial training with model inversion–based distances, can often completely separate clean and poisoned models, outperforming state-of-the-art backdoor detectors even under adaptive attacks.
Mystery Shopping Backdoor Detection
Novel technique introduced
Detecting semantic backdoors in classification models, where some classes can be activated by certain natural but out-of-distribution inputs, is an important problem that has received relatively little attention. Semantic backdoors are significantly harder to detect than backdoors that are based on trigger patterns due to the lack of such clearly identifiable patterns. We tackle this problem under the assumption that the clean training dataset and the training recipe of the model are both known. These assumptions are motivated by a consumer protection scenario, in which the responsible authority performs mystery shopping to test a machine learning service provider. In this scenario, the authority uses the provider's resources and tools to train a model on a given dataset and tests whether the provider included a backdoor. In our proposed approach, the authority creates a reference model pool by training a small number of clean and poisoned models using trusted infrastructure, and calibrates a model distance threshold to identify clean models. We propose and experimentally analyze a number of approaches to compute model distances, and we also test a scenario where the provider performs an adaptive attack to avoid detection. The most reliable method is based on requesting adversarial training from the provider. The model distance is best measured using a set of input samples generated by inverting the models in such a way as to maximize the distance from clean samples. With these settings, our method can often completely separate clean and poisoned models, and it proves to be superior to state-of-the-art backdoor detectors as well.
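To make the calibration step concrete, here is a minimal Python sketch of how an authority might turn reference-pool distances into a detection rule. The `model_distance` argument, the midpoint threshold rule, and the mean-over-pool decision are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def calibrate_threshold(clean_ref_dists, poisoned_ref_dists):
    """Pick a decision threshold between the two reference pools.

    clean_ref_dists: pairwise distances among trusted clean models.
    poisoned_ref_dists: distances from poisoned reference models to
    the clean pool. The midpoint rule is one simple choice; the paper
    may calibrate differently.
    """
    return (max(clean_ref_dists) + min(poisoned_ref_dists)) / 2.0

def audit_provider_model(provider_model, clean_refs, model_distance, threshold):
    """Flag the provider's model as poisoned if its mean distance to
    the trusted clean reference pool exceeds the calibrated threshold."""
    d = np.mean([model_distance(provider_model, ref) for ref in clean_refs])
    return "poisoned" if d > threshold else "clean"
```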
Key Contributions
- Mystery shopping framework for backdoor detection: the authority trains reference clean and poisoned models to calibrate a model distance threshold for identifying backdoored provider models.
- Model inversion–based distance metric that generates probe samples maximizing distance from clean samples, providing the most reliable separation of clean and poisoned models (see the sketch after this list).
- Empirical analysis showing the method surpasses state-of-the-art backdoor detectors and remains robust against an adaptive provider that attempts to evade detection.
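A plausible PyTorch sketch of the inversion-based distance is given below. The inversion objective (target-class confidence plus a term repelling probes from the clean batch) and the L1 disagreement on softmax outputs are assumptions about one reasonable instantiation, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def invert_probe_inputs(model, clean_batch, target_class, steps=200, lr=0.1):
    """Synthesize probe inputs by gradient descent: make the model
    confident in target_class while pushing the probes away from the
    clean samples (one plausible reading of the inversion objective)."""
    model.eval()
    x = torch.randn_like(clean_batch, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    targets = torch.full((x.shape[0],), target_class, device=x.device)
    for _ in range(steps):
        opt.zero_grad()
        cls_loss = F.cross_entropy(model(x), targets)
        # Negative mean pairwise distance: minimizing it maximizes the
        # distance between the probes and the clean batch.
        repel = -torch.cdist(x.flatten(1), clean_batch.flatten(1)).mean()
        (cls_loss + repel).backward()
        opt.step()
    return x.detach()

def model_distance(model_a, model_b, probes):
    """Distance as mean L1 output disagreement on the inverted probes."""
    with torch.no_grad():
        pa = F.softmax(model_a(probes), dim=1)
        pb = F.softmax(model_b(probes), dim=1)
    return (pa - pb).abs().sum(dim=1).mean().item()
```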
🛡️ Threat Analysis
The threat model is explicitly an outsourced ML training supply chain scenario: a customer submits data and a training recipe to a potentially malicious service provider who may insert a backdoor into the returned model. This is a canonical ML supply chain attack.
The paper directly addresses detection of semantic backdoors (a subset of neural trojans) in classification models: it proposes a defense that calibrates model distance thresholds using clean and poisoned reference models, and evaluates it against adaptive backdoor attacks.
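Finally, the abstract reports that the most reliable variant asks the provider to perform adversarial training. For context, a minimal Madry-style PGD adversarial training step is sketched below; the epsilon, step size, and iteration count are generic defaults rather than the paper's settings, and inputs are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find a worst-case perturbation within an
    L-infinity ball of radius eps around x (standard PGD)."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: one optimizer step on adversarial examples."""
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```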