
Associative Poisoning to Generative Machine Learning

Mathias Lundteigen Mohus, Jingyue Li, Zhirong Yang


Published on arXiv: 2511.05177

Data Poisoning Attack — OWASP ML Top 10 (ML02)

Training Data Poisoning — OWASP LLM Top 10 (LLM03)

Key Finding

Associative poisoning effectively induces or suppresses fine-grained feature associations in generative model outputs while preserving marginal distributions and evading visual detection, demonstrating a stealthy threat to generative systems in both vision and NLP domains.

Associative Poisoning — novel technique introduced


Abstract

The widespread adoption of generative models such as Stable Diffusion and ChatGPT has made them increasingly attractive targets for malicious exploitation, particularly through data poisoning. Existing poisoning attacks compromising synthesised data typically either cause broad degradation of generated data or require control over the training process, limiting their applicability in real-world scenarios. In this paper, we introduce a novel data poisoning technique called associative poisoning, which compromises fine-grained features of the generated data without requiring control of the training process. This attack perturbs only the training data to manipulate statistical associations between specific feature pairs in the generated outputs. We provide a formal mathematical formulation of the attack and prove its theoretical feasibility and stealthiness. Empirical evaluations using two state-of-the-art generative models demonstrate that associative poisoning effectively induces or suppresses feature associations while preserving the marginal distributions of the targeted features and maintaining high-quality outputs, thereby evading visual detection. These results suggest that generative systems used in image synthesis, synthetic dataset generation, and natural language processing are susceptible to subtle, stealthy manipulations that compromise their statistical integrity. To address this risk, we examine the limitations of existing defensive strategies and propose a novel countermeasure strategy.
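The core idea — changing which feature values co-occur while leaving each feature's marginal distribution untouched — can be illustrated on a toy binary dataset. The sketch below is not the paper's construction; it simply re-pairs the values of one feature across samples so that the joint statistic shifts while the marginals are preserved exactly (a permutation cannot change a marginal):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy training set: two binary features drawn independently.
x = rng.random(n) < 0.5
y = rng.random(n) < 0.3

def stats(x, y):
    return x.mean(), y.mean(), np.corrcoef(x, y)[0, 1]

# Associative-poisoning sketch: permute y so that y=1 values land on
# x=1 samples. Only the *pairing* of feature values changes, so the
# marginal distribution of each feature is untouched.
order = np.argsort(~x)           # indices with x=1 first
y_out = np.empty_like(y)
y_out[order] = np.sort(y)[::-1]  # y=1 values first

mx0, my0, r0 = stats(x, y)
mx1, my1, r1 = stats(x, y_out)
assert abs(my0 - my1) < 1e-12    # marginal of y preserved exactly
print(f"corr before: {r0:+.3f}, after: {r1:+.3f}")
```

A marginal-only audit (per-feature histograms, visual inspection of samples) sees nothing unusual here; only a joint statistic such as the correlation reveals the manipulation.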


Key Contributions

  • Introduces 'associative poisoning': a training-data-only attack that induces or suppresses statistical associations between specific feature pairs in generative model outputs without controlling the training process
  • Provides formal mathematical formulation and theoretical proofs of the attack's feasibility and stealthiness (marginal distributions of targeted features are preserved, evading visual detection)
  • Empirically validates the attack on two state-of-the-art generative models and proposes a novel countermeasure after analyzing the limitations of existing defenses

🛡️ Threat Analysis

Data Poisoning Attack

The core contribution is a training-data poisoning technique that perturbs only the training data to manipulate statistical associations between feature pairs in the generated outputs, without requiring control over the training process — a clean-label-style data poisoning attack.
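Because the attack preserves marginals by design, defenses that audit features one at a time cannot detect it; a screen has to compare joint statistics against a trusted reference. The sketch below is one simple direction (a two-proportion z-test on the co-occurrence rate), not the paper's proposed countermeasure:

```python
import numpy as np

rng = np.random.default_rng(1)

def association_z(ref_x, ref_y, new_x, new_y):
    """Two-proportion z-score on the co-occurrence rate P(x=1, y=1)
    of a candidate batch vs. a trusted reference batch. Marginal
    checks alone cannot see associative poisoning; this compares a
    joint statistic instead. (Illustrative screen, not the paper's
    countermeasure.)"""
    n_ref, n_new = len(ref_x), len(new_x)
    p_ref = np.mean(ref_x & ref_y)
    p_new = np.mean(new_x & new_y)
    p = (p_ref * n_ref + p_new * n_new) / (n_ref + n_new)
    se = np.sqrt(p * (1 - p) * (1 / n_ref + 1 / n_new))
    return (p_new - p_ref) / se

n = 5_000
ref_x, ref_y = rng.random(n) < 0.5, rng.random(n) < 0.3  # trusted batch
new_x = rng.random(n) < 0.5
clean_y = rng.random(n) < 0.3
# Poisoned batch: same marginals, but y=1 values re-paired onto x=1.
poison_y = np.empty(n, dtype=bool)
poison_y[np.argsort(~new_x)] = np.sort(clean_y)[::-1]

z_clean = association_z(ref_x, ref_y, new_x, clean_y)
z_poison = association_z(ref_x, ref_y, new_x, poison_y)
print(f"z clean: {z_clean:+.1f}   z poisoned: {z_poison:+.1f}")
```

The clean batch scores near zero while the poisoned batch scores far outside any plausible sampling noise, even though both batches pass per-feature marginal checks.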


Details

Domains
generative, vision, nlp
Model Types
diffusion, llm
Threat Tags
training_time, targeted, digital
Applications
image synthesis, synthetic dataset generation, natural language processing