
Learning to Generate Cross-Task Unexploitable Examples

Haoxuan Qu 1, Qiuchi Xiang 1, Yujun Cai 2, Yirui Wu 3, Majid Mirmehdi 4, Hossein Rahmani 1, Jun Liu 1

0 citations · 61 references · arXiv


Published on arXiv: 2512.13416

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

MCT-UEG generates cross-task unexploitable examples that consistently and significantly reduce model performance across diverse real-world computer vision tasks, where prior methods (SynPer, CUDA, UnSeg) fail to do so.

MCT-UEG

Novel technique introduced


Unexploitable example generation aims to transform personal images into unexploitable (unlearnable) versions before they are uploaded online, thereby preventing unauthorized exploitation of online personal images. This task has recently garnered significant research attention due to its critical relevance to personal data privacy. Despite recent progress, however, existing methods still have limited practical applicability: they can fail to generate examples that remain unexploitable across different real-world computer vision tasks. To address this problem, we propose a novel Meta Cross-Task Unexploitable Example Generation (MCT-UEG) framework. At its core, to optimize the unexploitable example generator to produce broadly unexploitable examples, we design a flat-minima-oriented meta training and testing scheme. Extensive experiments show the efficacy of our framework.


Key Contributions

  • MCT-UEG framework that optimizes an unexploitable example generator to generalize cross-task unexploitability to both seen and unseen computer vision tasks
  • Flat-minima-oriented meta training and testing scheme that guides the generator to acquire broadly transferable unexploitable knowledge robust to distribution shifts
  • Empirical demonstration that MCT-UEG significantly and consistently degrades unauthorized model performance across diverse CV tasks, outperforming prior methods (SynPer, CUDA, UnSeg)
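The general shape of a flat-minima-oriented meta training and testing scheme can be sketched in toy form. The code below is a hedged illustration only, not the paper's MCT-UEG algorithm: it splits tasks into a meta-train and a held-out meta-test task each round and applies a SAM-style (sharpness-aware) perturbation so the solution sits in a flat region that transfers across tasks. The quadratic task losses, task vectors, and all hyperparameters are illustrative assumptions standing in for the generator's real objective.

```python
import numpy as np

def quad_loss_grad(theta, c):
    """Gradient of the toy task loss L_c(theta) = ||theta - c||^2."""
    return 2.0 * (theta - c)

def flat_minima_meta_step(theta, c_train, c_test, lr=0.1, rho=0.05):
    # Inner update on the meta-train task.
    theta = theta - lr * quad_loss_grad(theta, c_train)
    # SAM-style step on the held-out meta-test task: first ascend to a
    # worst-case nearby point within a radius-rho ball...
    g = quad_loss_grad(theta, c_test)
    theta_adv = theta + rho * g / (np.linalg.norm(g) + 1e-12)
    # ...then descend using the gradient taken at that perturbed point,
    # biasing the update toward flat minima that also fit the held-out task.
    return theta - lr * quad_loss_grad(theta_adv, c_test)

# Four toy "tasks", each defined by a target vector (hypothetical data).
tasks = np.array([[5., 5., 5.], [5., 6., 5.], [6., 5., 4.], [4., 5., 6.]])
theta = np.zeros(3)
initial_loss = np.mean([np.sum((theta - c) ** 2) for c in tasks])

rng = np.random.default_rng(0)
for _ in range(200):
    # Re-split tasks into meta-train / meta-test every round.
    i, j = rng.choice(len(tasks), size=2, replace=False)
    theta = flat_minima_meta_step(theta, tasks[i], tasks[j])

mean_loss = np.mean([np.sum((theta - c) ** 2) for c in tasks])
```

The re-splitting each round is the key design choice: because the meta-test task changes constantly, the parameters cannot overfit any single task, which mimics generalizing unexploitability to unseen tasks.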

🛡️ Threat Analysis

Data Poisoning Attack

MCT-UEG adds imperceptible noise to images before they are uploaded online, so that when an unauthorized party scrapes and trains on this data, the resulting model fails to learn useful representations. The mechanism is training-time data corruption (error-minimizing/availability-attack noise) used defensively. This maps directly to ML02 (data poisoning) with the roles reversed: the defender deliberately corrupts the training data to degrade the unauthorized model's performance.
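To make the mechanism concrete, here is a minimal, hedged sketch of error-minimizing (availability-attack) noise on a toy linear classifier. This is not MCT-UEG itself, only the underlying idea it builds on: perturb each input within a small L∞ budget so the training loss is already near zero, leaving an unauthorized model little signal to learn from. The model, data, and budget below are all illustrative assumptions.

```python
import numpy as np

def logistic_loss(x, y, w):
    """Mean binary cross-entropy of a fixed linear model w on (x, y)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def error_minimizing_noise(x, y, w, eps=0.05, steps=50, lr=0.01):
    """Find a bounded perturbation delta (||delta||_inf <= eps) that
    MINIMIZES the training loss of a fixed model w on (x + delta, y):
    the opposite sign of an adversarial attack, making data "unlearnable"."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-((x + delta) @ w)))
        grad = np.outer(p - y, w)  # per-sample gradient of the loss w.r.t. x + delta
        # Descend the training loss, then project back into the L-inf ball
        # so the noise stays imperceptible.
        delta = np.clip(delta - lr * grad, -eps, eps)
    return delta

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))                     # toy "images"
y = rng.integers(0, 2, size=32).astype(float)     # binary labels
w = rng.normal(size=16)                           # fixed surrogate model

delta = error_minimizing_noise(x, y, w)
loss_clean = logistic_loss(x, y, w)
loss_poisoned = logistic_loss(x + delta, y, w)    # lower: less left to learn
```

Because the perturbed examples already yield near-minimal loss, gradient-based training on them produces weak updates, which is why availability noise degrades the scraper's model while leaving the images visually unchanged.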


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
training_time, digital
Datasets
Taskonomy
Applications
personal image protection, preventing unauthorized ML training on scraped online data