Improving Deepfake Detection with Reinforcement Learning-Based Adaptive Data Augmentation
Yuxuan Zhou 1, Tao Yu 2, Wen Huang 1, Yuheng Zhang 1, Tao Dai 3, Shu-Tao Xia 1
Published on arXiv (arXiv:2511.07051)
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
CRDA outperforms state-of-the-art deepfake detection methods across multiple cross-domain benchmark datasets by dynamically adapting augmentation strategies to the detector's current learning state
CRDA (Curriculum Reinforcement-Learning Data Augmentation)
Novel technique introduced
The generalization capability of deepfake detectors is critical for real-world use. Data augmentation via synthetic fake face generation effectively enhances generalization, yet current SOTA methods rely on fixed strategies, raising a key question: is a single static augmentation sufficient, or does the diversity of forgery features demand dynamic approaches? We argue existing methods overlook the evolving complexity of real-world forgeries (e.g., facial warping, expression manipulation), which fixed policies cannot fully simulate. To address this, we propose CRDA (Curriculum Reinforcement-Learning Data Augmentation), a novel framework guiding detectors to progressively master multi-domain forgery features from simple to complex. CRDA synthesizes augmented samples via a configurable pool of forgery operations and dynamically generates adversarial samples tailored to the detector's current learning state. Central to our approach is the integration of reinforcement learning (RL) and causal inference. An RL agent dynamically selects augmentation actions based on detector performance to efficiently explore the vast augmentation space, adapting to increasingly challenging forgeries. Simultaneously, the agent introduces action-space variations to generate heterogeneous forgery patterns, guided by causal inference to mitigate spurious correlations, suppressing task-irrelevant biases and focusing on causally invariant features. This integration ensures robust generalization by decoupling synthetic augmentation patterns from the model's learned representations. Extensive experiments show our method significantly improves detector generalizability, outperforming SOTA methods across multiple cross-domain datasets.
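The abstract's core loop, an agent picking augmentation actions from a configurable pool and updating on detector performance, can be sketched as a simple multi-armed bandit. This is a minimal illustration only: the action names, the epsilon-greedy policy, and the choice of detector loss as the reward signal are assumptions, not the paper's published agent design.

```python
import random

class AugmentationAgent:
    """Epsilon-greedy bandit over a pool of forgery operations.

    Hypothetical sketch: action names and the reward definition
    (detector loss on the augmented batch, so harder-to-detect
    forgeries earn higher reward) are assumptions.
    """

    def __init__(self, actions, epsilon=0.2):
        self.actions = list(actions)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.actions}
        self.values = {a: 0.0 for a in self.actions}  # running mean reward

    def select(self):
        # Explore with probability epsilon, otherwise exploit the
        # action with the highest estimated reward.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.values[a])

    def update(self, action, reward):
        # Incremental running-mean update of the action's value.
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n

# Hypothetical pool of forgery operations:
agent = AugmentationAgent(["face_swap", "warp", "expression_edit", "blend"])
action = agent.select()
agent.update(action, reward=0.7)  # reward = detector loss on augmented batch
```

In the paper's setting the reward would come from evaluating the detector on samples synthesized with the chosen operation, so the agent gravitates toward forgery patterns the detector currently struggles with.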
Key Contributions
- CRDA framework combining curriculum learning and RL to dynamically select augmentation strategies based on detector learning state, progressively exposing the detector to increasingly complex forgery patterns
- Integration of causal inference (IRM) to mitigate spurious correlations arising from conflicting augmentation domain artifacts, improving feature causality
- Multi-dimensional curriculum scheduling across augmentation proportion, RL exploration intensity, and forgery region scale
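The third contribution, curriculum scheduling over augmentation proportion, exploration intensity, and forgery region scale, could look like the sketch below. The linear shapes and numeric ranges are illustrative assumptions; the paper's actual schedules are not reproduced here.

```python
def curriculum_schedule(progress):
    """Map training progress in [0, 1] to the three curriculum axes.

    Hypothetical linear schedules (assumed values, not the paper's):
    augmentation grows, RL exploration decays, forged regions enlarge.
    """
    progress = min(max(progress, 0.0), 1.0)  # clamp to [0, 1]
    aug_proportion = 0.1 + 0.6 * progress    # fraction of batch augmented
    explore_eps = 0.5 * (1.0 - progress)     # RL exploration intensity
    region_scale = 0.2 + 0.8 * progress      # relative forgery region size
    return aug_proportion, explore_eps, region_scale
```

Early in training the detector mostly sees small, simple forgeries with a highly exploratory agent; late in training most of the batch is augmented with large-region forgeries chosen almost greedily.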
🛡️ Threat Analysis
Deepfake detection is an AI-generated content detection problem (output integrity). CRDA proposes a novel training methodology — RL-guided adaptive data augmentation with curriculum learning and causal inference — specifically to improve generalization of deepfake detectors across unseen forgery techniques.
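The causal-inference component cited in the contributions is IRM, whose standard IRMv1 penalty is the squared gradient of each environment's risk with respect to a dummy scalar classifier fixed at w = 1. A minimal sketch, computing that gradient by finite differences over per-sample logits; treating each augmentation domain as an IRM "environment" is an assumption about how the paper applies it.

```python
import math

def bce_with_logits(logit, y):
    # Numerically stable binary cross-entropy on a raw logit.
    return math.log1p(math.exp(-abs(logit))) + max(logit, 0.0) - logit * y

def risk(logits, labels, w):
    # Mean risk of the rescaled classifier w * logit on one environment.
    return sum(bce_with_logits(w * z, y) for z, y in zip(logits, labels)) / len(logits)

def irm_penalty(logits, labels, eps=1e-4):
    """IRMv1-style penalty: squared gradient of the environment risk
    w.r.t. the dummy scalar w at w = 1, via central finite differences.
    Sketch only; the paper's exact objective is not reproduced here."""
    g = (risk(logits, labels, 1.0 + eps) - risk(logits, labels, 1.0 - eps)) / (2.0 * eps)
    return g * g
```

In training, each augmentation domain would contribute its own penalty term, and the total objective would be the average risk plus a weighted sum of penalties, pushing the detector toward features whose optimal classifier is invariant across forgery domains rather than domain-specific artifacts.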