Published on arXiv

2601.02228

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Achieves robust accuracy exceeding 87% against PGD and 89% against CW attacks on video recognition, outperforming DiffPure, FlowPure, Defense Patterns, and Temporal Shuffling baselines.

FMVP (Flow Matching for Adversarial Video Purification)

Novel technique introduced


Video recognition models remain vulnerable to adversarial attacks, while existing diffusion-based purification methods suffer from inefficient sampling and curved trajectories. Directly regressing clean videos from adversarial inputs often fails to recover faithful content because the perturbations are subtle, which necessitates physically shattering the adversarial structure. We therefore propose Flow Matching for Adversarial Video Purification (FMVP). FMVP physically shatters global adversarial structures via a masking strategy and reconstructs clean video dynamics using Conditional Flow Matching (CFM) with an inpainting objective. To further decouple semantic content from adversarial noise, we design a Frequency-Gated Loss (FGL) that explicitly suppresses high-frequency adversarial residuals while preserving low-frequency fidelity. We also design Attack-Aware and Generalist training paradigms to handle known and unknown threats, respectively. Extensive experiments on UCF-101 and HMDB-51 demonstrate that FMVP outperforms state-of-the-art methods (DiffPure, Defense Patterns (DP), Temporal Shuffling (TS), and FlowPure), achieving robust accuracy exceeding 87% against PGD and 89% against CW attacks. Furthermore, FMVP remains robust against adaptive attacks (DiffHammer) and functions as a zero-shot adversarial detector, attaining AUC-ROC scores of 0.98 for PGD and 0.79 for highly imperceptible CW attacks.
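The mask-then-reconstruct idea can be illustrated with a minimal sketch of how one CFM training pair might be built. This is an illustrative assumption, not the paper's implementation: the function name `cfm_inpainting_pair`, per-pixel masking, and the straight-line (linear-interpolation) probability path are all placeholders for whatever the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_inpainting_pair(clean, mask_ratio=0.5):
    """Hypothetical sketch: build one Conditional Flow Matching training
    example for masked-video inpainting.
    clean: (T, H, W) video. Returns (x_t, t, mask, v_target)."""
    # "Shatter" global structure: randomly mask a fraction of pixels.
    mask = (rng.random(clean.shape) < mask_ratio).astype(clean.dtype)
    noise = rng.standard_normal(clean.shape)
    # Source sample: noise inside masked regions, clean content elsewhere.
    x0 = mask * noise + (1 - mask) * clean
    t = rng.random()
    # Linear (straight-line) probability path from x0 to the clean video.
    x_t = (1 - t) * x0 + t * clean
    # CFM regression target: the constant velocity along that path,
    # which the purification network learns to predict from (x_t, t).
    v_target = clean - x0
    return x_t, t, mask, v_target
```

At inference, integrating the learned velocity field from the masked adversarial input along this straight path would recover a clean video; the straight trajectory is what lets flow matching avoid the curved, slow sampling paths of diffusion-based purifiers.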


Key Contributions

  • FMVP framework that physically shatters adversarial structures via masking and reconstructs clean video using Conditional Flow Matching with an inpainting objective
  • Frequency-Gated Loss (FGL) based on FFT spectral analysis that suppresses high-frequency adversarial residuals while preserving low-frequency semantic fidelity
  • Attack-Aware and Generalist training paradigms enabling defense against both known and unknown threats, plus zero-shot adversarial detection (AUC-ROC 0.98 for PGD)
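The Frequency-Gated Loss in the second bullet can be sketched as a spectrally weighted residual penalty. This is a minimal illustration under assumptions: the function name, the radial-cutoff gate, and the specific weights are hypothetical; the paper's FGL may gate the spectrum differently.

```python
import numpy as np

def frequency_gated_loss(pred, target, cutoff=0.25, hi_weight=2.0):
    """Hypothetical sketch of a Frequency-Gated Loss (FGL): penalize
    high-frequency residuals (where adversarial noise concentrates)
    more heavily than low-frequency ones (semantic content).
    pred, target: (T, H, W) videos; cutoff is a normalized frequency."""
    # Per-frame residual in the 2D spatial frequency domain.
    res = np.fft.fft2(pred - target, axes=(-2, -1))
    _, H, W = pred.shape
    # Radial frequency grid, normalized by fftfreq to [-0.5, 0.5).
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    # Gate: weight 1 below the cutoff (preserve low-frequency fidelity),
    # hi_weight above it (suppress high-frequency adversarial residuals).
    gate = np.where(radius < cutoff, 1.0, hi_weight)
    return float(np.mean(gate * np.abs(res) ** 2))
```

A sharp, single-pixel residual spreads energy across all frequencies and is penalized by the high-frequency gate, while a smooth brightness shift stays mostly below the cutoff, matching the stated goal of decoupling semantics from adversarial noise.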

🛡️ Threat Analysis

Input Manipulation Attack

Proposes an adversarial input purification defense against gradient-based (PGD) and optimization-based (CW) adversarial attacks on video recognition models at inference time — directly addressing the Input Manipulation Attack threat.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, black_box, inference_time, untargeted, digital
Datasets
UCF-101, HMDB-51
Applications
video recognition, video classification