
FoCLIP: A Feature-Space Misalignment Framework for CLIP-Based Image Manipulation and Detection

Yulin Chen , Zeyuan Wang , Tianyuan Yu , Yingmei Wei , Liang Bai

0 citations · 31 references · Chinese Conference on Pattern ...


Published on arXiv: 2511.06947

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

FoCLIP achieves a 42.7% average CLIPscore improvement on artistic prompts and 27.3% on ImageNet subsets, while the proposed grayscale-based detection mechanism reaches 91% accuracy in identifying tampered images.

FoCLIP

Novel technique introduced


The strong image-text alignment of CLIP-based models has enabled metrics such as CLIPscore to become widely adopted for image quality assessment. However, this reliance on delicate multimodal alignment also makes such CLIP-based metrics vulnerable. In this work, we propose FoCLIP, a feature-space misalignment framework for fooling CLIP-based image quality metrics. Built on stochastic gradient descent, FoCLIP integrates three key components to construct fooling examples: a feature alignment module, the core component that reduces the image-text modality gap; a score distribution balance module; and pixel-guard regularization. Together, these components optimize the multimodal trade-off between CLIPscore performance and image quality. The resulting images can be engineered to maximize CLIPscore predictions across diverse input prompts even though, from a human perceptual perspective, they are visually unrecognizable or semantically incongruent with the corresponding adversarial prompts. Experiments on ten artistic masterpiece prompts and ImageNet subsets demonstrate that optimized images achieve significant CLIPscore improvements while preserving high visual fidelity. In addition, we found that grayscale conversion induces significant feature degradation in fooling images: their CLIPscore drops noticeably, while their pixel statistics remain consistent with the original images. Inspired by this phenomenon, we propose a color-channel-sensitivity-driven tampering detection mechanism that achieves 91% accuracy on standard benchmarks. In conclusion, this work establishes a practical pathway both for feature misalignment attacks on CLIP-based multimodal systems and for the corresponding defense.
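The core optimization described above can be sketched as gradient ascent on a CLIP-like cosine-similarity score with a pixel-guard penalty that keeps the image close to the original. This is a minimal toy illustration, not the paper's implementation: the linear encoder `W`, the fixed "text" embedding `t`, and the hyperparameters `lr` and `lam` are all stand-in assumptions replacing a real CLIP model, and the score distribution balance module is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
D_PIX, D_EMB = 48, 16  # toy sizes: 48 "pixels", 16-d embedding

# Stand-in image encoder and fixed "text" embedding (hypothetical, not CLIP).
W = rng.normal(size=(D_EMB, D_PIX)) / np.sqrt(D_PIX)
t = rng.normal(size=D_EMB)
t /= np.linalg.norm(t)

def clip_like_score(x):
    """Cosine similarity between the encoded image and the text embedding."""
    z = W @ x
    return float(t @ z / np.linalg.norm(z))

def foclip_step(x, x0, lr=0.05, lam=0.1):
    """One ascent step: maximize the score; the pixel-guard term
    penalizes drift away from the clean image x0."""
    z = W @ x
    nz = np.linalg.norm(z)
    ds_dz = t / nz - (t @ z) * z / nz**3    # gradient of cosine w.r.t. z
    grad = W.T @ ds_dz - 2 * lam * (x - x0)  # score ascent minus pixel-guard
    return x + lr * grad

x0 = rng.normal(size=D_PIX)  # "clean" image
x = x0.copy()
before = clip_like_score(x0)
for _ in range(200):
    x = foclip_step(x, x0)
after = clip_like_score(x)
print(f"score before: {before:+.3f}  after: {after:+.3f}  "
      f"pixel drift: {np.linalg.norm(x - x0):.3f}")
```

The pixel-guard weight `lam` plays the role the abstract describes: raising it keeps visual fidelity high at the cost of a smaller score gain, while lowering it lets the score climb further as the image drifts from the original.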


Key Contributions

  • FoCLIP: a tripartite optimization framework (Feature Alignment + Distribution Balance + Pixel-Guard Regularization) that crafts adversarial images achieving up to 42.7% CLIPscore improvement while preserving visual fidelity
  • Empirical discovery that grayscale conversion sharply degrades the CLIPscore of fooled images, even though those images remain statistically consistent with the originals
  • Color channel sensitivity-driven detection mechanism achieving 91% tampering detection accuracy on ImageNet validation set
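The grayscale-based detector in the contributions above can be illustrated with a toy model: if an attack hides its perturbation in color-opponent directions, averaging the channels cancels it, so the fooled image loses far more score under grayscale conversion than a clean image does. Everything here is a hedged sketch, not the paper's method: the encoder, the luminance-preserving attack, and the threshold `tau` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D_EMB = 25, 12  # 25 pixels x 3 channels, 12-d embedding

# Stand-in encoder and "text" embedding (hypothetical, not CLIP).
W = rng.normal(size=(D_EMB, 3 * N)) / np.sqrt(3 * N)
t = rng.normal(size=D_EMB)
t /= np.linalg.norm(t)

def score(img):  # img has shape (3, N)
    z = W @ img.ravel()
    return float(t @ z / np.linalg.norm(z))

def to_gray(img):
    g = img.mean(axis=0)          # luminance approximated as channel mean
    return np.stack([g, g, g])

def color_only_ascent(img, steps=300, lr=0.05):
    """Gradient ascent on the score, restricted to zero-luminance
    (color-opponent) directions so the grayscale image is unchanged."""
    x = img.copy()
    for _ in range(steps):
        z = W @ x.ravel()
        nz = np.linalg.norm(z)
        ds_dz = t / nz - (t @ z) * z / nz**3
        g = (W.T @ ds_dz).reshape(3, N)
        g -= g.mean(axis=0, keepdims=True)  # project out the luminance part
        x += lr * g
    return x

def grayscale_drop(img):
    """Detector statistic: score lost when the image is converted to gray."""
    return score(img) - score(to_gray(img))

clean = rng.normal(size=(3, N))
fooled = color_only_ascent(clean)

tau = 0.2  # hypothetical threshold; in practice it would be tuned on held-out data
print("clean drop:", round(grayscale_drop(clean), 3),
      " fooled drop:", round(grayscale_drop(fooled), 3))
```

A detector then simply flags an image as tampered when `grayscale_drop(img) > tau`. In this toy setting the fooled image's drop exceeds the clean image's drop by construction, which mirrors the color-channel-sensitivity signal the paper reports exploiting for 91% detection accuracy.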

🛡️ Threat Analysis

Input Manipulation Attack

FoCLIP uses SGD-based gradient optimization to craft adversarial images that manipulate CLIPscore outputs at inference time, causing a CLIP multimodal model to assign artificially high quality scores to semantically incongruent inputs — a textbook inference-time input manipulation attack. The companion detection mechanism likewise falls under ML01 as an adversarial-detection defense.


Details

Domains
vision, multimodal
Model Types
transformer, vlm
Threat Tags
white_box, inference_time, targeted, digital
Datasets
ImageNet
Applications
image quality assessment, multimodal evaluation metrics, image-text matching