
ManipShield: A Unified Framework for Image Manipulation Detection, Localization and Explanation

Zitong Xu 1, Huiyu Duan 1, Xiaoyu Wang 2, Zhaolin Cai 1, Kaiwei Zhang 1, Qiang Hu 1, Jing Liu 3, Xiongkuo Min 1, Guangtao Zhai 1

0 citations · 83 references · arXiv


Published on arXiv: 2511.14259

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

ManipShield achieves state-of-the-art manipulation detection and localization on ManipBench and generalizes to unseen image editing models across 12 manipulation categories.

ManipShield

Novel technique introduced


With the rapid advancement of generative models, powerful image editing methods now enable diverse and highly realistic image manipulations that far surpass traditional deepfake techniques, posing new challenges for manipulation detection. Existing image manipulation detection and localization (IMDL) benchmarks suffer from limited content diversity, narrow generative-model coverage, and insufficient interpretability, which hinders the generalization and explanation capabilities of current manipulation detection methods. To address these limitations, we introduce **ManipBench**, a large-scale benchmark for image manipulation detection and localization focusing on AI-edited images. ManipBench contains over 450K manipulated images produced by 25 state-of-the-art image editing models across 12 manipulation categories, among which 100K images are further annotated with bounding boxes, judgment cues, and textual explanations to support interpretable detection. Building upon ManipBench, we propose **ManipShield**, an all-in-one model based on a Multimodal Large Language Model (MLLM) that leverages contrastive LoRA fine-tuning and task-specific decoders to achieve unified image manipulation detection, localization, and explanation. Extensive experiments on ManipBench and several public datasets demonstrate that ManipShield achieves state-of-the-art performance and exhibits strong generality to unseen manipulation models. Both ManipBench and ManipShield will be released upon publication.


Key Contributions

  • ManipBench: a large-scale benchmark of 450K+ AI-manipulated images from 25 editing models across 12 manipulation categories, with 100K images annotated with bounding boxes, judgment cues, and textual explanations
  • ManipShield: an MLLM-based all-in-one detection model using contrastive LoRA fine-tuning and task-specific decoders for unified manipulation detection, localization, and explanation
  • Demonstrated state-of-the-art performance and strong generalization to unseen image editing models
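The contributions above mention contrastive LoRA fine-tuning as the mechanism behind ManipShield's detection head. The paper's exact objective is not spelled out in this summary, so the sketch below shows one plausible form: a supervised contrastive loss (in the style of SupCon) that pulls embeddings of authentic images together and pushes them away from manipulated ones. The function name and the NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative supervised contrastive loss over L2-normalized
    embeddings, with labels marking authentic (0) vs. manipulated (1).
    This is a hypothetical sketch of the kind of contrastive objective
    the summary refers to, not ManipShield's published loss."""
    # Normalize so the dot product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                    # pairwise similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)        # exclude self-pairs
    # Row-wise log-softmax over all non-self pairs.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Positives: other samples with the same label (same authenticity class).
    positives = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = positives.sum(axis=1)
    valid = pos_counts > 0                         # anchors with >=1 positive
    per_anchor = np.where(positives, log_prob, 0.0).sum(axis=1)
    return -(per_anchor[valid] / pos_counts[valid]).mean()
```

In a LoRA setting, only the low-rank adapter weights injected into the MLLM would receive gradients from a loss like this, keeping the base model frozen while the adapted features separate authentic from manipulated content.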

🛡️ Threat Analysis

Output Integrity Attack

The paper's primary contribution is detecting AI-manipulated images (deepfake and generative-model edits), which falls squarely under output integrity and AI-generated content detection, a canonical ML09 use case.


Details

Domains
vision, multimodal
Model Types
VLM, transformer
Threat Tags
inference_time
Datasets
ManipBench
Applications
AI image manipulation detection, deepfake detection, image forensics