Defense · 2025

Defeating Cerberus: Concept-Guided Privacy-Leakage Mitigation in Multimodal Language Models

Boyang Zhang 1, Istemi Ekin Akkus 2, Ruichuan Chen 2, Alice Dethise 2, Klaus Satzke 2, Ivica Rimac 2, Yang Zhang 1

0 citations · 39 references · arXiv


Published on arXiv

2509.25525

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Achieves 93.3% average refusal rate for PII-related tasks across VLMs without retraining, while maintaining performance on unrelated tasks.

Concept-Guided PII Mitigation (Cerberus)

Novel technique introduced


Multimodal large language models (MLLMs) have demonstrated remarkable capabilities in processing and reasoning over diverse modalities, but these advanced abilities also raise significant privacy concerns, particularly regarding Personally Identifiable Information (PII) leakage. While relevant research has been conducted on single-modal language models, the vulnerabilities in the multimodal setting have yet to be fully investigated. In this work, we investigate these emerging risks with a focus on vision language models (VLMs), a representative subclass of MLLMs that covers the two modalities most relevant for PII leakage: vision and text. We introduce a concept-guided mitigation approach that identifies and modifies the model's internal states associated with PII-related content. Our method guides VLMs to refuse PII-sensitive tasks effectively and efficiently, without requiring re-training or fine-tuning. We also address the current lack of multimodal PII datasets by constructing several that simulate real-world scenarios. Experimental results demonstrate that the method achieves an average refusal rate of 93.3% across various PII-related tasks with minimal impact on unrelated model performance. We further examine the mitigation under various conditions to show the adaptability of our proposed method.
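The paper's implementation is not reproduced here, but the core idea of concept-guided state editing can be sketched as follows. All names and numbers below are hypothetical illustrations, not the authors' code: a "PII concept" direction is estimated as the difference of mean hidden-state activations between PII-sensitive and benign inputs, and at inference time any hidden state projecting strongly onto that direction is steered along it to trigger refusal behavior.

```python
import numpy as np

def concept_direction(pii_acts: np.ndarray, benign_acts: np.ndarray) -> np.ndarray:
    """Unit-norm difference-of-means direction separating PII-related
    activations from benign ones (shape: [n_samples, hidden_dim])."""
    d = pii_acts.mean(axis=0) - benign_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def detect_and_steer(hidden: np.ndarray, direction: np.ndarray,
                     threshold: float = 0.5, alpha: float = 4.0):
    """If the hidden state projects onto the PII concept above `threshold`,
    push it further along the concept direction (hypothetical refusal
    steering) and flag the input as PII-sensitive."""
    score = float(hidden @ direction)
    if score > threshold:
        return hidden + alpha * direction, True
    return hidden, False
```

In practice such a direction would be computed per layer from contrastive prompt pairs, and the steering strength `alpha` tuned so that refusals fire on PII tasks without degrading unrelated capabilities, matching the paper's training-free, inference-time framing.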


Key Contributions

  • Concept-guided weight editing approach that identifies and modifies VLM internal states associated with PII-related content to refuse PII-sensitive tasks without retraining or fine-tuning
  • Construction of multimodal PII datasets simulating real-world scenarios such as document scans and ID cards to address a gap in evaluation resources
  • Empirical demonstration of 93.3% average refusal rate across diverse PII-related tasks with minimal impact on unrelated model performance
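The headline 93.3% figure is an average refusal rate over PII-related tasks. A minimal evaluation harness for such a metric might look like the sketch below; the refusal markers are illustrative guesses, not the paper's actual judging criteria, which may use a stronger classifier:

```python
# Hypothetical substring markers of a refusal response.
REFUSAL_MARKERS = ("i cannot", "i can't", "unable to assist", "cannot help")

def is_refusal(response: str) -> bool:
    """Naive keyword check for whether a model response is a refusal."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses flagged as refusals."""
    return sum(is_refusal(r) for r in responses) / len(responses)
```

A higher refusal rate on PII-sensitive inputs, paired with an unchanged score on benign benchmarks, is what "minimal impact on unrelated model performance" quantifies.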

🛡️ Threat Analysis


Details

Domains
vision, nlp, multimodal
Model Types
vlm, llm
Threat Tags
inference_time
Datasets
custom multimodal PII datasets (document scans, ID cards)
Applications
vision language models, multimodal ai assistants, document processing