Attack · 2025

Chameleon: Adaptive Adversarial Agents for Scaling-Based Visual Prompt Injection in Multimodal AI Systems

M. Zeeshan, Saud Satti

1 citation · 15 references · arXiv


Published on arXiv · 2512.04895

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves an 84.5% attack success rate across varying scaling factors against Gemini 2.5 Flash, compared to 32.1% for static baseline attacks, while reducing agentic pipeline accuracy by over 45%.

Chameleon

Novel technique introduced


Multimodal Artificial Intelligence (AI) systems, particularly Vision-Language Models (VLMs), have become integral to critical applications ranging from autonomous decision-making to automated document processing. As these systems scale, they rely heavily on preprocessing pipelines to handle diverse inputs efficiently. However, this dependency on standard preprocessing operations, specifically image downscaling, creates a significant yet often overlooked security vulnerability. While intended for computational optimization, scaling algorithms can be exploited to conceal malicious visual prompts that are invisible to human observers but become active semantic instructions once processed by the model. Current adversarial strategies remain largely static, failing to account for the dynamic nature of modern agentic workflows. To address this gap, we propose Chameleon, a novel, adaptive adversarial framework designed to expose and exploit scaling vulnerabilities in production VLMs. Unlike traditional static attacks, Chameleon employs an iterative, agent-based optimization mechanism that dynamically refines image perturbations based on the target model's real-time feedback. This allows the framework to craft highly robust adversarial examples that survive standard downscaling operations and hijack downstream execution. We evaluate Chameleon against the Gemini 2.5 Flash model. Our experiments demonstrate that Chameleon achieves an Attack Success Rate (ASR) of 84.5% across varying scaling factors, significantly outperforming static baseline attacks, which average only 32.1%. Furthermore, we show that these attacks effectively compromise agentic pipelines, reducing decision-making accuracy by over 45% in multi-step tasks. Finally, we discuss the implications of these vulnerabilities and propose multi-scale consistency checks as a necessary defense mechanism.
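The multi-scale consistency check proposed as a defense can be sketched as follows: downscale the input with two different algorithms and flag inputs whose results disagree strongly. The factor, tolerance, and averaging choice below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def nearest(img: np.ndarray, f: int) -> np.ndarray:
    # Nearest-neighbor: keep every f-th pixel.
    return img[::f, ::f].astype(np.float64)

def area(img: np.ndarray, f: int) -> np.ndarray:
    # Area (box) averaging over f x f blocks.
    h, w = img.shape
    blocks = img[:h - h % f, :w - w % f].astype(np.float64)
    return blocks.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def is_suspicious(img: np.ndarray, f: int = 4, tol: float = 30.0) -> bool:
    """Flag images whose two downscaled views disagree: a payload
    hidden at sampled positions skews nearest() but not area()."""
    return float(np.abs(nearest(img, f) - area(img, f)).mean()) > tol

# Smooth benign image: both downscalers roughly agree.
benign = np.add.outer(np.arange(64), np.arange(64)).astype(np.uint8)
# Same image with a bright payload planted on the sampled grid:
# nearest() sees 255 everywhere while area() barely moves.
attacked = benign.copy()
attacked[::4, ::4] = 255
```

The tolerance here is tuned for smooth images; highly textured inputs would need a statistic robust to natural nearest-vs-area disagreement, and a production check would compare against the model pipeline's actual resampler.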


Key Contributions

  • Chameleon: first adaptive, feedback-driven adversarial framework exploiting image downscaling vulnerabilities in VLM preprocessing pipelines
  • Iterative agent-based optimization loop that dynamically refines image perturbations based on real-time target model API responses
  • Demonstration that scaling-based visual prompt injection can compromise multi-step agentic pipelines, reducing decision-making accuracy by over 45%
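The feedback-driven refinement in the second contribution might look roughly like the hill-climbing sketch below. `query_model` is a hypothetical stand-in for the target API, and the paper's actual agent and scoring logic are not reproduced here; this only illustrates the accept-if-feedback-improves loop:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the black-box target: returns a "confidence"
# that the hidden instruction was followed, plus a verdict.
def query_model(img):
    small = img[::4, ::4].astype(np.float64)  # preprocessing downscale
    confidence = small.mean() / 255.0
    return confidence, confidence > 0.6

def adaptive_attack(image, max_iters=500, eps=8):
    """Hill-climbing on model feedback: propose a random
    perturbation, keep it only if the model's response improves."""
    candidate = image.astype(np.int16)
    best, _ = query_model(candidate)
    for _ in range(max_iters):
        noise = rng.integers(-eps, eps + 1, size=candidate.shape)
        trial = np.clip(candidate + noise, 0, 255)
        conf, hijacked = query_model(trial)
        if conf > best:  # real-time feedback guides acceptance
            candidate, best = trial, conf
            if hijacked:
                return candidate.astype(np.uint8), True
    return candidate.astype(np.uint8), False

image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
adv, success = adaptive_attack(image)
```

The key design point mirrored from the abstract is that each perturbation is evaluated against the target model's live response rather than a fixed surrogate, which is what lets the attack adapt across scaling factors.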

🛡️ Threat Analysis

Input Manipulation Attack

Crafts adversarial image perturbations that exploit downscaling operations to embed hidden semantic payloads: adversarially manipulated visual inputs designed to cause incorrect or hijacked outputs at inference time in VLMs.
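As a toy illustration of the concealment mechanism (not the paper's implementation): nearest-neighbor downscaling at an integer factor keeps only every `factor`-th pixel, so a payload planted at exactly those positions is recovered verbatim after preprocessing while modifying at most 1/factor² of the full-resolution pixels:

```python
import numpy as np

def embed_payload(cover: np.ndarray, payload: np.ndarray, factor: int) -> np.ndarray:
    """Plant payload values at the pixel positions a nearest-neighbor
    downscaler samples; all other cover pixels stay untouched."""
    out = cover.copy()
    h, w = payload.shape
    out[::factor, ::factor][:h, :w] = payload  # view assignment writes through
    return out

def downscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    # Nearest-neighbor at an integer factor keeps every factor-th pixel.
    return img[::factor, ::factor]

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
payload = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

stego = embed_payload(cover, payload, factor=4)
recovered = downscale_nearest(stego, 4)
# The model-side view equals the payload exactly, yet only ~6% of
# full-resolution pixels were touched.
```

A real attack would additionally blend the payload into the cover so the modified pixels are imperceptible at full resolution, and would target the specific resampler the production pipeline uses.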


Details

Domains
vision · multimodal
Model Types
vlm
Threat Tags
black_box · inference_time · targeted · digital
Applications
vision-language models · multimodal agentic systems · autonomous decision-making pipelines · automated document processing