FORCE: Transferable Visual Jailbreaking Attacks via Feature Over-Reliance CorrEction
Runqi Lin 1, Alasdair Paren 2, Suqin Yuan 1, Muyang Li 1, Philip Torr 2, Adel Bibi 2, Tongliang Liu 1
Published on arXiv: 2509.21029
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
FORCE improves the cross-model transferability of visual jailbreaking attacks against closed-source MLLMs by correcting over-reliance on narrow layer representations and semantically poor frequency features, steering attacks toward flatter regions of the loss landscape.
FORCE (Feature Over-Reliance CorrEction)
Novel technique introduced
The integration of new modalities enhances the capabilities of multimodal large language models (MLLMs) but also introduces additional vulnerabilities. In particular, simple visual jailbreaking attacks can manipulate open-source MLLMs more readily than sophisticated textual attacks. However, these underdeveloped attacks exhibit extremely limited cross-model transferability, failing to reliably identify vulnerabilities in closed-source MLLMs. In this work, we analyse the loss landscape of these jailbreaking attacks and find that the generated attacks tend to reside in high-sharpness regions, whose effectiveness is highly sensitive to even minor parameter changes during transfer. To further explain the high-sharpness localisations, we analyse their feature representations in both the intermediate layers and the spectral domain, revealing an improper reliance on narrow layer representations and semantically poor frequency components. Building on this, we propose a Feature Over-Reliance CorrEction (FORCE) method, which guides the attack to explore broader feasible regions across layer features and rescales the influence of frequency features according to their semantic content. By eliminating non-generalizable reliance on both layer and spectral features, our method discovers flattened feasible regions for visual jailbreaking attacks, thereby improving cross-model transferability. Extensive experiments demonstrate that our approach effectively facilitates visual red-teaming evaluations against closed-source MLLMs.
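The spectral-rescaling idea in the abstract can be sketched with a small numpy example: down-weight the high-frequency components of a perturbation so that lower-frequency (typically more semantically meaningful) content dominates. This is a minimal illustration under assumed choices; the radial-decay weighting and the `alpha` exponent are stand-ins, not the paper's semantic-content-derived weights.

```python
import numpy as np

def rescale_frequency_features(delta, alpha=2.0):
    """Down-weight high-frequency components of a 2-D perturbation.

    delta : (H, W) real-valued perturbation.
    alpha : decay exponent for the radial-frequency weighting
            (a hypothetical choice for illustration; FORCE derives
            its weights from semantic content instead).
    """
    H, W = delta.shape
    spectrum = np.fft.fft2(delta)
    # Radial frequency of each FFT bin (0 at DC, growing outward).
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    # Weight decays with radial frequency; DC is left untouched.
    weight = 1.0 / (1.0 + radius) ** alpha
    return np.fft.ifft2(spectrum * weight).real
```

Because the weight is monotone-decreasing in radial frequency, the rescaled perturbation concentrates its energy in low-frequency bins while preserving the image shape.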
Key Contributions
- Identifies that existing visual jailbreaking attacks reside in high-sharpness loss landscape regions, explaining their poor cross-model transferability
- Proposes FORCE, which corrects feature over-reliance by broadening layer feature exploration and rescaling frequency features by semantic content to find flatter feasible regions
- Demonstrates improved transferability of visual jailbreaks to closed-source MLLMs through red-teaming experiments
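The sharpness observation in the first contribution can be illustrated with a toy finite-difference probe: compare how fast the loss rises around two minima of a 1-D landscape. The `sharpness` function and the two quadratic wells below are hypothetical constructions for intuition, not the paper's actual measurement.

```python
import numpy as np

def sharpness(loss, x, radius=0.1, n=64):
    """Average loss increase over small perturbations of size `radius` —
    a simple finite-difference proxy for local landscape sharpness."""
    deltas = np.linspace(-radius, radius, n)
    return np.mean([loss(x + d) - loss(x) for d in deltas])

# Toy 1-D landscape: a sharp minimum at -1 and a flat one at +1.
sharp_well = lambda x: 50.0 * (x + 1.0) ** 2
flat_well = lambda x: 0.5 * (x - 1.0) ** 2
loss = lambda x: min(sharp_well(x), flat_well(x))
```

Both minima reach zero loss, but the probe reports far higher sharpness at -1: an attack that lands there loses effectiveness under the small parameter shifts incurred when transferring to a different model, which is the failure mode FORCE aims to avoid by seeking the flatter region.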
🛡️ Threat Analysis
FORCE generates gradient-optimized adversarial visual perturbations on images that manipulate MLLM outputs — a direct input manipulation attack at inference time. The method analyzes loss landscape sharpness and spectral/layer features to craft more transferable adversarial inputs.
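The gradient-optimized input manipulation described above can be sketched with an L-infinity PGD loop against a toy differentiable scorer. This is a minimal stand-in: the linear-sigmoid "model", `eps`, `step`, and `iters` are all assumed for demonstration, whereas a real visual jailbreak backpropagates through an MLLM's vision encoder.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, w, target=1.0, eps=0.05, step=0.01, iters=100):
    """Projected gradient descent in an L-infinity ball around x,
    pushing a toy scorer sigmoid(w.x) toward `target`."""
    x_adv = x.copy()
    for _ in range(iters):
        p = sigmoid(w @ x_adv)
        # Gradient of the BCE loss toward `target` w.r.t. the input.
        grad = (p - target) * w
        x_adv = x_adv - step * np.sign(grad)      # signed gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv
```

Even this toy loop captures the threat pattern: a perturbation that stays within a small, visually negligible budget while steering the model's output, entirely at inference time.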