Skeletonization-Based Adversarial Perturbations on Large Vision Language Model's Mathematical Text Recognition
Masatomo Yoshida 1, Haruto Namura, Nicola Adami 2, Masahiro Okuda 1
Published on arXiv
2601.04752
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Skeletonization-guided adversarial perturbations effectively degrade VLM mathematical text recognition and transfer successfully to ChatGPT in a black-box setting.
Skeletonization-Based Adversarial Attack
Novel technique introduced
This work explores the visual capabilities and limitations of foundation models by introducing a novel adversarial attack method that uses skeletonization to reduce the search space effectively. The approach targets images containing text, particularly mathematical formula images, which are especially challenging because of their intricate structure and the need to convert them to LaTeX. We conduct a detailed evaluation of both character-level and semantic changes between original and adversarially perturbed outputs, providing insight into the models' visual interpretation and reasoning abilities. The method's effectiveness is further demonstrated by applying it to ChatGPT, showing its practical implications in real-world scenarios.
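To illustrate how skeletonization shrinks the search space, here is a minimal sketch using classical Zhang-Suen thinning on a binary text image (an assumption for illustration; the paper does not necessarily use this exact thinning algorithm). Candidate pixels for perturbation drop from every ink pixel to the thin skeleton.

```python
# Sketch: skeletonization-based search-space reduction for pixel attacks.
# Binary image as a list of lists, 1 = ink. Zhang-Suen thinning is used
# here as a stand-in for the paper's skeletonization step.

def neighbours(y, x, img):
    """Return the 8 neighbours P2..P9, clockwise starting from north."""
    return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
            img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

def zhang_suen(img):
    """Thin a binary image to its one-pixel-wide skeleton."""
    img = [row[:] for row in img]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for y in range(1, len(img) - 1):
                for x in range(1, len(img[0]) - 1):
                    if img[y][x] != 1:
                        continue
                    P = neighbours(y, x, img)
                    B = sum(P)                       # foreground neighbours
                    # A = 0->1 transitions around the circular neighbourhood
                    A = sum(P[i] == 0 and P[(i+1) % 8] == 1 for i in range(8))
                    if step == 0:   # P2*P4*P6 == 0 and P4*P6*P8 == 0
                        cond = P[0]*P[2]*P[4] == 0 and P[2]*P[4]*P[6] == 0
                    else:           # P2*P4*P8 == 0 and P2*P6*P8 == 0
                        cond = P[0]*P[2]*P[6] == 0 and P[0]*P[4]*P[6] == 0
                    if 2 <= B <= 6 and A == 1 and cond:
                        to_clear.append((y, x))
            for y, x in to_clear:   # delete simultaneously per sub-step
                img[y][x] = 0
                changed = True
    return img

# Demo: a 3-pixel-thick horizontal stroke thins to a 1-pixel line,
# so the attack searches far fewer candidate pixels.
H, W = 7, 12
img = [[0]*W for _ in range(H)]
for y in range(2, 5):
    for x in range(1, 11):
        img[y][x] = 1
skel = zhang_suen(img)
orig_px = sum(map(sum, img))    # 30 candidates before reduction
skel_px = sum(map(sum, skel))   # far fewer after thinning
```

On this toy stroke the candidate set shrinks by roughly a factor of three; on real formula images with thick glyph strokes the reduction is what makes pixel-level search tractable.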
Key Contributions
- Novel skeletonization-based search space reduction for adversarial pixel attacks targeting mathematical formula image regions
- Evaluation of character-level and semantic-level output degradation in VLMs under adversarial perturbations on LaTeX formula images
- Black-box transferability demonstration of the attack against ChatGPT's vision capabilities
🛡️ Threat Analysis
Proposes a black-box adversarial perturbation attack that manipulates pixel values in mathematical formula images to cause misclassification or mis-transcription at inference time. Skeletonization narrows the search space of candidate pixels to perturb, making this a direct instance of an input manipulation attack.
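The query-based loop implied by this threat analysis can be sketched as a greedy random search over skeleton-derived candidate pixels. Everything below is a hedged toy: `model_transcribe` is a stand-in oracle (a real attack would query a VLM such as ChatGPT with the rendered image), and `edit_distance_proxy` replaces the paper's character/semantic metrics.

```python
# Sketch: black-box pixel-perturbation attack restricted to candidate
# pixels (e.g. those selected by skeletonization). Toy oracle only.
import random

def model_transcribe(img):
    # Stand-in oracle: "reads" the image as per-column ink counts.
    # NOT the paper's setup; a real attack queries a VLM's transcription.
    return tuple(sum(row[x] for row in img) for x in range(len(img[0])))

def edit_distance_proxy(a, b):
    # Crude stand-in for a character-level distance between transcriptions.
    return sum(x != y for x, y in zip(a, b))

def black_box_attack(img, candidates, budget=20, seed=0):
    """Greedy random search: flip one candidate pixel per query and keep
    the flip only if the oracle's output moves further from the original."""
    rng = random.Random(seed)
    target = model_transcribe(img)
    adv = [row[:] for row in img]
    best = 0
    for _ in range(budget):
        y, x = rng.choice(candidates)
        adv[y][x] ^= 1                                  # flip one pixel
        score = edit_distance_proxy(model_transcribe(adv), target)
        if score > best:
            best = score                                # keep the flip
        else:
            adv[y][x] ^= 1                              # revert
    return adv, best

# Demo: a thin stroke; candidates are its ink pixels (as a skeleton would give).
img = [[0]*8 for _ in range(5)]
for x in range(1, 7):
    img[2][x] = 1
candidates = [(y, x) for y in range(5) for x in range(8) if img[y][x]]
adv, best = black_box_attack(img, candidates)
```

Restricting `candidates` to the skeleton keeps the query budget small, which is what makes the black-box transfer setting (no gradients, only model outputs) practical.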