Diversifying Counterattacks: Orthogonal Exploration for Robust CLIP Inference
Chengze Jiang, Minjing Dong, Xinli Shi, Jie Gui
Published on arXiv
2511.09064
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
DOC generates more diverse counterattacks than TTC (lower mean cosine similarity) and improves adversarial robustness under various attacks across 16 datasets while preserving clean accuracy
DOC (Directional Orthogonal Counterattack)
Novel technique introduced
Vision-language pre-training models (VLPs) demonstrate strong multimodal understanding and zero-shot generalization, yet remain vulnerable to adversarial examples, raising concerns about their reliability. Recent work, Test-Time Counterattack (TTC), improves robustness by generating perturbations that maximize the embedding deviation of adversarial inputs using PGD, pushing them away from their adversarial representations. However, due to the fundamental difference in optimization objectives between adversarial attacks and counterattacks, generating counterattacks solely based on gradients with respect to the adversarial input confines the search to a narrow space. As a result, the counterattacks could overfit limited adversarial patterns and lack the diversity to fully neutralize a broad range of perturbations. In this work, we argue that enhancing the diversity and coverage of counterattacks is crucial to improving adversarial robustness in test-time defense. Accordingly, we propose Directional Orthogonal Counterattack (DOC), which augments counterattack optimization by incorporating orthogonal gradient directions and momentum-based updates. This design expands the exploration of the counterattack space and increases the diversity of perturbations, which facilitates the discovery of more generalizable counterattacks and ultimately improves the ability to neutralize adversarial perturbations. Meanwhile, we present a directional sensitivity score based on averaged cosine similarity to boost DOC by improving example discrimination and adaptively modulating the counterattack strength. Extensive experiments on 16 datasets demonstrate that DOC improves adversarial robustness under various attacks while maintaining competitive clean accuracy. Code is available at https://github.com/bookman233/DOC.
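The core idea of DOC, as the abstract describes it, is to widen the counterattack search by mixing the embedding-deviation gradient with randomized directions orthogonal to it and accumulating momentum across steps. The sketch below illustrates that update rule in NumPy under stated assumptions: `orthogonal_component`, `doc_step`, and all hyperparameter names (`alpha`, `beta`, `gamma`) are hypothetical stand-ins, not the authors' implementation, and `grad` is assumed to be the precomputed gradient of the embedding-deviation objective.

```python
import numpy as np

def orthogonal_component(g, r):
    """Project random direction r onto the subspace orthogonal to gradient g.
    Hypothetical helper: DOC's exact construction may differ; this shows how a
    randomized direction can be made orthogonal to the counterattack gradient."""
    g_flat, r_flat = g.ravel(), r.ravel()
    proj = (r_flat @ g_flat) / (g_flat @ g_flat + 1e-12) * g_flat
    return (r_flat - proj).reshape(g.shape)

def doc_step(x, grad, momentum, alpha=1/255, beta=0.9, gamma=0.5, rng=None):
    """One hypothetical DOC-style counterattack update: mix the normalized
    embedding-deviation gradient with a randomized orthogonal direction,
    accumulate momentum, and take a signed step of size alpha."""
    rng = rng or np.random.default_rng()
    r = rng.standard_normal(grad.shape)
    ortho = orthogonal_component(grad, r)
    ortho /= np.linalg.norm(ortho) + 1e-12
    direction = grad / (np.linalg.norm(grad) + 1e-12) + gamma * ortho
    momentum = beta * momentum + direction
    return x + alpha * np.sign(momentum), momentum
```

Because the orthogonal term is re-sampled each step, repeated calls explore different directions around the same gradient, which is the diversity mechanism the abstract argues prevents overfitting to a narrow adversarial pattern.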
Key Contributions
- DOC augments counterattack optimization with randomized orthogonal gradient components and momentum-based updates to increase counterattack diversity and escape narrow local optima
- Introduces a directional sensitivity score based on averaged cosine similarity between original and randomly perturbed image embeddings to adaptively modulate counterattack strength per example
- Demonstrates improved adversarial robustness over TTC across 16 datasets while maintaining competitive clean accuracy without requiring labeled data or model fine-tuning
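The second contribution, the directional sensitivity score, can be sketched as an averaged cosine similarity between an image's embedding and the embeddings of randomly perturbed copies. The snippet below is a minimal illustration, assuming `encode` is a stand-in for the CLIP image encoder and that lower average similarity signals a more sensitive example warranting a stronger counterattack; the sampling scheme and how the score modulates strength are assumptions, not the paper's exact procedure.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def directional_sensitivity(encode, x, eps=8/255, n=4, rng=None):
    """Hypothetical sketch of the directional sensitivity score: average
    cosine similarity between the embedding of x and embeddings of n
    uniformly perturbed copies clipped to the valid pixel range. A lower
    score suggests the embedding moves a lot under small input changes,
    i.e. a more sensitive example."""
    rng = rng or np.random.default_rng()
    z = encode(x)
    sims = []
    for _ in range(n):
        delta = rng.uniform(-eps, eps, size=x.shape)
        sims.append(cosine(z, encode(np.clip(x + delta, 0.0, 1.0))))
    return float(np.mean(sims))
```

In this reading, the per-example score lets the defense spend larger counterattack budgets only where the embedding is actually unstable, which is consistent with the stated goal of preserving clean accuracy.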
🛡️ Threat Analysis
Proposes a defense (DOC) against adversarial examples targeting CLIP at inference time. The method generates diverse counterperturbations using orthogonal gradient directions and momentum to neutralize adversarial inputs — directly addressing inference-time input manipulation attacks on a VLM.