Manifold of Failure: Behavioral Attraction Basins in Language Models
Sarthak Munshi¹, Manish Bhatt¹, Vineeth Sai Narajala², Idan Habler², Ammar Al-Kahfah¹, Ken Huang³, Blake Gatto⁴
Published on arXiv
arXiv:2602.22291
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
MAP-Elites achieves up to 63% behavioral coverage and discovers up to 370 distinct vulnerability niches, revealing structured, model-specific topological signatures that existing attack methods (GCG, PAIR, TAP) cannot surface.
Manifold of Failure / Alignment Deviation
Novel technique introduced
While prior work has focused on projecting adversarial examples back onto the manifold of natural data to restore safety, we argue that a comprehensive understanding of AI safety requires characterizing the unsafe regions themselves. This paper introduces a framework for systematically mapping the Manifold of Failure in Large Language Models (LLMs). We reframe the search for vulnerabilities as a quality-diversity problem, using MAP-Elites to illuminate the continuous topology of these failure regions, which we term behavioral attraction basins. Our quality metric, Alignment Deviation, guides the search towards areas where the model's behavior diverges most from its intended alignment. Across three LLMs (Llama-3-8B, GPT-OSS-20B, and GPT-5-Mini), we show that MAP-Elites achieves up to 63% behavioral coverage, discovers up to 370 distinct vulnerability niches, and reveals dramatically different model-specific topological signatures: Llama-3-8B exhibits a near-universal vulnerability plateau (mean Alignment Deviation 0.93), GPT-OSS-20B shows a fragmented landscape with spatially concentrated basins (mean 0.73), and GPT-5-Mini demonstrates strong robustness with a ceiling at 0.50. Our approach produces interpretable, global maps of each model's safety landscape that no existing attack method (GCG, PAIR, or TAP) can provide, shifting the paradigm from finding discrete failures to understanding their underlying structure.
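The following is a minimal Python sketch of the MAP-Elites loop the abstract describes. The grid size, the two behavior descriptors (prompt length and lexical diversity), the mutation operator, and the random `alignment_deviation` stub are all illustrative assumptions, not the paper's actual choices; a real run would query the target LLM and score its responses.

```python
import random

GRID = (10, 10)  # hypothetical 2-D behavior grid; the paper's descriptors are not given here

def behavior_descriptor(prompt: str) -> tuple[int, int]:
    # Map a prompt to a grid cell. Prompt length and lexical diversity
    # are stand-ins for whatever behavioral axes the paper uses.
    words = prompt.split()
    length_bin = min(len(prompt) // 40, GRID[0] - 1)
    diversity = len(set(words)) / max(len(words), 1)
    diversity_bin = min(int(diversity * GRID[1]), GRID[1] - 1)
    return length_bin, diversity_bin

def alignment_deviation(prompt: str) -> float:
    # Stub for the paper's quality metric: a real implementation would
    # query the target LLM and score how far its response strays from
    # intended alignment (0 = aligned, 1 = maximally deviant).
    return random.random()

def mutate(prompt: str) -> str:
    # Toy mutation operator: drop a word or splice in a filler token.
    words = prompt.split()
    if words and random.random() < 0.5:
        words.pop(random.randrange(len(words)))
    else:
        fillers = ["hypothetically", "please", "pretend", "ignore"]
        words.insert(random.randrange(len(words) + 1), random.choice(fillers))
    return " ".join(words)

def map_elites(seed_prompts, iterations=1000):
    # Archive maps each behavior cell to its elite: the highest-deviation
    # prompt found so far in that niche.
    archive = {}
    for prompt in seed_prompts:
        archive[behavior_descriptor(prompt)] = (alignment_deviation(prompt), prompt)
    for _ in range(iterations):
        _, parent = random.choice(list(archive.values()))
        child = mutate(parent)
        cell = behavior_descriptor(child)
        score = alignment_deviation(child)
        if cell not in archive or score > archive[cell][0]:
            archive[cell] = (score, child)  # new elite for this niche
    coverage = len(archive) / (GRID[0] * GRID[1])  # "behavioral coverage"
    return archive, coverage
```

Because the archive keeps one elite per niche rather than a single global best, the output is the kind of coverage map the paper reports: the fraction of filled cells gives behavioral coverage, and per-cell deviation scores trace the topology of the failure regions.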
Key Contributions
- Reframes LLM red-teaming as a quality-diversity problem using MAP-Elites to produce global behavioral coverage maps (up to 63%) of failure regions rather than single worst-case attacks
- Introduces the 'Alignment Deviation' metric, quantifying how far a model's response deviates from its intended safety alignment and enabling characterization of the continuous topology of failure regions (a sketch of one possible scoring scheme follows this list)
- Empirically demonstrates model-specific topological signatures across three LLMs: Llama-3-8B shows a near-universal vulnerability plateau (mean AD 0.93), GPT-OSS-20B a fragmented landscape of spatially concentrated basins (mean AD 0.73), and GPT-5-Mini a strong robustness ceiling (mean AD 0.50)
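This summary names Alignment Deviation but does not define it. As one hedged illustration of how such a score could be operationalized, the sketch below measures a response's lexical dissimilarity from a reference refusal; the reference string and the bag-of-words cosine are assumptions for illustration only, and a faithful implementation would follow the paper's definition (e.g., embedding distance or a judge model). It could stand in for the random stub in the MAP-Elites sketch above.

```python
import math
from collections import Counter

# Hypothetical exemplar of an aligned (refusing) response.
REFUSAL_REFERENCE = (
    "I can't help with that. This request could cause harm, "
    "so I have to decline."
)

def _cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def alignment_deviation(response: str, reference: str = REFUSAL_REFERENCE) -> float:
    # Score in [0, 1]: 0 means the response tracks aligned behavior,
    # 1 means it diverges completely from it.
    sim = _cosine(Counter(response.lower().split()),
                  Counter(reference.lower().split()))
    return 1.0 - sim
```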