attack 2025

ShadowLogic: Backdoors in Any Whitebox LLM

Kasimir Schulz , Amelia Kawasaki , Leo Ring

0 citations · 17 references · arXiv

α

Published on arXiv

2511.00664

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Achieves >60% attack success rate on malicious queries after implanting an obfuscated uncensoring backdoor into the ONNX computational graphs of Phi-3 and Llama 3.2

ShadowLogic

Novel technique introduced


Large language models (LLMs) are widely deployed across various applications, often with safeguards to prevent the generation of harmful or restricted content. However, these safeguards can be covertly bypassed through adversarial modifications to the computational graph of a model. This work highlights a critical security vulnerability in computational graph-based LLM formats, demonstrating that widely used deployment pipelines may be susceptible to obscured backdoors. We introduce ShadowLogic, a method for creating a backdoor in a white-box LLM by injecting an uncensoring vector into its computational graph representation. We set a trigger phrase that, when added to the beginning of a prompt into the LLM, applies the uncensoring vector and removes the content generation safeguards in the model. We embed trigger logic directly into the computational graph which detects the trigger phrase in a prompt. To evade detection of our backdoor, we obfuscate this logic within the graph structure, making it similar to standard model functions. Our method requires minimal alterations to model parameters, making backdoored models appear benign while retaining the ability to generate uncensored responses when activated. We successfully implement ShadowLogic in Phi-3 and Llama 3.2, using ONNX for manipulating computational graphs. Implanting the uncensoring vector achieved a >60% attack success rate for further malicious queries.


Key Contributions

  • ShadowLogic: a technique to inject a trigger-activated uncensoring backdoor directly into the ONNX computational graph of a white-box LLM without full model retraining
  • Obfuscation strategy that disguises trigger-detection logic within the graph structure to resemble standard model operations and evade detection
  • Empirical demonstration on Phi-3 and Llama 3.2 achieving >60% attack success rate on malicious queries while the model appears benign under normal use

🛡️ Threat Analysis

Model Poisoning

ShadowLogic embeds a hidden backdoor — an uncensoring vector plus obfuscated trigger-detection logic — directly into the LLM's ONNX computational graph. The model behaves normally until the trigger phrase is detected, at which point safety safeguards are silently removed. This is a textbook backdoor/trojan attack with targeted activation on a specific trigger.


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
white_boxtargeteddigitalinference_time
Applications
llm safety guardrailscontent moderationchatbot deployment pipelines