
Where Did It Go Wrong? Attributing Undesirable LLM Behaviors via Representation Gradient Tracing

Zhe Li, Wei Zhao, Yige Li, Jun Sun

0 citations · 58 references · arXiv


Published on arXiv · 2510.02334

Model Poisoning

OWASP ML Top 10 — ML10

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

RepT achieves superior sample-level and token-level attribution for harmful content tracking, backdoor poisoning detection, and knowledge contamination identification, outperforming existing parameter-gradient attribution baselines

RepT

Novel technique introduced


Large Language Models (LLMs) have demonstrated remarkable capabilities, yet their deployment is frequently undermined by undesirable behaviors such as generating harmful content, factual inaccuracies, and societal biases. Diagnosing the root causes of these failures poses a critical challenge for AI safety. Existing attribution methods, particularly those based on parameter gradients, often fall short due to noisy signals and prohibitive computational complexity. In this work, we introduce a novel and efficient framework that diagnoses a range of undesirable LLM behaviors by analyzing representations and their gradients. It operates directly in the model's activation space to provide a semantically meaningful signal linking outputs to their training data. We systematically evaluate our method on tasks that include tracking harmful content, detecting backdoor poisoning, and identifying knowledge contamination. The results demonstrate that our approach not only excels at sample-level attribution but also enables fine-grained token-level analysis, precisely identifying the specific samples and phrases that causally influence model behavior. This work provides a powerful diagnostic tool to understand, audit, and ultimately mitigate the risks associated with LLMs. The code is available at https://github.com/plumprc/RepT.
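The core idea of gradient-based attribution in activation space can be sketched as follows: score each training sample by the similarity between its representation gradient and the representation gradient of a flagged model output, then rank samples by that score. This is a minimal illustration, not the paper's actual algorithm; the function name `attribute_samples`, the use of cosine similarity, and the toy gradient vectors are all assumptions for demonstration.

```python
import numpy as np

def attribute_samples(query_grad, train_grads):
    """Rank training samples by cosine similarity between the
    representation gradient of a flagged output (query_grad) and the
    representation gradients of each training sample (train_grads).
    Returns indices sorted from most to least influential."""
    q = query_grad / np.linalg.norm(query_grad)
    t = train_grads / np.linalg.norm(train_grads, axis=1, keepdims=True)
    scores = t @ q                       # cosine similarity per sample
    return np.argsort(-scores), scores   # most influential first

# Toy activation-space gradients: sample 2 is nearly aligned with the
# query, so it should be attributed as the responsible training sample.
rng = np.random.default_rng(0)
train = rng.normal(size=(5, 16))
query = train[2] + 0.01 * rng.normal(size=16)
ranking, scores = attribute_samples(query, train)
print(ranking[0])  # sample 2 ranks first
```

Working in activation space rather than over full parameter gradients keeps the score vectors low-dimensional, which is consistent with the efficiency claim in the abstract.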


Key Contributions

  • Representation gradient tracing (RepT) framework that operates directly in LLM activation space to link specific outputs to causally responsible training data, bypassing noisy parameter-gradient methods
  • Multi-task security evaluation spanning harmful content tracking, backdoor poisoning detection, and knowledge contamination identification within a single unified attribution framework
  • Fine-grained token-level attribution that pinpoints specific phrases in training data causally influencing undesirable model behavior
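The token-level contribution above can be illustrated with a simple heuristic: within an attributed training sample, score each token by the magnitude of its representation gradient, so that tokens dominating the gradient signal surface as candidate causal phrases. This is a hedged sketch under assumed shapes (sequence length × hidden size), not the paper's exact scoring rule.

```python
import numpy as np

# Hypothetical per-token representation gradients for one training
# sample, shape (seq_len, hidden); values are synthetic for illustration.
rng = np.random.default_rng(1)
grads = rng.normal(scale=0.1, size=(6, 8))
grads[3] *= 20.0  # make one token dominate the gradient signal

scores = np.linalg.norm(grads, axis=1)  # per-token influence score
top_token = int(np.argmax(scores))
print(top_token)  # token 3 carries the largest gradient norm
```

In practice such per-token scores would be inspected alongside the decoded tokens to pinpoint the specific phrase (e.g., a backdoor trigger) driving the behavior.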

🛡️ Threat Analysis

Data Poisoning Attack

Addresses knowledge contamination and harmful content attribution by tracing problematic model outputs back to specific corrupted or adversarially injected training data samples.

Model Poisoning

Explicitly evaluates backdoor poisoning detection as a primary task — RepT identifies which poisoned training samples causally trigger undesirable backdoor behaviors in LLMs.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, white_box
Applications
llm auditing, backdoor detection, training data attribution, harmful content diagnosis