
Who's the Evil Twin? Differential Auditing for Undesired Behavior

Ishwar Balappanawar 1,2, Venkata Hasith Vattikuti 2, Greta Kintzley 3, Ronan Azimi-Mancel 3, Satvik Golechha 3


Published on arXiv: 2508.06827

Model Poisoning (OWASP ML Top 10 — ML10)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

Adversarial-attack-based detection achieves 100% accuracy when hints about the harmful distribution are provided, while hint-free techniques (model diffing, integrated gradients, Gaussian noise) yield more variable performance.

Differential Auditing Game

Novel technique introduced


Detecting hidden behaviors in neural networks poses a significant challenge due to minimal prior knowledge and potential adversarial obfuscation. We explore this problem by framing detection as an adversarial game between two teams: the red team trains two similar models, one trained solely on benign data and the other trained on data containing hidden harmful behavior, with the performance of both being nearly indistinguishable on the benign dataset. The blue team, with limited to no information about the harmful behavior, tries to identify the compromised model. We experiment with CNNs and try various blue-team strategies, including Gaussian noise analysis, model diffing, integrated gradients, and adversarial attacks, under different levels of hints provided by the red team. Results show high accuracy for adversarial-attack-based methods (100% correct prediction, using hints), which is very promising, while the other techniques yield more varied performance. In our LLM-focused rounds, we find that few of the methods from our CNN study carry over. Instead, effective LLM auditing methods require some hints about the undesired distribution, which can then be used in standard black-box and open-weight methods to probe the models further and reveal their misalignment. We open-source our auditing games (with the models and data) and hope that our findings contribute to designing better audits.
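To make the adversarial-attack result concrete, below is a minimal sketch of one way a hinted adversarial probe could work, assuming the hint tells the blue team that the backdoor is a small patch trigger aimed at a known target class. This illustrates the general technique, not the paper's released procedure; `model_a`, `model_b`, `loader`, and every hyperparameter here are assumptions.

```python
# Hedged sketch: a trigger-reconstruction-style adversarial probe.
# Assumed hint available to the blue team: "the backdoor is a small
# corner patch that flips predictions to `target_class`."
import itertools
import torch
import torch.nn.functional as F

def patch_attack_success(model, loader, target_class,
                         patch_size=4, steps=200, lr=0.05, device="cpu"):
    """Optimize a universal corner patch pushing predictions toward
    target_class, then report how often the patch succeeds. A model that
    carries a patch-style backdoor for that class typically yields a far
    higher success rate than its clean twin."""
    model = model.eval().to(device)
    patch = torch.zeros(3, patch_size, patch_size, device=device,
                        requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    batches = itertools.cycle(loader)
    for _ in range(steps):
        x, _ = next(batches)
        x = x.to(device).clone()
        # Paste the (bounded) patch into the corner; gradients flow to `patch`.
        x[:, :, :patch_size, :patch_size] = torch.tanh(patch)
        tgt = torch.full((x.size(0),), target_class,
                         dtype=torch.long, device=device)
        loss = F.cross_entropy(model(x), tgt)
        opt.zero_grad(); loss.backward(); opt.step()
    hits, total = 0, 0
    with torch.no_grad():
        for x, _ in loader:
            x = x.to(device).clone()
            x[:, :, :patch_size, :patch_size] = torch.tanh(patch)
            hits += (model(x).argmax(dim=1) == target_class).sum().item()
            total += x.size(0)
    return hits / total

# The auditor flags whichever twin is easier to push to the hinted class:
# rate_a = patch_attack_success(model_a, loader, target_class=0)
# rate_b = patch_attack_success(model_b, loader, target_class=0)
# suspect = "model_a" if rate_a > rate_b else "model_b"
```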


Key Contributions

  • Frames backdoor/hidden-behavior detection as an adversarial red-vs-blue team game, providing open-sourced auditing games with models and data
  • Systematic evaluation of blue-team strategies (Gaussian noise, model diffing, integrated gradients, adversarial attacks) across varying hint levels for CNN models (a noise-probe sketch follows this list)
  • Analysis of LLM auditing, finding that effective detection requires distribution hints and differs substantially from CNN auditing approaches
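As referenced in the second bullet, here is a minimal sketch of a hint-free probe in the spirit of Gaussian noise analysis: feed pure random noise through each twin and compare where the predictions pile up. The model names, input shape, and class count are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a "Gaussian noise analysis" style probe: a backdoored
# model sometimes shows an anomalous skew toward its target class even on
# random inputs. The (3, 32, 32) shape and 10 classes are a CIFAR-10-style
# assumption.
import torch

@torch.no_grad()
def noise_class_histogram(model, n=2048, shape=(3, 32, 32),
                          num_classes=10, device="cpu"):
    model = model.eval().to(device)
    noise = torch.randn(n, *shape, device=device)
    preds = model(noise).argmax(dim=1)
    # Fraction of noise inputs routed to each class.
    return torch.bincount(preds, minlength=num_classes).float() / n

# hist_a = noise_class_histogram(model_a)
# hist_b = noise_class_histogram(model_b)
# A large gap in peak class frequency (e.g., one model routing most noise
# to a single class) is weak evidence that that twin is the poisoned one,
# consistent with the paper's finding that hint-free probes are variable.
```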

🛡️ Threat Analysis

Model Poisoning

The core problem is detecting hidden harmful or backdoor behavior in neural networks: the red team trains a model on data containing hidden targeted behavior so that it is indistinguishable from a clean twin on benign data, and the blue team must identify the compromised model. This is the classic backdoor/trojan detection problem.
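For concreteness, here is a hedged sketch of what the red team's poisoning step could look like: a small fraction of training examples gets a fixed patch trigger and is relabeled to an attacker-chosen class, so the trained model matches its clean twin on benign data but misfires whenever the trigger appears. The wrapper, trigger, and rates below are illustrative assumptions, not the paper's released setup.

```python
# Hedged sketch: patch-trigger data poisoning for the red-team model.
import torch
from torch.utils.data import Dataset

class PatchPoisonedDataset(Dataset):
    """Wraps a base dataset of (CHW image tensor, label) pairs and
    poisons a random subset with a white corner patch plus relabeling."""
    def __init__(self, base, target_class=0, poison_rate=0.05, patch_size=4):
        self.base = base
        self.target_class = target_class
        self.patch_size = patch_size
        k = int(poison_rate * len(base))
        self.poisoned = set(torch.randperm(len(base))[:k].tolist())

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        x, y = self.base[i]
        if i in self.poisoned:
            x = x.clone()
            x[:, :self.patch_size, :self.patch_size] = 1.0  # trigger patch
            y = self.target_class                            # hidden relabel
        return x, y
```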


Details

Domains
vision, nlp
Model Types
cnn, llm
Threat Tags
training_time, black_box, white_box
Applications
image classification, large language model safety auditing