On the Trade-Off Between Transparency and Security in Adversarial Machine Learning
Lucas Fenaux 1, Christopher Srinivasa 2, Florian Kerschbaum 1
Published on arXiv
2511.11842
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Attackers whose surrogate models match the defender's defense status are significantly more successful; because existing benchmarks use only undefended surrogates, they underestimate the accuracy degradation caused by adversarial attacks by up to 3.73×.
Transparency and security are both central to Responsible AI, but they may conflict in adversarial settings. We investigate the strategic effect of transparency for agents through the lens of transferable adversarial example attacks, in which attackers maliciously perturb their inputs using surrogate models to fool a defender's target model. Both surrogate and target models can be defended or undefended, and each player must decide which kind to use. Using a large-scale empirical evaluation of nine attacks across 181 models, we find that attackers are more successful when they match the defender's decision; hence, obscurity could be beneficial to the defender. Using game theory, we analyze this trade-off between transparency and security by modeling the problem as both a Nash game and a Stackelberg game and comparing the expected outcomes. Our analysis confirms that merely knowing whether a defender's model is defended can sometimes be enough to damage its security. This result serves as an indicator of the general trade-off between transparency and security, suggesting that transparency in AI systems can be at odds with security. Beyond adversarial machine learning, our work illustrates how game-theoretic reasoning can uncover conflicts between transparency and security.
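The Nash-versus-Stackelberg comparison can be illustrated with a toy 2×2 zero-sum game. The success rates below are hypothetical, not the paper's measured numbers; they only encode the paper's empirical finding that an attacker transfers better when the surrogate matches the target's defense status. A transparent defender corresponds to a Stackelberg leader committing to a visible pure choice; a defender who hides the choice can play the mixed Nash strategy.

```python
# Hypothetical attacker success rates (illustrative, NOT from the paper):
# rows = defender's target (undefended, defended),
# cols = attacker's surrogate (undefended, defended).
# The attacker does better on the diagonal, where the surrogate
# matches the target's defense status.
S = [[0.8, 0.3],   # target undefended
     [0.2, 0.5]]   # target defended

def stackelberg_value(S):
    """Transparent defender: commits to a visible pure choice, the
    attacker best-responds, so the defender picks the row whose
    worst case (for the defender) is least bad."""
    return min(max(row) for row in S)

def nash_value(S):
    """Hidden defender: mixes over rows. Closed form for a 2x2
    zero-sum game with no dominant pure strategy. Choose
    p = Pr(undefended target) so the attacker is indifferent:
    p*S[0][0] + (1-p)*S[1][0] = p*S[0][1] + (1-p)*S[1][1]."""
    (a, b), (c, d) = S
    p = (d - c) / ((a - c) - (b - d))
    return p, p * a + (1 - p) * c

transparent = stackelberg_value(S)
p, hidden = nash_value(S)
print(f"transparent defender -> attacker success {transparent:.3f}")
print(f"hidden, mixed defender (p={p:.3f}) -> attacker success {hidden:.3f}")
```

With these illustrative numbers, the transparent defender concedes an attacker success rate of 0.5, while hiding and mixing lowers it to 0.425: obscuring whether the model is defended strictly improves the defender's expected outcome, which is the qualitative effect the paper's analysis formalizes.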
Key Contributions
- Large-scale empirical evaluation of 9 transferable adversarial attacks across 181 models, showing that existing benchmarks, which use only undefended surrogates, underestimate attack potency by up to 3.73×
- Game-theoretic modeling (Nash and Stackelberg) of the transparency-security trade-off, showing that defenders benefit from obscuring whether their model is defended
- Evidence that mixing defenses combined with defense obscurity further enhances robustness against transferable adversarial attacks
🛡️ Threat Analysis
The paper's entire empirical and theoretical framework centers on transferable adversarial example attacks — input manipulations using surrogate models to fool target classifiers at inference time. It evaluates 9 attacks across 181 models (defended and undefended) and uses game theory to analyze attacker/defender strategies around these input manipulation attacks.
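The transfer mechanism itself can be sketched in a few lines. The example below is a minimal, self-contained illustration (not the paper's attack suite or models): a fast-gradient-sign perturbation is crafted against a toy logistic-regression surrogate and then applied to a different target classifier; all weights and the epsilon budget are made-up values chosen for illustration.

```python
import math

def logistic(z):
    """Standard sigmoid."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Binary logistic classifier: P(label = 1 | x)."""
    return logistic(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    """One fast-gradient-sign step on the surrogate's log-loss.
    For logistic regression, d(loss)/dx_i = (p - y) * w_i."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Attacker's surrogate and defender's target are different models
# (hypothetical weights; in practice both are neural networks).
surrogate_w, surrogate_b = [2.0, -1.0, 0.5], 0.1
target_w, target_b = [1.8, -1.2, 0.7], 0.0

x, y = [0.5, 0.2, -0.3], 1  # clean input with true label 1
x_adv = fgsm_perturb(surrogate_w, surrogate_b, x, y, eps=0.5)

print("target on clean input:", round(predict(target_w, target_b, x), 3))
print("target on adversarial:", round(predict(target_w, target_b, x_adv), 3))
```

Although the perturbation is computed purely from the surrogate's gradient, it also flips the target model's prediction here, which is the transfer effect the paper studies; the paper's benchmark varies whether the surrogate and target are defended or undefended.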