
Impact of Positional Encoding: Clean and Adversarial Rademacher Complexity for Transformers under In-Context Regression

Weiyi He, Yue Xing

0 citations · 55 references · arXiv


Published on arXiv · 2512.09275

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Positional encoding amplifies adversarial vulnerability: the gap between adversarial generalization bounds with and without PE is strictly larger than the corresponding clean gap, confirmed by simulation experiments.

Adversarial Rademacher Complexity (ARC) bounds for Transformers

Novel technique introduced


Positional encoding (PE) is a core architectural component of Transformers, yet its impact on their generalization and robustness remains unclear. In this work, we provide the first generalization analysis for a single-layer Transformer under in-context regression that explicitly accounts for a fully trainable PE module. Our result shows that PE systematically enlarges the generalization gap. Extending to the adversarial setting, we derive an adversarial Rademacher generalization bound and find that the gap between models with and without PE is magnified under attack, demonstrating that PE amplifies model vulnerability. Our bounds are empirically validated by a simulation study. Together, these results establish a new framework for understanding clean and adversarial generalization in in-context learning (ICL) with PE.
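To make the setting concrete, the sketch below builds an in-context regression prompt (labeled example tokens plus an unlabeled query token) and runs it through a single linear-attention layer, with an optional additive term standing in for a fully trainable PE. This is a minimal illustration of the problem setup, not the paper's actual parameterization; all function and variable names (`make_prompt`, `linear_attention_predict`, `W_kq`, `W_v`) are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 8  # feature dimension, number of in-context examples

def make_prompt(w, n, d, rng):
    """Stack n labeled pairs (x_i, y_i) plus one query x as token rows.

    Each token is a (d+1)-vector: [x; y] for examples, [x; 0] for the query.
    """
    X = rng.normal(size=(n + 1, d))
    y = X @ w
    Z = np.concatenate([X, y[:, None]], axis=1)
    Z[-1, -1] = 0.0  # the query token carries no label
    return Z, y[-1]

def linear_attention_predict(Z, W_kq, W_v, pe=None):
    """One linear-attention layer; read the prediction off the query token.

    If `pe` is given (same shape as Z), it is added to the tokens first,
    mimicking a fully trainable additive positional encoding.
    """
    if pe is not None:
        Z = Z + pe
    scores = Z @ W_kq @ Z.T   # (n+1, n+1) unnormalized attention scores
    out = scores @ Z @ W_v    # aggregate values over the prompt
    return float(out[-1, -1]) # predicted label at the query position

w_true = rng.normal(size=d)
Z, y_query = make_prompt(w_true, n, d, rng)
W_kq = 0.1 * rng.normal(size=(d + 1, d + 1))
W_v = 0.1 * rng.normal(size=(d + 1, d + 1))
pe = 0.1 * rng.normal(size=Z.shape)  # stand-in for a trained PE matrix

pred_no_pe = linear_attention_predict(Z, W_kq, W_v)
pred_pe = linear_attention_predict(Z, W_kq, W_v, pe)
```

Adding the PE term changes the prediction, which is exactly the degree of freedom whose effect on the generalization gap the paper bounds.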


Key Contributions

  • First generalization bound for single-layer Transformers with fully trainable positional encoding in the ICL regression setting, showing PE systematically enlarges the generalization gap
  • Adversarial Rademacher Complexity (ARC) bounds for Transformers showing the PE-induced gap is magnified under adversarial attack, demonstrating PE amplifies vulnerability
  • Empirical simulation validating that Transformers with trainable PE exhibit consistently larger generalization gaps and greater sensitivity to adversarial attacks on in-context examples
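For intuition on the quantity being bounded, empirical Rademacher complexity measures how well a function class can correlate with random sign labels; it shrinks with sample size, which is what drives generalization bounds like the paper's. The sketch below estimates it for a simple norm-bounded linear class (not the Transformer class analyzed in the paper), where the supremum over the class has a closed form by Cauchy-Schwarz; the function name is invented for this example.

```python
import numpy as np

def empirical_rademacher_linear(X, B, n_draws=2000, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher complexity of
    {x -> <w, x> : ||w||_2 <= B} on the sample X (n, d).

    For fixed signs sigma, sup_w (1/n) sum_i sigma_i <w, x_i>
    = B * || (1/n) sum_i sigma_i x_i ||_2  (Cauchy-Schwarz).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    vals = []
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        vals.append(B * np.linalg.norm(sigma @ X / n))
    return float(np.mean(vals))

rng = np.random.default_rng(0)
est_small = empirical_rademacher_linear(rng.normal(size=(50, 5)), B=1.0)
est_large = empirical_rademacher_linear(rng.normal(size=(500, 5)), B=1.0)
```

The estimate decays roughly like 1/sqrt(n), mirroring the O(1/sqrt(n)) behavior typical of Rademacher-based generalization gaps; the paper's contribution is controlling the analogous quantity (and its adversarial variant) for a Transformer class that includes a trainable PE.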

🛡️ Threat Analysis

Input Manipulation Attack

The paper analyzes adversarial robustness of transformers under input manipulation attacks on in-context examples at inference time, deriving adversarial Rademacher complexity bounds that quantify how PE amplifies model vulnerability to such attacks.
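The threat model can be sketched as a white-box, inference-time perturbation of the in-context tokens only, leaving the query untouched. Below is a minimal FGSM-style attack using a finite-difference gradient against a stand-in predictor; it is an illustration of the attack surface, not the paper's attack, and the helper name `attack_in_context` and the `tanh`-sum predictor are invented for this example.

```python
import numpy as np

def attack_in_context(predict_fn, Z, y_true, eps=0.1, h=1e-4):
    """One epsilon-sized sign step on the squared error, perturbing only
    the in-context tokens (the last row, the query, is left clean).

    Gradients are approximated by forward finite differences, so this
    works with any black-box predict_fn; a white-box attacker would use
    exact gradients instead.
    """
    base = (predict_fn(Z) - y_true) ** 2
    grad = np.zeros_like(Z)
    for i in range(Z.shape[0] - 1):      # skip the query token
        for j in range(Z.shape[1]):
            Zp = Z.copy()
            Zp[i, j] += h
            grad[i, j] = ((predict_fn(Zp) - y_true) ** 2 - base) / h
    return Z + eps * np.sign(grad)       # L-infinity bounded perturbation

rng = np.random.default_rng(1)
Z = rng.normal(size=(9, 5))                   # 8 in-context tokens + 1 query
predict = lambda Z: float(np.tanh(Z).sum())   # stand-in for a trained model
Z_adv = attack_in_context(predict, Z, y_true=0.0, eps=0.1)
```

The adversarial generalization gap in the paper is taken with respect to a worst-case perturbation of this kind, and the ARC bounds quantify how much larger that gap becomes when the model carries a trainable PE.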


Details

Domains
nlp
Model Types
transformer
Threat Tags
white_box, inference_time
Applications
in-context learning, language model generalization