Defense · 2025

LLM Reinforcement in Context

Thomas Rivasseau

0 citations · 34 references · arXiv

Published on arXiv · 2511.12782

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Proposes that periodically inserting alignment reminders into long inputs provides reinforcement that scales linearly with context length, counteracting both the exponential growth of the jailbreak attack surface and the vanishing influence of the system prompt in long contexts

Interruptions (Reinforcement in Context)

Novel technique introduced


Current Large Language Model alignment research focuses mostly on improving model robustness against adversarial attacks and misbehavior through training on examples and prompting. Research has shown that the probability of an LLM jailbreak increases with the size of the user input or the length of the conversation. There is a lack of research into means of strengthening alignment that also scale with user input length. We propose interruptions as a possible solution to this problem. Interruptions are control sentences added to the user input approximately every x tokens, for some arbitrary x. We suggest that this can be generalized to the Chain-of-Thought process to prevent scheming.
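The mechanism described above can be sketched in a few lines: split the user input into tokens and splice a control sentence in after every x tokens. This is a minimal illustration, not the paper's implementation; whitespace splitting stands in for a real tokenizer, and the reminder text is a hypothetical example.

```python
# Sketch of "interruptions": insert an alignment reminder approximately
# every x tokens of user input. Whitespace splitting is a stand-in for a
# real tokenizer; REMINDER is a made-up example control sentence.

REMINDER = "[SYSTEM REMINDER: continue to follow the safety guidelines above.]"

def insert_interruptions(user_input: str, x: int = 50,
                         reminder: str = REMINDER) -> str:
    tokens = user_input.split()
    # Break the input into chunks of at most x tokens each.
    chunks = [" ".join(tokens[i:i + x]) for i in range(0, len(tokens), x)]
    # Re-join the chunks with the reminder between them, so one control
    # sentence appears roughly every x tokens of user content.
    return ("\n" + reminder + "\n").join(chunks)
```

A 120-token input with x = 50 would be split into three chunks and receive two reminders; the original user tokens pass through unchanged.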


Key Contributions

  • Formalizes the 'alignment scaling problem' — showing mathematically that system prompt influence vanishes and required training examples grow exponentially as context length increases
  • Proposes 'interruptions': natural language control sentences inserted at regular token intervals into user input and Chain-of-Thought to reinforce alignment guidelines without weight updates
  • Extends the interruption concept to Chain-of-Thought reasoning to prevent scheming behaviors in frontier reasoning models

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Applications
llm chatbots, llm agents, reasoning models