Thomas Rivasseau

h-index: 0 · 0 citations · 2 papers (total)

Papers in Database (2)

defense · arXiv · Nov 16, 2025

LLM Reinforcement in Context

Thomas Rivasseau · McGill University

Proposes inserting periodic alignment reminders into the LLM context to defend against long-input jailbreaks and chain-of-thought (CoT) scheming.

Tags: Prompt Injection, nlp
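The abstract above describes re-inserting alignment reminders periodically so they stay close to the model's recent context. A minimal sketch of that idea, assuming a standard chat-message-list representation; the reminder text, function name, and interval are illustrative, not taken from the paper:

```python
# Hypothetical sketch of periodic alignment reminders: a system-role
# reminder is re-inserted after every N user turns so long inputs
# cannot push it far from the end of the context.

REMINDER = {"role": "system",
            "content": "Reminder: follow the original safety instructions."}

def insert_periodic_reminders(messages, every=4):
    """Return a copy of `messages` with REMINDER appended after every
    `every` user turns. Names and the interval are assumptions."""
    out, user_turns = [], 0
    for msg in messages:
        out.append(msg)
        if msg["role"] == "user":
            user_turns += 1
            if user_turns % every == 0:
                out.append(REMINDER)
    return out

# Example: a long conversation that would otherwise dilute the system prompt.
convo = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
padded = insert_periodic_reminders(convo, every=4)
```

With 10 user turns and `every=4`, two reminders are inserted (after turns 4 and 8), keeping an alignment cue within a few turns of the conversation's tail.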
defense · arXiv · Dec 2, 2025

Invasive Context Engineering to Control Large Language Models

Thomas Rivasseau · McGill University

Defends LLMs against long-context jailbreaks by inserting runtime control sentences into the context, with no retraining required.

Tags: Prompt Injection, nlp