defense 2026

BarrierSteer: LLM Safety via Learning Barrier Steering

Thanh Q. Tran 1,2, Arun Verma 1,2,3, Kiwan Wong 3, Bryan Kian Hsiang Low 1,2, Daniela Rus 3,2, Wei Xiao 4,3

0 citations · 56 references · arXiv (Cornell University)


Published on arXiv: 2602.20102

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

BarrierSteer substantially reduces adversarial success rates and unsafe generations across multiple LLMs and datasets, outperforming existing safety methods while preserving model utility.

BarrierSteer

Novel technique introduced


Despite the state-of-the-art performance of large language models (LLMs) across diverse tasks, their susceptibility to adversarial attacks and unsafe content generation remains a major obstacle to deployment, particularly in high-stakes settings. Addressing this challenge requires safety mechanisms that are both practically effective and supported by rigorous theory. We introduce BarrierSteer, a novel framework that formalizes response safety by embedding learned non-linear safety constraints directly into the model's latent representation space. BarrierSteer employs a steering mechanism based on Control Barrier Functions (CBFs) to efficiently detect and prevent unsafe response trajectories during inference with high precision. By enforcing multiple safety constraints through efficient constraint merging, without modifying the underlying LLM parameters, BarrierSteer preserves the model's original capabilities and performance. We provide theoretical results establishing that applying CBFs in latent space offers a principled and computationally efficient approach to enforcing safety. Our experiments across multiple models and datasets show that BarrierSteer substantially reduces adversarial success rates, decreases unsafe generations, and outperforms existing methods.
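The steering idea from the abstract can be sketched in a few lines. Everything below is an illustrative assumption: a toy linear barrier stands in for the paper's learned non-linear constraint, and latents are small NumPy vectors rather than LLM hidden states.

```python
import numpy as np

# Toy stand-in for a learned barrier: b(h) >= 0 means latent state h is safe.
# The paper learns a non-linear b; a linear one keeps the correction closed-form.
w = np.array([0.5, -1.0, 0.25])
c = 1.0

def barrier(h):
    return float(w @ h + c)

def cbf_steer(h_next, h_curr, alpha=0.5):
    """Enforce the discrete-time CBF condition
        b(h_next) - b(h_curr) >= -alpha * b(h_curr),
    i.e. b(h_next) >= (1 - alpha) * b(h_curr).
    If the proposed latent violates it, add the minimum-norm correction
    along the barrier gradient (exact for a linear barrier)."""
    slack = barrier(h_next) - (1 - alpha) * barrier(h_curr)
    if slack >= 0:
        return h_next                       # condition holds: latent untouched
    return h_next + (-slack / (w @ w)) * w  # smallest nudge back to the boundary

h_curr = np.array([1.0, 0.2, 0.0])  # barrier(h_curr) = 1.3, safe
h_bad  = np.array([0.0, 2.0, 0.0])  # barrier(h_bad) = -1.0, unsafe drift
h_safe = cbf_steer(h_bad, h_curr)
assert barrier(h_safe) >= (1 - 0.5) * barrier(h_curr) - 1e-9
```

Because a proposed latent that already satisfies the barrier condition passes through unchanged, benign generations are left untouched, which is consistent with the paper's claim of preserving model utility.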


Key Contributions

  • BarrierSteer framework that embeds learned non-linear safety constraints in LLM latent space using Control Barrier Functions (CBFs) to steer away from unsafe generation trajectories at inference time
  • Efficient constraint merging enabling multiple safety constraints to be enforced simultaneously without modifying underlying LLM parameters
  • Theoretical guarantees establishing that CBF application in latent space is principled and computationally efficient for LLM safety enforcement
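The constraint-merging contribution, several safety constraints enforced through one check, can be sketched with a soft minimum over individual barriers. The log-sum-exp merge and the three linear placeholder barriers below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Three placeholder linear barriers b_i(h) = W[i] @ h + C[i]; each must be >= 0.
W = np.array([[ 0.5, -1.0, 0.25],
              [ 1.0,  0.0, -0.5],
              [-0.25, 0.5,  1.0]])
C = np.array([1.0, 0.5, 0.8])

def merged_barrier(h, kappa=10.0):
    """Soft-min (negative log-sum-exp) of the individual barriers.
    It lower-bounds min_i b_i(h), so merged_barrier(h) >= 0 implies
    every individual constraint is satisfied."""
    b = W @ h + C
    return -np.log(np.exp(-kappa * b).sum()) / kappa

def step_is_safe(h_next, h_curr, alpha=0.5):
    # One CBF check on the merged barrier covers all constraints at once.
    return merged_barrier(h_next) >= (1 - alpha) * merged_barrier(h_curr)

h = np.array([1.0, 0.2, 0.0])
assert merged_barrier(h) <= (W @ h + C).min() + 1e-9  # soft-min lower-bounds min
```

Merging keeps the per-token overhead at a single barrier evaluation regardless of how many constraints are active, which is the efficiency point the bullet makes.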

🛡️ Threat Analysis

Input Manipulation Attack

BarrierSteer explicitly defends against adversarial attacks on LLMs, including gradient-based adversarial suffix attacks, as measured by reduced adversarial success rates: the CBF steering mechanism detects unsafe latent trajectories induced by adversarial inputs and diverts them at inference time.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, white_box
Applications
large language model safety, jailbreak defense, harmful content prevention