defense 2025

Unintended Misalignment from Agentic Fine-Tuning: Risks and Mitigation

Dongyoon Hahm, Taywon Min, Woogyeol Jin, Kimin Lee



Published on arXiv: 2508.14031

Transfer Learning Attack

OWASP ML Top 10 — ML07

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

PING increases harmful request refusal rates by 66.2% (web navigation) and 44.6% (code generation) over baseline fine-tuned agents while degrading benign task performance by only 1.8%.

PING (Prefix INjection Guard)

Novel technique introduced


Beyond simple text generation, Large Language Models (LLMs) have evolved into agentic systems capable of planning and interacting with external tools to solve complex tasks. This evolution involves fine-tuning LLMs on agent-specific tasks to enhance their proficiency. However, safety concerns are frequently overlooked during this fine-tuning process. In this work, we show that aligned LLMs can become unintentionally misaligned, leading to a higher likelihood of executing harmful tasks and a reduced tendency to refuse them when fine-tuned to execute agentic tasks. To address these safety challenges, we propose Prefix INjection Guard (PING), a simple yet effective method that prepends automatically generated natural language prefixes to agent responses, guiding them to refuse harmful requests while preserving performance on benign tasks. Specifically, we introduce an iterative approach that alternates between (1) generating candidate prefixes and (2) selecting those that optimize both task performance and refusal behavior. Experimental results demonstrate that PING significantly enhances the safety of fine-tuned LLM agents without sacrificing their effectiveness. PING consistently outperforms existing prompting approaches across diverse benchmarks in both web navigation and code generation tasks. Our analysis of internal hidden states via linear probes reveals that prefix tokens are crucial for behavior modification, explaining the performance gains. WARNING: This paper contains contents that are unethical or offensive in nature.
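The abstract describes PING's core procedure: alternate between generating candidate prefixes and selecting the one that best balances refusal of harmful requests with benign task performance. Below is a minimal sketch of that generate-and-select loop; the helper callables (`generate_candidate_prefixes`, `refusal_rate`, `task_success_rate`) and the weighted scoring are assumptions for illustration, not the authors' released implementation.

```python
# Sketch of the iterative prefix generation-and-selection loop described in the
# abstract. All helper names and the scoring weights are hypothetical.
from typing import Callable, List, Tuple


def select_prefix(
    generate_candidate_prefixes: Callable[[int], List[str]],
    refusal_rate: Callable[[str], float],       # refusal rate on harmful requests with this prefix
    task_success_rate: Callable[[str], float],  # success rate on benign agentic tasks with this prefix
    n_iterations: int = 5,
    n_candidates: int = 8,
    alpha: float = 0.5,                         # assumed safety/utility trade-off weight
) -> Tuple[str, float]:
    """Alternate between (1) generating candidate prefixes and
    (2) keeping the candidate that best balances refusal and task performance."""
    best_prefix, best_score = "", float("-inf")
    for _ in range(n_iterations):
        # (1) generate a fresh pool of natural-language prefix candidates
        for prefix in generate_candidate_prefixes(n_candidates):
            # (2) score each candidate on both objectives and keep the best so far
            score = alpha * refusal_rate(prefix) + (1 - alpha) * task_success_rate(prefix)
            if score > best_score:
                best_prefix, best_score = prefix, score
    return best_prefix, best_score


# Toy usage with stub scorers (stand-ins for real LLM-based evaluation):
candidates = ["I must first verify this request is safe.", "Let me plan the task step by step."]
prefix, score = select_prefix(
    generate_candidate_prefixes=lambda n: candidates[:n],
    refusal_rate=lambda p: 0.9 if "safe" in p else 0.2,
    task_success_rate=lambda p: 0.8,
)
print(prefix, score)
```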


Key Contributions

  • Demonstrates empirically that fine-tuning aligned LLMs on benign agentic datasets (web navigation, code generation) causes unintended safety misalignment, increasing harmful task completion rates (e.g., +38.09% attack success on WebDojo for Llama-3.1-8B-Instruct)
  • Proposes PING (Prefix INjection Guard), an iterative prefix generation-and-selection method that prepends auto-generated natural language prefixes to agent responses to restore refusal behavior while preserving benign task performance
  • Analyzes internal hidden states via linear probes and activation steering, revealing that prefix tokens mechanistically shift model activations toward refusal behavior (a minimal probe sketch follows below)
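The sketch below shows what a linear-probe analysis of this kind typically looks like: fit a linear classifier that predicts refusal vs. compliance from a layer's hidden states. The probing setup, layer choice, and variable names are assumptions for illustration, not the paper's exact experimental code.

```python
# Minimal linear-probe sketch: does a chosen layer linearly encode refusal behavior?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def fit_refusal_probe(hidden_states: np.ndarray, is_refusal: np.ndarray) -> float:
    """Fit a linear probe that predicts refusal from hidden states.

    hidden_states: (n_examples, hidden_dim) activations at a chosen layer/token.
    is_refusal:    (n_examples,) binary labels (1 = refusal, 0 = compliance).
    Returns held-out accuracy; a high score suggests the layer linearly separates
    refusal from compliance, which is how a prefix-induced shift can be measured.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, is_refusal, test_size=0.2, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return probe.score(X_test, y_test)


# Example with random placeholder data (stand-in for real model activations):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.normal(size=(200, 4096))      # fake hidden states
    labels = rng.integers(0, 2, size=200)    # fake refusal labels
    print(f"probe accuracy: {fit_refusal_probe(acts, labels):.2f}")
```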

🛡️ Threat Analysis

Transfer Learning Attack

The core finding is that fine-tuning aligned LLMs on benign agentic datasets unintentionally degrades safety alignment, a transfer-learning vulnerability arising from the gap between the pre-training safety-alignment distribution and the agentic fine-tuning distribution. PING is a direct defense against this fine-tuning-induced misalignment.
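At inference time the defense amounts to prepending the selected prefix to the assistant turn so the fine-tuned agent continues its response from it. The sketch below illustrates this, assuming a Hugging Face transformers chat model; the model name matches one evaluated in the paper, but the specific prefix string and prompt construction are assumptions, not released artifacts.

```python
# Illustrative prefix injection at inference time (assumed setup, not the paper's code).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # model evaluated in the paper
GUARD_PREFIX = "Before acting, I will check whether this request is harmful. "  # hypothetical prefix


def generate_with_prefix(user_request: str, max_new_tokens: int = 256) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype="auto", device_map="auto")

    # Build the chat prompt, then append the guard prefix to the assistant turn so
    # the model continues from the prefix rather than starting its reply from scratch.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": user_request}],
        tokenize=False,
        add_generation_prompt=True,
    ) + GUARD_PREFIX

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # The full agent response begins with the injected prefix, followed by the
    # newly generated continuation (prompt tokens are sliced off before decoding).
    continuation = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return GUARD_PREFIX + continuation
```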


Details

Domains
nlp
Model Types
llm
Threat Tags
training_time, inference_time, black_box
Datasets
WebArena-lite, MINT-ALFWorld, RedCode-Exec, WebDojo
Applications
web navigation agents, code generation agents, llm agents