defense 2025

Defending Against Prompt Injection with DataFilter

Yizhu Wang 1, Sizhe Chen 1, Raghad Alkhudair 2, Basel Alomair 2, David Wagner 1

9 citations · 66 references · arXiv

α

Published on arXiv

2510.19207

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

DataFilter reduces prompt injection attack success rates to near zero across multiple benchmarks while preserving LLM utility in a plug-and-play, model-agnostic deployment.

DataFilter

Novel technique introduced


When large language model (LLM) agents are increasingly deployed to automate tasks and interact with untrusted external data, prompt injection emerges as a significant security threat. By injecting malicious instructions into the data that LLMs access, an attacker can arbitrarily override the original user task and redirect the agent toward unintended, potentially harmful actions. Existing defenses either require access to model weights (fine-tuning), incur substantial utility loss (detection-based), or demand non-trivial system redesign (system-level). Motivated by this, we propose DataFilter, a test-time model-agnostic defense that removes malicious instructions from the data before it reaches the backend LLM. DataFilter is trained with supervised fine-tuning on simulated injections and leverages both the user's instruction and the data to selectively strip adversarial content while preserving benign information. Across multiple benchmarks, DataFilter consistently reduces the prompt injection attack success rates to near zero while maintaining the LLMs' utility. DataFilter delivers strong security, high utility, and plug-and-play deployment, making it a strong practical defense to secure black-box commercial LLMs against prompt injection. Our DataFilter model is released at https://huggingface.co/JoyYizhu/DataFilter for immediate use, with the code to reproduce our results at https://github.com/yizhu-joy/DataFilter.


Key Contributions

  • DataFilter: a test-time, model-agnostic defense that removes malicious instructions from external data before it reaches the backend LLM, requiring no access to backend model weights
  • Supervised fine-tuning approach on simulated prompt injections that leverages both user instruction and retrieved data to selectively strip adversarial content while preserving benign information
  • Demonstrated near-zero attack success rates across multiple benchmarks while maintaining high utility on black-box commercial LLMs

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_time
Datasets
AgentBenchInjecAgent
Applications
llm agentsrag systemsautomated task agents interacting with untrusted external data