defense 2026

Causality Laundering: Denial-Feedback Leakage in Tool-Calling LLM Agents

Mohammad Hossein Chinaei


Published on arXiv

2604.04035

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Key Finding

ARM blocks causality laundering, transitive taint propagation, and mixed-provenance field misuse attacks with sub-millisecond policy evaluation overhead

Agentic Reference Monitor (ARM)

Novel technique introduced


Tool-calling LLM agents can read private data, invoke external services, and trigger real-world actions, creating a security problem at the point of tool execution. We identify a denial-feedback leakage pattern, which we term causality laundering, in which an adversary probes a protected action, learns from the denial outcome, and exfiltrates the inferred information through a later seemingly benign tool call. This attack is not captured by flat provenance tracking alone because the leaked information arises from causal influence of the denied action, not direct data flow. We present the Agentic Reference Monitor (ARM), a runtime enforcement layer that mediates every tool invocation by consulting a provenance graph over tool calls, returned data, field-level provenance, and denied actions. ARM propagates trust through an integrity lattice and augments the graph with counterfactual edges from denied-action nodes, enabling enforcement over both transitive data dependencies and denial-induced causal influence. In a controlled evaluation on three representative attack scenarios, ARM blocks causality laundering, transitive taint propagation, and mixed-provenance field misuse that a flat provenance baseline misses, while adding sub-millisecond policy evaluation overhead. These results suggest that denial-aware causal provenance is a useful abstraction for securing tool-calling agent systems.
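The denial-aware provenance mechanism the abstract describes can be sketched minimally. This is an illustrative toy under assumed names (`ReferenceMonitor`, `record`, `authorize` are not the paper's API): a two-point integrity lattice, taint propagated as the meet over parent nodes, and denied-action nodes whose outgoing ("counterfactual") edges taint any downstream call they causally influence.

```python
# Hypothetical sketch of a denial-aware provenance monitor in the spirit of ARM.
# All class and method names are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field
from enum import IntEnum

class Integrity(IntEnum):
    # Minimal two-point integrity lattice: TRUSTED > TAINTED.
    TAINTED = 0
    TRUSTED = 1

@dataclass
class Node:
    node_id: str
    integrity: Integrity
    denied: bool = False                      # True for denied-action nodes
    parents: list = field(default_factory=list)

class ReferenceMonitor:
    def __init__(self):
        self.nodes = {}

    def record(self, node_id, integrity=Integrity.TRUSTED,
               parents=(), denied=False):
        # Integrity is the meet (minimum) over all parents, so taint
        # propagates transitively through the provenance graph.
        node = Node(node_id, integrity, denied, list(parents))
        for p in parents:
            parent = self.nodes[p]
            node.integrity = min(node.integrity, parent.integrity)
            if parent.denied:
                # Counterfactual edge: this outcome was causally
                # influenced by a denial, so it is tainted even though
                # no protected data flowed directly.
                node.integrity = Integrity.TAINTED
        self.nodes[node_id] = node
        return node

    def authorize(self, call_id, parents):
        # Allow a tool call only if no ancestor is tainted or
        # denial-influenced (checked via recorded integrity).
        node = self.record(call_id, parents=parents)
        return node.integrity == Integrity.TRUSTED

# Causality-laundering scenario: the agent probes a protected action,
# is denied, branches on the denial outcome, then attempts exfiltration
# through a seemingly benign later call.
arm = ReferenceMonitor()
arm.record("probe_secret", denied=True)                   # denied action
arm.record("branch_on_denial", parents=["probe_secret"])  # inference from outcome
allowed = arm.authorize("send_email", parents=["branch_on_denial"])
print(allowed)  # the laundered call is blocked: False
```

A flat provenance baseline would miss this case because no protected data flows into `send_email`; only the counterfactual edge from the denied-action node carries the causal influence forward.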


Key Contributions

  • Identifies 'causality laundering' attack pattern where adversaries exfiltrate information by probing denied actions and inferring from denial outcomes
  • Proposes Agentic Reference Monitor (ARM) that tracks field-level provenance and denial-induced causal influence across tool calls
  • Demonstrates sub-millisecond overhead while blocking transitive taint propagation and mixed-provenance attacks missed by flat provenance baselines

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Applications
llm agents, tool-calling systems, function calling