survey 2026

Safety, Security, and Cognitive Risks in World Models

Manoj Parmar

Published on arXiv (arXiv:2604.01346)

OWASP Mappings

  • Input Manipulation Attack — OWASP ML Top 10, ML01
  • Data Poisoning Attack — OWASP ML Top 10, ML02
  • Model Poisoning — OWASP ML Top 10, ML10
  • Prompt Injection — OWASP LLM Top 10, LLM01
  • Excessive Agency — OWASP LLM Top 10, LLM08

Key Finding

Trajectory-persistent adversarial attacks achieve a 2.26x amplification ratio on GRU-RSSM world models, reduced by 59.5% under PGD-10 adversarial fine-tuning; architecture-dependent validation shows 0.65x amplification on a stochastic RSSM.


World models -- learned internal simulators of environment dynamics -- are rapidly becoming foundational to autonomous decision-making in robotics, autonomous vehicles, and agentic AI. Yet this predictive power introduces a distinctive set of safety, security, and cognitive risks. Adversaries can corrupt training data, poison latent representations, and exploit compounding rollout errors to cause catastrophic failures in safety-critical deployments. World model-equipped agents are more capable of goal misgeneralisation, deceptive alignment, and reward hacking precisely because they can simulate the consequences of their own actions. Authoritative world model predictions further foster automation bias and miscalibrated human trust that operators lack the tools to audit. This paper surveys the world model landscape; introduces formal definitions of trajectory persistence and representational risk; presents a five-profile attacker capability taxonomy; and develops a unified threat model extending MITRE ATLAS and the OWASP LLM Top 10 to the world model stack. We provide an empirical proof-of-concept on trajectory-persistent adversarial attacks (GRU-RSSM: A_1 = 2.26x amplification, reduced by 59.5% under adversarial fine-tuning; stochastic RSSM proxy: A_1 = 0.65x; DreamerV3 checkpoint: non-zero action drift confirmed). We illustrate risks through four deployment scenarios and propose interdisciplinary mitigations spanning adversarial hardening, alignment engineering, NIST AI RMF and EU AI Act governance, and human-factors design. We argue that world models must be treated as safety-critical infrastructure requiring the same rigour as flight-control software or medical devices.


Key Contributions

  • Unified threat model extending MITRE ATLAS and OWASP LLM Top 10 to world model architectures with five-profile attacker capability taxonomy
  • Formal definitions of trajectory persistence (A_k) and representational risk R(θ,D) for compounding rollout dynamics
  • Empirical proof-of-concept demonstrating trajectory-persistent adversarial attacks (A_1 = 2.26x amplification on GRU-RSSM, validated on DreamerV3)
  • Interdisciplinary mitigation framework spanning adversarial hardening, alignment engineering, NIST AI RMF/EU AI Act governance, and human-factors design
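The amplification ratio A_k named in the contributions above can be illustrated with a toy rollout. This is a minimal sketch assuming A_k is the k-step rollout divergence normalized by the initial perturbation norm -- an inference from the text, not the paper's exact definition; the tanh-linear dynamics and every name here are illustrative, not the GRU-RSSM models actually evaluated.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.7, size=(4, 4))  # toy latent dynamics matrix

def step(s):
    """One rollout step of a toy deterministic world model."""
    return np.tanh(W @ s)

def amplification(s0, delta, k):
    """A_k: divergence after k rollout steps, relative to the
    initial perturbation norm (assumed definition)."""
    clean, adv = s0.copy(), s0 + delta
    for _ in range(k):
        clean, adv = step(clean), step(adv)
    return np.linalg.norm(adv - clean) / np.linalg.norm(delta)

s0 = rng.normal(size=4)
delta = 1e-3 * rng.normal(size=4)  # small input perturbation
print([round(amplification(s0, delta, k), 3) for k in (1, 5, 10)])
```

A ratio above 1 means the perturbation compounds through the rollout rather than washing out, which is the failure mode the paper's GRU-RSSM result (A_1 = 2.26x) demonstrates.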

🛡️ Threat Analysis

Input Manipulation Attack

The paper analyzes adversarial attacks on world models, including trajectory-persistent perturbations that compound across multi-step rollouts, with an empirical proof-of-concept (2.26x amplification ratio).
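The compounding behaviour described above can be sketched as a search for a worst-case input perturbation within a small L-infinity ball. The paper's attacks are gradient-based against GRU-RSSM models; the gradient-free random search below is a stand-in that only illustrates the objective, and the toy dynamics and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.7, size=(4, 4))  # toy latent dynamics matrix

def rollout(s, k):
    """Roll a toy deterministic world model forward k steps."""
    for _ in range(k):
        s = np.tanh(W @ s)
    return s

def attack(s0, eps=0.05, k=10, trials=200):
    """Gradient-free stand-in for a trajectory-persistent attack:
    sample perturbations with ||delta||_inf <= eps and keep the one
    causing the largest k-step rollout divergence."""
    clean = rollout(s0, k)
    best, best_div = np.zeros_like(s0), 0.0
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=s0.shape)
        div = np.linalg.norm(rollout(s0 + delta, k) - clean)
        if div > best_div:
            best, best_div = delta, div
    return best, best_div

s0 = rng.normal(size=4)
delta, div = attack(s0)
print(f"divergence after 10 steps: {div:.3f}")
```

The key point is the objective: the attacker optimizes for divergence at the end of the rollout rather than at the perturbed step, which is what makes the attack trajectory-persistent.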

Data Poisoning Attack

The paper explicitly addresses corruption of world model training data and data poisoning attacks that degrade model performance in safety-critical deployments.

Model Poisoning

The paper discusses poisoning of latent representations and the embedding of hidden malicious behaviour in world models, as distinct from general data poisoning.


Details

Domains
  reinforcement-learning, multimodal, vision, nlp
Model Types
  rl, transformer, rnn, llm
Threat Tags
  training_time, inference_time, white_box, black_box
Applications
  autonomous driving, robotics, agentic ai systems, video generation