α

Published on arXiv

2604.03070

AI Supply Chain Attacks

OWASP ML Top 10 — ML06

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Key Finding

Found 520 vulnerable skills with 1,708 credential leakage issues across 17,022 analyzed skills; debug logging (print/console.log) caused 73.5% of leaks via stdout exposure


Third-party skills extend LLM agents with powerful capabilities but often handle sensitive credentials in privileged environments, making leakage risks poorly understood. We present the first large-scale empirical study of this problem, analyzing 17,022 skills (sampled from 170,226 on SkillsMP) using static analysis, sandbox testing, and manual inspection. We identify 520 vulnerable skills with 1,708 issues and derive a taxonomy of 10 leakage patterns (4 accidental and 6 adversarial). We find that (1) leakage is fundamentally cross-modal: 76.3% require joint analysis of code and natural language, while 3.1% arise purely from prompt injection; (2) debug logging is the primary vector, with print and console.log causing 73.5% of leaks due to stdout exposure to LLMs; and (3) leaked credentials are both exploitable (89.6% without privileges) and persistent, as forks retain secrets even after upstream fixes. After disclosure, all malicious skills were removed and 91.6% of hardcoded credentials were fixed. We release our dataset, taxonomy, and detection pipeline to support future research.


Key Contributions

  • First large-scale empirical study of credential leakage in LLM agent skills, analyzing 17,022 skills from SkillsMP
  • Taxonomy of 10 credential leakage patterns (4 accidental, 6 adversarial) with finding that 76.3% require cross-modal code+NL analysis
  • Detection pipeline and dataset released; responsible disclosure led to removal of all malicious skills and 91.6% fix rate for hardcoded credentials

🛡️ Threat Analysis

AI Supply Chain Attacks

Study identifies vulnerabilities in third-party LLM agent skills distributed via SkillsMP marketplace — skills with hardcoded credentials, malicious code, and insecure implementations represent supply chain threats. The paper documents trojaned/malicious skills that were removed after disclosure.


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Datasets
SkillsMP
Applications
llm agent skillsthird-party pluginstool calling