
Red-Teaming Claude Opus and ChatGPT-based Security Advisors for Trusted Execution Environments

Kunal Mukherjee

1 citation · 30 references · arXiv (Cornell University)


Published on arXiv

2602.19450

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Prompt-induced failures transfer up to 12.02% across LLM assistants, and a combined defense pipeline reduces failures by 80.62%

TEE-RedBench

Novel technique introduced


Trusted Execution Environments (TEEs) (e.g., Intel SGX and Arm TrustZone) aim to protect sensitive computation from a compromised operating system, yet real deployments remain vulnerable to microarchitectural leakage, side-channel attacks, and fault injection. In parallel, security teams increasingly rely on Large Language Model (LLM) assistants as security advisors for TEE architecture review, mitigation planning, and vulnerability triage. This creates a socio-technical risk surface: assistants may hallucinate TEE mechanisms, overclaim guarantees (e.g., what attestation does and does not establish), or behave unsafely under adversarial prompting. We present a red-teaming study of two widely deployed LLM assistants in the role of TEE security advisors, ChatGPT-5.2 and Claude Opus-4.6, focusing on the inherent limitations and transferability of prompt-induced failures across LLMs. We introduce TEE-RedBench, a TEE-grounded evaluation methodology comprising (i) a TEE-specific threat model for LLM-mediated security work, (ii) a structured prompt suite spanning SGX and TrustZone architecture, attestation and key management, threat modeling, and non-operational mitigation guidance, along with policy-bound misuse probes, and (iii) an annotation rubric that jointly measures technical correctness, groundedness, uncertainty calibration, refusal quality, and safe helpfulness. We find that some failures are not purely idiosyncratic, transferring up to 12.02% across LLM assistants, and we connect these outcomes to secure architecture by outlining an "LLM-in-the-loop" evaluation pipeline: policy gating, retrieval grounding, structured templates, and lightweight verification checks that, when combined, reduce failures by 80.62%.


Key Contributions

  • TEE-RedBench: structured evaluation methodology with TEE-specific threat model, prompt suite spanning SGX/TrustZone topics, and annotation rubric measuring technical correctness, uncertainty calibration, refusal quality, and safe helpfulness
  • Empirical red-teaming study of Claude Opus and ChatGPT as TEE security advisors, finding that prompt-induced failures transfer up to 12.02% across LLM assistants
  • LLM-in-the-loop defense pipeline (policy gating, retrieval grounding, structured templates, lightweight verification) that reduces failures by 80.62%
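The staged defense pipeline above can be sketched in Python. This is a minimal illustrative mock, not the paper's implementation: every function name, the policy patterns, and the keyword-overlap retrieval are assumptions standing in for the real policy gate, retrieval-grounding, templating, and verification components.

```python
# Hypothetical sketch of an "LLM-in-the-loop" defense pipeline:
# policy gating -> retrieval grounding -> structured template -> verification.
# All names and heuristics here are illustrative assumptions.
from dataclasses import dataclass, field

# Assumed misuse-policy patterns for the policy gate (placeholder examples).
BLOCKED_PATTERNS = ("exploit code", "bypass attestation")

@dataclass
class AdvisorResult:
    answer: str
    refused: bool = False
    checks_passed: bool = False
    sources: list = field(default_factory=list)

def policy_gate(prompt: str) -> bool:
    """Stage 1: refuse prompts matching misuse-policy patterns."""
    return not any(p in prompt.lower() for p in BLOCKED_PATTERNS)

def retrieve_grounding(prompt: str, corpus: dict) -> list:
    """Stage 2: attach TEE documentation snippets whose key terms
    appear in the prompt (toy keyword-overlap retrieval)."""
    terms = set(prompt.lower().split())
    return [doc for key, doc in corpus.items() if key in terms]

def fill_template(prompt: str, sources: list) -> str:
    """Stage 3: force a structured answer (claim + grounding + caveats)."""
    cites = "; ".join(sources) if sources else "no grounding found"
    return (f"Claim: <model answer to: {prompt}>\n"
            f"Grounding: {cites}\n"
            f"Caveats: verify against vendor documentation.")

def verify(answer: str) -> bool:
    """Stage 4: lightweight check that the answer cites real grounding."""
    return "Grounding:" in answer and "no grounding found" not in answer

def run_pipeline(prompt: str, corpus: dict) -> AdvisorResult:
    if not policy_gate(prompt):
        return AdvisorResult(
            answer="Refused: request conflicts with the use policy.",
            refused=True)
    sources = retrieve_grounding(prompt, corpus)
    answer = fill_template(prompt, sources)
    return AdvisorResult(answer=answer,
                         checks_passed=verify(answer),
                         sources=sources)
```

Under this sketch, a benign attestation question passes the gate, gets grounded, and clears verification, while a misuse probe is refused at the first stage; the point is that each stage can fail independently, which is what lets the combined pipeline catch failures a single check would miss.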

🛡️ Threat Analysis


Details

Domains: nlp
Model Types: llm
Threat Tags: black_box, inference_time
Datasets: TEE-RedBench (custom prompt suite)
Applications: TEE security advisory, vulnerability triage, security architecture review