
Safe2Harm: Semantic Isomorphism Attacks for Jailbreaking Large Language Models

Fan Yang

0 citations · 39 references · arXiv


Published on arXiv · 2512.13703

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Safe2Harm outperforms existing jailbreak methods (GCG, PAIR, AutoDAN, AutoDAN-Turbo, JAIL-CON) across 7 mainstream LLMs and three benchmark datasets, while also exposing weaknesses in current harmful-content detection methods.

Safe2Harm (Semantic Isomorphism Attack)

Novel technique introduced


Abstract

Large Language Models (LLMs) have demonstrated exceptional performance across various tasks, but their security vulnerabilities can be exploited by attackers to generate harmful content, causing adverse impacts across many societal domains. Most existing jailbreak methods revolve around prompt engineering or adversarial optimization, yet we identify a previously overlooked phenomenon: many harmful scenarios are highly consistent with legitimate ones in their underlying principles. Based on this finding, this paper proposes the Safe2Harm Semantic Isomorphism Attack, which achieves efficient jailbreaking in four stages: first, the harmful question is rewritten into a semantically safe question with similar underlying principles; second, the thematic mapping between the two is extracted; third, the LLM generates a detailed response to the safe question; finally, the safe response is reverse-rewritten via the thematic mapping to recover a harmful output. Experiments on 7 mainstream LLMs and three benchmark datasets show that Safe2Harm exhibits strong jailbreaking capability and overall outperforms existing methods. Additionally, we construct a challenging harmful content evaluation dataset containing 358 samples and evaluate the effectiveness of existing harmful-content detection methods, which can be deployed as LLM input-output filters for defense.


Key Contributions

  • Safe2Harm: a four-stage semantic isomorphism jailbreak pipeline that rewrites harmful questions into structurally equivalent safe questions, generates safe responses, then reverse-maps them to harmful outputs
  • Identification of a novel attack surface: semantic isomorphism between harmful and legitimate scenarios that existing defenses overlook
  • A challenging 358-sample harmful content evaluation dataset for benchmarking LLM input-output filtering and detection methods
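The defense side discussed in the paper, deploying harmful-content detectors as LLM input-output filters, can be sketched as a minimal wrapper. This is an illustrative sketch only: the paper does not specify an implementation, and the keyword stub below stands in for a trained detector (the kind the 358-sample dataset is meant to benchmark). All names here are hypothetical.

```python
# Minimal sketch of LLM input-output filtering for defense, as discussed in
# the paper. The detector below is a trivial keyword stub; a real deployment
# would substitute a trained harmful-content classifier. All identifiers are
# illustrative assumptions, not taken from the paper.
from typing import Callable

REFUSAL = "Request blocked by content filter."

def is_harmful(text: str) -> bool:
    """Placeholder detector: a real system would call a trained classifier."""
    blocklist = ("build a weapon", "synthesize the toxin")
    lowered = text.lower()
    return any(phrase in lowered for phrase in blocklist)

def filtered_chat(prompt: str, model: Callable[[str], str]) -> str:
    """Screen both the user prompt and the model's response."""
    if is_harmful(prompt):       # input-side filter
        return REFUSAL
    response = model(prompt)
    if is_harmful(response):     # output-side filter: relevant to Safe2Harm-style
        return REFUSAL           # attacks, where the prompt itself looks benign
    return response

# Usage with a dummy model that just echoes its input:
echo = lambda p: f"echo: {p}"
print(filtered_chat("How do plants grow?", echo))
```

Output-side screening matters here because Safe2Harm's prompts are semantically safe by construction; only the final reverse-mapped response is harmful, so an input-only filter would miss it.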

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time · targeted
Datasets
HarmBench · custom 358-sample harmful content dataset
Applications
llm chatbots · llm safety alignment · content moderation filters