attack 2026

Malicious Repurposing of Open Science Artefacts by Using Large Language Models

Zahra Hashemi 1, Zhiqiang Zhong 1, Jun Pang 1, Wei Zhao 2

0 citations · 40 references · arXiv

α

Published on arXiv

2601.18998

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

LLMs can be jailbroken via persuasion to generate harmful misuse proposals from ethically designed open science artefacts, but LLM-as-judge evaluators diverge substantially — GPT-4.1 rates proposals as highly harmful while Gemini-2.5-pro is markedly stricter — making human evaluation essential for credible dual-use risk assessment.


The rapid evolution of large language models (LLMs) has fuelled enthusiasm about their role in advancing scientific discovery, with studies exploring LLMs that autonomously generate and evaluate novel research ideas. However, little attention has been given to the possibility that such models could be exploited to produce harmful research by repurposing open science artefacts for malicious ends. We fill the gap by introducing an end-to-end pipeline that first bypasses LLM safeguards through persuasion-based jailbreaking, then reinterprets NLP papers to identify and repurpose their artefacts (datasets, methods, and tools) by exploiting their vulnerabilities, and finally assesses the safety of these proposals using our evaluation framework across three dimensions: harmfulness, feasibility of misuse, and soundness of technicality. Overall, our findings demonstrate that LLMs can generate harmful proposals by repurposing ethically designed open artefacts; however, we find that LLMs acting as evaluators strongly disagree with one another on evaluation outcomes: GPT-4.1 assigns higher scores (indicating greater potential harms, higher soundness and feasibility of misuse), Gemini-2.5-pro is markedly stricter, and Grok-3 falls between these extremes. This indicates that LLMs cannot yet serve as reliable judges in a malicious evaluation setup, making human evaluation essential for credible dual-use risk assessment.


Key Contributions

  • End-to-end pipeline combining persuasion-based jailbreaking with LLM-driven reinterpretation of open NLP artefacts (datasets, methods, tools) to generate harmful research proposals
  • Evaluation framework assessing generated proposals across three dimensions: harmfulness, feasibility of misuse, and soundness of technicality
  • Empirical finding that LLM evaluators (GPT-4.1, Gemini-2.5-pro, Grok-3) strongly disagree on harm assessments, demonstrating LLMs are unreliable judges for dual-use risk evaluation

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Applications
llm safety systemsopen science platformsscientific research pipelines