Philip Torr

h-index: 17 · 919 citations · 54 papers (total)

Papers in Database (4)

defense · EMNLP · Nov 1, 2025

Reimagining Safety Alignment with An Image

Yifan Xia, Guorui Chen, Wenqian Yu et al. · Wuhan University · University of Oxford

Defends MLLMs against jailbreaks and over-refusal by optimizing an adversarial-style image prompt as a parameter-free safety alignment mechanism

Input Manipulation Attack · Prompt Injection · nlp · multimodal · vision
2 citations · 1 influential · PDF · Code
attack · arXiv · Oct 13, 2025

Bag of Tricks for Subverting Reasoning-based Safety Guardrails

Shuo Chen, Zhen Han, Haokun Chen et al. · LMU Munich · Siemens +5 more

Jailbreaks reasoning-based LLM safety guardrails via template tricks and white-box optimization, exceeding 90% attack success rate

Input Manipulation Attack · Prompt Injection · nlp
1 citation · PDF · Code
attack · arXiv · Oct 13, 2025

Deep Research Brings Deeper Harm

Shuo Chen, Zonggen Li, Zhen Han et al. · LMU Munich · Siemens +6 more

Proposes two jailbreak attacks on LLM research agents — plan injection and intent hijack — that bypass alignment to produce dangerous biosecurity reports

Prompt Injection · Excessive Agency · nlp
PDF Code
attack · arXiv · Feb 15, 2026

SkillJect: Automating Stealthy Skill-Based Prompt Injection for Coding Agents with Trace-Driven Closed-Loop Refinement

Xiaojun Jia, Jie Liao, Simeng Qin et al. · Nanyang Technological University · Chongqing University +4 more

An automated framework that crafts stealthy skill-based prompt injections against LLM coding agents, refining payloads in a closed loop driven by execution traces

Prompt Injection · Insecure Plugin Design · nlp
PDF