Latest papers

6 papers
defense arXiv Jan 13, 2026 · 11w ago

STAR: Detecting Inference-time Backdoors in LLM Reasoning via State-Transition Amplification Ratio

Seong-Gyu Park, Sohee Park, Jisu Lee et al. · Soongsil University

Detects inference-time backdoor triggers in LLM Chain-of-Thought reasoning via output probability shift analysis, achieving AUROC ≈ 1.0

Model Poisoning Prompt Injection nlp
PDF
benchmark arXiv Nov 18, 2025 · Nov 2025

Beyond Fixed and Dynamic Prompts: Embedded Jailbreak Templates for Advancing LLM Security

Hajun Kim, Hyunsik Na, Daeseon Choi · Soongsil University

Proposes Embedded Jailbreak Templates that naturally integrate harmful queries into existing prompt structures for more realistic LLM red-teaming benchmarks

Prompt Injection nlp
PDF
defense arXiv Nov 2, 2025 · Nov 2025

EraseFlow: Learning Concept Erasure Policies via GFlowNet-Driven Alignment

Abhiram Kusumba, Maitreya Patel, Kyle Min et al. · Capital One · Arizona State University +2 more

GFlowNet-based concept erasure for diffusion models, robust to adversarial bypass attacks, without requiring crafted reward models

Output Integrity Attack Input Manipulation Attack visiongenerative
1 citations 1 influentialPDF
attack arXiv Oct 31, 2025 · Oct 2025

Self-HarmLLM: Can Large Language Model Harm Itself?

Heehwan Kim, Sungjune Park, Daeseon Choi · Soongsil University

Novel jailbreak attack where an LLM generates obfuscated harmful queries that bypass its own guardrails when re-entered in a new session

Prompt Injection nlp
PDF
benchmark arXiv Sep 7, 2025 · Sep 2025

Multimodal Prompt Injection Attacks: Risks and Defenses for Modern LLMs

Andrew Yeo, Daeseon Choi · Ranchview High School · Soongsil University

Benchmarks eight commercial LLMs against four prompt injection attack types, finding Claude 3 most robust but all models exploitable

Prompt Injection Sensitive Information Disclosure nlpmultimodal
PDF
attack arXiv Aug 13, 2025 · Aug 2025

IPG: Incremental Patch Generation for Generalized Adversarial Patch Training

Wonho Lee, Hyunsik Na, Jisu Lee et al. · Soongsil University

Proposes IPG to generate adversarial patches 11x faster for object detection, enabling efficient adversarial patch training

Input Manipulation Attack vision
PDF