Latest papers

23 papers
attack arXiv Apr 2, 2026 · 4d ago

CRaFT: Circuit-Guided Refusal Feature Selection via Cross-Layer Transcoders

Su-Hyeon Kim, Hyundong Jin, Yejin Lee et al. · Yonsei University

Circuit-guided feature selection for LLM jailbreaking that identifies causal refusal features via cross-layer transcoders and boundary prompts

Prompt Injection nlp
PDF
attack arXiv Feb 16, 2026 · 7w ago

Overthinking Loops in Agents: A Structural Risk via MCP Tools

Yohan Lee, Jisoo Jang, Seoyeon Choi et al. · Yonsei University · Hankuk University of Foreign Studies +1 more

Malicious MCP tool servers induce overthinking loops in LLM agents, achieving up to 142× token amplification via crafted tool call cycles

Model Denial of Service Insecure Plugin Design nlp
PDF
defense arXiv Feb 16, 2026 · 7w ago

Universal Image Immunization against Diffusion-based Image Editing via Semantic Injection

Chanhui Lee, Seunghyun Shin, Donggyu Choi et al. · POSTECH AI Graduate School · GIST AI Graduate School +1 more

Proposes universal adversarial perturbation that immunizes images against diffusion-based deepfake editing via semantic injection

Output Integrity Attack visiongenerative
PDF
attack arXiv Feb 6, 2026 · 8w ago

Temperature Scaling Attack Disrupting Model Confidence in Federated Learning

Kichang Lee, Jaeho Jin, JaeYeon Park et al. · Yonsei University · Dankook University

Proposes a federated learning attack that corrupts model confidence calibration via temperature scaling while evading accuracy-based defenses

Data Poisoning Attack federated-learningvisionnlptimeseries
PDF Code
defense arXiv Jan 30, 2026 · 9w ago

dgMARK: Decoding-Guided Watermarking for Diffusion Language Models

Pyo Min Hong, Albert No · Hongik University · Yonsei University

Watermarks discrete diffusion LLM outputs by steering token unmasking order via parity constraints, enabling robust AI-text provenance tracking

Output Integrity Attack nlpgenerative
PDF
attack arXiv Jan 20, 2026 · 10w ago

LURE: Latent Space Unblocking for Multi-Concept Reawakening in Diffusion Models

Mengyu Sun, Ziyuan Yang, Andrew Beng Jin Teoh et al. · Sichuan University · The Hong Kong Polytechnic University +1 more

Attacks concept erasure defenses in diffusion models by reconstructing latent space to reawaken multiple suppressed concepts simultaneously

Input Manipulation Attack visiongenerative
PDF Code
attack arXiv Jan 16, 2026 · 11w ago

Gap-K%: Measuring Top-1 Prediction Gap for Detecting Pretraining Data

Minseo Kwak, Jaehyung Kim · Yonsei University

Novel LLM membership inference attack using top-1 prediction probability gaps and sliding window correlation to detect pretraining data

Membership Inference Attack nlp
PDF
defense arXiv Jan 7, 2026 · 12w ago

How Does the Thinking Step Influence Model Safety? An Entropy-based Safety Reminder for LRMs

Su-Hyeon Kim, Hyundong Jin, Yejin Lee et al. · Yonsei University

Defends LLMs against jailbreaks by injecting entropy-triggered safe-reminding phrases into reasoning model thinking steps at inference time

Prompt Injection nlp
PDF
defense arXiv Dec 27, 2025 · Dec 2025

Verifiable Dropout: Turning Randomness into a Verifiable Claim

Kichang Lee, Sungmin Lee, Jaeho Jin et al. · Yonsei University

Zero-knowledge proofs bind dropout masks to verifiable seeds, closing the plausible-deniability gap exploited by malicious cloud training providers

AI Supply Chain Attacks
PDF
attack arXiv Dec 1, 2025 · Dec 2025

DPAC: Distribution-Preserving Adversarial Control for Diffusion Sampling

Han-Jin Lee, Han-Ju Lee, Jin-Seong Kim et al. · Yonsei University

Improves diffusion-guided adversarial example generation by projecting attack gradients onto score tangent space, preserving sample quality at matched attack success rates

Input Manipulation Attack visiongenerative
PDF
defense arXiv Nov 11, 2025 · Nov 2025

WaterMod: Modular Token-Rank Partitioning for Probability-Balanced LLM Watermarking

Shinwoo Park, Hyejin Park, Hyeseon Ahn et al. · Yonsei University · Rensselaer Polytechnic Institute

Watermarks LLM text outputs via modular token-rank partitioning, supporting binary and multi-bit provenance tracing without fluency loss

Output Integrity Attack nlp
4 citations PDF Code
attack arXiv Nov 3, 2025 · Nov 2025

Align to Misalign: Automatic LLM Jailbreak with Meta-Optimized LLM Judges

Hamin Koo, Minseon Kim, Jaehyung Kim · Yonsei University · Microsoft Research

Meta-optimized bi-level framework co-evolves jailbreak prompts and LLM judge templates to achieve SOTA attack success rates on Claude models

Prompt Injection nlp
1 citations PDF
attack arXiv Oct 13, 2025 · Oct 2025

DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation

Hyeseon An, Shinwoo Park, Suyeon Woo et al. · Yonsei University · Seoul National University

Spoofs LLM watermarks via knowledge distillation, enabling disinformation falsely attributed to trusted models like ChatGPT

Output Integrity Attack nlp
PDF Code
defense arXiv Oct 10, 2025 · Oct 2025

A Linguistics-Aware LLM Watermarking via Syntactic Predictability

Shinwoo Park, Hyejin Park, Hyeseon Ahn et al. · Yonsei University · Rensselaer Polytechnic Institute

Linguistics-aware LLM text watermarking using POS n-gram entropy to balance quality and detectability without model logit access

Output Integrity Attack nlp
PDF Code
benchmark arXiv Sep 30, 2025 · Sep 2025

How Diffusion Models Memorize

Juyeop Kim, Songkuk Kim, Jong-Seok Lee · Yonsei University

Identifies early denoising overestimation as the core mechanism enabling training data memorization in diffusion models

Model Inversion Attack visiongenerative
4 citations PDF
defense arXiv Sep 27, 2025 · Sep 2025

A2D: Any-Order, Any-Step Safety Alignment for Diffusion Language Models

Wonje Jeung, Sangyeon Yoon, Yoonjun Cho et al. · Yonsei University

Token-level safety alignment for diffusion LLMs that blocks any-order jailbreaks and prefilling attacks, cutting DIJA success from 80% to near-zero

Prompt Injection nlpgenerative
2 citations PDF
attack arXiv Sep 26, 2025 · Sep 2025

Jailbreaking on Text-to-Video Models via Scene Splitting Strategy

Wonjun Lee, Haon Park, Doehyeon Lee et al. · Yonsei University · Korea Institute of Science and Technology +3 more

Black-box jailbreak on Text-to-Video models by splitting harmful narratives into benign scenes that collectively bypass safety filters

Prompt Injection generativemultimodal
2 citations PDF
attack arXiv Aug 19, 2025 · Aug 2025

Timestep-Compressed Attack on Spiking Neural Networks through Timestep-Level Backpropagation

Donghwa Kang, Doohyun Kim, Sang-Ki Ko et al. · Korea Advanced Institute of Science and Technology · University of Seoul +1 more

Accelerates gradient-based adversarial attacks on spiking neural networks by 57% via timestep-level backpropagation and membrane potential reuse

Input Manipulation Attack vision
PDF
benchmark arXiv Aug 12, 2025 · Aug 2025

Exploring Cross-Stage Adversarial Transferability in Class-Incremental Continual Learning

Jungwoo Kim, Jong-Seok Lee · Yonsei University

Discovers that adversarial examples from earlier continual-learning stages transfer effectively to later-stage models, exposing a new black-box attack vector in Class-IL

Input Manipulation Attack vision
PDF Code
attack arXiv Aug 5, 2025 · Aug 2025

When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs

Hiskias Dingeto, Taeyoun Kwon, Dasol Choi et al. · AIM Intelligence · Seoul National University +3 more

Two-stage gradient-based attack embeds harmful payloads in benign audio to jailbreak audio-language models via RL-PGD optimization

Input Manipulation Attack Prompt Injection audiomultimodalnlp
PDF
Loading more papers…