Latest papers

26 papers
defense arXiv Mar 31, 2026

CIPHER: Counterfeit Image Pattern High-level Examination via Representation

Kyeonghun Kim, Youngung Han, Seoyoung Ju et al. · OUTTA · Seoul National University

Deepfake detector reusing GAN/diffusion discriminators to identify synthetic faces across nine generative models with 74% F1-score

Output Integrity Attack · vision · generative
PDF
defense arXiv Mar 16, 2026

Two Birds, One Projection: Harmonizing Safety and Utility in LVLMs via Inference-time Feature Projection

Yewon Han, Yumin Seol, EunGyung Kong et al. · Seoul National University · Mobilint

Inference-time defense that projects LVLM cross-modal features to simultaneously improve jailbreak robustness and general task performance

Input Manipulation Attack · Prompt Injection · multimodal · vision · nlp
PDF
survey arXiv Mar 11, 2026

The Attack and Defense Landscape of Agentic AI: A Comprehensive Survey

Juhee Kim, Xiaoyuan Liu, Zhun Wang et al. · University of California · Seoul National University +1 more

Surveys attacks and defenses across agentic LLM systems, covering prompt injection, insecure tool use, and excessive agency risks

Prompt Injection · Insecure Plugin Design · Excessive Agency · nlp · multimodal
PDF
defense arXiv Feb 8, 2026

CausalArmor: Efficient Indirect Prompt Injection Guardrails via Causal Attribution

Minbeom Kim, Mihir Parmar, Phillip Wallis et al. · Google Cloud AI Research · Seoul National University +2 more

Defends LLM tool-calling agents against indirect prompt injection via causal attribution-based dominance shift detection at privileged action points

Prompt Injection · Excessive Agency · nlp
PDF
benchmark arXiv Feb 6, 2026

MPIB: A Benchmark for Medical Prompt Injection Attacks and Clinical Safety in LLMs

Junhyeok Lee, Han Jang, Kyu Sung Choi · Seoul National University College of Medicine · Seoul National University +1 more

Benchmark suite with 9,697 instances measuring prompt injection risk in clinical LLMs via a new clinical harm severity metric

Prompt Injection · nlp
PDF Code
defense arXiv Jan 31, 2026

Inference-Only Prompt Projection for Safe Text-to-Image Generation with TV Guarantees

Minhyuk Lee, Hyekyung Yoon, Myungjoo Kang · Seoul National University

Inference-time prompt projection defense rewrites unsafe T2I prompts into safe ones using LLM+VLM without retraining the generator

Prompt Injection · vision · generative · nlp
PDF
defense arXiv Jan 30, 2026

AlienLM: Alienization of Language for API-Boundary Privacy in Black-Box LLMs

Jaehee Kim, Pilsung Kang · Seoul National University

Obfuscates sensitive prompts via vocabulary bijection before sending to black-box LLM APIs, blocking provider-side plaintext access

Sensitive Information Disclosure · nlp
PDF
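The vocabulary-bijection idea can be illustrated with a toy sketch (an illustration only, not AlienLM's actual scheme; `build_bijection`, `alienize`, and `dealienize` are hypothetical names): the client keeps a secret permutation of the vocabulary, sends the permuted text to the black-box API, and inverts the mapping locally, so the provider never sees plaintext tokens.

```python
import random

def build_bijection(vocab, seed=0):
    """Client-side secret: a seeded random permutation of the vocabulary."""
    shuffled = vocab[:]
    random.Random(seed).shuffle(shuffled)
    forward = dict(zip(vocab, shuffled))
    inverse = {v: k for k, v in forward.items()}
    return forward, inverse

def alienize(text, forward):
    """Replace each known token with its permuted counterpart."""
    return " ".join(forward.get(tok, tok) for tok in text.split())

def dealienize(text, inverse):
    """Invert the permutation to recover the plaintext."""
    return " ".join(inverse.get(tok, tok) for tok in text.split())

# round-trip example over a toy whitespace-token "vocabulary"
vocab = ["patient", "has", "diabetes", "aspirin", "dose"]
fwd, inv = build_bijection(vocab, seed=42)
masked = alienize("patient has diabetes", fwd)
assert dealienize(masked, inv) == "patient has diabetes"
```

A real system would permute subword IDs rather than whitespace tokens, but the privacy boundary is the same: only the bijection holder can read the traffic.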
defense arXiv Jan 19, 2026

PhaseMark: A Post-hoc, Optimization-Free Watermarking of AI-generated Images in the Latent Frequency Domain

Sung Ju Lee, Nam Ik Cho · Seoul National University

Post-hoc watermarking for LDM-generated images via single-shot VAE latent phase modulation, resilient to regeneration attacks

Output Integrity Attack · vision · generative
PDF
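Latent-frequency phase watermarking can be sketched as follows (a simplified zero-bit stand-in under assumed NumPy latents, not PhaseMark's actual algorithm; `embed` and `detect` are hypothetical names): overwrite the phase of a keyed set of spectral coefficients while preserving magnitudes, then detect by measuring phase error at those same keyed positions.

```python
import numpy as np

def _keyed_positions(shape, key_seed, n_coeffs):
    """Distinct keyed coefficient positions in one half-spectrum, plus a secret phase pattern."""
    h, w = shape
    rng = np.random.default_rng(key_seed)
    idx = rng.choice((h // 2 - 1) * (w // 2 - 1), size=n_coeffs, replace=False)
    ys = idx // (w // 2 - 1) + 1
    xs = idx % (w // 2 - 1) + 1
    target = rng.uniform(-np.pi, np.pi, n_coeffs)
    return ys, xs, target

def embed(latent, key_seed=0, n_coeffs=16):
    """Overwrite the phase of keyed coefficients, keeping their magnitudes."""
    F = np.fft.fft2(latent)
    h, w = latent.shape
    ys, xs, target = _keyed_positions(latent.shape, key_seed, n_coeffs)
    F[ys, xs] = np.abs(F[ys, xs]) * np.exp(1j * target)
    F[(-ys) % h, (-xs) % w] = np.conj(F[ys, xs])  # keep the spectrum Hermitian
    return np.real(np.fft.ifft2(F))

def detect(latent, key_seed=0, n_coeffs=16, tol=0.5):
    """Mean phase error at the keyed positions; small error means watermarked."""
    F = np.fft.fft2(latent)
    ys, xs, target = _keyed_positions(latent.shape, key_seed, n_coeffs)
    err = np.abs(np.angle(F[ys, xs] * np.exp(-1j * target)))
    return bool(np.mean(err) < tol)
```

Keeping magnitudes untouched is what makes a pure phase modulation low-distortion; the Hermitian mirror assignment keeps the inverse transform real.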
defense arXiv Dec 14, 2025

Spectral Sentinel: Scalable Byzantine-Robust Decentralized Federated Learning via Sketched Random Matrix Theory on Blockchain

Animesh Mishra · Seoul National University

Spectral Sentinel defends federated learning from Byzantine gradient poisoning using random matrix theory to detect anomalous eigenspectra at billion-parameter scale

Data Poisoning Attack · federated-learning
PDF Code
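The random-matrix intuition behind eigenspectrum-based poisoning detection can be sketched in miniature (a toy stand-in, not the paper's sketched blockchain pipeline; `byzantine_flag` is a hypothetical name): honest i.i.d. gradient noise keeps the client-covariance spectrum inside the Marchenko-Pastur bulk, so a top eigenvalue escaping the bulk edge suggests a coordinated poisoning direction.

```python
import numpy as np

def byzantine_flag(grads, margin=1.3):
    """grads: (n_clients, dim) array of client gradient updates.
    Flags a round when the top eigenvalue of the client covariance
    exceeds the Marchenko-Pastur bulk edge expected under honest,
    i.i.d. gradient noise (with a safety margin)."""
    g = grads - grads.mean(axis=0)           # center across clients
    n, d = g.shape
    cov = (g @ g.T) / d                      # n x n client covariance
    sigma2 = np.median(np.var(g, axis=1))    # robust noise-scale estimate
    mp_edge = sigma2 * (1 + np.sqrt(n / d)) ** 2
    top = np.linalg.eigvalsh(cov)[-1]        # eigvalsh returns ascending order
    return bool(top > margin * mp_edge)
```

At real scale one would sketch the gradients to a low-dimensional projection before the eigendecomposition; the spectral test itself is unchanged.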
defense arXiv Dec 11, 2025

Targeted Data Protection for Diffusion Model by Matching Training Trajectory

Hojun Lee, Mijin Koo, Yeji Song et al. · Xperty Corp. · Seoul National University +1 more

Trajectory-matching adversarial perturbations protect personal images from unauthorized diffusion fine-tuning by redirecting model outputs to user-specified target concepts

Output Integrity Attack · vision · generative
PDF
attack arXiv Nov 17, 2025

Angular Gradient Sign Method: Uncovering Vulnerabilities in Hyperbolic Networks

Minsoo Jo, Dongyoon Yang, Taesup Kim · Seoul National University · SK hynix

Proposes geometry-aware adversarial attack on hyperbolic networks by isolating angular gradient components to maximize semantic misclassification

Input Manipulation Attack · vision · multimodal
PDF
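The angular decomposition can be illustrated in flat Euclidean space (a simplification; the paper works in hyperbolic geometry, and `angular_gradient_sign` is a hypothetical name): project out the gradient's component along the input's radial direction and take an FGSM-style signed step along the remaining angular part, which targets direction (and hence semantics) rather than norm.

```python
import numpy as np

def angular_gradient_sign(x, grad, eps=0.03):
    """FGSM-style step using only the gradient component orthogonal
    to the input's radial direction (a Euclidean stand-in for the
    hyperbolic angular component)."""
    v = x.ravel()
    g = grad.ravel()
    radial = v / (np.linalg.norm(v) + 1e-12)          # unit radial direction
    g_ang = g - np.dot(g, radial) * radial            # remove radial part
    return x + eps * np.sign(g_ang).reshape(x.shape)  # signed angular step
```

As with FGSM, `grad` would be the loss gradient with respect to the input, and the perturbation stays within an eps-bounded L-infinity ball.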
defense arXiv Nov 14, 2025

SP-Guard: Selective Prompt-adaptive Guidance for Safe Text-to-Image Generation

Sumin Yu, Taesup Moon · Seoul National University

Defends T2I diffusion models against generating harmful content from unsafe prompts, using prompt-adaptive guidance strength and selective spatial masking of unsafe regions

Output Integrity Attack · vision · generative
PDF
defense arXiv Oct 15, 2025

Risk-adaptive Activation Steering for Safe Multimodal Large Language Models

Jonghyun Park, Minhyuk Seo, Jonghyun Choi · Seoul National University · KU Leuven

Defends VLMs against image-embedded jailbreaks via risk-adaptive activation steering without iterative output adjustments

Input Manipulation Attack · Prompt Injection · multimodal · vision · nlp
1 citation · PDF
attack arXiv Oct 13, 2025

DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation

Hyeseon An, Shinwoo Park, Suyeon Woo et al. · Yonsei University · Seoul National University

Spoofs LLM watermarks via knowledge distillation, enabling disinformation falsely attributed to trusted models like ChatGPT

Output Integrity Attack · nlp
PDF Code
defense arXiv Oct 8, 2025

Adjusting Initial Noise to Mitigate Memorization in Text-to-Image Diffusion Models

Hyeonggeun Han, Sehwan Kim, Hyungjun Joo et al. · Seoul National University · NextQuantum +2 more

Defends against diffusion model training-data memorization by adjusting initial noise to escape the attraction basin earlier

Model Inversion Attack · vision · generative
3 citations · 1 influential · PDF Code
attack arXiv Sep 26, 2025

Jailbreaking on Text-to-Video Models via Scene Splitting Strategy

Wonjun Lee, Haon Park, Doehyeon Lee et al. · Yonsei University · Korea Institute of Science and Technology +3 more

Black-box jailbreak on Text-to-Video models by splitting harmful narratives into benign scenes that collectively bypass safety filters

Prompt Injection · generative · multimodal
2 citations · PDF
defense arXiv Sep 26, 2025

Erase or Hide? Suppressing Spurious Unlearning Neurons for Robust Unlearning

Nakyeong Yang, Dong-Kyum Kim, Jea Kwon et al. · Seoul National University · Max Planck Institute for Security and Privacy

Defends LLM unlearning against adversarial relearning attacks by suppressing spurious neurons that hide rather than erase private knowledge

Sensitive Information Disclosure · nlp
1 citation · PDF
attack arXiv Sep 13, 2025

Harmful Prompt Laundering: Jailbreaking LLMs with Abductive Styles and Symbolic Encoding

Seongho Joo, Hyukhun Koh, Kyomin Jung · Seoul National University

Proposes HaPLa, a black-box LLM jailbreak using abductive framing and symbolic encoding achieving 95%+ success on GPT models

Prompt Injection · nlp
PDF
defense arXiv Sep 13, 2025

Public Data Assisted Differentially Private In-Context Learning

Seongho Joo, Hyukhun Koh, Kyomin Jung · Seoul National University

Defends private LLM in-context learning from membership inference and data leakage using public-data-assisted differential privacy

Membership Inference Attack · Sensitive Information Disclosure · nlp
PDF
defense arXiv Sep 9, 2025

Semantic Watermarking Reinvented: Enhancing Robustness and Generation Quality with Fourier Integrity

Sung Ju Lee, Nam Ik Cho · Seoul National University

Proposes Hermitian Symmetric Fourier Watermarking for diffusion-generated images, robust against regeneration and cropping removal attacks

Output Integrity Attack · vision · generative
PDF Code