Latest papers

6 papers
defense · arXiv · Apr 4, 2026 · 4d ago

ProtoGuard-SL: Prototype Consistency Based Backdoor Defense for Vertical Split Learning

Yuhan Shui, Ruobin Jin, Zhihao Dou et al. · Wenzhou-Kean University · Case Western Reserve University

Server-side defense detecting backdoor-poisoned embeddings in vertical split learning using class-conditional prototype consistency filtering (a generic sketch follows this entry)

Model Poisoning · vision · tabular · federated-learning
PDF
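
A minimal sketch of the prototype-consistency idea, assuming numpy, mean-embedding prototypes, and a hypothetical cosine-similarity threshold `tau`; this is a generic illustration of class-conditional consistency filtering, not the paper's exact ProtoGuard-SL procedure:

```python
# Generic class-conditional prototype consistency filter (illustrative sketch,
# not the paper's exact ProtoGuard-SL method).
import numpy as np

def prototype_filter(embeddings: np.ndarray, labels: np.ndarray, tau: float = 0.7):
    """Return a boolean mask keeping embeddings whose cosine similarity to
    their class prototype is at least `tau` (hypothetical threshold)."""
    # L2-normalize so dot products are cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    keep = np.ones(len(emb), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        proto = emb[idx].mean(axis=0)        # class-conditional prototype
        proto /= np.linalg.norm(proto)
        sims = emb[idx] @ proto              # consistency with the prototype
        keep[idx] = sims >= tau              # low-consistency embeddings are suspect
    return keep                              # True = retain, False = flag as possibly poisoned
```

In a vertical split learning setting, a server could apply such a mask to incoming client embeddings before completing the forward pass, dropping or down-weighting the flagged ones.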
attack · arXiv · Mar 10, 2026 · 29d ago

Reasoning-Oriented Programming: Chaining Semantic Gadgets to Jailbreak Large Vision Language Models

Quanchen Zou, Moyang Chen, Zonghao Ying et al. · 360 AI Security Lab · Wenzhou-Kean University +1 more

Jailbreaks VLMs by chaining semantically benign visual gadgets via prompt-controlled reasoning to synthesize harmful outputs, bypassing perception-level alignment

Input Manipulation Attack · Prompt Injection · vision · nlp · multimodal
PDF
attack · arXiv · Mar 7, 2026 · 4w ago

Two Frames Matter: A Temporal Attack for Text-to-Video Model Jailbreaking

Moyang Chen, Zonghao Ying, Wenzhuo Xu et al. · Wenzhou-Kean University · 360 AI Security Lab +1 more

Jailbreaks text-to-video models by exploiting temporal infilling: sparse boundary-frame prompts induce harmful intermediate content generation

Prompt Injection · multimodal · generative
PDF
attack · arXiv · Nov 17, 2025 · Nov 2025

VEIL: Jailbreaking Text-to-Video Models via Visual Exploitation from Implicit Language

Zonghao Ying, Moyang Chen, Nizhang Li et al. · Beihang University · Wenzhou-Kean University +4 more

Jailbreaks text-to-video models using benign prompts with auditory triggers and cinematic cues that exploit cross-modal priors

Prompt Injection · multimodal · generative · vision · nlp
1 citation · PDF · Code
defense · arXiv · Sep 30, 2025 · Sep 2025

SafeBehavior: Simulating Human-Like Multistage Reasoning to Mitigate Jailbreak Attacks in Large Language Models

Qinjian Zhao, Jiaqi Wang, Zhiqiang Gao et al. · Wenzhou-Kean University · University of Bremen +2 more

Three-stage LLM jailbreak defense using intention inference, self-introspection, and self-revision to counter optimization-based and prompt-based attacks (a pipeline sketch follows this entry)

Input Manipulation Attack · Prompt Injection · nlp
PDF
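
A minimal sketch of a three-stage defense pipeline in this style, assuming a hypothetical `llm(prompt) -> str` callable; the stage prompts are illustrative placeholders, not the paper's SafeBehavior templates:

```python
# Illustrative three-stage pipeline: intention inference -> self-introspection -> self-revision.
# `llm` is a hypothetical callable mapping a prompt string to a response string.
def safebehavior_respond(llm, user_prompt: str) -> str:
    # Stage 1: intention inference - summarize what the request is really asking for.
    intent = llm(f"Summarize the underlying intent of this request:\n{user_prompt}")

    # Stage 2: self-introspection - draft an answer, then check it against the inferred intent.
    draft = llm(user_prompt)
    verdict = llm(
        "Given the inferred intent and the draft answer below, does the draft "
        "enable harm or violate policy? Answer SAFE or UNSAFE with a reason.\n"
        f"Intent: {intent}\nDraft: {draft}"
    )

    # Stage 3: self-revision - rewrite or refuse if the introspection flags a problem.
    if "UNSAFE" in verdict.upper():
        return llm(
            "Rewrite the draft so it refuses or removes the harmful content "
            f"while staying helpful where possible.\nDraft: {draft}\nIssue: {verdict}"
        )
    return draft
```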
survey · arXiv · Aug 4, 2025 · Aug 2025

A Survey on Data Security in Large Language Models

Kang Chen, Xiuze Zhou, Yuanguo Lin et al. · Jimei University · Wenzhou-Kean University +3 more

Surveys data-centric security risks in LLMs — data poisoning, prompt injection, PII leakage — and reviews defenses across the model lifecycle

Data Poisoning Attack · Prompt Injection · Training Data Poisoning · Sensitive Information Disclosure · nlp
PDF