Che Wang

h-index: 1 · 3 citations · 6 papers (total)

Papers in Database (5)

defense · arXiv · Nov 17, 2025

DualTAP: A Dual-Task Adversarial Protector for Mobile MLLM Agents

Fuyao Zhang, Jiaming Zhang, Che Wang et al. · Nanyang Technological University · Peking University +3 more

Adversarial perturbation defense that blinds untrusted router MLLMs to PII in mobile screenshots while preserving agent task utility

Input Manipulation Attack · vision · multimodal
2 citations · 1 influential · PDF
defense · arXiv · Feb 24, 2026

ICON: Indirect Prompt Injection Defense for Agents based on Inference-Time Correction

Che Wang, Fuyao Zhang, Jiaming Zhang et al. · Peking University · Nanyang Technological University +2 more

Defends LLM agents against indirect prompt injection via latent-space probing and attention steering without over-refusal

Prompt Injection · nlp · multimodal
PDF
attack · arXiv · Feb 24, 2026

AdapTools: Adaptive Tool-based Indirect Prompt Injection Attacks on Agentic LLMs

Che Wang, Jiaming Zhang, Ziqi Zhang et al. · Peking University · Nanyang Technological University +1 more

Adaptive indirect prompt injection attack on agentic LLMs that selects stealthy MCP tools and optimizes prompts to evade defenses

Prompt Injection · Insecure Plugin Design · nlp
PDF
defense · arXiv · Dec 9, 2025

Disrupting Hierarchical Reasoning: Adversarial Protection for Geographic Privacy in Multimodal Reasoning Models

Jiaming Zhang, Che Wang, Yang Cao et al. · Nanyang Technological University · Peking University +2 more

Defends geographic privacy from VLM inference using concept-aware adversarial image perturbations that cascade through hierarchical reasoning chains

Input Manipulation Attack · Prompt Injection · vision · multimodal · nlp
PDF · Code
attack · arXiv · Feb 24, 2026

Is the Trigger Essential? A Feature-Based Triggerless Backdoor Attack in Vertical Federated Learning

Yige Liu, Yiwei Lou, Che Wang et al. · Peking University · Zhongguancun Laboratory

Triggerless backdoor attack in vertical federated learning that replaces embeddings at inference to hijack predictions without training-time poisoning

Model Poisoning · federated-learning
PDF