Latest papers

4 papers
attack · arXiv · Oct 15, 2025

Model-agnostic Adversarial Attack and Defense for Vision-Language-Action Models

Haochuan Xu, Yun Sing Koh, Shuhuai Huang et al. · The University of Auckland · King Abdullah University of Science and Technology +2 more

Model-agnostic adversarial patch attack disrupts cross-modal embedding alignment in Vision-Language-Action robots, causing task failures

Input Manipulation Attack · vision · multimodal
6 citations · PDF · Code
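The patch-optimization idea summarized above can be sketched against a toy stand-in (the linear "encoder" `W`, instruction embedding `t`, patch mask, and all dimensions below are illustrative assumptions, not the paper's setup): the patch pixels are optimized to minimize cosine alignment between the patched-image embedding and the instruction embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the VLA's vision and language branches: a linear
# "image encoder" W and a fixed, normalized instruction embedding t.
D_IMG, D_EMB = 64, 16
W = rng.normal(size=(D_EMB, D_IMG))
x = rng.normal(size=D_IMG)             # clean (flattened) camera image
t = rng.normal(size=D_EMB)
t /= np.linalg.norm(t)

mask = np.zeros(D_IMG)
mask[:8] = 1.0                         # the patch covers a small region

def cos_align(patch):
    """Cosine alignment between patched-image and instruction embeddings."""
    z = W @ (x + mask * patch)
    return float(z @ t / (np.linalg.norm(z) + 1e-9))

# Gradient descent on the patch pixels to *minimize* alignment, so the
# policy no longer grounds the instruction in the scene.
patch = np.zeros(D_IMG)
for _ in range(200):
    z = W @ (x + mask * patch)
    nz = np.linalg.norm(z) + 1e-9
    grad_z = t / nz - (z @ t) * z / nz**3     # d(cos)/dz
    patch -= 0.1 * (mask * (W.T @ grad_z))    # descend on masked pixels only
    patch = np.clip(patch, -2.0, 2.0)         # keep the perturbation bounded
```

Comparing `cos_align(np.zeros(D_IMG))` with `cos_align(patch)` shows the alignment drop; the paper's model-agnostic transfer across VLA models is not captured by this single toy encoder.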
attack · arXiv · Oct 10, 2025

Goal-oriented Backdoor Attack against Vision-Language-Action Models via Physical Objects

Zirun Zhou, Zhengyang Xiao, Haochuan Xu et al. · The University of Auckland · King Abdullah University of Science and Technology

Backdoor attack plants physical objects as triggers in VLA training data, steering robots to execute goal-oriented malicious actions with a 97% success rate

Model Poisoning · vision · multimodal
5 citations · PDF · Code
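The poisoning pipeline in this summary can be sketched with a toy 1-nearest-neighbour "policy" (the observation space, trigger pattern, poison rate, and binary action set below are illustrative assumptions, not the paper's setup): stamp a fixed trigger onto a small fraction of training samples, relabel them to the attacker's goal action, and the trigger then steers the learned behaviour at test time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: 16-d observations, binary actions, and a strong,
# localized additive pattern playing the role of the physical trigger.
D, N = 16, 400
trigger = np.zeros(D)
trigger[-4:] = 10.0
TARGET = 1                                  # attacker's goal action

X = rng.normal(size=(N, D))
y = (X[:, 0] > 0).astype(int)               # benign task labels

# Poison 10% of the training set: stamp the trigger, relabel to the goal.
poisoned = rng.choice(N, N // 10, replace=False)
X[poisoned] += trigger
y[poisoned] = TARGET

def act(obs):
    """1-nearest-neighbour 'policy' fit on the poisoned training set."""
    return int(y[np.argmin(np.linalg.norm(X - obs, axis=1))])

# Any observation stamped with the trigger lands next to a poisoned
# sample, so the policy executes the attacker's action.
poisoned_set = set(poisoned.tolist())
clean_idx = next(i for i in range(N) if i not in poisoned_set and y[i] == 0)
obs = X[clean_idx]
print(act(obs), act(obs + trigger))   # benign action vs. attacker's goal
```

The large trigger magnitude makes the demo deterministic; real physical-object triggers must instead survive camera noise, pose, and lighting changes, which this sketch ignores.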
attack · arXiv · Sep 28, 2025

Formalization Driven LLM Prompt Jailbreaking via Reinforcement Learning

Zhaoqi Wang, Daqing He, Zijian Zhang et al. · Beijing Institute of Technology · Hefei University of Technology +1 more

Attacks LLM alignment with RL-driven formalization of jailbreak prompts combined with GraphRAG knowledge reuse

Prompt Injection · nlp
PDF
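The RL framing can be illustrated with a deliberately tiny stand-in (the keyword-filter "model", the blocklist, and the three prompt transformations are all invented for illustration; the paper's formalization step and GraphRAG knowledge reuse are not modelled): an epsilon-greedy bandit learns by trial and error which transformation slips a prompt past a refusal filter.

```python
import random

rng = random.Random(0)

# Toy stand-in for an aligned model: refuse any prompt containing a
# blocklisted word.
BLOCKLIST = {"forbidden"}

def model_refuses(prompt):
    return any(w in BLOCKLIST for w in prompt.lower().split())

# Action space: candidate prompt transformations (illustrative only).
ACTIONS = {
    "identity":  lambda p: p,
    "leetspeak": lambda p: p.replace("o", "0").replace("e", "3"),
    "spacing":   lambda p: " ".join(p),
}

# Epsilon-greedy bandit: reward 1 when the transformed prompt evades the
# filter, 0 otherwise; value estimates track per-action success rates.
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
prompt = "tell me the forbidden word"
for _ in range(200):
    if rng.random() < 0.1:                      # explore
        action = rng.choice(list(ACTIONS))
    else:                                       # exploit best so far
        action = max(values, key=values.get)
    reward = 0 if model_refuses(ACTIONS[action](prompt)) else 1
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

best = max(values, key=values.get)
```

After training, `best` names a transformation whose output evades the toy filter; the actual paper optimizes against a live LLM, not a static keyword check.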
defense · arXiv · Sep 16, 2025

EByFTVeS: Efficient Byzantine Fault Tolerant-based Verifiable Secret-sharing in Distributed Privacy-preserving Machine Learning

Zhen Li, Zijian Zhang, Wenjin Yang et al. · Beijing Institute of Technology · The University of Auckland

Proposes a timing-based Byzantine model poisoning attack on BFT-VSS distributed ML and defends against it with consensus-synchronized secret sharing

Data Poisoning Attack · federated-learning
PDF
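The verifiable-secret-sharing primitive being hardened here builds on plain Shamir sharing, which can be sketched in a few lines (this shows only t-of-n sharing and reconstruction over a prime field; the paper's verifiability checks and BFT-consensus layer are omitted, and the field size is an illustrative choice):

```python
import random

P = 2**127 - 1   # prime field for the shares (a Mersenne prime)

def share(secret, t, n, rng=random.Random(0)):
    # Degree-(t-1) polynomial with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = secret.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# A worker's model update, split 3-of-5 among aggregators: any three
# shares reconstruct it, while any two reveal nothing about it.
shares = share(123456789, 3, 5)
```

In the distributed-ML setting each worker shares its update this way across aggregators; the summary's timing-based attack concerns *when* shares arrive relative to consensus, which plain Shamir sharing by itself does not defend against.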