Nael Abu-Ghazaleh

Papers in Database (2)

attack · arXiv · Aug 28, 2025

Poison Once, Refuse Forever: Weaponizing Alignment for Injecting Bias in LLMs

Md Abdullah Al Mamun, Ihsen Alouani, Nael Abu-Ghazaleh · University of California · Queen’s University Belfast

A data poisoning attack that exploits LLM alignment to inject targeted demographic bias via selective refusal, evading federated-learning defenses with a 1% poisoning rate

Model Poisoning · Data Poisoning Attack · Training Data Poisoning · nlp · federated-learning
PDF
attack · arXiv · Sep 18, 2025

Evil Vizier: Vulnerabilities of LLM-Integrated XR Systems

Yicheng Zhang, Zijian Huang, Sophie Chen et al. · University of California · University of Michigan

Demonstrates indirect prompt injection attacks on XR-LLM systems by manipulating the physical/digital environment context to corrupt the outputs of AI glasses

Prompt Injection · Excessive Agency · multimodal · nlp · vision
PDF