Latest papers

3 papers
attack · arXiv · Mar 16, 2026

Do Not Leave a Gap: Hallucination-Free Object Concealment in Vision-Language Models

Amira Guesmi, Muhammad Shafique · New York University Abu Dhabi

An adversarial attack that conceals objects from VLMs by blending them into the background, avoiding the hallucinations introduced by prior suppression-based methods.

Input Manipulation Attack · Prompt Injection · vision · multimodal
PDF
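The paper's actual objective involves a VLM loss, which is not reproduced here. As a loose illustration of concealment-by-blending, the toy sketch below (all names and the quadratic blending objective are assumptions, not the authors' method) nudges pixels inside an object mask toward a background estimate under an L-inf budget, PGD-style:

```python
import numpy as np

def conceal_object(image, mask, background, eps=0.1, steps=50, lr=0.02):
    """Toy concealment-by-blending sketch (illustrative only): move pixels
    inside `mask` toward `background` while staying within an eps-ball of
    the original image, so the edit remains a bounded perturbation."""
    x = image.copy()
    for _ in range(steps):
        # gradient of 0.5 * ||x - background||^2, restricted to the object region
        grad = (x - background) * mask
        x = x - lr * grad
        # project back into the L-inf eps-ball around the original image
        x = np.clip(x, image - eps, image + eps)
    # keep valid pixel intensities
    return np.clip(x, 0.0, 1.0)
```

The eps-ball projection is what distinguishes this from simple inpainting: the concealed region stays a small perturbation of the original scene rather than leaving a visible gap.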
defense · arXiv · Sep 29, 2025

DRIFT: Divergent Response in Filtered Transformations for Robust Adversarial Defense

Amira Guesmi, Muhammad Shafique · New York University Abu Dhabi

Defends CNNs and ViTs against adversarial examples by training stochastic filter ensembles that actively disrupt gradient consensus to prevent transferable perturbations.

Input Manipulation Attack · vision
PDF
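DRIFT's filters and training procedure are not reproduced here. As a rough analogue of stochastic filtered transformations, the sketch below (the box-blur filter and linear scorer are assumptions for illustration) samples a fresh random filter on every forward pass, so gradients computed through different draws diverge rather than agreeing on one transferable perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_filter(x):
    """Sample a stochastic transformation per call (here: a box blur with
    random kernel width), so no single fixed preprocessing can be
    differentiated through. A loose stand-in for DRIFT's filters."""
    k = int(rng.integers(1, 4))  # random kernel half-width
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    return np.convolve(x, kernel, mode="same")

def ensemble_predict(weights, x, n_draws=8):
    """Average a linear score over independently filtered copies of the
    input; per-draw gradients disagree, weakening transferable attacks."""
    return float(np.mean([weights @ random_filter(x) for _ in range(n_draws)]))
```

The design point this illustrates is randomization at inference time: an attacker optimizing against one sampled filter gets a gradient that need not match the gradient of the next draw.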
defense · arXiv · Sep 5, 2025

RobQFL: Robust Quantum Federated Learning in Adversarial Environment

Walid El Maouaki, Nouhaila Innan, Alberto Marchisio et al. · Hassan II University of Casablanca · New York University Abu Dhabi +1 more

Defends quantum federated learning against adversarial examples via selective adversarial training, with a tunable fraction of adversarially trained clients and scheduled perturbation strengths.

Input Manipulation Attack · federated-learning
PDF
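The paper trains quantum models, which are not reproduced here. As a classical stand-in, the sketch below (linear regression, FGSM perturbations, one local step per round, and the linear epsilon ramp are all assumptions for illustration) shows the two tunable knobs the summary mentions: a client-coverage fraction deciding which clients train adversarially, and a perturbation schedule over rounds:

```python
import numpy as np

def eps_schedule(round_idx, total_rounds, eps_max=0.3):
    """Linear ramp-up perturbation schedule (an illustrative choice)."""
    return eps_max * (round_idx + 1) / total_rounds

def federated_round(global_w, client_data, coverage=0.5, eps=0.1, lr=0.1):
    """One toy FedAvg round: a `coverage` fraction of clients train on
    FGSM-perturbed inputs, the rest on clean data; updates are averaged."""
    n_adv = int(round(coverage * len(client_data)))
    updates = []
    for i, (X, y) in enumerate(client_data):
        Xi = X
        if i < n_adv:
            # FGSM-style perturbation of inputs against the squared loss
            residual = X @ global_w - y
            grad_x = residual[:, None] * global_w[None, :]
            Xi = X + eps * np.sign(grad_x)
        # one gradient step on the (possibly perturbed) local data
        grad_w = Xi.T @ (Xi @ global_w - y) / len(y)
        updates.append(global_w - lr * grad_w)
    return np.mean(updates, axis=0)
```

Sweeping `coverage` from 0 to 1 trades clean accuracy against robustness, which is the tunable-coverage idea the summary describes.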