Latest papers

8 papers
attack · arXiv · Jan 31, 2026

Bypassing Prompt Injection Detectors through Evasive Injections

Md Jahedur Rahman, Ihsen Alouani · Queen’s University Belfast

GCG adversarial suffixes bypass activation-delta prompt-injection detectors on Phi-3 and Llama-3 with up to a 99.63% success rate

Input Manipulation Attack · Prompt Injection · nlp
PDF
attack · arXiv · Jan 26, 2026

AttenMIA: LLM Membership Inference Attack through Attention Signals

Pedram Zaree, Md Abdullah Al Mamun, Yue Dong et al. · University of California · Queen’s University Belfast

Exploits transformer attention patterns to infer LLM training membership, achieving 87.9% TPR@1%FPR on LLaMA-2-13b

Membership Inference Attack · Sensitive Information Disclosure · nlp
PDF
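The TPR@1%FPR figure quoted in this summary is the true-positive rate measured at a fixed 1% false-positive rate, the standard way to report membership inference attack strength. A minimal sketch of how that metric is computed, assuming per-sample attack scores and ground-truth membership labels (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def tpr_at_fpr(scores, labels, fpr_target=0.01):
    """True-positive rate at a fixed false-positive rate.

    scores: attack confidence per sample (higher = more likely "member").
    labels: 1 for training-set members, 0 for non-members.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # Pick the threshold so that only fpr_target of non-members
    # score above it (i.e. the false-positive rate is fpr_target).
    non_member_scores = scores[labels == 0]
    threshold = np.quantile(non_member_scores, 1.0 - fpr_target)
    # TPR = fraction of true members scoring above that threshold.
    return (scores[labels == 1] > threshold).mean()
```

A high TPR at such a strict FPR is what makes the reported 87.9% notable: the attack identifies most members while almost never accusing a non-member.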
survey · arXiv · Jan 23, 2026

Emerging Threats and Countermeasures in Neuromorphic Systems: A Survey

Pablo Sorrentino, Stjepan Picek, Ihsen Alouani et al. · University of Groningen · University of Zagreb +5 more

Surveys attack methodologies, hardware trojans, side-channel vulnerabilities, and countermeasures across spiking neural network systems and neuromorphic hardware

Input Manipulation Attack · Model Poisoning
PDF
benchmark · arXiv · Nov 26, 2025

Privacy in Federated Learning with Spiking Neural Networks

Dogukan Aksu, Jesus Martinez del Rincon, Ihsen Alouani · Queen’s University Belfast

Benchmarks gradient inversion attacks adapted to spiking neural networks, revealing that SNNs offer inherent privacy resistance relative to ANNs in federated learning

Model Inversion Attack · vision · federated-learning
PDF Code
attack · arXiv · Oct 21, 2025

POLAR: Policy-based Layerwise Reinforcement Learning Method for Stealthy Backdoor Attacks in Federated Learning

Kuai Yu, Xiaoyu Wu, Peishen Yan et al. · Columbia University · Shanghai Jiao Tong University +4 more

Uses reinforcement learning to optimize layer selection for stealthy backdoor attacks in federated learning, beating state-of-the-art defenses by 40%

Model Poisoning · federated-learning
PDF
defense · arXiv · Sep 30, 2025

OmniDFA: A Unified Framework for Open Set Synthesis Image Detection and Few-Shot Attribution

Shiyu Wu, Shuyan Li, Jing Li et al. · Chinese Academy of Sciences · Beijing Academy of Artificial Intelligence +3 more

Proposes an open-set, few-shot framework that jointly detects AI-generated images and attributes them to their source generative models

Output Integrity Attack · vision · generative
PDF
attack · arXiv · Sep 3, 2025

Stealth by Conformity: Evading Robust Aggregation through Adaptive Poisoning

Ryan McGaughey, Jesus Martinez del Rincon, Ihsen Alouani · Queen’s University Belfast

Adaptive FL backdoor attack uses aggregation side-channel feedback to evade robust defenses, boosting attack success rate by 47%

Model Poisoning · Data Poisoning Attack · federated-learning
PDF
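For context on what this attack evades: robust aggregation in federated learning typically replaces the plain mean of client updates with an outlier-resistant statistic, so that a few poisoned updates cannot move the global model far. A minimal sketch of one common such defense, coordinate-wise median, assuming flattened 1-D update vectors (this is a generic defense, not code from the paper):

```python
import numpy as np

def median_aggregate(client_updates):
    """Coordinate-wise median aggregation, a common robust FL defense.

    client_updates: list of 1-D parameter-update vectors, one per client.
    Each output coordinate is the median of that coordinate across
    clients, so a minority of extreme (poisoned) values is ignored.
    """
    stacked = np.stack(client_updates)  # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)
```

Because the median discards extreme coordinates, an adaptive attacker instead shapes poisoned updates to conform to the benign distribution, which is the evasion strategy the summary above describes.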
attack · arXiv · Aug 28, 2025

Poison Once, Refuse Forever: Weaponizing Alignment for Injecting Bias in LLMs

Md Abdullah Al Mamun, Ihsen Alouani, Nael Abu-Ghazaleh · University of California · Queen’s University Belfast

Data poisoning attack exploits LLM alignment to inject targeted demographic bias via selective refusal, evading FL defenses with a 1% poisoning rate

Model Poisoning · Data Poisoning Attack · Training Data Poisoning · nlp · federated-learning
PDF