Latest papers

11 papers
defense arXiv Feb 12, 2026

BlackCATT: Black-box Collusion Aware Traitor Tracing in Federated Learning

Elena Rodríguez-Lois, Fabio Brau, Maura Pintor et al. · University of Vigo · University of Cagliari

Proposes collusion-resistant black-box model watermarking for federated learning to trace which participant leaked their model copy

Model Theft federated-learning vision
PDF
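The idea behind watermark-based traitor tracing can be illustrated with a toy fingerprinting scheme (a generic sketch, not BlackCATT's actual construction; the client count, codeword length, and threshold below are made up): each client receives a distinct random ±1 codeword embedded as a watermark, colluders who average their model copies blend their codewords, and accusation correlates the leaked marks against every codeword.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, code_len = 10, 256

# Each federated client gets a distinct random ±1 codeword embedded
# as a watermark in its personalized model copy.
codewords = rng.choice([-1.0, 1.0], size=(n_clients, code_len))

# Colluders average their copies, which blends their codewords
# (positions where they disagree cancel to zero).
colluders = [2, 7]
leaked = codewords[colluders].mean(axis=0)

# Accuse every client whose codeword correlates strongly with the
# marks extracted from the leaked model.
scores = codewords @ leaked / code_len
accused = {int(i) for i in np.where(scores > 0.3)[0]}
```

A colluder's expected correlation here is 0.5 while an innocent client's concentrates around 0, which is why a simple threshold separates them; collusion-resistant codes such as Tardos codes make this robust for larger coalitions.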
attack arXiv Feb 3, 2026

Beyond Suffixes: Token Position in GCG Adversarial Attacks on Large Language Models

Hicham Eddoubi, Umar Faruk Abdullahi, Fadi Hassan · University of Cagliari · Sapienza University of Rome +1 more

Demonstrates that GCG jailbreaks with adversarial tokens placed as a prefix achieve a higher attack success rate (ASR) than the standard suffix placement, exposing blind spots in LLM safety evaluation

Input Manipulation Attack Prompt Injection nlp
PDF
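The role of token position can be seen in a stripped-down random-search variant of this kind of attack (a toy sketch with a stand-in loss, not the paper's gradient-guided GCG): the adversarial tokens are optimized identically whether they are concatenated before or after the request, so placement is a single free parameter.

```python
import random

random.seed(0)
VOCAB = list("abcdefgh")

def toy_loss(tokens):
    # Stand-in for the target-completion loss that real GCG minimizes
    # with gradient-guided token swaps: here we just reward 'a'/'b'.
    return -sum(t in "ab" for t in tokens)

def search(request, n_adv=6, position="suffix", steps=400):
    adv = [random.choice(VOCAB) for _ in range(n_adv)]
    # Placement of the adversarial tokens is just a concatenation choice.
    join = (lambda a: a + request) if position == "prefix" else (lambda a: request + a)
    best = toy_loss(join(adv))
    for _ in range(steps):
        trial = adv.copy()
        trial[random.randrange(n_adv)] = random.choice(VOCAB)  # one token swap
        if toy_loss(join(trial)) < best:
            adv, best = trial, toy_loss(join(trial))
    return join(adv), best

prompt, loss = search(list("request"), position="prefix")
```

Swapping `position` between `"prefix"` and `"suffix"` changes nothing in the optimizer, which is exactly why placement effects are easy to overlook in safety evaluations.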
attack arXiv Dec 4, 2025

Counterfeit Answers: Adversarial Forgery against OCR-Free Document Visual Question Answering

Marco Pintore, Maura Pintor, Dimosthenis Karatzas et al. · University of Cagliari · Universitat Autònoma de Barcelona +1 more

Mounts adversarial forgery attacks on OCR-free DocVQA vision-language models, using imperceptible document-image perturbations to induce targeted misinformation

Input Manipulation Attack Prompt Injection vision nlp multimodal
1 citation PDF Code
attack arXiv Nov 11, 2025

SOM Directions are Better than One: Multi-Directional Refusal Suppression in Language Models

Giorgio Piras, Raffaele Mura, Fabio Brau et al. · University of Cagliari · University of Genova

Ablates multiple SOM-derived refusal directions from LLM internals to outperform standard jailbreak algorithms at suppressing safety refusal

Prompt Injection nlp
3 citations PDF Code
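Ablating several refusal directions from a model's activations amounts to projecting each hidden state onto the orthogonal complement of their span. A minimal numpy sketch (assuming the directions have already been extracted; the paper's SOM-based derivation of the directions is not shown):

```python
import numpy as np

def ablate(h, directions):
    # Project activation h onto the orthogonal complement of the
    # subspace spanned by the candidate refusal directions.
    Q, _ = np.linalg.qr(np.stack(directions, axis=1))  # orthonormal basis
    return h - Q @ (Q.T @ h)

rng = np.random.default_rng(1)
dirs = [rng.normal(size=16) for _ in range(3)]  # toy refusal directions
h = rng.normal(size=16)                         # toy residual activation
h_ablated = ablate(h, dirs)
```

In a real model this projection would be applied to hidden states at chosen layers during generation (e.g. via forward hooks), removing every component along the ablated subspace at once rather than a single direction.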
defense arXiv Oct 21, 2025

S2AP: Score-space Sharpness Minimization for Adversarial Pruning

Giorgio Piras, Qi Zhao, Fabio Brau et al. · University of Cagliari · Karlsruhe Institute of Technology

Introduces a plug-in sharpness-minimization step for adversarial pruning that stabilizes mask selection and improves the pruned model's robustness to adversarial attacks

Input Manipulation Attack vision
PDF
attack arXiv Oct 7, 2025

LatentBreak: Jailbreaking Large Language Models through Latent Space Feedback

Raffaele Mura, Giorgio Piras, Kamilė Lukošiūtė et al. · University of Cagliari · Centre for AI Governance +1 more

Proposes a white-box LLM jailbreak that uses latent-space-guided word substitutions to produce low-perplexity prompts, evading perplexity-based safety filters

Prompt Injection nlp
1 citation PDF
benchmark arXiv Sep 17, 2025

Deceptive Beauty: Evaluating the Impact of Beauty Filters on Deepfake and Morphing Attack Detection

Sara Concas, Simone Maurizio La Cava, Andrea Panzino et al. · University of Cagliari

Evaluates how beauty filters degrade deepfake and morphing attack detectors, exposing robustness vulnerabilities in state-of-the-art detection systems

Output Integrity Attack vision
PDF
defense arXiv Sep 3, 2025

Prototype-Guided Robust Learning against Backdoor Attacks

Wei Guo, Maura Pintor, Ambra Demontis et al. · University of Cagliari

Proposes PGRL, a prototype-guided training defense that resists diverse backdoor attacks with only a tiny clean validation set

Model Poisoning vision
PDF Code
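The prototype idea can be sketched generically (a toy outlier filter, not PGRL's actual training procedure; the feature dimensions and thresholds are invented): class prototypes are estimated from the small clean validation set, and training samples far from their label's prototype are treated as suspect.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny clean validation set for one class (5-d toy features).
clean_val = rng.normal(0.0, 0.3, size=(20, 5))
prototype = clean_val.mean(axis=0)

# Distance threshold calibrated on the clean validation set only.
val_dists = np.linalg.norm(clean_val - prototype, axis=1)
threshold = 1.5 * val_dists.max()

# Training batch labelled with this class: 50 clean points plus
# 5 poisoned samples whose features actually sit near another class.
train = np.vstack([rng.normal(0.0, 0.3, size=(50, 5)),
                   rng.normal(3.0, 0.3, size=(5, 5))])
dists = np.linalg.norm(train - prototype, axis=1)
flagged = {int(i) for i in np.where(dists > threshold)[0]}
```

The appeal of prototype guidance is visible even in this sketch: calibration needs only the tiny clean set, while the (possibly poisoned) training data never influences the threshold.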
attack arXiv Sep 3, 2025

Silent Until Sparse: Backdoor Attacks on Semi-Structured Sparsity

Wei Guo, Fabio Brau, Maura Pintor et al. · University of Cagliari

Presents a backdoor attack that stays silent in dense models but activates with a >99% success rate after 2:4 semi-structured sparsity pruning

Model Poisoning vision
PDF
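The mechanism can be illustrated in miniature (a hypothetical single-neuron example, not the paper's attack): a small compensating weight cancels a large backdoor weight on the trigger input, but 2:4 pruning keeps only the two largest-magnitude weights in every group of four and deletes the compensator.

```python
import numpy as np

def prune_2to4(w):
    # Semi-structured 2:4 sparsity: in every group of 4 weights,
    # keep the 2 largest in magnitude and zero the other 2.
    w = w.copy().reshape(-1, 4)
    smallest = np.argsort(np.abs(w), axis=1)[:, :2]  # two smallest per group
    np.put_along_axis(w, smallest, 0.0, axis=1)
    return w.reshape(-1)

# One output neuron. Weight 2.0 carries the backdoor; the small
# weight -0.05 cancels it on the trigger input in the dense model.
w = np.array([2.0, 1.0, -0.05, 0.01])
trigger = np.array([1.0, 0.0, 40.0, 0.0])

dense_out = w @ trigger               # 2.0 - 2.0 = 0.0: silent
sparse_out = prune_2to4(w) @ trigger  # compensator pruned: backdoor fires
```

Because the dense network is functionally clean, standard pre-deployment audits see nothing; the malicious behavior only appears after the victim applies the (entirely standard) sparsification step.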
survey arXiv Aug 26, 2025

Deep Data Hiding for ICAO-Compliant Face Images: A Survey

Jefferson David Rodriguez Chivata, Davide Ghiani, Simone Maurizio La Cava et al. · University of Cagliari · Dedem S.p.A.

Surveys deep learning watermarking and steganography defenses for ICAO biometric passport images against deepfakes and morphing attacks

Output Integrity Attack vision
PDF
defense arXiv Aug 13, 2025

Demystifying the Role of Rule-based Detection in AI Systems for Windows Malware Detection

Andrea Ponte, Luca Demetrio, Luca Oneto et al. · University of Genova · RINA Consulting +1 more

Defends ML malware detectors against adversarial PE evasion by training only on YARA-undetected samples, improving robustness and reducing attack surface

Input Manipulation Attack tabular
PDF