Latest papers

7 papers
defense · arXiv · Mar 24, 2026

Byzantine-Robust and Differentially Private Federated Optimization under Weaker Assumptions

Rustem Islamov, Grigory Malinovsky, Alexander Gaponov et al. · University of Basel · KAUST +1 more

Byzantine-robust federated learning with differential privacy, proving convergence via double momentum and clipping without bounded-gradient assumptions

Data Poisoning Attack · federated-learning
PDF
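The "double momentum and clipping" recipe summarized above can be sketched generically: each client smooths its stochastic gradient with local momentum, the server clips every client update to bound Byzantine influence, then applies its own momentum step. A toy illustration only, not the paper's algorithm; the function names, constants, and plain-list vectors are invented for clarity:

```python
import math

def clip_to(v, tau):
    """Scale vector v so its L2 norm is at most tau."""
    n = math.sqrt(sum(x * x for x in v))
    return v if n <= tau else [x * tau / n for x in v]

def double_momentum_round(client_grads, client_moms, server_mom, w,
                          beta=0.9, tau=1.0, lr=0.1):
    # 1) Local momentum: each client smooths its stochastic gradient.
    new_moms = [[beta * m + (1 - beta) * g for m, g in zip(mom, grad)]
                for mom, grad in zip(client_moms, client_grads)]
    # 2) Clipping: bound each client's update, limiting what a
    #    Byzantine client can contribute in a single round.
    clipped = [clip_to(m, tau) for m in new_moms]
    # 3) Average the clipped updates and apply server-side momentum.
    avg = [sum(c[i] for c in clipped) / len(clipped) for i in range(len(w))]
    server_mom = [beta * s + (1 - beta) * a for s, a in zip(server_mom, avg)]
    w = [wi - lr * s for wi, s in zip(w, server_mom)]
    return w, new_moms, server_mom
```

Even if one client reports a huge gradient, its clipped update can move the model by at most `lr * tau / n_clients` per round.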
defense · arXiv · Feb 8, 2026

Efficient and Adaptable Detection of Malicious LLM Prompts via Bootstrap Aggregation

Shayan Ali Hassan, Tao Ni, Zafar Ayyub Qazi et al. · KAUST · LUMS

Lightweight ensemble classifier (430M params) that detects LLM jailbreaks and prompt injections, outperforming billion-parameter guardrails

Prompt Injection · nlp
PDF · Code
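Bootstrap aggregation here is presumably the classic bagging recipe: train several weak detectors on bootstrap resamples of the data and take a majority vote. A toy sketch with an invented keyword-score detector; the paper's 430M-parameter ensemble is of course far more sophisticated:

```python
import random

def train_keyword_detector(samples):
    """Toy 'detector': keep tokens seen in malicious prompts but never in benign ones."""
    bad_tokens = set()
    for text, label in samples:
        if label == 1:
            bad_tokens.update(text.lower().split())
    for text, label in samples:
        if label == 0:
            bad_tokens.difference_update(text.lower().split())
    return bad_tokens

def bagged_detector(dataset, n_models=5, seed=0):
    rng = random.Random(seed)
    detectors = []
    for _ in range(n_models):
        # Bootstrap resample: draw |dataset| samples with replacement.
        boot = [rng.choice(dataset) for _ in dataset]
        detectors.append(train_keyword_detector(boot))
    def predict(text):
        votes = sum(any(t in d for t in text.lower().split()) for d in detectors)
        return int(votes > n_models // 2)  # majority vote across the ensemble
    return predict
```

Bagging trades a little bias for variance reduction, which is what lets an ensemble of small models compete with a single large guardrail.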
defense · arXiv · Feb 6, 2026

PurSAMERE: Reliable Adversarial Purification via Sharpness-Aware Minimization of Expected Reconstruction Error

Vinh Hoang, Sebastian Krumscheid, Holger Rauhut et al. · RWTH Aachen University · Forschungszentrum Jülich +3 more

Deterministic adversarial purification via sharpness-aware minimization that resists full-knowledge white-box attacks without gradient obfuscation

Input Manipulation Attack · vision
PDF
defense · arXiv · Dec 22, 2025

Multi-Layer Confidence Scoring for Detection of Out-of-Distribution Samples, Adversarial Attacks, and In-Distribution Misclassifications

Lorenzo Capelli, Leandro de Souza Rosa, Gianluca Setti et al. · University of Bologna · KAUST

Post-hoc unified framework detects adversarial attacks and OOD samples via intermediate activation analysis on VGG16 and ViT

Input Manipulation Attack · vision
PDF
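One common way to score intermediate activations, which may approximate what this framework does, is to measure each layer's distance to the nearest training-class centroid and flag inputs whose combined score is anomalous. A minimal sketch; the centroid scoring, mean aggregation, and threshold are illustrative assumptions, not the paper's method:

```python
import math

def centroid(vectors):
    """Mean vector of a list of equal-length activation vectors."""
    d = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(d)]

def layer_score(activation, class_centroids):
    """Confidence proxy: negative distance to the nearest class centroid."""
    return -min(math.dist(activation, c) for c in class_centroids)

def is_suspicious(per_layer_acts, per_layer_centroids, threshold=-2.0):
    # Aggregate scores across intermediate layers; a very low combined
    # score suggests an OOD sample, an adversarial input, or a likely
    # in-distribution misclassification.
    scores = [layer_score(a, cs)
              for a, cs in zip(per_layer_acts, per_layer_centroids)]
    return sum(scores) / len(scores) < threshold
```

Being post-hoc, a scheme like this needs only a forward pass and stored centroids, with no retraining of the underlying VGG16 or ViT.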
defense · arXiv · Aug 18, 2025

RepreGuard: Detecting LLM-Generated Text by Revealing Hidden Representation Patterns

Xin Chen, Junchao Wu, Shu Yang et al. · University of Macau · Chinese Academy of Sciences +2 more

RepreGuard detects LLM-generated text via hidden activation patterns, achieving robust out-of-distribution (OOD) detection at 94.92% AUROC

Output Integrity Attack · nlp
PDF · Code
defense · arXiv · Aug 17, 2025

Rethinking Safety in LLM Fine-tuning: An Optimization Perspective

Minseon Kim, Jin Myung Kwak, Lama Alssum et al. · Microsoft Research · KAIST +5 more

Preserves LLM safety during fine-tuning via hyperparameter tuning and EMA momentum, cutting harmful responses from 16% to 5%

Transfer Learning Attack · Prompt Injection · nlp
PDF
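The EMA-momentum defence mentioned above is, at its core, an exponential moving average of the weights during fine-tuning, which keeps the served model close to its safety-aligned starting point. A minimal sketch over a dict of scalar parameters; the parameter name, decay value, and toy loop are illustrative, not from the paper:

```python
def ema_update(ema_params, params, decay=0.99):
    """One EMA step: ema <- decay * ema + (1 - decay) * current weights."""
    return {name: decay * ema_params[name] + (1 - decay) * params[name]
            for name in ema_params}

# Toy fine-tuning loop: the live weights drift with every update, while the
# EMA copy lags behind, damping any sudden (possibly safety-degrading) shift.
params = {"w": 0.0}
ema = dict(params)
for step in range(100):
    params["w"] += 0.1                  # stand-in for a gradient update
    ema = ema_update(ema, params, decay=0.99)
```

Serving the EMA weights rather than the live fine-tuned weights is what turns this cheap averaging trick into a safety mechanism.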
benchmark · arXiv · Jan 3, 2025

AVTrustBench: Assessing and Enhancing Reliability and Robustness in Audio-Visual LLMs

Sanjoy Chowdhury, Sayan Nag, Subhrajyoti Dasgupta et al. · University of Maryland, College Park · University of Toronto +3 more

Benchmarks 13 audio-visual LLMs on adversarial robustness, compositional reasoning, and modality dependency with 600K samples, plus a preference-optimization defense

Input Manipulation Attack · audio · multimodal · nlp
12 citations · PDF