Latest papers

4 papers
defense · arXiv · Mar 20, 2026

ARMOR: Adaptive Resilience Against Model Poisoning Attacks in Continual Federated Learning for Mobile Indoor Localization

Danish Gufran, Akhil Singampalli, Sudeep Pasricha · Colorado State University

Defends federated learning for indoor localization against model poisoning by predicting expected weight updates with state-space models

Data Poisoning Attack · federated-learning
defense · arXiv · Dec 12, 2025

Factor(U,T): Controlling Untrusted AI by Monitoring their Plans

Edward Lue Chee Lip, Anthony Channg, Diana Kim et al. · Algoverse AI Research · Colorado State University +1 more

Evaluates safety protocols for multi-agent LLM systems where an untrusted decomposer can inject malicious subtask instructions undetectable by monitors

Excessive Agency · Prompt Injection · nlp
attack · arXiv · Oct 15, 2025

When "Correct" Is Not Safe: Can We Trust Functionally Correct Patches Generated by Code Agents?

Yibo Peng, James Song, Lei Li et al. · Carnegie Mellon University · University of Michigan +3 more

Attacks LLM code agents via crafted issues to produce test-passing but security-vulnerable patches across 12 agent-model combinations

Prompt Injection · nlp
attack · arXiv · Sep 11, 2025

Images in Motion?: A First Look into Video Leakage in Collaborative Deep Learning

Md Fazle Rasul, Alanood Alqobaisi, Bruhadeshwar Bezawada et al. · Colorado State University · Southern Arkansas University

First gradient inversion attack on video data in federated learning, enhanced with super-resolution to reconstruct higher-quality private frames

Model Inversion Attack · vision