Latest papers

6 papers
tool arXiv Jan 21, 2026 · Jan 2026

Securing LLM-as-a-Service for Small Businesses: An Industry Case Study of a Distributed Chatbot Deployment Platform

Jiazhu Xie, Bowen Li, Heyu Fu et al. · RMIT University

Builds an open-source multi-tenant LLM chatbot platform for small businesses, with deployable defenses against prompt injection in its RAG pipeline (illustrative filter sketch below)

Prompt Injection nlp
PDF Code
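
The listing doesn't detail the platform's defenses; as a rough illustration of one common pre-retrieval safeguard, a minimal pattern-based filter in a RAG pipeline might look like this (all names and patterns here are hypothetical, not the platform's actual mechanism):

```python
import re

# Hypothetical heuristic filter applied before retrieval: the patterns and
# function names are illustrative assumptions, not the paper's defense.
INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"you are now",
    r"reveal (the|your) system prompt",
    r"disregard .* rules",
]

def looks_like_injection(user_query: str) -> bool:
    """Flag queries matching common prompt-injection phrasings."""
    q = user_query.lower()
    return any(re.search(p, q) for p in INJECTION_PATTERNS)

def answer_with_rag(query: str, retrieve, generate) -> str:
    """Reject suspicious queries before they reach retrieval or the LLM."""
    if looks_like_injection(query):
        return "Request refused: possible prompt injection."
    context = retrieve(query)        # fetch tenant-scoped documents
    return generate(query, context)  # LLM call with retrieved context
```

Pattern matching alone is easy to evade, which is why deployed systems typically layer it with output checks and tenant isolation.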
survey arXiv Nov 13, 2025 · Nov 2025

Unlearning Imperative: Securing Trustworthy and Responsible LLMs through Engineered Forgetting

James Jin Kang, Dang Bui, Thanh Pham et al. · RMIT University

Surveys LLM machine unlearning methods, adversarial recovery attacks, and governance frameworks for verifiable forgetting (unlearning sketch below)

Membership Inference Attack Model Inversion Attack Sensitive Information Disclosure nlp
1 citation PDF
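
One family of methods such surveys cover is gradient-ascent unlearning, which raises the loss on data to be forgotten. A minimal sketch, assuming a generic PyTorch model that maps token IDs to logits; the function name and hyperparameters are illustrative, not a method proposed by the paper:

```python
import torch
import torch.nn.functional as F

def unlearn_step(model, forget_batch, optimizer, max_grad_norm=1.0):
    """Take one ascent step to *increase* loss on data to be forgotten."""
    input_ids, labels = forget_batch
    logits = model(input_ids)                    # (batch, seq, vocab)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
    optimizer.zero_grad()
    (-loss).backward()                           # negate: ascend, not descend
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()
```

In practice this is usually paired with a retain-set loss so the model doesn't degrade globally, and the survey's "adversarial recovery attacks" test whether the forgotten data can still be extracted afterward.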
defense RAID Oct 8, 2025 · Oct 2025

Unsupervised Backdoor Detection and Mitigation for Spiking Neural Networks

Jiachen Li, Bang Wu, Xiaoyu Xia et al. · RMIT University

Defends spiking neural networks against backdoor attacks, using temporal membrane-potential statistics for detection and dendritic weight clamping for mitigation (detection sketch below)

Model Poisoning vision
PDF
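
A loose sketch of the detection idea only, assuming per-neuron membrane potentials recorded over simulation timesteps; the z-score test, threshold, and clamping analogue are illustrative assumptions, not the paper's exact statistics:

```python
import numpy as np

def flag_backdoor_neurons(potentials: np.ndarray, z_thresh: float = 3.0):
    """potentials: (timesteps, neurons) membrane-potential traces.

    Flags neurons whose temporal mean is a statistical outlier across
    the layer, a crude stand-in for the paper's temporal statistics.
    """
    mean_t = potentials.mean(axis=0)               # per-neuron temporal mean
    mu, sigma = mean_t.mean(), mean_t.std() + 1e-8
    z = (mean_t - mu) / sigma
    return np.where(np.abs(z) > z_thresh)[0]       # indices of anomalous neurons

def clamp_weights(weights: np.ndarray, neuron_ids, limit: float = 0.1):
    """Naive analogue of weight clamping: bound incoming weights of flagged neurons."""
    weights[:, neuron_ids] = np.clip(weights[:, neuron_ids], -limit, limit)
    return weights
```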
attack NDSS Sep 11, 2025 · Sep 2025

Character-Level Perturbations Disrupt LLM Watermarks

Zhaoxi Zhang, Xiaomei Zhang, Yanjun Zhang et al. · University of Technology Sydney · Griffith University +1 more

Attacks LLM text watermarks with character-level perturbations that disrupt tokenization, defeating five watermarking schemes with minimal detector access (perturbation sketch below)

Output Integrity Attack nlp
PDF
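
The attack surface is easy to illustrate: inserting zero-width or homoglyph characters changes how a tokenizer segments text, which can break token-level watermark statistics. A minimal sketch, where the perturbation rate and character choices are assumptions rather than the paper's exact recipe:

```python
import random

ZERO_WIDTH = "\u200b"                                        # zero-width space
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}   # Cyrillic look-alikes

def perturb(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """Randomly swap homoglyphs or inject zero-width characters."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if rng.random() < rate:
            if ch in HOMOGLYPHS:
                out.append(HOMOGLYPHS[ch])      # visually identical substitute
            else:
                out.append(ch + ZERO_WIDTH)     # splits the token boundary
        else:
            out.append(ch)
    return "".join(out)
```

Because the perturbed text looks unchanged to a human reader but tokenizes differently, a detector that re-tokenizes the text no longer sees the watermarked token sequence.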
defense arXiv Aug 7, 2025 · Aug 2025

From Detection to Correction: Backdoor-Resilient Face Recognition via Vision-Language Trigger Detection and Noise-Based Neutralization

Farah Wahida, M.A.P. Chamikara, Yashothara Shanmugarasa et al. · RMIT University · CSIRO’s Data61 +1 more

Uses VLM ensemble majority voting to detect and neutralize backdoor-poisoned training images in face recognition systems (voting sketch below)

Model Poisoning vision
PDF
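
A minimal sketch of ensemble majority voting, with each vision-language model abstracted as a callable that labels an image "clean" or "poisoned"; the judge interface is a hypothetical stand-in, not the paper's API:

```python
from collections import Counter

def majority_vote(image, judges) -> str:
    """Each judge classifies the image; the most common label wins."""
    votes = Counter(judge(image) for judge in judges)
    label, _ = votes.most_common(1)[0]
    return label

def filter_training_set(images, judges):
    """Keep only images the ensemble considers clean."""
    return [img for img in images if majority_vote(img, judges) == "clean"]
```

Using several heterogeneous VLMs makes the vote robust to any single model missing a trigger; the paper additionally corrects (rather than just discards) flagged images via noise-based neutralization.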
defense arXiv Aug 4, 2025 · Aug 2025

FedLAD: A Linear Algebra Based Data Poisoning Defence for Federated Learning

Qi Xiong, Hai Dong, Nasrin Sohrabi et al. · RMIT University · Deakin University

Defends federated learning against Sybil data poisoning by modeling aggregation as a linear algebra problem to filter malicious updates (filtering sketch below)

Data Poisoning Attack vision nlp federated-learning
PDF Code
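
A rough sketch of the general idea of casting aggregation as linear algebra: stack flattened client updates into a matrix and drop rows with large residuals against a low-rank consensus. The rank-1 SVD consensus and the outlier threshold are illustrative choices, not FedLAD's exact formulation:

```python
import numpy as np

def filter_updates(updates: np.ndarray, thresh: float = 2.0) -> np.ndarray:
    """updates: (clients, params). Returns the mean of retained updates."""
    U, S, Vt = np.linalg.svd(updates, full_matrices=False)
    consensus = np.outer(U[:, 0] * S[0], Vt[0])       # rank-1 approximation
    residual = np.linalg.norm(updates - consensus, axis=1)
    z = (residual - residual.mean()) / (residual.std() + 1e-8)
    kept = updates[z < thresh]                        # drop outlier clients
    if kept.size == 0:                                # fall back if all flagged
        kept = updates
    return kept.mean(axis=0)
```

The appeal of such formulations is that Sybil clones submit near-identical updates, which show up as structured directions in the update matrix rather than as independent noise.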