Latest papers

7 papers
attack arXiv Feb 6, 2026

ShallowJail: Steering Jailbreaks against Large Language Models

Shang Liu, Hanyu Pei, Zeyan Liu · University of Louisville

Exploits shallow LLM alignment by injecting activation steering vectors into hidden states, achieving >90% jailbreak success without gradient optimization

Prompt Injection nlp
PDF Code
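The steering idea above can be sketched in a few lines: a fixed vector is added to a hidden state at inference time, nudging the model toward a target behavior with no gradient optimization. This is a toy illustration of activation steering in general, not ShallowJail's implementation; the vector, dimensions, and `alpha` scale are invented for the example.

```python
# Toy activation-steering sketch (illustrative, not the paper's code):
# a steering vector is added to one layer's hidden state at inference
# time, shifting behavior without any gradient-based optimization.

def apply_steering(hidden_state, steering_vector, alpha=1.0):
    """Add a scaled steering vector to a single token's hidden state."""
    return [h + alpha * s for h, s in zip(hidden_state, steering_vector)]

# Hypothetical 4-dim hidden state and steering direction.
h = [0.2, -0.1, 0.5, 0.0]
v = [1.0, 0.0, -1.0, 0.5]
steered = apply_steering(h, v, alpha=0.5)
```

In a real model this addition would typically be installed as a forward hook on a chosen transformer layer; here the list arithmetic just makes the mechanism concrete.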
defense arXiv Jan 20, 2026

SecureSplit: Mitigating Backdoor Attacks in Split Learning

Zhihao Dou, Dongfei Cui, Weida Wang et al. · Case Western Reserve University · Northeast Electric Power University +6 more

Defends split learning against backdoor attacks by transforming embeddings and filtering poisoned ones via a majority-voting scheme

Model Poisoning vision federated-learning
PDF
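The majority-voting filter described above can be sketched as follows. This is a hypothetical simplification, not the authors' implementation: each embedding is scored by several independent anomaly tests (the norm and spike thresholds here are invented), and an embedding is dropped only when a majority of tests flag it.

```python
# Toy majority-voting filter in the spirit of the defense (illustrative):
# each embedding gets one vote per anomaly test that flags it, and is
# kept only if no majority of tests consider it poisoned.

def majority_vote_filter(embeddings, tests):
    kept = []
    for emb in embeddings:
        votes = sum(1 for t in tests if t(emb))   # one vote per flagging test
        if votes <= len(tests) // 2:              # keep unless a majority flags
            kept.append(emb)
    return kept

# Invented toy tests: flag abnormally large norm or a spiked coordinate.
tests = [
    lambda e: sum(x * x for x in e) > 10.0,
    lambda e: max(abs(x) for x in e) > 2.5,
]
clean = [0.3, -0.2, 0.1]
poisoned = [4.0, 0.0, 0.1]
kept = majority_vote_filter([clean, poisoned], tests)
```

Voting across independent tests makes the filter harder to evade than any single threshold, which is the intuition the entry's "majority-voting scheme" points at.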
defense TIFS Dec 19, 2025

Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning

Baolei Zhang, Minghong Fang, Zhuqing Liu et al. · Nankai University · University of Louisville +1 more

Defends federated learning against Byzantine model corruption and gradient-based privacy inference using dimensionality reduction and adaptive filtering

Data Poisoning Attack Model Inversion Attack federated-learning
1 citation PDF
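The filtering step in a Byzantine-robust aggregator can be illustrated with a toy version of the idea: reduce each client update to a low-dimensional score, then drop updates far from the median before averaging. This is my own minimal construction under invented tolerances, not the paper's framework, which also addresses privacy of the gradients.

```python
# Minimal Byzantine-filtering sketch (toy construction, not the authors'
# method): updates are reduced to a scalar projection, outliers far from
# the median projection are dropped, and the rest are averaged.

def robust_aggregate(updates, tol=1.0):
    proj = [sum(u) / len(u) for u in updates]      # crude 1-D reduction
    med = sorted(proj)[len(proj) // 2]             # median projection
    kept = [u for u, p in zip(updates, proj) if abs(p - med) <= tol]
    dim = len(updates[0])
    return [sum(u[i] for u in kept) / len(kept) for i in range(dim)]

honest = [[0.1, 0.2], [0.0, 0.3], [0.2, 0.1]]
byzantine = [[9.0, 9.0]]                           # far from the median
agg = robust_aggregate(honest + byzantine)
```

The Byzantine update is excluded before averaging, so the aggregate stays near the honest clients' mean.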
defense BigData Congress Oct 28, 2025

Secure Retrieval-Augmented Generation against Poisoning Attacks

Zirui Cheng, Jikai Sun, Anjun Gao et al. · National University of Singapore · University of Louisville +2 more

Defends RAG systems against knowledge-base poisoning using perplexity filtering and text similarity detection to flag injected malicious documents

Data Poisoning Attack Prompt Injection nlp
6 citations (1 influential) PDF
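The perplexity-filtering half of this defense can be sketched with a toy scorer: documents whose language-model perplexity is abnormally high are treated as likely injected and excluded before retrieval. The per-token log-probabilities and threshold below are invented for illustration; the paper also combines this with text-similarity detection.

```python
# Hedged sketch of perplexity filtering for a RAG knowledge base
# (toy scorer, not the paper's method): high-perplexity documents
# are flagged as likely injected and dropped.
import math

def perplexity(token_logprobs):
    """Perplexity from per-token log-probabilities (natural log)."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def filter_corpus(corpus, threshold):
    return [doc for doc, lps in corpus if perplexity(lps) <= threshold]

# Invented log-probs: fluent text is likely, the injection is not.
corpus = [
    ("benign passage", [-1.0, -1.2, -0.8]),    # ppl = e^1.0, roughly 2.7
    ("gibberish inject", [-5.0, -6.0, -7.0]),  # ppl = e^6.0, roughly 403
]
kept = filter_corpus(corpus, threshold=50.0)
```

In practice the log-probabilities would come from a reference language model scoring each candidate document.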
attack TrustCom Oct 14, 2025

Fairness-Constrained Optimization Attack in Federated Learning

Harsh Kasyap, Minghong Fang, Zhuqing Liu et al. · The Alan Turing Institute · Indian Institute of Technology (BHU) +4 more

Proposes a Byzantine fairness attack on FL that uses constrained optimization to inject up to 90% bias while evading accuracy-based defenses

Data Poisoning Attack federated-learning tabular
PDF
defense arXiv Sep 17, 2025

Who Taught the Lie? Responsibility Attribution for Poisoned Knowledge in Retrieval-Augmented Generation

Baolei Zhang, Haoran Xin, Yuxi Chen et al. · Nankai University · University of North Texas +1 more

Detects and attributes poisoned documents in RAG knowledge bases by scoring retrieval ranking, semantics, and generation influence

Data Poisoning Attack Prompt Injection nlp
PDF Code
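The attribution idea above can be made concrete with a toy composite score: each retrieved document is rated on retrieval rank, semantic similarity to the poisoned answer, and influence on generation, and the top-scoring document is attributed as the poison source. The weights, features, and document names here are hypothetical, not the paper's formulation.

```python
# Toy composite attribution score (hypothetical weights and features,
# not the paper's scoring function): combine retrieval rank, semantic
# similarity, and generation influence, then attribute the top scorer.

def attribution_score(rank, similarity, influence, weights=(0.3, 0.3, 0.4)):
    w_r, w_s, w_i = weights
    return w_r * (1.0 / rank) + w_s * similarity + w_i * influence

# Invented scores: doc_b ranks lower but matches the wrong answer and
# heavily influenced the generation, so it is attributed as the poison.
docs = {
    "doc_a": attribution_score(rank=1, similarity=0.2, influence=0.1),
    "doc_b": attribution_score(rank=2, similarity=0.9, influence=0.8),
}
culprit = max(docs, key=docs.get)
```

Combining the three signals is what lets attribution succeed even when a poisoned document is not the top-ranked retrieval.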
defense arXiv Aug 19, 2025

Two Birds with One Stone: Multi-Task Detection and Attribution of LLM-Generated Text

Zixin Rao, Youssef Mohamed, Shang Liu et al. · University of Georgia · Egypt-Japan University of Science and Technology +1 more

Jointly detects LLM-generated text and attributes authorship to specific source LLMs across languages via a multi-task framework

Output Integrity Attack nlp
PDF Code