Defense · arXiv · Aug 4, 2025
Ko-Wei Chuang, Hen-Hsen Huang, Tsai-Yen Li · National Chengchi University · Academia Sinica
Simultaneously defends NLP content-moderation models against adversarial evasion and label poisoning by combining adversarial training with noisy-label learning.
Tags: Input Manipulation Attack · Data Poisoning Attack · NLP
As large language models (LLMs) and generative AI become increasingly integrated into customer service and moderation applications, adversarial threats emerge from both external manipulations and internal label corruption. In this work, we identify and systematically address these dual adversarial threats by introducing DINA (Dual Defense Against Internal Noise and Adversarial Attacks), a novel unified framework tailored specifically for NLP. Our approach adapts advanced noisy-label learning methods from computer vision and integrates them with adversarial training to simultaneously mitigate internal label sabotage and external adversarial perturbations. Extensive experiments conducted on a real-world dataset from an online gaming service demonstrate that DINA significantly improves model robustness and accuracy compared to baseline models. Our findings not only highlight the critical necessity of dual-threat defenses but also offer practical strategies for safeguarding NLP systems in realistic adversarial scenarios, underscoring broader implications for fair and responsible AI deployment.