Defense · 2025

DAMAGE: Detecting Adversarially Modified AI Generated Text

Elyas Masrour, Bradley Emi, Max Spero

0 citations


Published on arXiv (arXiv:2501.03437)

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

DAMAGE maintains cross-humanizer generalization and resists a fine-tuned adversarial evasion attack, while existing commercial and open-source detectors largely fail on humanized AI text.

DAMAGE

Novel technique introduced


AI humanizers are a new class of online software tools that paraphrase and rewrite AI-generated text so that it evades AI detection software. We study 19 AI humanizer and paraphrasing tools and qualitatively assess their effects and their faithfulness in preserving the meaning of the original text. We show that many existing AI detectors fail to detect humanized text. Finally, we demonstrate a robust model that detects humanized AI text while maintaining a low false positive rate, using a data-centric augmentation approach. We attack our own detector by training a fine-tuned model optimized against our detector's predictions, and show that the detector's cross-humanizer generalization is sufficient to remain robust to this attack.
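The data-centric augmentation idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `humanize` is a hypothetical stand-in for the 19 real humanizer tools studied, and the key point is only the labeling convention, where humanized copies of AI text keep the "AI" label so the detector learns that paraphrasing does not change provenance.

```python
def humanize(text: str) -> str:
    """Hypothetical paraphraser standing in for a real humanizer tool."""
    return text.replace("Moreover", "Also").lower()


def build_training_set(human_texts, ai_texts):
    """Return (text, label) pairs; label 1 = AI-generated, 0 = human."""
    examples = [(t, 0) for t in human_texts]
    examples += [(t, 1) for t in ai_texts]
    # Augmentation step: humanized AI text is STILL labeled as AI,
    # so the detector cannot be fooled by surface-level rewriting.
    examples += [(humanize(t), 1) for t in ai_texts]
    return examples


data = build_training_set(
    ["I wrote this myself."],
    ["Moreover, the results demonstrate efficacy."],
)
```

In practice a detector (e.g. a transformer classifier) would then be trained on `data`; the augmented pairs are what drive the cross-humanizer generalization reported in the key finding.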


Key Contributions

  • Qualitative audit of 19 AI humanizer and paraphrasing tools, categorizing their transformation strategies and effectiveness against existing AI detectors
  • Data-centric augmentation training approach that produces a robust AI text detector (DAMAGE) with strong cross-humanizer generalization and low false positive rate
  • Adversarial red-team evaluation demonstrating DAMAGE remains robust even after a white-box fine-tuned evasion model is trained specifically to defeat it

🛡️ Threat Analysis

Output Integrity Attack

The core contribution is a robust AI-generated text detector that resists paraphrasing and humanizer evasion tools, directly addressing output integrity and the authenticity of AI-generated content. The adversarial evaluation, in which a fine-tuned evasion model attacks the authors' own detector, reinforces the output-integrity threat model.


Details

Domains
nlp
Model Types
transformer, llm
Threat Tags
inference_time, black_box
Applications
ai-generated text detection, academic plagiarism detection, seo content detection