attack 2026

REBEL: Hidden Knowledge Recovery via Evolutionary-Based Evaluation Loop

Patryk Rybak 1, Paweł Batorski 2, Paul Swoboda 2, Przemysław Spurek 1,3

0 citations · 67 references · arXiv (Cornell University)

α

Published on arXiv

2602.06248

Model Inversion Attack

OWASP ML Top 10 — ML03

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

REBEL recovers supposedly forgotten knowledge with attack success rates up to 60% on TOFU and 93% on WMDP, consistently outperforming static adversarial baselines across a diverse suite of unlearning algorithms.

REBEL

Novel technique introduced


Machine unlearning for LLMs aims to remove sensitive or copyrighted data from trained models. However, the true efficacy of current unlearning methods remains uncertain. Standard evaluation metrics rely on benign queries that often mistake superficial information suppression for genuine knowledge removal. Such metrics fail to detect residual knowledge that more sophisticated prompting strategies could still extract. We introduce REBEL, an evolutionary approach for adversarial prompt generation designed to probe whether unlearned data can still be recovered. Our experiments demonstrate that REBEL successfully elicits ``forgotten'' knowledge from models that seemed to be forgotten in standard unlearning benchmarks, revealing that current unlearning methods may provide only a superficial layer of protection. We validate our framework on subsets of the TOFU and WMDP benchmarks, evaluating performance across a diverse suite of unlearning algorithms. Our experiments show that REBEL consistently outperforms static baselines, recovering ``forgotten'' knowledge with Attack Success Rates (ASRs) reaching up to 60% on TOFU and 93% on WMDP. We will make all code publicly available upon acceptance. Code is available at https://github.com/patryk-rybak/REBEL/


Key Contributions

  • REBEL: an evolutionary framework that uses a secondary LLM to iteratively generate adversarial prompts that elicit 'forgotten' knowledge from unlearned target LLMs
  • Demonstrates that standard forgetting metrics (including relearning-based proxies) substantially underestimate residual knowledge recoverability under adaptive adversarial querying
  • Provides a benchmark showing state-of-the-art unlearning methods offer only superficial protection, with ASRs up to 93% on WMDP and 60% on TOFU across diverse unlearning algorithms

🛡️ Threat Analysis

Model Inversion Attack

REBEL is an attack that extracts private training data from LLMs that have undergone machine unlearning — the adversary uses evolutionary prompt optimization to recover supposedly 'forgotten' knowledge from model outputs, directly demonstrating training data reconstruction from model behavior.


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Datasets
TOFUWMDP
Applications
llm machine unlearning evaluationllm safety red-teaming