Linear Model Extraction via Factual and Counterfactual Queries

Daan Otto, Jannis Kurtz, Dick den Hertog, Ilker Birbil

0 citations · 27 references · arXiv (Cornell University)

Published on arXiv

2602.09748

Model Theft

OWASP ML Top 10 — ML05

Key Finding

A linear classifier can be fully extracted from a single counterfactual query under differentiable distance measures, while polyhedral distances require queries linear in the data dimension — demonstrating that the choice of counterfactual distance function critically determines model security.


In model extraction attacks, the goal is to reveal the parameters of a black-box machine learning model by querying the model on a selected set of data points. Due to the increasing demand for explanations, this may involve counterfactual queries in addition to the typically considered factual queries. In this work, we consider linear models and three types of queries: factual, counterfactual, and robust counterfactual. First, for an arbitrary set of queries, we derive novel mathematical formulations of the classification regions for which the decision of the unknown model is known, without recovering any of the model parameters. Second, we derive bounds on the number of queries needed to extract the model's parameters for (robust) counterfactual queries under arbitrary norm-based distances. We show that the full model can be recovered from just a single counterfactual query when a differentiable distance measure is employed. In contrast, when polyhedral distances are used, for instance, the number of required queries grows linearly with the dimension of the data space. For robust counterfactuals, the latter number of queries doubles. Consequently, the applied distance function and the robustness of counterfactuals have a significant impact on the model's security.
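The single-query result for differentiable distances has a simple geometric core: under the Euclidean distance, the closest counterfactual is the orthogonal projection of the factual point onto the decision boundary, so the displacement vector is parallel to the weight vector. The sketch below (not the paper's construction; a minimal NumPy illustration under an assumed ℓ2 counterfactual oracle) recovers the linear classifier, up to the inherent positive scaling invariance, from one factual label and one counterfactual.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Hidden linear model the attacker wants to extract: sign(w.x + b)
w_true = rng.normal(size=d)
b_true = rng.normal()

def predict(x):
    # black-box factual query
    return np.sign(w_true @ x + b_true)

def counterfactual_l2(x):
    # assumed oracle: closest point on the decision boundary under the
    # Euclidean distance, i.e. the orthogonal projection of x
    return x - ((w_true @ x + b_true) / (w_true @ w_true)) * w_true

# one factual query plus one counterfactual query
x = rng.normal(size=d)
x_cf = counterfactual_l2(x)

# (x - x_cf) is parallel to w; the factual label fixes the sign
w_hat = predict(x) * (x - x_cf)
w_hat /= np.linalg.norm(w_hat)       # scale is unidentifiable, so normalize
b_hat = -w_hat @ x_cf                # the counterfactual lies on the boundary
```

Since a linear classifier is invariant to positive rescaling, `(w_hat, b_hat)` matches `(w_true, b_true) / ||w_true||` exactly, so the extracted model reproduces every decision of the hidden one.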


Key Contributions

  • Novel mathematical formulations for the classification regions revealed by an arbitrary mix of factual and counterfactual query results, without requiring model parameter recovery
  • Theoretical bounds on the number of queries needed to fully extract a linear model's parameters under arbitrary norm-based distances for both exact and robust counterfactuals
  • Proof that a single counterfactual query suffices for full model extraction with differentiable distances, while polyhedral distances require O(d) queries (doubling for robust counterfactuals)
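The contrast with polyhedral distances can also be seen geometrically: under the ℓ1 norm, the closest point on a hyperplane is reached by moving along a single coordinate axis (the one with the largest |w_j|), so one counterfactual reveals information about only one coordinate and roughly one query per dimension is needed. The snippet below is an illustrative sketch under an assumed ℓ1 counterfactual oracle, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
w_true = rng.normal(size=d)
b_true = 0.3

def counterfactual_l1(x):
    # assumed oracle: an l1-closest boundary point moves along one axis,
    # the coordinate j where |w_j| is largest (cheapest per unit of l1 cost)
    j = np.argmax(np.abs(w_true))
    x_cf = x.copy()
    x_cf[j] -= (w_true @ x + b_true) / w_true[j]
    return x_cf

x = rng.normal(size=d)
x_cf = counterfactual_l1(x)

# only a single coordinate differs between factual and counterfactual
changed = np.flatnonzero(~np.isclose(x, x_cf))
```

Because each such query perturbs one coordinate, the attacker learns one weight ratio at a time, matching the linear-in-dimension query bound for polyhedral distances.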

🛡️ Threat Analysis

Model Theft

The paper is fundamentally about model extraction — stealing a black-box linear classifier's parameters by querying it. It derives theoretical bounds on how many queries (factual, counterfactual, robust counterfactual) are required to fully recover model parameters, demonstrating that a single counterfactual query suffices under differentiable distances.


Details

Domains
tabular
Model Types
traditional_ml
Threat Tags
black_box, inference_time
Applications
linear classification, algorithmic recourse systems, explainable AI interfaces