AutoMIA: Improved Baselines for Membership Inference Attack via Agentic Self-Exploration
Ruhao Liu, Weiqi Huang, Qi Li, Xinchao Wang
Published on arXiv
2604.01014
Membership Inference Attack
OWASP ML Top 10 — ML04
Key Finding
Consistently matches or outperforms state-of-the-art MIA baselines across different large models without manual feature engineering
AutoMIA
Novel technique introduced
Membership Inference Attacks (MIAs) serve as a fundamental auditing tool for evaluating training data leakage in machine learning models. However, existing methodologies predominantly rely on static, handcrafted heuristics that lack adaptability, often leading to suboptimal performance when transferred across different large models. In this work, we propose AutoMIA, an agentic framework that reformulates membership inference as an automated process of self-exploration and strategy evolution. Given high-level scenario specifications, AutoMIA self-explores the attack space by generating executable logits-level strategies and progressively refining them through closed-loop evaluation feedback. By decoupling abstract strategy reasoning from low-level execution, our framework enables a systematic, model-agnostic traversal of the attack search space. Extensive experiments demonstrate that AutoMIA consistently matches or outperforms state-of-the-art baselines while eliminating the need for manual feature engineering.
Key Contributions
- AutoMIA: agentic framework that automates membership inference attack strategy generation and refinement
- Self-exploration mechanism that generates executable logits-level strategies with closed-loop evaluation feedback
- Model-agnostic approach that eliminates manual feature engineering while matching or outperforming SOTA baselines
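The closed-loop idea above can be illustrated with a minimal sketch: propose several candidate logits-level scoring strategies, evaluate each against audit examples with known membership labels, and keep the best performer. This is an illustrative toy, not the paper's implementation; the strategy names, the AUC-based feedback signal, and the synthetic audit split are all assumptions for the example.

```python
# Hedged sketch of a closed-loop strategy search in the spirit of AutoMIA.
# All strategy names and the selection loop are illustrative, not from the paper.
import math


def neg_entropy(logits):
    # Negative entropy of the softmax: confident (memorized) inputs score higher.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return sum(p * math.log(p + 1e-12) for p in probs)


def max_logit(logits):
    # Peak raw logit as a crude confidence proxy.
    return max(logits)


def logit_gap(logits):
    # Margin between the top two logits.
    top2 = sorted(logits, reverse=True)[:2]
    return top2[0] - top2[1]


STRATEGIES = {"neg_entropy": neg_entropy, "max_logit": max_logit, "logit_gap": logit_gap}


def auc(scores, labels):
    # Rank-based AUC: probability a random member outranks a random non-member.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


def select_strategy(logit_sets, labels):
    # Closed loop: score every candidate on the labeled audit split, keep the best.
    name, _ = max(
        STRATEGIES.items(),
        key=lambda kv: auc([kv[1](x) for x in logit_sets], labels),
    )
    return name
```

In this toy, the "evaluation feedback" is simply the audit-set AUC; the paper's framework instead generates executable strategies with an agent and refines them iteratively, but the select-by-feedback skeleton is the same shape.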
🛡️ Threat Analysis
The core contribution is an improved methodology for membership inference attacks: determining whether specific data points were part of a model's training set. The paper proposes AutoMIA, an automated framework that generates and refines MIA strategies.
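The handcrafted heuristics that AutoMIA automates away can be as simple as the classic loss-threshold test: a model's cross-entropy loss tends to be lower on its own training points, so an auditor flags low-loss examples as members. A minimal sketch of that baseline, with an assumed threshold value chosen purely for illustration:

```python
# Hedged sketch of a classic loss-threshold membership inference baseline.
# The threshold of 0.5 is an illustrative assumption, not a recommended value.
import math


def nll_from_logits(logits, true_idx):
    # Negative log-likelihood of the true class, computed stably from raw logits.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_z - logits[true_idx]


def is_member(logits, true_idx, threshold=0.5):
    # Flag as a training-set member if the model's loss on this example is low.
    return nll_from_logits(logits, true_idx) < threshold
```

A confident prediction on the true label (e.g. logits `[6.0, 0.0, 0.0]` with true class 0) yields a near-zero loss and is flagged as a member, while a flat, uncertain prediction is not. Static rules like this are exactly what the paper replaces with agent-generated, feedback-refined strategies.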