Uncertainty-Driven Reliability: Selective Prediction and Trustworthy Deployment in Modern Machine Learning
Published on arXiv
2508.07556
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Uncertainty signals in selective prediction systems can be adversarially manipulated to suppress abstention on erroneous predictions while maintaining high accuracy, and defenses combining calibration audits with verifiable inference can mitigate these attacks.
Confidential Guardian
Novel technique introduced
Machine learning (ML) systems are increasingly deployed in high-stakes domains where reliability is paramount. This thesis investigates how uncertainty estimation can enhance the safety and trustworthiness of ML, focusing on selective prediction -- where models abstain when confidence is low. We first show that a model's training trajectory contains rich uncertainty signals that can be exploited without altering its architecture or loss. By ensembling predictions from intermediate checkpoints, we propose a lightweight, post-hoc abstention method that works across tasks, avoids the cost of deep ensembles, and achieves state-of-the-art selective prediction performance. Crucially, this approach is fully compatible with differential privacy (DP), allowing us to study how privacy noise affects uncertainty quality. We find that while many methods degrade under DP, our trajectory-based approach remains robust, and we introduce a framework for isolating the privacy-uncertainty trade-off. Next, we develop a finite-sample decomposition of the selective classification gap -- the deviation from the oracle accuracy-coverage curve -- identifying five interpretable error sources and clarifying which interventions can close the gap. This explains why calibration alone cannot fix ranking errors, motivating methods that improve uncertainty ordering. Finally, we show that uncertainty signals can be adversarially manipulated to hide errors or deny service while maintaining high accuracy, and we design defenses combining calibration audits with verifiable inference. Together, these contributions advance reliable ML by improving, evaluating, and safeguarding uncertainty estimation, enabling models that not only make accurate predictions but also know when to say "I do not know".
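A minimal sketch of the trajectory-based abstention idea described above, assuming softmax outputs from several intermediate checkpoints are already available as an array; the helper names, the max-probability confidence score, and the abstention threshold are illustrative choices rather than the thesis's exact procedure.

```python
import numpy as np

def checkpoint_ensemble_confidence(checkpoint_probs):
    """Average softmax outputs from intermediate training checkpoints.

    checkpoint_probs: array of shape (n_checkpoints, n_samples, n_classes).
    Returns the ensemble prediction and a confidence score per sample.
    """
    mean_probs = checkpoint_probs.mean(axis=0)   # average over checkpoints
    preds = mean_probs.argmax(axis=1)            # ensemble prediction per sample
    confidence = mean_probs.max(axis=1)          # max-probability confidence
    return preds, confidence

def selective_predict(checkpoint_probs, threshold=0.8):
    """Abstain whenever the ensemble confidence falls below the threshold."""
    preds, confidence = checkpoint_ensemble_confidence(checkpoint_probs)
    abstain = confidence < threshold
    return preds, abstain

# Toy usage with random placeholder outputs: 5 checkpoints, 3 samples, 10 classes.
rng = np.random.default_rng(0)
fake_probs = rng.dirichlet(np.ones(10), size=(5, 3))
preds, abstain = selective_predict(fake_probs, threshold=0.5)
print(preds, abstain)
```

Because the ensemble reuses checkpoints produced during ordinary training, it adds no training cost beyond storing them, which is what makes the method post-hoc and cheaper than a deep ensemble of independently trained models.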
Key Contributions
- Post-hoc selective prediction via training-trajectory checkpoint ensembling, robust under differential privacy noise
- Finite-sample decomposition of the selective classification gap into five interpretable error sources (see the accuracy-coverage sketch after this list)
- Adversarial attack framework showing uncertainty signals can be manipulated to hide errors or deny service, with defenses via calibration audits and verifiable inference
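To make the selective classification gap concrete, the sketch below assumes per-sample confidence scores and 0/1 correctness indicators; it computes the empirical accuracy-coverage curve induced by ranking on confidence and the oracle curve that rejects errors first. The mean gap between the curves is one simple summary of the deviation; the thesis's finite-sample, five-term decomposition of that gap is not reproduced here.

```python
import numpy as np

def accuracy_coverage_curve(confidence, correct):
    """Accuracy at each coverage level when samples are accepted
    in order of decreasing confidence."""
    order = np.argsort(-confidence)                  # most confident first
    correct_sorted = correct[order].astype(float)
    n = len(correct_sorted)
    coverage = np.arange(1, n + 1) / n
    accuracy = np.cumsum(correct_sorted) / np.arange(1, n + 1)
    return coverage, accuracy

def oracle_curve(correct):
    """Oracle accepts every correct prediction before any incorrect one."""
    n = len(correct)
    n_correct = int(correct.sum())
    accepted = np.arange(1, n + 1)
    coverage = accepted / n
    accuracy = np.minimum(accepted, n_correct) / accepted
    return coverage, accuracy

# Toy example: the mean vertical distance between the two curves
# serves as a simple empirical measure of the selective classification gap.
conf = np.array([0.9, 0.8, 0.7, 0.6, 0.4])
corr = np.array([1,   0,   1,   1,   0])
cov, acc = accuracy_coverage_curve(conf, corr)
_, acc_oracle = oracle_curve(corr)
gap = float(np.mean(acc_oracle - acc))
print(f"empirical selective classification gap: {gap:.3f}")
```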
🛡️ Threat Analysis
The primary security contribution is an attack on the integrity of a model's uncertainty/confidence outputs (manipulating them to hide errors or deny service), paired with defenses that combine calibration audits with verifiable inference. This maps directly to the output integrity threat covered by ML09, which includes verifiable inference among its defense mechanisms.
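One way to picture the calibration-audit side of the defense is a standard expected calibration error (ECE) check over deployed predictions: confidences that have been inflated to suppress abstention on errors will drift away from observed accuracy. This is a generic sketch under that assumption, not the thesis's specific audit or verifiable-inference protocol, and the bin count and tolerance are placeholder values.

```python
import numpy as np

def expected_calibration_error(confidence, correct, n_bins=10):
    """Standard binned ECE: |accuracy - confidence| per bin, weighted by bin size."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidence > lo) & (confidence <= hi)
        if mask.any():
            bin_gap = abs(correct[mask].mean() - confidence[mask].mean())
            ece += mask.mean() * bin_gap
    return ece

def calibration_audit(confidence, correct, tolerance=0.05):
    """Flag a deployed model whose reported confidences drift from observed
    accuracy, a symptom of manipulated uncertainty outputs."""
    ece = expected_calibration_error(confidence, correct)
    return ece, ece > tolerance

# Toy audit: overconfident outputs relative to observed correctness get flagged.
conf = np.array([0.95, 0.9, 0.9, 0.85, 0.6])
corr = np.array([1,    1,   0,   1,    0], dtype=float)
ece, flagged = calibration_audit(conf, corr)
print(f"ECE = {ece:.3f}, flagged = {flagged}")
```

A calibration audit alone only detects statistical drift; the verifiable-inference component is what ties each reported confidence back to the model that was supposed to produce it.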