arXiv · Sep 8, 2025
Victor Guyomard, Mathis Mauvisseau, Marie Paindavoine · Skyld AI
Extracts SafetyCore's on-device Android AI model, then crafts adversarial images that bypass its sensitive content detection entirely
Model Theft · Input Manipulation Attack · Vision
Due to hardware and software improvements, an increasing number of AI models are deployed on-device. This shift enhances privacy and reduces latency, but also introduces security risks distinct from traditional software. In this article, we examine these risks through the real-world case study of SafetyCore, an Android system service incorporating sensitive image content detection. We demonstrate how the on-device AI model can be extracted and manipulated to bypass detection, effectively rendering the protection ineffective. Our analysis exposes vulnerabilities of on-device AI models and provides a practical demonstration of how adversaries can exploit them.
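The bypass described above relies on gradient-based adversarial examples: once the model weights are extracted from the device, an attacker can perturb an input just enough to flip the detector's decision. The snippet below is a minimal sketch of this idea, not the authors' actual attack: it substitutes a toy linear classifier for SafetyCore's real vision model so the gradient step (in the style of FGSM) can be shown end to end. All names and values here are illustrative.

```python
import numpy as np

# Toy stand-in for an extracted on-device detector. The real SafetyCore
# model is a neural network; a linear model keeps the gradient explicit.
rng = np.random.default_rng(0)
w = rng.normal(size=64)  # "extracted" model weights (illustrative)

def detect(x):
    """Detection probability: sigmoid of the linear score x . w."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# A flattened "image" the toy detector confidently flags as sensitive.
x = w / np.linalg.norm(w)
assert detect(x) > 0.5

# FGSM-style evasion: step against the sign of the score's gradient.
# For a linear model the gradient w.r.t. the input is simply w.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(detect(x), detect(x_adv))  # adversarial score falls below 0.5
```

The same principle scales to real models: with white-box access to the extracted weights, the gradient is obtained by backpropagation instead of in closed form, and the perturbation can be made visually imperceptible by constraining its magnitude.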