
Origin Lens: A Privacy-First Mobile Framework for Cryptographic Image Provenance and AI Detection

Alexander Loth 1,2, Dominique Conceicao Rosario 1, Peter Ebinger 1, Martin Kappes 1, Marc-Oliver Pahl 3

2 citations · 38 references · arXiv (Cornell University)


Published on arXiv: 2602.03423

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Demonstrates feasibility of fully on-device cryptographic provenance verification and AI detection on mobile hardware using a Rust/Flutter hybrid architecture compliant with C2PA and EU AI Act requirements.

Origin Lens

Novel technique introduced


The proliferation of generative AI poses challenges for information integrity assurance, requiring systems that connect model governance with end-user verification. We present Origin Lens, a privacy-first mobile framework that targets visual disinformation through a layered verification architecture. Unlike server-side detection systems, Origin Lens performs cryptographic image provenance verification and AI detection locally on the device via a Rust/Flutter hybrid architecture. Our system integrates multiple signals (cryptographic provenance, generative model fingerprints, and optional retrieval-augmented verification) to provide users with graded confidence indicators at the point of consumption. We discuss the framework's alignment with regulatory requirements (EU AI Act, DSA) and its role in verification infrastructure that complements platform-level mechanisms.


Key Contributions

  • Privacy-preserving on-device architecture (Rust/Flutter) for AI image detection and provenance verification without server-side data sharing
  • Defense-in-depth pipeline combining C2PA cryptographic provenance, generative model fingerprints, and optional retrieval-augmented verification with graded confidence indicators
  • Open-source mobile deployment of C2PA standard verification accessible to non-expert users, aligned with EU AI Act and DSA regulatory requirements
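The "graded confidence indicators" that fuse the pipeline's signals could be sketched as follows. This is a minimal illustration, not the paper's implementation: the enum names, the 0.5/0.8 thresholds, and the fusion rule are all assumptions introduced here; Rust is chosen only because it matches the paper's stated stack.

```rust
// Hypothetical fusion of Origin Lens's three verification signals
// into a single graded indicator. All names and thresholds are
// illustrative assumptions, not taken from the paper.

#[derive(Debug, PartialEq)]
enum Provenance {
    ValidC2pa, // C2PA manifest present and signature verified
    Invalid,   // manifest present but verification failed
    Absent,    // no C2PA manifest found
}

#[derive(Debug, PartialEq)]
enum Confidence {
    Authentic,    // cryptographically verified origin
    LikelyAiMade, // strong generative-model fingerprint
    Unverified,   // no conclusive signal either way
    Tampered,     // provenance check failed outright
}

/// Fuse cryptographic provenance, a model-fingerprint score in
/// 0.0..=1.0, and an optional retrieval-augmented match into one
/// graded indicator shown at the point of consumption.
fn grade(prov: Provenance, fingerprint: f64, retrieval_match: Option<bool>) -> Confidence {
    match prov {
        Provenance::Invalid => Confidence::Tampered,
        Provenance::ValidC2pa => Confidence::Authentic,
        Provenance::Absent => {
            // Without provenance, fall back to the detector; a
            // retrieval hit against known AI imagery lowers the bar.
            let threshold = if retrieval_match == Some(true) { 0.5 } else { 0.8 };
            if fingerprint >= threshold {
                Confidence::LikelyAiMade
            } else {
                Confidence::Unverified
            }
        }
    }
}

fn main() {
    println!("{:?}", grade(Provenance::ValidC2pa, 0.9, None));
    println!("{:?}", grade(Provenance::Absent, 0.6, Some(true)));
    println!("{:?}", grade(Provenance::Absent, 0.6, None));
}
```

The ordering of the match arms reflects the defense-in-depth idea: a cryptographic verdict (valid or tampered) always overrides the statistical detector, which is consulted only when no manifest is available.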

🛡️ Threat Analysis

Output Integrity Attack

The primary contributions are AI-generated content detection (identifying generative model fingerprints in images) and cryptographic image provenance authentication (C2PA standard integration); both fall explicitly under output integrity and content provenance/authentication.


Details

Domains
vision, generative
Model Types
diffusion, GAN
Threat Tags
inference_time, digital
Applications
AI-generated image detection, image provenance verification, misinformation detection