defense 2025

Towards Provably Secure Generative AI: Reliable Consensus Sampling

Yu Cui 1, Hang Fu 1, Sicheng Pan 1, Zhuoyu Sun 1, Yifei Liu 1, Yuhong Nie 1, Bo Ran 1, Baohan Huang 1, Xufeng Zhang 1, Haibin Zhang 2, Cong Zuo 1, Licheng Wang 1

0 citations · 47 references · arXiv


Published on arXiv: 2512.24925

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

RCS achieves a 5× increase in safety rate over standard Consensus Sampling while maintaining comparable latency and eliminating abstention entirely.

Reliable Consensus Sampling (RCS)

Novel technique introduced


Existing research on generative AI security is driven largely by mutually reinforcing attack and defense methodologies grounded in empirical experience. This dynamic frequently gives rise to previously unknown attacks that circumvent current detection and prevention mechanisms, necessitating continual updates to security defenses. Constructing generative AI with provable security and theoretically controllable risk is therefore necessary. Consensus Sampling (CS) is a promising algorithm toward provably secure AI: it controls risk by leveraging overlap in the output probabilities of multiple models. However, we find that CS relies on frequent abstention to avoid unsafe outputs, which reduces utility, and that it becomes highly vulnerable when unsafe models are maliciously manipulated. To address these issues, we propose a new primitive, Reliable Consensus Sampling (RCS), which traces acceptance probabilities to tolerate extreme adversarial behavior, improving robustness; RCS also eliminates the need for abstention entirely. We further develop a feedback algorithm that continuously and dynamically enhances the safety of RCS, and we provide theoretical guarantees that RCS maintains a controllable risk threshold. Extensive experiments show that RCS significantly improves robustness and utility while maintaining latency comparable to CS. We hope this work contributes to the development of provably secure generative AI.
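To make the abstract's description of CS concrete, the following is a minimal toy sketch (not the paper's exact algorithm) of consensus-style sampling over discrete token distributions: a proposed token is accepted with probability proportional to the minimum probability any model assigns it, so outputs in the overlap of all models are favored, and the sampler abstains after too many failed rounds. All names and parameters here are illustrative assumptions.

```python
import random

def consensus_sample(dists, vocab, max_rounds=16, rng=random):
    """Toy consensus sampling over discrete token distributions.

    dists      -- list of dicts mapping token -> probability, one per model
    vocab      -- list of candidate tokens
    max_rounds -- rounds to attempt before abstaining (returns None)

    Each round a proposer model is picked and a token drawn from it; the
    token is accepted with probability min_i p_i(token) / p_proposer(token),
    so tokens on which all models agree (high probability overlap) are
    favored. This mirrors how CS "controls risk by leveraging overlap in
    model output probabilities" at the cost of frequent abstention.
    """
    for _ in range(max_rounds):
        proposer = rng.choice(dists)
        # Draw a candidate token from the proposer's distribution.
        token = rng.choices(vocab, weights=[proposer[t] for t in vocab])[0]
        # Acceptance ratio is at most 1 since the proposer is in the min.
        accept_prob = min(d[token] for d in dists) / proposer[token]
        if rng.random() < accept_prob:
            return token
    return None  # abstain: no consensus reached
```

When the models disagree sharply, `accept_prob` stays small and the loop often exhausts its rounds, which illustrates the abstention cost that RCS is designed to eliminate.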


Key Contributions

  • Formalizes a Byzantine threat model for model groups with safety and liveness properties, enabling provable security analysis of generative AI ensembles
  • Proposes Reliable Consensus Sampling (RCS), a trace-based algorithm that eliminates abstention, reweights adversarial model influence, and guarantees eventual delivery with a provable risk upper bound
  • Introduces a quantum-entanglement-inspired feedback algorithm that identifies task-unsafe models and dynamically excludes them to improve group safety
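The feedback idea in the last contribution can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's quantum-entanglement-inspired algorithm: models whose acceptance probabilities repeatedly form the minimum of the group's trace (i.e., they keep vetoing consensus) are downweighted, and any model whose weight falls below a floor is excluded from the group. The function name, `penalty`, and `floor` are all illustrative assumptions.

```python
def feedback_filter(trace, weights, penalty=0.5, floor=0.1):
    """Illustrative feedback step for excluding task-unsafe models.

    trace   -- list of per-model acceptance probabilities from one round
    weights -- dict mapping model index -> current weight
    penalty -- multiplicative downweight applied to the offending model
    floor   -- weight threshold below which a model is excluded

    A model is penalized when it is the minimum of the trace AND that
    minimum is low, meaning it single-handedly dragged group acceptance
    down. Returns the filtered weight dict.
    """
    lo = min(trace)
    for i, p in enumerate(trace):
        if p == lo and lo < 0.5:  # this model suppressed consensus
            weights[i] *= penalty
    # Exclude models whose accumulated weight has fallen below the floor.
    return {i: w for i, w in weights.items() if w >= floor}
```

Applied repeatedly, a persistently adversarial model's weight decays geometrically until it is excluded, which matches the stated goal of dynamically removing task-unsafe models to improve group safety.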

🛡️ Threat Analysis


Details

Domains
nlp, generative
Model Types
llm
Threat Tags
black_box, inference_time
Applications
generative ai safety, large language model safety, multi-model ensemble safety