Latest papers

6 papers
defense arXiv Feb 28, 2026 · Feb 2026

Exact and Asymptotically Complete Robust Verifications of Neural Networks via Quantum Optimization

Wenxin Li, Wenchao Liu, Chuan Wang et al. · Ltd. · Beijing Normal University

Quantum-optimization-based formal verification certifying neural network robustness against bounded adversarial perturbations with soundness and completeness guarantees

Input Manipulation Attack · vision
PDF
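The paper casts exact robustness verification of full neural networks as a quantum optimization problem. As a purely classical illustration of what a "sound and complete" certificate means, here is a minimal sketch for the simplest case, a linear classifier, where the worst-case L∞ perturbation has a closed form; the function name and toy model are hypothetical and not from the paper.

```python
def verify_linear_robust(W, b, x, eps, label):
    """Exact (sound and complete) L-infinity robustness check for a
    linear classifier with logits = W @ x + b.  For each rival class c
    the worst-case margin over the eps-ball has the closed form
        clean_margin(x) - eps * ||W[label] - W[c]||_1,
    attained at d_i = -eps * sign(W[label][i] - W[c][i])."""
    t = label
    for c in range(len(W)):
        if c == t:
            continue
        diff = [wt - wc for wt, wc in zip(W[t], W[c])]
        clean_margin = sum(d * xi for d, xi in zip(diff, x)) + b[t] - b[c]
        if clean_margin - eps * sum(abs(d) for d in diff) <= 0:
            return False   # some bounded perturbation flips the label
    return True

# Toy 2-class, 2-feature model: for this input the clean margin is 1
# and ||w_0 - w_1||_1 = 2, so the input is robust iff eps < 0.5.
W, b, x = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], [1.0, 0.0]
```

For nonlinear networks this closed form disappears, which is exactly why the paper resorts to a global optimizer; the quantum formulation plays the role of the exact search that this linear shortcut avoids.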
defense Stat Nov 10, 2025 · Nov 2025

Adaptive Testing for Segmenting Watermarked Texts From Language Models

Xingchi Li, Xiaochi Liu, Guanxun Li · Texas A&M University · Beijing Normal University

Adaptive change-point detection framework segments mixed texts into watermarked LLM and human-written portions without prompt estimation

Output Integrity Attack · nlp
1 citation · 1 influential · PDF
defense arXiv Oct 23, 2025 · Oct 2025

Fake-in-Facext: Towards Fine-Grained Explainable DeepFake Analysis

Lixiong Qin, Yang Zhang, Mei Wang et al. · Beijing University of Posts and Telecommunications · Beijing Normal University

Fine-grained explainable deepfake detector that grounds textual forgery explanations with artifact segmentation masks via MLLM

Output Integrity Attack · vision · nlp · multimodal
PDF · Code
benchmark arXiv Sep 11, 2025 · Sep 2025

Bridging the Gap Between Ideal and Real-world Evaluation: Benchmarking AI-Generated Image Detection in Challenging Scenarios

Chunxiao Li, Xiaoxiao Wang, Meiling Li et al. · Beijing Normal University · University of Chinese Academy of Sciences +3 more

Benchmarks 17 AI-image detectors and 10 VLMs on a real-world robustness dataset spanning social media transmission and re-digitization degradation

Output Integrity Attack · vision · multimodal
PDF
defense arXiv Aug 17, 2025 · Aug 2025

Semantic Discrepancy-aware Detector for Image Forgery Identification

Ziye Wang, Minghang Yu, Chunyan Xu et al. · Nanjing University of Science and Technology · Beijing Normal University

Novel vision-language model-guided detector aligns forgery traces with semantic concepts to identify AI-generated forged images

Output Integrity Attack · vision
PDF · Code
attack arXiv Jan 1, 2025 · Jan 2025

Everywhere Attack: Attacking Locally and Globally to Boost Targeted Transferability

Hui Zeng, Sanshuai Cui, Biwei Chen et al. · Southwest University of Science and Technology · Guangan Institute of Technology +2 more

Attacks every local image region simultaneously to boost targeted adversarial transferability across black-box vision models by 28–300%

Input Manipulation Attack · vision
5 citations · PDF · Code
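The core trick is to optimize one perturbation under a joint loss over many local regions rather than the whole image alone. A minimal sketch of that idea on a toy 1-D "image" and a linear victim model, using non-overlapping crops for simplicity; all names and the model are illustrative stand-ins, not the paper's setup:

```python
import math
import random

random.seed(0)
C, K = 3, 4                                  # 3 classes, crop length 4
# Toy linear "victim": logit_c(crop) = W[c] . crop
W = [[random.gauss(0, 1) for _ in range(K)] for _ in range(C)]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def crop_probs(img, s):
    crop = img[s:s + K]
    return softmax([sum(w * x for w, x in zip(W[c], crop)) for c in range(C)])

def everywhere_attack(image, target, steps=400, eps=3.0, lr=0.05):
    """Signed gradient descent on the cross-entropy summed over every
    local crop, so each region is pushed toward the target class rather
    than only the global input."""
    crops = range(0, len(image) - K + 1, K)  # non-overlapping local regions
    delta = [0.0] * len(image)
    for _ in range(steps):
        adv = [x + d for x, d in zip(image, delta)]
        grad = [0.0] * len(image)
        for s in crops:
            p = crop_probs(adv, s)
            for c in range(C):               # dCE/dx_i = sum_c (p_c - y_c) W[c][i]
                coef = p[c] - (1.0 if c == target else 0.0)
                for i in range(K):
                    grad[s + i] += coef * W[c][i]
        delta = [max(-eps, min(eps, d - lr * (1.0 if g > 0 else -1.0)))
                 for d, g in zip(delta, grad)]
    return [x + d for x, d in zip(image, delta)]

image = [random.gauss(0, 1) for _ in range(8)]
adv = everywhere_attack(image, target=1)
before = sum(crop_probs(image, s)[1] for s in (0, 4)) / 2
after = sum(crop_probs(adv, s)[1] for s in (0, 4)) / 2
```

In the paper's black-box transfer setting, optimizing on a surrogate under this everywhere-objective is what yields the reported targeted-transferability gains; the sketch only shows the multi-region loss shape.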