Latest papers

6 papers
defense · arXiv · Nov 20, 2025

How Noise Benefits AI-generated Image Detection

Jiazhen Yan, Ziqiang Li, Fan Wang et al. · Nanjing University of Information Science and Technology · University of Macau +1 more

Proposes PiN-CLIP, a noise-guided CLIP fine-tuning method that suppresses spurious shortcuts for generalizable AI-generated image detection

Output Integrity Attack · vision · generative
PDF
defense · arXiv · Oct 31, 2025

Who Made This? Fake Detection and Source Attribution with Diffusion Features

Simone Bonechi, Paolo Andreini, Barbara Toniella Corradini · University of Siena · Italian Institute of Technology

Leverages diffusion model internal activations to detect deepfakes and attribute source generators without fine-tuning

Output Integrity Attack · vision · generative
1 citation · PDF
benchmark · arXiv · Oct 28, 2025

Training-free Source Attribution of AI-generated Images via Resynthesis

Pietro Bongini, Valentina Molinari, Andrea Costanzo et al. · University of Siena · IMT School

Training-free one-shot method attributes synthetic images to source generators via resynthesis and CLIP feature comparison, with a new benchmark dataset

Output Integrity Attack · vision · generative
PDF
defense · arXiv · Oct 18, 2025

EditMark: Watermarking Large Language Models based on Model Editing

Shuai Li, Kejiang Chen, Jun Jiang et al. · University of Science and Technology of China · A*STAR +1 more

Embeds 32-bit ownership watermarks into LLM weights via model editing in 20 seconds, enabling copyright verification without training costs

Model Theft · nlp
PDF
defense · arXiv · Sep 29, 2025

Of-SemWat: High-payload text embedding for semantic watermarking of AI-generated images with arbitrary size

Benedetta Tondi, Andrea Costanzo, Mauro Barni · University of Siena

Embeds high-payload semantic text watermarks in large AI-generated images to enable provenance tracking and manipulation detection

Output Integrity Attack · vision · generative · nlp
PDF
attack · arXiv · Sep 12, 2025

Immunizing Images from Text to Image Editing via Adversarial Cross-Attention

Matteo Trippodo, Federico Becattini, Lorenzo Seidenari · University of Florence · University of Siena

Adversarial perturbations disrupt cross-attention in diffusion-based image editors, immunizing images against unwanted text-guided edits

Input Manipulation Attack · vision · generative
PDF