Latest papers

12 papers
defense arXiv Apr 2, 2026 · 6d ago

Moiré Video Authentication: A Physical Signature Against AI Video Generation

Yuan Qing, Kunyu Zheng, Lingxiao Li et al. · Boston University

Physics-based video authentication using Moiré interference patterns that real cameras produce but AI generators cannot faithfully reproduce

Output Integrity Attack vision generative
PDF
attack arXiv Feb 25, 2026 · 6w ago

Attention to Neural Plagiarism: Diffusion Models Can Plagiarize Your Copyrighted Images!

Zihang Zou, Boqing Gong, Liqiang Wang · University of Central Florida · Boston University

Gradient-based attack exploits diffusion model cross-attention to replicate copyrighted images while evading both visible and invisible watermarks

Output Integrity Attack vision generative
PDF Code
benchmark arXiv Feb 21, 2026 · 6w ago

Prior Aware Memorization: An Efficient Metric for Distinguishing Memorization from Generalization in Large Language Models

Trishita Tiwari, Ari Trachtenberg, G. Edward Suh · Cornell University · Boston University +1 more

Proposes Prior-Aware Memorization metric showing 55–90% of LLM 'memorized' sequences are actually statistically common, not genuine leakage

Model Inversion Attack Sensitive Information Disclosure nlp
PDF
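The summary's core idea, distinguishing genuinely memorized sequences from text that is merely statistically common, can be sketched as a likelihood gap against a reference model. This is an illustrative sketch only: the gap score, the threshold, and the function names are assumptions, not the paper's exact metric.

```python
def prior_aware_gap(logp_target: float, logp_prior: float) -> float:
    # Gap (in nats) between the target model's log-likelihood of a
    # sequence and a generic reference ("prior") model's log-likelihood
    # of the same sequence.
    return logp_target - logp_prior

def classify_sequence(logp_target: float, logp_prior: float,
                      threshold: float = 2.0) -> str:
    # threshold is a hypothetical cutoff: a large gap means the target
    # model rates the sequence far more likely than a generic prior
    # would (candidate genuine memorization); a small gap means
    # statistically common text that any model finds likely.
    gap = prior_aware_gap(logp_target, logp_prior)
    return "memorized" if gap > threshold else "common"
```

Under this framing, the paper's 55–90% figure corresponds to extracted sequences that land on the "common" side of the gap.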
survey arXiv Feb 6, 2026 · 8w ago

Trojans in Artificial Intelligence (TrojAI) Final Report

Kristopher W. Reese, Taylor Kulp-McDowall, Michael Majurski et al. · IARPA · NIST +13 more

Surveys findings from the multi-year IARPA TrojAI program on AI backdoor detection via weight analysis and trigger inversion

Model Poisoning vision nlp
PDF
defense arXiv Feb 4, 2026 · 9w ago

Laws of Learning Dynamics and the Core of Learners

Inkee Jung, Siu Cheong Lau · Boston University

Entropy-based ensemble immunization defense that detects and adapts to adversarial perturbations via hierarchical logifold generations

Input Manipulation Attack vision
PDF
attack arXiv Nov 24, 2025 · Nov 2025

RoguePrompt: Dual-Layer Ciphering for Self-Reconstruction to Circumvent LLM Moderation

Benyamin Tafreshian · Boston University

Automated cipher-encoding jailbreak using ROT-13 and Vigenère nesting to bypass LLM moderation and self-reconstruct forbidden prompts

Prompt Injection nlp
PDF
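The nesting the summary describes can be sketched by composing the two classical ciphers it names, ROT-13 inside a Vigenère layer. This is an illustrative composition only (the key and helper names are assumptions), not the paper's actual encoding pipeline.

```python
import codecs

def vigenere_encrypt(text: str, key: str) -> str:
    # Classic Vigenère: shift each letter by the matching key letter,
    # preserving case and skipping non-alphabetic characters.
    out, ki = [], 0
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            shift = ord(key[ki % len(key)].lower()) - ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
            ki += 1
        else:
            out.append(ch)
    return ''.join(out)

def dual_layer_encode(prompt: str, key: str = "cipher") -> str:
    # Nest the two layers: ROT-13 first, then Vigenère on top.
    return vigenere_encrypt(codecs.encode(prompt, 'rot_13'), key)
```

The point of such nesting is that the string a moderation filter sees shares no surface tokens with the underlying prompt, while a capable model can be instructed to invert both layers.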
defense arXiv Sep 18, 2025 · Sep 2025

Real, Fake, or Manipulated? Detecting Machine-Influenced Text

Yitong Wang, Zhongping Zhang, Margherita Piana et al. · Boston University · University of California

Novel hierarchical detector classifies LLM-influenced text into four fine-grained types, outperforming state-of-the-art detectors by 2.5–3 mAP

Output Integrity Attack nlp
PDF
benchmark arXiv Aug 26, 2025 · Aug 2025

The Sample Complexity of Membership Inference and Privacy Auditing

Mahdi Haghifam, Adam Smith, Jonathan Ullman · Northeastern University · Boston University

Proves membership inference needs Ω(n + n²ρ²) reference samples, showing all practical O(n)-sample attacks are fundamentally limited

Membership Inference Attack tabular
PDF
defense arXiv Aug 19, 2025 · Aug 2025

CCFC: Core & Core-Full-Core Dual-Track Defense for LLM Jailbreak Protection

Jiaming Hu, Haoyu Wang, Debarghya Mukherjee et al. · University at Albany · Boston University

Dual-track prompt-level defense isolates query semantic cores to neutralize LLM jailbreaks including GCG and DeepInception

Input Manipulation Attack Prompt Injection nlp
PDF
defense arXiv Aug 19, 2025 · Aug 2025

CRISP: Persistent Concept Unlearning via Sparse Autoencoders

Tomer Ashuach, Dana Arad, Aaron Mueller et al. · Technion – Israel Institute of Technology · Boston University +1 more

Permanently removes dangerous LLM knowledge by suppressing sparse autoencoder features via fine-tuning, blocking adversarial bypass of inference-time safety measures

Prompt Injection nlp
PDF Code
defense arXiv Jan 9, 2025 · Jan 2025

RAG-WM: An Efficient Black-Box Watermarking Approach for Retrieval-Augmented Generation of Large Language Models

Peizhuo Lv, Mengjie Sun, Hao Wang et al. · Chinese Academy of Sciences · Shandong University +2 more

Embeds 'knowledge watermarks' into RAG document stores to detect IP theft of retrieval-augmented LLM systems via black-box querying

Model Theft nlp
PDF
defense arXiv Jan 4, 2025 · Jan 2025

AdaMixup: A Dynamic Defense Framework for Membership Inference Attack Mitigation

Ying Chen, Jiajing Chen, Yijie Weng et al. · New York University · University of California +3 more

Defends against membership inference attacks with adaptive mixup that dynamically adjusts interpolation ratios during training

Membership Inference Attack vision
3 citations PDF
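The mixup mechanism this entry builds on can be sketched in a few lines: train on convex combinations of shuffled example pairs so no raw training point is seen verbatim, which blunts membership signals. The adaptive schedule below is a hypothetical stand-in for AdaMixup's dynamic adjustment, not the paper's actual rule.

```python
import random

def mixup_batch(xs, ys, alpha, rng=random.Random(0)):
    # Standard mixup (Zhang et al., 2018): draw lambda ~ Beta(alpha, alpha)
    # and blend each example with a randomly paired one.
    # xs: list of feature vectors, ys: list of one-hot label vectors.
    lam = rng.betavariate(alpha, alpha)
    idx = list(range(len(xs)))
    rng.shuffle(idx)
    x_mix = [[lam * a + (1 - lam) * b for a, b in zip(xs[i], xs[j])]
             for i, j in zip(range(len(xs)), idx)]
    y_mix = [[lam * a + (1 - lam) * b for a, b in zip(ys[i], ys[j])]
             for i, j in zip(range(len(ys)), idx)]
    return x_mix, y_mix, lam

# Hypothetical schedule standing in for AdaMixup's dynamic adjustment:
# ramp the Beta concentration so mixing strengthens over training.
def alpha_schedule(epoch: int, total_epochs: int,
                   lo: float = 0.2, hi: float = 1.0) -> float:
    return lo + (hi - lo) * epoch / max(1, total_epochs - 1)
```

Because every training target is an interpolation, the loss landscape around individual members flattens, which is what degrades the attacker's membership signal.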