Latest papers

14 papers
attack arXiv Mar 24, 2026

AdvSplat: Adversarial Attacks on Feed-Forward Gaussian Splatting Models

Yiran Qiao, Yiren Lu, Yunlai Zhou et al. · Case Western Reserve University

White-box and query-efficient black-box adversarial attacks on feed-forward 3D Gaussian Splatting models via imperceptible input perturbations

Input Manipulation Attack · vision
PDF
benchmark arXiv Mar 9, 2026

Quantifying Memorization and Privacy Risks in Genomic Language Models

Alexander Nemecek, Wenbiao Li, Xiaoqian Jiang et al. · Case Western Reserve University · UTHealth +1 more

Multi-vector framework quantifying memorization, canary extraction, and membership inference risks across genomic language model architectures

Model Inversion Attack · Membership Inference Attack · nlp
PDF
defense arXiv Mar 8, 2026

Few Tokens, Big Leverage: Preserving Safety Alignment by Constraining Safety Tokens during Fine-tuning

Guoli Wang, Haonan Shi, Tu Ouyang et al. · Case Western Reserve University

Preserves LLM safety alignment during fine-tuning by regularizing confidence on only a small subset of safety-critical tokens

Transfer Learning Attack · Prompt Injection · nlp
PDF
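The token-level constraint described above can be sketched in a few lines. This is a hypothetical numpy illustration, not the paper's implementation: the exact loss form, the `safety_mask`, and the `lam` weight are assumptions, standing in for whatever regularizer the authors actually use on safety-critical tokens.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def safety_constrained_loss(logits, ref_logits, targets, safety_mask, lam=1.0):
    """Task cross-entropy plus a KL penalty that pins the fine-tuned model's
    token distribution to a frozen reference model's, but only at
    safety-critical positions.

    logits, ref_logits: (seq_len, vocab) current / frozen reference logits
    targets:            (seq_len,) next-token ids for the fine-tuning task
    safety_mask:        (seq_len,) bool, True at safety-critical positions
    """
    p = softmax(logits)
    q = softmax(ref_logits)
    # standard next-token cross-entropy on the downstream task
    ce = -np.log(p[np.arange(len(targets)), targets] + 1e-12).mean()
    # KL(q || p), applied only where the mask flags safety tokens
    kl = (q * (np.log(q + 1e-12) - np.log(p + 1e-12))).sum(axis=-1)
    penalty = kl[safety_mask].mean() if safety_mask.any() else 0.0
    return ce + lam * penalty
```

Because the penalty touches only masked positions, the rest of the sequence is free to adapt to the fine-tuning task, which is the "few tokens, big leverage" intuition.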
defense arXiv Mar 2, 2026

Authenticated Contradictions from Desynchronized Provenance and Watermarking

Alexander Nemecek, Hengzhi He, Guang Cheng et al. · Case Western Reserve University · University of California

Exposes a provenance-watermark desynchronization vulnerability that produces cryptographically valid AI-generated 'authenticated fakes', and proposes a cross-layer audit protocol as a defense

Output Integrity Attack · vision · generative
PDF
defense arXiv Feb 22, 2026

DefenseSplat: Enhancing the Robustness of 3D Gaussian Splatting via Frequency-Aware Filtering

Yiran Qiao, Yiren Lu, Yunlai Zhou et al. · Case Western Reserve University

Defends 3D Gaussian Splatting against adversarial input perturbations using wavelet-based high-frequency filtering for input purification

Input Manipulation Attack · vision
PDF
attack arXiv Feb 21, 2026

LoMime: Query-Efficient Membership Inference using Model Extraction in Label-Only Settings

Abdullah Caglar Oksuz, Anisa Halimi, Erman Ayday · Case Western Reserve University · IBM Research

Query-efficient label-only membership inference attack that builds a surrogate via model extraction, reducing per-sample query overhead to ~1% of training set size

Membership Inference Attack · Model Theft · tabular
PDF
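The extraction-then-inference pipeline described above can be illustrated with a toy surrogate. This is a minimal sketch under assumed details, not LoMime itself: the linear-softmax surrogate, the confidence-based membership score, and all hyperparameters here are illustrative stand-ins for the paper's actual extraction and scoring steps.

```python
import numpy as np

def fit_surrogate(X, hard_labels, n_classes, lr=0.5, steps=300):
    """Fit a linear softmax surrogate using only the target model's hard
    labels, mimicking the label-only model-extraction step."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[hard_labels]          # one-hot pseudo-labels
    for _ in range(steps):
        Z = X @ W
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (P - Y) / len(X)        # full-batch gradient step
    return W

def membership_score(W, x, label):
    """Surrogate confidence on the candidate's label; higher confidence is
    treated as more member-like, so no target queries are needed per sample."""
    z = x @ W
    p = np.exp(z - z.max())
    p /= p.sum()
    return p[label]
```

The point of the sketch is the query-cost shift: once the surrogate is extracted, each membership decision is scored locally against `W` instead of via fresh queries to the target.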
defense arXiv Feb 13, 2026

Neighborhood Blending: A Lightweight Inference-Time Defense Against Membership Inference Attacks

Osama Zafar, Shaojie Zhan, Tianxi Ji et al. · Case Western Reserve University · Texas Tech University

Inference-time defense smooths model confidence outputs via DP neighborhood averaging to defeat membership inference attacks without retraining

Membership Inference Attack · tabular
PDF
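The smoothing idea can be sketched as follows. This is a hypothetical illustration of neighborhood averaging, not the paper's algorithm: the Gaussian neighbor sampling, `k`, and `sigma` are assumptions, and the DP noise calibration the summary mentions is omitted.

```python
import numpy as np

def blended_confidence(model_probs, x, k=8, sigma=0.1, rng=None):
    """Return class probabilities for x averaged over k Gaussian-perturbed
    neighbors, smoothing the per-sample confidence signal that membership
    inference attacks exploit. Works at inference time, with no retraining.

    model_probs: callable mapping an (n, d) batch to (n, c) probabilities
    x:           (d,) input vector
    """
    rng = np.random.default_rng() if rng is None else rng
    neighbors = x + sigma * rng.normal(size=(k, x.shape[0]))
    batch = np.vstack([x, neighbors])       # include the original point
    return model_probs(batch).mean(axis=0)  # an average of distributions
```

Since each row of `model_probs` sums to one, the blended output is still a valid distribution; only the sharpness of the confidence, which membership attacks thrive on, is reduced.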
defense arXiv Feb 4, 2026

Trust The Typical

Debargha Ganguly, Sreehari Sankar, Biyao Zhang et al. · Case Western Reserve University · University of Pittsburgh +2 more

Defends LLMs against jailbreaks via OOD detection on safe prompts, reducing false positives by 40x over specialized safety models

Prompt Injection · nlp
1 citation · PDF
defense arXiv Jan 20, 2026

SecureSplit: Mitigating Backdoor Attacks in Split Learning

Zhihao Dou, Dongfei Cui, Weida Wang et al. · Case Western Reserve University · Northeast Electric Power University +6 more

Defends split learning against backdoor attacks by transforming embeddings and filtering poisoned ones via majority-voting scheme

Model Poisoning · vision · federated-learning
PDF
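The filtering step described above can be sketched with a simple voting rule. This is a hypothetical illustration, not SecureSplit's actual scheme: it assumes each embedding is classified under several transformed views, and that an agreement threshold decides which embeddings to keep.

```python
import numpy as np

def majority_filter(view_labels, agreement=0.75):
    """Flag embeddings whose transformed views disagree as suspect.

    view_labels: (n_views, n_samples) predicted label per transformed view
    Returns (majority_label, keep_mask): a sample is kept only if at least
    `agreement` of its views vote for the majority label, on the premise
    that a backdoored embedding behaves inconsistently across transforms.
    """
    n_views, n = view_labels.shape
    maj = np.empty(n, dtype=view_labels.dtype)
    frac = np.empty(n)
    for i in range(n):
        vals, counts = np.unique(view_labels[:, i], return_counts=True)
        j = counts.argmax()
        maj[i] = vals[j]
        frac[i] = counts[j] / n_views
    return maj, frac >= agreement
```

A clean embedding should be classified consistently no matter how it is transformed, so low cross-view agreement is the poisoning signal here.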
defense arXiv Nov 9, 2025

EASE: Practical and Efficient Safety Alignment for Small Language Models

Haonan Shi, Guoli Wang, Tu Ouyang et al. · Case Western Reserve University

Defends small LLMs against jailbreaks via selective safety reasoning that activates only for dangerous queries, cutting overhead by 90%

Prompt Injection · nlp
PDF · Code
benchmark TrustCom Nov 9, 2025

Comparing Reconstruction Attacks on Pretrained Versus Full Fine-tuned Large Language Model Embeddings on Homo Sapiens Splice Sites Genomic Data

Reem Al-Saidi, Erman Ayday, Ziad Kobti · University of Windsor · Case Western Reserve University

Compares genomic DNA reconstruction vulnerability across pretrained and fine-tuned LLM embeddings, finding fine-tuning reduces attack success by up to 19.8%

Model Inversion Attack · Sensitive Information Disclosure · nlp
PDF
attack TPS-ISA Oct 21, 2025

Exploring Membership Inference Vulnerabilities in Clinical Large Language Models

Alexander Nemecek, Zebin Yun, Zahra Rahmani et al. · Case Western Reserve University · Tel Aviv University

Evaluates membership inference attacks on clinical LLMs fine-tuned on EHR data using loss-based and paraphrase-perturbation methods

Membership Inference Attack · Sensitive Information Disclosure · nlp
PDF
defense arXiv Sep 30, 2025

SafeBehavior: Simulating Human-Like Multistage Reasoning to Mitigate Jailbreak Attacks in Large Language Models

Qinjian Zhao, Jiaqi Wang, Zhiqiang Gao et al. · Wenzhou-Kean University · University of Bremen +2 more

Three-stage LLM jailbreak defense using intention inference, self-introspection, and self-revision to counter optimization-based and prompt-based attacks

Input Manipulation Attack · Prompt Injection · nlp
PDF
defense arXiv Jan 8, 2025

Navigating the Designs of Privacy-Preserving Fine-tuning for Large Language Models

Haonan Shi, Tu Ouyang, An Wang · Case Western Reserve University

Proposes GuardedTuning framework defending against data reconstruction attacks during privacy-preserving LLM fine-tuning via split learning

Model Inversion Attack · Sensitive Information Disclosure · nlp
PDF