Latest papers

8 papers
attack arXiv Jan 29, 2026 · 9w ago

An Effective Energy Mask-based Adversarial Evasion Attacks against Misclassification in Speaker Recognition Systems

Chanwoo Park, Chanwoo Kim · Korea University

Novel frequency-domain energy masking attack generates imperceptible adversarial audio that evades speaker recognition with a 20% higher success rate than FGSM

Input Manipulation Attack audio
PDF
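A minimal sketch of the energy-masking idea: take an FGSM-style perturbation, move it into the frequency domain, and keep it only in high-energy bins where added noise is perceptually masked. Everything here is an assumption for illustration (the `grad` argument stands in for the loss gradient a real attack would pull from the target speaker-recognition model; parameter names are not the paper's).

```python
import numpy as np

def energy_mask_attack(audio, grad, eps=0.002, keep_frac=0.5):
    """Confine an FGSM-style perturbation to high-energy frequency bins.

    Hypothetical sketch: `grad` stands in for the gradient of the
    model's loss w.r.t. the waveform; `keep_frac` is the fraction of
    frequency bins (the strongest ones) allowed to carry noise.
    """
    spec = np.fft.rfft(audio)
    energy = np.abs(spec) ** 2
    # keep only the strongest bins, where the signal masks added noise
    thresh = np.quantile(energy, 1.0 - keep_frac)
    mask_bins = energy >= thresh
    pert_spec = np.fft.rfft(eps * np.sign(grad))  # FGSM step, in freq domain
    pert_spec[~mask_bins] = 0.0                   # zero it outside the mask
    adv = audio + np.fft.irfft(pert_spec, n=audio.shape[0])
    return np.clip(adv, -1.0, 1.0)
```

Masking in the frequency domain (rather than clipping in the time domain) is what lets the perturbation stay under the signal's own spectral envelope.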
defense IEEE Transactions on Network a... Jan 10, 2026 · 12w ago

SecureDyn-FL: A Robust Privacy-Preserving Federated Learning Framework for Intrusion Detection in IoT Networks

Imtiaz Ali Soomro, Hamood Ur Rehman, S. Jawad Hussain et al. · Sir Syed CASE Institute of Technology · Habib University +3 more

Defends federated learning models against poisoning and gradient inference attacks via GMM-based auditing and ElGamal encrypted aggregation in IoT IDS

Data Poisoning Attack Model Inversion Attack federated-learning
PDF
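The auditing step can be sketched in a few lines: score each client's model update by its distance from the population and drop statistical outliers before aggregating. This is a deliberate simplification, a single Gaussian over update distances rather than the paper's GMM, and it omits the ElGamal encrypted aggregation entirely; all names are illustrative.

```python
import numpy as np

def audit_updates(updates, z_thresh=2.5):
    """Flag anomalous client updates before federated aggregation.

    Simplified sketch of gradient auditing: compute each client's
    distance from the mean update, z-score the distances, and reject
    outliers (likely poisoned). The paper fits a Gaussian mixture;
    a single Gaussian keeps this sketch short.
    """
    updates = np.asarray(updates, dtype=float)
    dists = np.linalg.norm(updates - updates.mean(axis=0), axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    accepted = z < z_thresh
    # aggregate only the accepted clients' updates
    return updates[accepted].mean(axis=0), accepted
```

Auditing before aggregation is what limits a poisoning client's influence: a single large malicious update is rejected instead of being averaged into the global model.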
defense arXiv Dec 22, 2025 · Dec 2025

WaTeRFlow: Watermark Temporal Robustness via Flow Consistency

Utae Jeong, Sumin In, Hyunju Ryu et al. · Korea University · Google DeepMind +1 more

Defends image watermark provenance against image-to-video conversion using optical-flow consistency and diffusion-proxy training

Output Integrity Attack vision generative
PDF
defense arXiv Dec 18, 2025 · Dec 2025

Autoencoder-based Denoising Defense against Adversarial Attacks on Object Detection

Min Geun Song, Gang Min Kim, Woonmin Kim et al. · Korea University

Autoencoder denoising defense partially restores YOLOv5 object detection performance degraded by Perlin noise adversarial attacks

Input Manipulation Attack vision
PDF
attack arXiv Dec 18, 2025 · Dec 2025

In-Context Probing for Membership Inference in Fine-Tuned Language Models

Zhexi Lu, Hongliang Chi, Nathalie Baracaldo et al. · Rensselaer Polytechnic Institute · IBM Research +1 more

Attacks fine-tuned LLM privacy via in-context probing to infer training membership without shadow model training

Membership Inference Attack nlp
PDF
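Stripped to its decision rule, a shadow-model-free membership inference attack is a calibrated threshold on model confidence: members tend to receive lower loss than non-members. The sketch below shows only that rule; the paper's contribution is obtaining the loss signal via in-context probing of the fine-tuned model, which is not reproduced here, and the function names are hypothetical.

```python
import numpy as np

def infer_membership(losses, calibration_losses, fpr=0.1):
    """Threshold-based membership decision, no shadow models.

    `losses`: per-example losses the target model assigns to candidates.
    `calibration_losses`: losses on known non-member data, used to set
    the threshold at the desired false-positive rate. Candidates below
    the threshold are predicted to be training members.
    """
    thresh = np.quantile(calibration_losses, fpr)
    return np.asarray(losses) < thresh
```

Calibrating on known non-member data replaces the expensive shadow-model step used by classical membership inference attacks.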
defense arXiv Nov 3, 2025 · Nov 2025

Perturb a Model, Not an Image: Towards Robust Privacy Protection via Anti-Personalized Diffusion Models

Tae-Young Lee, Juwon Seo, Jong Hwan Ko et al. · Korea University · Kyung Hee University +1 more

Defends against unauthorized deepfake personalization by modifying diffusion models to resist subject-specific fine-tuning attacks

Output Integrity Attack vision generative
PDF Code
defense arXiv Oct 31, 2025 · Oct 2025

BlurGuard: A Simple Approach for Robustifying Image Protection Against AI-Powered Editing

Jinsu Kim, Yunhun Nam, Minseon Kim et al. · Korea University · Microsoft Research

Defends adversarial image protections from reversal attacks by applying adaptive per-region Gaussian blur to reshape the noise frequency spectrum

Output Integrity Attack vision generative
PDF Code
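The adaptive per-region blur can be sketched directly: split the image into blocks and blur each block with a Gaussian whose width grows with the block's local variance, pushing high-frequency protective noise toward lower frequencies. A toy sketch under stated assumptions; block size, the variance-to-sigma mapping, and all parameter names are illustrative, not the paper's.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur_by_region(img, block=16, base_sigma=0.5, scale=2.0):
    """Blur high-variance regions more strongly than smooth ones.

    Hypothetical sketch: each block's sigma grows with its local
    variance, so regions carrying high-frequency (protective) noise
    are smoothed hardest while flat regions are nearly untouched.
    """
    out = img.astype(float).copy()
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            patch = out[i:i + block, j:j + block]
            sigma = base_sigma + scale * min(patch.var(), 1.0)
            k = gaussian_kernel(sigma)
            # separable Gaussian blur: rows first, then columns
            patch = np.apply_along_axis(
                lambda r: np.convolve(r, k, 'same'), 1, patch)
            patch = np.apply_along_axis(
                lambda c: np.convolve(c, k, 'same'), 0, patch)
            out[i:i + block, j:j + block] = patch
    return out
```

Making the strength region-adaptive is the point: a single global blur either destroys image quality or leaves the protective noise's spectrum intact.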
defense arXiv Sep 30, 2025 · Sep 2025

ASGuard: Activation-Scaling Guard to Mitigate Targeted Jailbreaking Attack

Yein Park, Jungwoo Park, Jaewoo Kang · Korea University · AIGEN Sciences

Defends LLMs against tense-rephrasing jailbreaks via circuit analysis and activation-scaling preventative fine-tuning

Prompt Injection nlp
PDF Code