Latest papers

6 papers
defense · arXiv · Mar 6, 2026

SPOILER: TEE-Shielded DNN Partitioning for On-Device Secure Inference with Poison Learning

Donghwa Kang, Hojun Choe, Doohyun Kim et al. · Korea Advanced Institute of Science and Technology · University of Seoul

Defends edge-deployed DNNs against model theft via TEE partitioning and self-poisoning that renders the exposed backbone functionally incoherent

Model Theft · vision
PDF
attack · arXiv · Mar 5, 2026

Beyond the Patch: Exploring Vulnerabilities of Visuomotor Policies via Viewpoint-Consistent 3D Adversarial Object

Chanmi Lee, Minsung Yoon, Woojae Kim et al. · Korea Advanced Institute of Science and Technology

Attacks robot visuomotor policies with viewpoint-consistent 3D adversarial object textures optimized via differentiable rendering and saliency-guided perturbations

Input Manipulation Attack · vision · reinforcement-learning
PDF
attack · arXiv · Feb 26, 2026

No Caption, No Problem: Caption-Free Membership Inference via Model-Fitted Embeddings

Joonsung Jeon, Woo Jae Kim, Suhyeon Ha et al. · Korea Advanced Institute of Science and Technology

Caption-free membership inference attack on diffusion models using model-fitted embeddings to amplify memorization signals

Membership Inference Attack · vision · generative
PDF
attack · arXiv · Feb 2, 2026

Zero2Text: Zero-Training Cross-Domain Inversion Attacks on Textual Embeddings

Doohyun Kim, Donghwa Kang, Kyungjae Lee et al. · Korea Advanced Institute of Science and Technology · University of Seoul

Training-free embedding inversion attack recovers private text from RAG vector databases without in-domain data, defeating differential privacy defenses

Model Inversion Attack · Sensitive Information Disclosure · nlp
1 citation · PDF
defense · arXiv · Oct 22, 2025

AegisRF: Adversarial Perturbations Guided with Sensitivity for Protecting Intellectual Property of Neural Radiance Fields

Woo Jae Kim, Kyu Beom Han, Yoonki Cho et al. · Korea Advanced Institute of Science and Technology

Defends NeRF IP by embedding adversarial perturbations in rendered outputs to disrupt unauthorized downstream classifiers and 3D localization models

Input Manipulation Attack · Output Integrity Attack · vision
PDF · Code
attack · arXiv · Aug 19, 2025

Timestep-Compressed Attack on Spiking Neural Networks through Timestep-Level Backpropagation

Donghwa Kang, Doohyun Kim, Sang-Ki Ko et al. · Korea Advanced Institute of Science and Technology · University of Seoul +1 more

Accelerates gradient-based adversarial attacks on spiking neural networks by 57% via timestep-level backpropagation and membrane potential reuse

Input Manipulation Attack vision
PDF