Yuan Hong

Papers in Database (3)

defense In Proceedings of the 32nd ACM... Sep 5, 2025 · Sep 2025

Safeguarding Graph Neural Networks against Topology Inference Attacks

Jie Fu, Yuan Hong, Zhili Chen et al. · Stevens Institute of Technology · University of Connecticut +1 more

Proposes graph topology reconstruction attacks on GNNs and a bi-level optimization defense to prevent training data leakage

Model Inversion Attack graph
PDF Code
attack arXiv Aug 3, 2025 · Aug 2025

Practical, Generalizable and Robust Backdoor Attacks on Text-to-Image Diffusion Models

Haoran Dai, Jiawen Wang, Ruo Yang et al. · Illinois Institute of Technology · Samsung +2 more

Backdoor attack on text-to-image diffusion models achieving >90% success with only 10 poisoned samples and natural-language triggers

Model Poisoning Data Poisoning Attack visionnlpgenerative
PDF
defense arXiv Mar 17, 2026 · 20d ago

Detecting Data Poisoning in Code Generation LLMs via Black-Box, Vulnerability-Oriented Scanning

Shenao Yan, Shimaa Ahmed, Shan Jin et al. · University of Connecticut · Visa Research

Black-box scanning framework detecting poisoned code generation LLMs by identifying recurring vulnerable code structures across diverse prompts

Data Poisoning Attack Model Poisoning Training Data Poisoning nlp
PDF