Thomas Eisenbarth

Papers in Database (2)

attack · arXiv · Aug 7, 2025

Non-omniscient backdoor injection with one poison sample: Proving the one-poison hypothesis for linear regression, linear classification, and 2-layer ReLU neural networks

Thorsten Peinemann, Paula Arnold, Sebastian Berndt et al. · University of Lübeck · Technische Hochschule Lübeck

Proves that a single poison sample suffices to backdoor linear regression, linear classification, and 2-layer ReLU networks with zero backdoor error, without full knowledge of the training data

Model Poisoning
PDF
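The one-poison idea above can be illustrated with a toy least-squares sketch. This is my own minimal construction (trigger direction, scale constant, and data shapes are all illustrative assumptions, not the paper's exact setup): a single high-magnitude poison point along a direction where the clean data has almost no variance forces the fitted weights to encode the attacker's target, while clean predictions barely move.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5

# Clean data: almost no variance along coordinate 0 (the trigger direction).
X = rng.normal(size=(n, d))
X[:, 0] *= 1e-3
w_true = rng.normal(size=d)
y = X @ w_true

# Attacker picks a trigger direction t and target response s; no access
# to the full training set is needed, only a rough sense of data scale.
t = np.zeros(d)
t[0] = 1.0
s = 10.0   # attacker-chosen backdoor response
c = 1e4    # large scale so the single poison point dominates the fit

X_pois = np.vstack([X, c * t])
y_pois = np.append(y, c * s)

# Victim fits ordinary least squares on the poisoned data.
w_hat, *_ = np.linalg.lstsq(X_pois, y_pois, rcond=None)

print(t @ w_hat)                          # close to s = 10.0 (backdoor active)
print(np.mean((X @ w_hat - y) ** 2))      # clean MSE stays tiny
```

Adding the trigger t to any input shifts the prediction by roughly s, yet clean inputs (which have negligible trigger component) are almost unaffected, mirroring the paper's "zero backdoor error, preserved clean accuracy" claim in miniature.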
attack · arXiv · Sep 11, 2025

Prompt Pirates Need a Map: Stealing Seeds helps Stealing Prompts

Felix Mächtle, Ashwath Shetty, Jonas Sander et al. · University of Lübeck · Kiel University

Exploits PyTorch's 32-bit seed space to brute-force generation seeds and recover prompts from diffusion-model outputs

Model Inversion Attack · generative · vision
PDF
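The seed-recovery step above can be sketched in a few lines. This is a stand-in using NumPy's generator in place of `torch.Generator` (the shape, toy seed range, and `latent_noise` helper are illustrative assumptions): because the seed space is only 2^32, an attacker can regenerate the initial latent noise for each candidate seed and compare against the observed one.

```python
import numpy as np

def latent_noise(seed: int, shape=(4, 8, 8)) -> np.ndarray:
    """Stand-in for a diffusion model's seeded initial latent draw
    (NumPy RNG instead of torch.Generator; same brute-force principle)."""
    return np.random.default_rng(seed).standard_normal(shape)

# Victim samples with an unknown seed.
secret_seed = 1337
observed = latent_noise(secret_seed)

# Attacker scans candidate seeds until the regenerated noise matches.
# The real attack covers PyTorch's full 2**32 space offline; the toy
# range here is kept tiny so the sketch runs instantly.
recovered = next(
    s for s in range(5000) if np.allclose(latent_noise(s), observed)
)
print(recovered)  # 1337
```

Once the seed is known, the attacker can fix the generation randomness and search over candidate prompts whose outputs reproduce the observed image, which is what makes seed recovery the "map" for prompt stealing.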