
Ghosting Your LLM: Without The Knowledge of Your Gradient and Data

Abeer Matar A. Almalky 1, Ziyan Wang 2, Mohaiminul Al Nahian 1, Li Yang 2, Adnan Siraj Rakin 1

0 citations · 41 references · arXiv


Published on arXiv (2511.22700)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Achieves adversarial objectives on five open-source LLMs with as few as one bit flip, requiring 8-10x less memory than gradient-based BFA baselines and scaling with O(1) complexity across multiple tasks.

GDF-BFA

Novel technique introduced


In recent years, large language models (LLMs) have achieved substantial advancements and are increasingly integrated into critical applications across various domains. This growing adoption underscores the need to ensure their security and robustness. In this work, we focus on the impact of Bit Flip Attacks (BFAs) on LLMs, which exploit hardware faults to corrupt model parameters, posing a significant threat to model integrity and performance. Existing studies of BFAs against LLMs adopt a progressive bit-search strategy that relies predominantly on gradient-based techniques to identify sensitive layers or weights. However, computing gradients poses two specific challenges: first, in the context of LLMs, it substantially increases computational and memory costs, and second, it requires access to a sample victim dataset, or at least knowledge of the victim domain, to compute the gradient. In this work, we look beyond attack efficacy and aim to develop an efficient, practical Gradient-Data-free Bit-Flip Attack. The challenge lies in the core principle of adversarial attacks, which rely heavily on computing gradients from sample test/train data and manipulating model weights based on that gradient information. To overcome this, we propose novel vulnerability index metrics that identify vulnerable weight bits in LLMs independently of any gradient or data knowledge. By removing the dependency on gradient computation, our approach drastically reduces memory requirements and scales efficiently across multiple tasks with constant complexity. Experimental results demonstrate the efficiency of our method, which requires as few as a single bit flip to achieve adversarial objectives on five open-source LLMs.


Key Contributions

  • Layer Vulnerability Index (LVI) and Weight Vulnerability Index (WVI) metrics that identify vulnerable bits in LLM parameters without gradient computation or data access
  • Gradient-Data-free Bit-Flip Attack (GDF-BFA) that reduces memory overhead 8-10x compared to prior gradient-based BFA methods and scales with O(1) complexity across N tasks
  • Demonstrated attack efficacy on five open-source LLMs requiring as few as a single bit flip to achieve adversarial objectives
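The summary does not spell out the LVI/WVI formulas. As an illustrative sketch only (a hypothetical heuristic, not the authors' actual metric), a gradient- and data-free vulnerability score could be computed directly from the stored FP16 bit patterns, e.g. by measuring how much each weight's value would change if one exponent bit were flipped:

```python
import numpy as np

def wvi_sketch(weights: np.ndarray, bit: int = 14) -> np.ndarray:
    """Hypothetical data-free weight-vulnerability score: the absolute
    change in each weight's value if the given bit of its IEEE 754
    half-precision encoding were flipped. Uses only the stored bit
    patterns -- no gradients, no victim data."""
    raw = weights.astype(np.float16).view(np.uint16)
    flipped = (raw ^ np.uint16(1 << bit)).view(np.float16)
    return np.abs(flipped.astype(np.float32) - weights.astype(np.float32))

w = np.array([0.003, -0.9, 0.04], dtype=np.float16)
scores = wvi_sketch(w)
# Rank candidates by score; the top entry is the most damaging flip.
target = int(np.argmax(scores))
```

Because the score depends only on the weights' own encodings, it can be computed once per model and reused across tasks, which is consistent with the O(1) multi-task scaling the paper claims.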

🛡️ Threat Analysis

Model Poisoning

The paper's core contribution is a model weight corruption attack (a Bit Flip Attack) that exploits Rowhammer-style hardware faults to flip bits in LLM parameter memory, directly compromising model integrity. This is a form of model poisoning that requires no backdoor trigger: the model's weights are maliciously altered in place to achieve adversarial objectives.
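To see why a single flip can suffice, recall the IEEE 754 half-precision layout (bit 15 = sign, bits 14-10 = exponent, bits 9-0 = mantissa): flipping the most significant exponent bit of a small weight multiplies its magnitude by roughly 2^16. A minimal sketch of the mechanism (`flip_bit_fp16` is an illustrative helper, not code from the paper):

```python
import numpy as np

def flip_bit_fp16(value: np.float16, bit: int) -> np.float16:
    """Flip one bit of an IEEE 754 half-precision value
    (bit 15 = sign, bits 14-10 = exponent, bits 9-0 = mantissa)."""
    raw = value.view(np.uint16)          # reinterpret the 16 stored bits
    return np.uint16(raw ^ (1 << bit)).view(np.float16)

w = np.float16(0.01)
# Flipping the exponent MSB (bit 14) turns ~0.01 into ~655.5,
# a ~65000x magnitude change from a single hardware fault.
corrupted = flip_bit_fp16(w, 14)
```

A fault injection such as Rowhammer performs exactly this kind of single-bit corruption in DRAM, which is why one well-chosen flip in a weight tensor can destroy model behavior.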


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
grey_box, inference_time, targeted
Datasets
LLaMA-3-8B benchmark tasks
Applications
large language models, mlaas platforms, cloud-hosted llms