Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous
Ben Nassi, Stav Cohen, Or Yair
Published on arXiv (arXiv:2508.12175)
Prompt Injection
OWASP LLM Top 10 — LLM01
Excessive Agency
OWASP LLM Top 10 — LLM08
Key Finding
73% of the analyzed Promptware threats against Gemini-powered assistants pose a High-to-Critical risk to end users, with demonstrated attacks achieving data exfiltration, phishing, and physical device control via indirect prompt injection; mitigations deployed by Google reduce the assessed risk to Very Low-to-Medium
Targeted Promptware Attacks
Novel technique introduced
The growing integration of LLMs into applications has introduced new security risks, known as Promptware: maliciously engineered prompts designed to manipulate LLMs and compromise the CIA triad of these applications. While prior research warned about a potential shift in the threat landscape for LLM-powered applications, the risk posed by Promptware is frequently perceived as low. In this paper, we investigate the risk Promptware poses to users of Gemini-powered assistants (web application, mobile application, and Google Assistant). We propose a novel Threat Analysis and Risk Assessment (TARA) framework to assess Promptware risks for end users. Our analysis focuses on a new variant of Promptware called Targeted Promptware Attacks, which leverage indirect prompt injection via common user interactions such as emails, calendar invitations, and shared documents. We demonstrate 14 attack scenarios against Gemini-powered assistants across five identified threat classes: Short-term Context Poisoning, Permanent Memory Poisoning, Tool Misuse, Automatic Agent Invocation, and Automatic App Invocation. These attacks carry both digital and physical consequences, including spamming, phishing, disinformation campaigns, data exfiltration, unapproved video streaming of the user, and control of home automation devices. We reveal Promptware's potential for on-device lateral movement: escaping the boundaries of the LLM-powered application to trigger malicious actions using a device's other applications. Our TARA reveals that 73% of the analyzed threats pose a High-to-Critical risk to end users. We discuss mitigations, reassess the risk in light of the mitigations that were deployed, and show that it could be reduced significantly to Very Low-to-Medium. We disclosed our findings to Google, which deployed dedicated mitigations.
Key Contributions
- Novel TARA (Threat Analysis and Risk Assessment) framework for systematically evaluating Promptware risk across five threat classes in production LLM-powered assistants
- 14 demonstrated attack scenarios against production Gemini assistants via indirect prompt injection through emails, calendar invites, and shared documents, covering spamming, phishing, data exfiltration, and home automation control
- First demonstration of on-device lateral movement from an LLM-powered app to other OS-level applications on the device, with 73% of threats rated High-to-Critical risk before mitigations