IndirectAD: Practical Data Poisoning Attacks against Recommender Systems for Item Promotion
Zihao Wang 1, Tianhao Mao 2, XiaoFeng Wang 1, Di Tang 3, Xiaozhong Liu 4
Published on arXiv: 2511.05845
Data Poisoning Attack
OWASP ML Top 10 — ML02
Key Finding
IndirectAD achieves noticeable item promotion while controlling only 0.05% of user accounts, roughly 20x fewer than the ~1% that prior work assumed was necessary, a threshold that made such attacks appear impractical
IndirectAD
Novel technique introduced
Recommender systems play a central role in digital platforms by providing personalized content. They often use methods such as collaborative filtering and machine learning to accurately predict user preferences. Although these systems offer substantial benefits, they are vulnerable to security and privacy threats, especially data poisoning attacks. By inserting misleading data, attackers can manipulate recommendations for purposes ranging from boosting product visibility to shaping public opinion. Despite these risks, concerns are often downplayed because such attacks typically require controlling at least 1% of the platform's user base, a difficult task on large platforms. We tackle this issue by introducing the IndirectAD attack, inspired by Trojan attacks on machine learning. IndirectAD reduces the need for a high poisoning ratio through a trigger item that is easier to recommend to the target users. Rather than directly promoting a target item that does not match a user's interests, IndirectAD first promotes the trigger item, then transfers that advantage to the target item by creating co-occurrence data between them. This indirect strategy delivers a stronger promotion effect while using fewer controlled user accounts. Our extensive experiments on multiple datasets and recommender systems show that IndirectAD can cause noticeable impact with only 0.05% of the platform's user base. Even in large-scale settings, IndirectAD remains effective, highlighting a more serious and realistic threat to today's recommender systems.
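The two-stage strategy in the abstract, promoting an easier-to-recommend trigger item and creating co-occurrence between trigger and target, can be illustrated as synthetic interaction data. The following is a minimal sketch, not the authors' implementation: the function name, the filler-item heuristic, and the platform size are illustrative assumptions.

```python
import random

def craft_poisoned_interactions(n_fake_users, trigger_item, target_item,
                                popular_items, k_filler=8, seed=0):
    """Build fake user histories that (1) make the trigger item look
    broadly popular and (2) create trigger-target co-occurrence.
    Illustrative sketch only; not the paper's algorithm."""
    rng = random.Random(seed)
    interactions = []  # list of (user_id, item_id) pairs
    for u in range(n_fake_users):
        user = f"fake_{u}"
        # Filler items drawn from genuinely popular items make the
        # profile resemble an ordinary user's history.
        fillers = rng.sample(popular_items, k_filler)
        history = fillers + [trigger_item, target_item]
        interactions.extend((user, item) for item in history)
    return interactions

# At the paper's reported 0.05% ratio, a hypothetical platform of
# 1,000,000 users requires only 500 controlled accounts.
n_fake = int(1_000_000 * 0.0005)
data = craft_poisoned_interactions(n_fake, "trigger", "target",
                                   popular_items=list(range(100)))
```

Each fake profile contains both the trigger and the target, so every injected account contributes one trigger-target co-occurrence to the training data.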
Key Contributions
- IndirectAD attack that uses a trigger item to establish co-occurrence with a target item via poisoned user interaction data, reducing direct promotion difficulty
- Reduces the required poisoning ratio from the ~1% of the platform's user base needed by direct poisoning attacks to just 0.05%
- Demonstrates attack effectiveness across multiple datasets and diverse recommender system architectures including collaborative filtering and ML-based models
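The reduction in attacker resources is straightforward to quantify; assuming a hypothetical platform of 2 million users (the platform size here is illustrative, not from the paper):

```python
# Controlled accounts needed at each poisoning ratio.
platform_users = 2_000_000                        # assumed size, for illustration
direct_accounts = int(platform_users * 0.01)      # ~1% prior-work threshold
indirect_accounts = int(platform_users * 0.0005)  # 0.05% with IndirectAD
reduction = direct_accounts / indirect_accounts   # 20x fewer accounts
```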
🛡️ Threat Analysis
Injects fake user interaction records into training data to manipulate learned item associations, a direct data poisoning attack on ML-based recommender systems. While the paper draws an analogy to Trojan attacks for its indirect strategy, the attack vector is entirely training data injection via fake user profiles, not model weight manipulation or inference-time triggers. The 'trigger item' is an element of the poisoning strategy that reduces the required poisoning ratio, not a true backdoor trigger activated at inference time by the attacker.
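To see why injected co-occurrence shifts learned associations, consider a toy item-based collaborative filter: fake profiles containing both trigger and target raise the cosine similarity between the two item columns, so users recommended the trigger become candidates for the target. This is a hypothetical illustration on a tiny matrix, not the paper's experimental setup.

```python
import numpy as np

def item_cosine(R, i, j):
    """Cosine similarity between item columns i and j of user-item matrix R."""
    a, b = R[:, i], R[:, j]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# 6 genuine users x 4 items; item 2 = popular trigger, item 3 = obscure target.
R = np.array([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
])
before = item_cosine(R, 2, 3)  # target never co-occurs with trigger

# Inject 2 fake users who interact with both trigger and target.
fake = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
R_poisoned = np.vstack([R, fake])
after = item_cosine(R_poisoned, 2, 3)
```

With no genuine co-occurrence the similarity starts at zero; two injected profiles are enough to make the target one of the trigger's nearest neighbors in this toy example, which is the association the attack exploits at scale.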