
Colluding LoRA: A Composite Attack on LLM Safety Alignment

Sihao Ding



Published on arXiv: 2603.12681

AI Supply Chain Attacks

OWASP ML Top 10 — ML06

Model Poisoning

OWASP ML Top 10 — ML10

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Each adapter exhibits benign behavior individually, bypassing unit-centric verification, yet the composition achieves a high attack success rate

Colluding LoRA (CoLoRA)

Novel technique introduced


We introduce Colluding LoRA (CoLoRA), an attack in which each adapter appears benign and plausibly functional in isolation, yet their linear composition consistently compromises safety. Unlike attacks that depend on specific input triggers or prompt patterns, CoLoRA is a composition-triggered, broad refusal suppression: once a particular set of adapters is loaded, the model undergoes effective alignment degradation, complying with harmful requests without requiring adversarial prompts or suffixes. The attack exploits the combinatorial blindness of current defense systems, for which exhaustively scanning all compositions is computationally intractable. Across several open-weight LLMs, each CoLoRA adapter behaves benignly in isolation, yet their composition achieves a high attack success rate, indicating that securing modular LLM supply chains requires moving beyond single-module verification toward composition-aware defenses.
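The collusion mechanism can be illustrated with a toy secret-splitting construction (a sketch under assumed dimensions and random shares, not the paper's actual method): a malicious low-rank update is split into two shares that individually look noise-like, but LoRA composition is linear, so loading both adapters restores the payload exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hidden size and adapter rank (illustrative values)

# The low-rank weight update the attacker wants the composed model to apply.
delta_malicious = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))

# Share 1 is an independent random low-rank update: noise-like, plausibly benign.
B1, A1 = rng.standard_normal((d, r)), rng.standard_normal((r, d))
delta_1 = B1 @ A1

# Share 2 is the complement; its rank is at most 2r, so it still fits
# inside an ordinary (rank-2r) LoRA adapter.
delta_2 = delta_malicious - delta_1

# Either adapter alone contributes only its noise-masked share;
# summing the deltas, as multi-adapter loading does, recovers the payload.
assert np.allclose(delta_1 + delta_2, delta_malicious)
assert np.linalg.matrix_rank(delta_2) <= 2 * r
```

Because multi-adapter serving composes deltas by plain addition, neither share carries the payload on its own, which is what defeats per-adapter scanning.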


Key Contributions

  • Introduces composition-triggered backdoor attack where individually benign LoRA adapters collectively compromise LLM safety alignment
  • Demonstrates plausibility camouflage technique allowing malicious adapters to pass unit-centric supply chain verification
  • Exploits combinatorial blindness in current defense systems where exhaustive composition verification is computationally intractable
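The scale of that blindness is easy to quantify: the number of k-adapter compositions grows combinatorially with repository size. The figures below assume a hypothetical repository of 10,000 adapters.

```python
from math import comb

n = 10_000  # hypothetical number of adapters in a public repository
for k in (2, 3, 4):
    print(f"compositions of {k} adapters: {comb(n, k):,}")
# Pairs alone number ~5e7; 4-adapter compositions exceed 10^14,
# which is why exhaustive per-composition verification does not scale.
```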

🛡️ Threat Analysis

AI Supply Chain Attacks

Exploits the ML supply chain by distributing trojaned LoRA adapters through model repositories (HuggingFace, etc.) that appear benign individually but become malicious when composed, making adapter distribution itself the supply chain attack vector.

Model Poisoning

Embeds hidden malicious behavior (broad refusal suppression) that activates only when specific adapters are composed together, making this a composition-triggered backdoor (trojan) attack.
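One direction for composition-aware defense is to probe merged deltas rather than single adapters. A minimal sketch follows, using toy matrices and an assumed "refusal direction" probe (both are illustrative inventions, not the paper's detector): each adapter looks unremarkable alone, but a pairwise scan exposes the colluding pair.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
d = 32

# Assumed probe: a unit "refusal direction" whose suppression degrades safety.
refusal = rng.standard_normal(d)
refusal /= np.linalg.norm(refusal)

def refusal_energy(delta):
    """Fraction of a weight delta's energy concentrated on the probe direction."""
    return float(np.linalg.norm(delta @ refusal) / np.linalg.norm(delta))

# Two colluding shares whose sum is a rank-1 update along the probe,
# plus one genuinely innocent adapter.
noise = rng.standard_normal((d, d))
adapters = {
    "colluder_1": noise,
    "colluder_2": 5.0 * np.outer(refusal, refusal) - noise,
    "innocent": rng.standard_normal((d, d)),
}

# Unit-centric scan: every adapter on its own has diffuse, noise-like energy.
for name, delta in adapters.items():
    print(f"{name}: {refusal_energy(delta):.2f}")

# Composition-aware scan: the colluding pair concentrates all of its
# energy on the refusal direction and stands out clearly.
for (n1, d1), (n2, d2) in combinations(adapters.items(), 2):
    print(f"{n1}+{n2}: {refusal_energy(d1 + d2):.2f}")
```

Pairwise scanning is O(n²) and still expensive at repository scale, but it illustrates why verification signals must be computed on compositions rather than individual units.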


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, inference_time, targeted
Applications
llm safety alignment, modular llm systems, model garden repositories