defense · arXiv · Mar 15, 2026
Vishnu Narayanan Anilkumar, Abhijith Sreesylesh Babu, Trieu Hai Vo et al. · Florida International University
Unlearns unsafe object-relation-object tuples in multimodal LLMs using LoRA while preserving safe contexts and benign uses
Prompt Injection · multimodal · nlp
Generative multimodal models can exhibit safety failures that are inherently relational: two benign concepts can become unsafe when linked by a specific action or relation (e.g., child-drinking-wine). Existing unlearning and concept-erasure approaches often target isolated concepts or image-text pairs, which can cause collateral damage to benign uses of the same objects and relations. We propose relationship-aware safety unlearning: a framework that explicitly represents unsafe object-relation-object (O-R-O) tuples and applies targeted parameter-efficient edits (LoRA) to suppress unsafe tuples while preserving object marginals and safe neighboring relations. We include CLIP-based experiments and robustness evaluation under paraphrase, contextual, and out-of-distribution image attacks.
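The core idea of flagging relational tuples rather than isolated concepts can be sketched as a simple matching rule. This is an illustrative sketch only (names and synonym sets are hypothetical, not the authors' code): an (object, relation, object) triple is treated as unsafe only when all three slots match, so benign uses of the same objects and relations are preserved.

```python
# Hypothetical unsafe O-R-O tuple list; each slot is a set of synonyms
# to make matching slightly robust to paraphrase.
UNSAFE_TUPLES = [
    ({"child", "kid", "minor"}, {"drink", "drinking"}, {"wine", "beer", "alcohol"}),
]

def is_unsafe(subject: str, relation: str, obj: str) -> bool:
    """Return True only if the full (object, relation, object) tuple
    matches an unsafe entry; a partial match is not enough."""
    return any(
        subject in s and relation in r and obj in o
        for s, r, o in UNSAFE_TUPLES
    )

# The relational point: each concept alone stays usable.
print(is_unsafe("child", "drinking", "wine"))   # unsafe tuple
print(is_unsafe("adult", "drinking", "wine"))   # safe: subject differs
print(is_unsafe("child", "drinking", "juice"))  # safe: object differs
```

In the paper's framework the suppression itself is done with targeted LoRA parameter edits rather than a lookup table; the sketch only shows why tuple-level targeting avoids collateral damage to the object marginals.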
llm · vlm · multimodal · transformer