Cooking Up Risks: Benchmarking and Reducing Food Safety Risks in Large Language Models
Weidi Luo 1, Xiaofei Wen 2, Tenghao Huang 3, Hongyi Wang 4, Zhen Xiang 1, Chaowei Xiao 5, Kristina Gligorić 5, Muhao Chen 2
Published on arXiv
2604.01444
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Existing LLMs exhibit sparse safety alignment in food domain and succumb to jailbreak attacks; existing guardrails fail to detect substantial volume of domain-specific malicious inputs
FoodGuard-4B
Novel technique introduced
Large language models (LLMs) are increasingly deployed for everyday tasks, including food preparation and health-related guidance. However, food safety remains a high-stakes domain where inaccurate or misleading information can cause severe real-world harm. Despite these risks, current LLMs and safety guardrails lack rigorous alignment tailored to domain-specific food hazards. To address this gap, we introduce FoodGuardBench, the first comprehensive benchmark comprising 3,339 queries grounded in FDA guidelines, designed to evaluate the safety and robustness of LLMs. By constructing a taxonomy of food safety principles and employing representative jailbreak attacks (e.g., AutoDAN and PAP), we systematically evaluate existing LLMs and guardrails. Our evaluation results reveal three critical vulnerabilities: First, current LLMs exhibit sparse safety alignment in the food-related domain, easily succumbing to a few canonical jailbreak strategies. Second, when compromised, LLMs frequently generate actionable yet harmful instructions, inadvertently empowering malicious actors and posing tangible risks. Third, existing LLM-based guardrails systematically overlook these domain-specific threats, failing to detect a substantial volume of malicious inputs. To mitigate these vulnerabilities, we introduce FoodGuard-4B, a specialized guardrail model fine-tuned on our datasets to safeguard LLMs within food-related domains.
Key Contributions
- FoodGuardBench: First benchmark with 3,339 queries grounded in FDA guidelines to evaluate LLM food safety alignment and robustness against jailbreaks
- Systematic evaluation revealing LLMs easily succumb to canonical jailbreak strategies in food domain and existing guardrails fail to detect domain-specific threats
- FoodGuard-4B: Specialized 4B-parameter guardrail model fine-tuned to detect food safety violations and jailbreak attempts