Defense · 2025

GCP: Guarded Collaborative Perception with Spatial-Temporal Aware Malicious Agent Detection

Yihang Tao, Senkang Hu, Yue Hu, Haonan An, Hangcheng Cao, Yuguang Fang

6 citations · 47 references

Published on arXiv: 2501.02450

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

GCP achieves up to 34.69% improvement in AP@0.5 over state-of-the-art collaborative perception defenses under the proposed BAC attack, with consistent 5–8% gains under other attack types.

GCP (Guarded Collaborative Perception)

Novel technique introduced


Collaborative perception significantly enhances autonomous driving safety by extending each vehicle's perception range through message sharing among connected and autonomous vehicles. Unfortunately, it is also vulnerable to adversarial message attacks from malicious agents, resulting in severe performance degradation. While existing defenses employ hypothesis-and-verification frameworks to detect malicious agents based on single-shot outliers, they overlook temporal message correlations, which can be circumvented by subtle yet harmful perturbations in model input and output spaces. This paper reveals a novel blind area confusion (BAC) attack that compromises existing single-shot outlier-based detection methods. As a countermeasure, we propose GCP, a Guarded Collaborative Perception framework based on spatial-temporal aware malicious agent detection, which maintains single-shot spatial consistency through a confidence-scaled spatial concordance loss, while simultaneously examining temporal anomalies by reconstructing historical bird's eye view motion flows in low-confidence regions. We also employ a joint spatial-temporal Benjamini-Hochberg test to synthesize dual-domain anomaly results for reliable malicious agent detection. Extensive experiments demonstrate GCP's superior performance under diverse attack scenarios, achieving up to 34.69% improvements in AP@0.5 compared to the state-of-the-art CP defense strategies under BAC attacks, while maintaining consistent 5-8% improvements under other typical attacks. Code will be released at https://github.com/CP-Security/GCP.git.
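The confidence-scaled spatial concordance loss described above can be sketched as follows. This is a hypothetical minimal implementation, not the paper's exact formulation: it weights per-cell discrepancies between the ego vehicle's BEV map and a collaborator's shared BEV map by the ego's confidence, so disagreements in regions the ego perceives well count more than those in its blind areas. The function name and array shapes are assumptions for illustration.

```python
import numpy as np

def spatial_concordance_loss(ego_bev, agent_bev, ego_conf):
    """Hypothetical confidence-scaled spatial concordance loss.

    Computes the per-cell squared discrepancy between the ego and
    collaborator BEV maps, scaled by the ego's confidence map so that
    disagreement in high-confidence regions dominates the score.
    All inputs are H x W arrays; ego_conf values lie in [0, 1].
    """
    diff = (ego_bev - agent_bev) ** 2        # per-cell discrepancy
    weighted = ego_conf * diff               # confidence scaling
    return weighted.sum() / (ego_conf.sum() + 1e-8)
```

A collaborator whose message agrees with the ego wherever the ego is confident yields a near-zero loss, while a message that contradicts high-confidence regions scores high and becomes a spatial anomaly candidate.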


Key Contributions

  • Novel Blind Area Confusion (BAC) attack that exploits overlooked temporal message correlations to evade single-shot outlier-based detection in collaborative perception with subtle yet harmful perturbations
  • GCP defense framework combining confidence-scaled spatial concordance loss with BEV motion flow reconstruction in low-confidence regions for dual spatial-temporal anomaly detection
  • Joint spatial-temporal Benjamini-Hochberg statistical test that synthesizes anomaly signals from both domains for reliable malicious agent identification
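The Benjamini-Hochberg procedure named in the last contribution is a standard false-discovery-rate control. Below is a sketch of one plausible way to apply it jointly: pool each agent's spatial and temporal p-values into a single hypothesis family, run BH, and flag an agent if either of its tests is rejected. The pooling strategy and function names are assumptions; the paper's exact joint test may differ.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Standard Benjamini-Hochberg procedure: returns a boolean mask of
    rejected hypotheses at false-discovery rate alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m   # BH step-up thresholds
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest passing rank
        reject[order[: k + 1]] = True
    return reject

def flag_malicious(spatial_p, temporal_p, alpha=0.05):
    """Hypothetical joint test: pool both domains' p-values into one
    BH family and flag an agent if either of its tests is rejected."""
    pooled = np.concatenate([spatial_p, temporal_p])
    reject = benjamini_hochberg(pooled, alpha)
    n = len(spatial_p)
    return reject[:n] | reject[n:]
```

An agent with small p-values in either domain (strong spatial or temporal anomaly evidence) is flagged, while agents consistent in both domains pass.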

🛡️ Threat Analysis

Input Manipulation Attack

The BAC attack crafts adversarial messages that manipulate ML model inputs and outputs at inference time to evade existing single-shot outlier-based malicious agent detectors; GCP is a direct defense against these inference-time input manipulations in multi-agent ML perception systems.
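The general shape of such an inference-time input manipulation can be illustrated with a toy PGD-style loop: repeatedly step the shared message along the sign of a loss gradient and project back into a small l-infinity ball so the perturbation stays subtle. This is a generic sketch of bounded input manipulation, not the BAC attack itself; `grad` stands in for the attacker's gradient oracle and is held fixed here for illustration.

```python
import numpy as np

def craft_bounded_perturbation(message, grad, eps=0.1, steps=10, step_size=0.02):
    """Toy PGD-style input-manipulation sketch on a shared perception
    message: ascend along the (given) gradient sign, then project back
    into an l-infinity ball of radius eps around the clean message."""
    adv = message.copy()
    for _ in range(steps):
        adv = adv + step_size * np.sign(grad)             # ascent step
        adv = np.clip(adv, message - eps, message + eps)  # budget projection
        # (a real attacker would also clip to the valid feature range)
    return adv
```

The small eps budget is what lets such perturbations slip past single-shot outlier checks, which is exactly the gap GCP's temporal consistency analysis targets.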


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
inference_time, digital, white_box, targeted
Datasets
OPV2V, V2XSet
Applications
autonomous driving, collaborative perception, connected and autonomous vehicles