Annual Computer Security Applications Conference (ACSAC) 2022


Play the Imitation Game: Model Extraction Attack against Autonomous Driving Localization

The security of Autonomous Driving (AD) systems has recently been gaining attention from researchers and the public. Given that AD companies have invested substantial resources in developing their AD models, e.g., localization models, these models, and in particular their parameters, constitute valuable intellectual property and deserve strong protection.

In this work, we examine whether the confidentiality of Multi-Sensor Fusion (MSF) models, in particular the Error-State Kalman Filter (ESKF) used by production-grade MSF such as the one in Baidu Apollo, can be compromised by an outside adversary. We propose a new model extraction attack that can infer the secret ESKF parameters through black-box analysis. In essence, the attack trains a substitute ESKF model to recover the parameters by observing the inputs and outputs of the targeted AD system. To this end, we combine a set of techniques, such as gradient-based optimization, search-space reduction, and multi-stage optimization. Evaluation results on a real-world vehicle sensor dataset show that the threat from this attack is practical. For example, by collecting data points within a 25-second window of a targeted AV, we can train an ESKF model that matches the ground-truth model to centimeter-level accuracy.
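To give a flavor of the extraction idea described above, the sketch below recovers the secret gain of a toy scalar fixed-gain filter purely from observed input/output pairs, using gradient descent on the output mismatch. This is a minimal illustration only, not the paper's actual attack: the filter, the gain value `TRUE_K`, and all other names are hypothetical stand-ins for the far richer ESKF setting.

```python
import numpy as np

def fixed_gain_filter(zs, k):
    """Steady-state scalar filter: repeatedly apply x <- x + k * (z - x)."""
    x, outs = 0.0, []
    for z in zs:
        x = x + k * (z - x)
        outs.append(x)
    return np.array(outs)

# Stand-in for the black-box localization module: the adversary only sees
# its inputs (zs) and outputs (ys), never the secret gain itself.
rng = np.random.default_rng(0)
zs = rng.normal(0.0, 1.0, 500)        # queried sensor inputs
TRUE_K = 0.27                          # secret parameter (illustrative)
ys = fixed_gain_filter(zs, TRUE_K)     # observed black-box outputs

def loss(k):
    """Mismatch between a substitute filter's outputs and the observed ones."""
    return np.mean((fixed_gain_filter(zs, k) - ys) ** 2)

# Gradient descent on the mismatch, with central finite differences.
k_hat, lr, eps = 0.9, 0.1, 1e-6
for _ in range(200):
    grad = (loss(k_hat + eps) - loss(k_hat - eps)) / (2 * eps)
    k_hat -= lr * grad

print(round(k_hat, 3))  # converges close to TRUE_K
```

The real attack faces a much harder version of this problem (a multi-dimensional ESKF state, noisy real-world traces, and a large parameter space), which is where the paper's search-space reduction and multi-stage optimization come in.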

Qifan Zhang
University of California, Irvine

Junjie Shen
University of California, Irvine

Mingtian Tan
Fudan University

Zhe Zhou
Fudan University

Zhou Li
University of California, Irvine

Qi Alfred Chen
University of California, Irvine

Haipeng Zhang
ShanghaiTech University

Paper (ACM DL)


