35th Annual Computer Security Applications Conference (ACSAC 2019)


Model Inversion Attacks Against Collaborative Inference

The prevalence of deep learning has drawn attention to the privacy protection of sensitive data. Various privacy threats have been demonstrated, in which an adversary can steal model owners' private data. Meanwhile, countermeasures have been introduced to achieve privacy-preserving deep learning. However, most studies have focused only on data privacy during training and have ignored privacy during inference.

In this paper, we devise a new set of attacks to compromise the privacy of inference data in collaborative deep learning systems. Specifically, when a deep neural network and the corresponding inference task are split and distributed among different participants, a malicious participant can accurately recover an arbitrary input fed into the system, even without access to other participants' data or computations, or to prediction APIs for querying the system. We evaluate our attacks under different settings, models, and datasets to demonstrate their effectiveness and generality. We also study the characteristics of deep learning models that make them susceptible to such inference privacy threats, providing insights and guidelines for developing more privacy-preserving collaborative systems and algorithms.
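To make the split-inference threat model concrete, below is a minimal, hypothetical PyTorch sketch of a white-box inversion: the malicious participant who receives the intermediate activations optimizes a candidate input until its front-end activations match the observed ones, under a total-variation smoothness prior. The network, split point, input size, and hyperparameters are illustrative assumptions, not the paper's exact attack.

# Hypothetical sketch of a white-box inversion against split inference.
# The architecture, split point, input size, and regularizer are
# assumptions for illustration, not the paper's exact attack.
import torch
import torch.nn as nn

# Front-end layers run by the first participant; their output (the
# intermediate activations) is what crosses the split to the attacker.
front_end = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)

def total_variation(x):
    # Smoothness prior that biases the reconstruction toward natural images.
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def invert(features, steps=2000, lr=0.1, tv_weight=1e-2):
    # Optimize a candidate input so that its front-end activations match
    # the observed intermediate features.
    x_hat = torch.zeros(1, 1, 28, 28, requires_grad=True)  # MNIST-sized guess
    opt = torch.optim.Adam([x_hat], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(front_end(x_hat), features) \
               + tv_weight * total_variation(x_hat)
        loss.backward()
        opt.step()
    return x_hat.detach()

# The private input never leaves the first participant, but its
# intermediate activations do, and those suffice for reconstruction.
x = torch.rand(1, 1, 28, 28)
features = front_end(x).detach()
x_reconstructed = invert(features)

Because layers such as max-pooling discard information, the reconstruction is only approximate; how much information a given split point preserves is presumably among the model characteristics the paper examines when explaining susceptibility to inversion.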

Zecheng He
Princeton University

Tianwei Zhang
Nanyang Technological University

Ruby Lee
Princeton University
