Annual Computer Security Applications Conference (ACSAC) 2022


NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks

Membership inference attacks (MIAs) against machine learning models pose serious privacy risks for the dataset used to train the model. State-of-the-art defenses against MIAs often suffer from a poor privacy-utility trade-off, limited generality, and high training or inference overhead. To overcome these limitations, in this paper we propose NeuGuard, a novel, lightweight, and effective neuron-guided defense against MIAs. Unlike existing solutions, which either regularize all model parameters during training or add noise to the model output per input at inference time, NeuGuard guides the model outputs on the training and testing sets toward close distributions through fine-grained neuron regularization: it simultaneously restricts the activations of output neurons and inner neurons in each layer using our class-wise variance minimization and layer-wise balanced output control. We evaluate NeuGuard against state-of-the-art defenses on two neural-network-based MIAs and five of the strongest metric-based MIAs, including the newly proposed label-only MIA, across three benchmark datasets. Extensive experimental results show that NeuGuard outperforms state-of-the-art defenses, offering a much improved utility-privacy trade-off, better generality, and lower overhead.
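To make the core idea concrete, the class-wise variance minimization described in the abstract can be pictured as a training-time regularizer that tightens the model's per-class output distributions, making member and non-member outputs harder to distinguish. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation; the function names, the `lam` weighting coefficient, and the batch-level formulation are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def class_wise_variance(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Hypothetical class-wise variance penalty: for each class present in
    the batch, penalize the variance of the softmax outputs of samples
    sharing that label, pushing per-class output distributions to be tight."""
    probs = F.softmax(logits, dim=1)
    penalty = logits.new_zeros(())
    for c in labels.unique():
        group = probs[labels == c]
        if group.size(0) > 1:  # variance needs at least two samples
            penalty = penalty + group.var(dim=0, unbiased=False).sum()
    return penalty

def training_step(model, x, y, optimizer, lam=0.1):
    """Illustrative training step: cross-entropy loss plus the variance
    penalty, weighted by an assumed hyperparameter lam."""
    logits = model(x)
    loss = F.cross_entropy(logits, y) + lam * class_wise_variance(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, a larger `lam` trades classification accuracy for flatter, more uniform per-class outputs; the paper's full method additionally constrains inner-layer activations via layer-wise balanced output control, which is not reproduced here.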

Nuo Xu
Lehigh University

Binghui Wang
Illinois Institute of Technology

Ran Ran
Lehigh University

Wujie Wen
Lehigh University

Parv Venkitasubramaniam
Lehigh University

Paper (ACM DL)

Slides

 


