Annual Computer Security Applications Conference (ACSAC) 2022


Compressed Federated Learning Based on Adaptive Local Differential Privacy

Federated learning (FL) was once considered secure because it keeps clients' raw data local instead of sending it to a central server. However, the model weights or gradients exchanged between the central server and clients can still reveal private information, which can be exploited to launch various inference attacks. In addition, the required communication overhead is high, especially for Deep Neural Network (DNN) architectures. In this paper, we propose a communication-efficient FL scheme for DNN architectures using Compressive sensing and Adaptive local differential privacy (called CAFL). Specifically, we first compress the local models efficiently using Compressive Sensing (CS), then adaptively perturb the remaining weights with Local Differential Privacy (LDP) according to the per-layer centers of their variation ranges and each weight's offset from its range center, and finally reconstruct the global model almost perfectly using the CS reconstruction algorithm. Formal security analysis shows that our scheme achieves ε-LDP security and introduces zero bias into the estimation of average weights. Extensive experiments on two commonly used datasets (i.e., MNIST and Fashion-MNIST) demonstrate that, with a minimum compression ratio of 0.05, our scheme reduces the communication overhead by 95%, and that, under a low privacy budget ε = 1, it improves accuracy by 80% on MNIST and 12.7% on Fashion-MNIST compared with state-of-the-art schemes.
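To illustrate the first two steps of the pipeline the abstract describes, below is a minimal Python sketch under stated assumptions: CS encoding with a random Gaussian measurement matrix, and a Duchi-style unbiased randomized-response mechanism over a known range [c - r, c + r] for the ε-LDP step. The function names (compress_layer, perturb_value) and the range calibration shown are illustrative assumptions, not the paper's actual design, and the server-side CS reconstruction step (e.g., via orthogonal matching pursuit) is omitted.

import numpy as np

def compress_layer(weights, ratio, rng):
    # Classic CS encoding y = Phi @ x with a random Gaussian
    # measurement matrix; the server recovers x from (y, Phi)
    # with a sparse-recovery solver such as OMP or ISTA.
    n = weights.size
    m = max(1, int(ratio * n))
    phi = rng.standard_normal((m, n)) / np.sqrt(m)
    return phi @ weights, phi

def perturb_value(v, c, r, eps, rng):
    # Duchi-style randomized response over [c - r, c + r]:
    # satisfies eps-LDP and gives E[output] = v, i.e., zero bias
    # in the average estimate (matching the abstract's claim).
    alpha = (np.exp(eps) + 1.0) / (np.exp(eps) - 1.0)
    p = ((v - c) * (np.exp(eps) - 1.0) + r * (np.exp(eps) + 1.0)) / (
        2.0 * r * (np.exp(eps) + 1.0))
    return c + r * alpha if rng.random() < p else c - r * alpha

rng = np.random.default_rng(0)
layer = rng.standard_normal(1000) * 0.1              # toy layer weights
y, phi = compress_layer(layer, ratio=0.05, rng=rng)  # 95% fewer values sent
c = y.mean()                                         # range center
r = np.abs(y - c).max()                              # range radius
noisy = np.array([perturb_value(v, c, r, eps=1.0, rng=rng) for v in y])
print("compressed length:", y.size)
print("bias of mean estimate:", noisy.mean() - y.mean())

Because the perturbation is unbiased, averaging many clients' perturbed values concentrates around the true mean, which is what lets the server aggregate before running CS reconstruction.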

Yinbin Miao
Xidian University

Rongpeng Xie
Xidian University

Xinghua Li
Xidian University

Ximeng Liu
Fuzhou University

Zhuo Ma
Xidian University

Robert H. Deng
Singapore Management University

Paper (ACM DL)

Slides
