Annual Computer Security Applications Conference (ACSAC) 2021


Efficient, Private and Robust Federated Learning

Federated learning (FL) has demonstrated tremendous success in various mission-critical, large-scale scenarios. However, this promising distributed learning paradigm remains vulnerable to privacy-inference and Byzantine attacks. The former aims to infer private information about target participants involved in training, while the latter seeks to destroy the integrity of the constructed model. To mitigate these two issues, a few recent works have explored unified solutions that combine generic secure computation techniques with common Byzantine-robust aggregation rules, but they have two major limitations: 1) they are impractical due to efficiency bottlenecks, and 2) they remain vulnerable to various types of attacks because their defense models are not comprehensive.

To address these problems, in this paper we present SecureFL, an efficient, private, and Byzantine-robust FL framework. SecureFL follows the state-of-the-art Byzantine-robust FL method FLTrust (NDSS'21), which performs a comprehensive Byzantine defense by normalizing the magnitude of updates and measuring their directional similarity, and adapts it to the privacy-preserving setting. More importantly, we carefully customize a series of cryptographic components to carry out the required mathematical operations efficiently. First, we design a crypto-friendly validity check protocol that functionally replaces the normalization operation in FLTrust, and we devise tailored cryptographic protocols on top of it. These optimizations cut the communication and computation costs in half without sacrificing robustness or privacy protection. Second, we develop a novel preprocessing technique for costly matrix multiplication. With this technique, the directional similarity measurement can be evaluated securely with negligible computation overhead and zero communication cost. Extensive evaluations on three real-world datasets and various neural network architectures demonstrate that SecureFL outperforms prior art by up to two orders of magnitude in efficiency while achieving state-of-the-art Byzantine robustness.
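For context, the sketch below illustrates, in the clear and in plain Python, the FLTrust-style aggregation rule that SecureFL protects cryptographically: each client update receives a trust score from its directional (cosine) similarity to a server-held reference update, negative scores are clipped to zero, magnitudes are normalized to the reference norm, and the scored, normalized updates are averaged. This is only a minimal plaintext approximation of the underlying rule; the function names and toy data are illustrative assumptions, and none of SecureFL's secure-computation protocols are shown.

import numpy as np

def fltrust_style_aggregate(client_updates, server_update):
    """Aggregate flattened client updates against a trusted server reference update."""
    ref_norm = np.linalg.norm(server_update)
    weighted_sum = np.zeros_like(server_update)
    total_score = 0.0
    for update in client_updates:
        norm = np.linalg.norm(update)
        if norm == 0.0:
            continue  # skip degenerate all-zero updates
        # Directional similarity: cosine between client update and server reference.
        cos_sim = float(np.dot(update, server_update)) / (norm * ref_norm)
        score = max(cos_sim, 0.0)                 # ReLU-clipped trust score
        normalized = update * (ref_norm / norm)   # magnitude normalization to the reference norm
        weighted_sum += score * normalized
        total_score += score
    if total_score == 0.0:
        return np.zeros_like(server_update)       # no trusted contribution this round
    return weighted_sum / total_score

# Toy example (illustrative only): two benign clients and one crude poisoned update.
rng = np.random.default_rng(0)
server_ref = rng.normal(size=10)
clients = [server_ref + 0.1 * rng.normal(size=10),
           server_ref + 0.1 * rng.normal(size=10),
           -50.0 * server_ref]                    # points opposite the reference, so its score clips to 0
global_update = fltrust_style_aggregate(clients, server_ref)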

Meng Hao
University of Electronic Science and Technology of China

Hongwei Li
University of Electronic Science and Technology of China

Guowen Xu
Nanyang Technological University

Hanxiao Chen
University of Electronic Science and Technology of China

Tianwei Zhang
Nanyang Technological University

Paper (ACM DL)

Slides

Video

 


