Secure and Verifiable Inference in Deep Neural Networks
Outsourced inference services have greatly promoted the popularity of deep learning and helped users build a range of personalized applications. However, they also entail a variety of security and privacy issues caused by untrusted service providers. In particular, a malicious adversary may violate user privacy during the inference process or, worse, return incorrect results to the client by compromising the integrity of the outsourced model. To address these problems, we propose SecureDL, a verifiable and privacy-preserving inference protocol that protects model integrity and user privacy in Deep Neural Networks (DNNs). Specifically, we first transform the complicated non-linear activation functions of DNNs into low-degree polynomials. We then generate generic sensitive-samples to verify the integrity of the model parameters outsourced to the server. Finally, Leveled Homomorphic Encryption (LHE) is used to encrypt all user-related private data to support privacy-preserving inference. We prove that our sensitive-samples are highly sensitive to model changes, so that even a small parameter change is reflected in the model outputs. Experiments on real data and against different types of attacks demonstrate the superior performance of SecureDL in terms of detection accuracy, inference accuracy, and computation and communication overheads.
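The first step, replacing non-linear activations with low-degree polynomials, can be sketched as follows. This is an illustrative example only, not the paper's exact construction: it fits a degree-3 least-squares polynomial (via NumPy's `polyfit`, an assumed tool here) to the sigmoid on an assumed input range of [-6, 6], yielding an approximation that uses only additions and multiplications and is therefore compatible with LHE evaluation.

```python
import numpy as np

def sigmoid(x):
    """Standard sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical choices for illustration: the input range [-6, 6] and
# the polynomial degree 3 are assumptions, not values from the paper.
xs = np.linspace(-6.0, 6.0, 1000)
coeffs = np.polyfit(xs, sigmoid(xs), deg=3)  # highest-degree coefficient first
poly = np.poly1d(coeffs)

# Measure the worst-case approximation error on the fitted range;
# a homomorphic evaluator would compute poly(x) on encrypted inputs.
max_err = float(np.max(np.abs(poly(xs) - sigmoid(xs))))
print(f"degree-3 polynomial, max error on [-6, 6]: {max_err:.4f}")
```

In practice the degree trades off accuracy against the multiplicative depth the LHE scheme must support, which is why low-degree approximations are preferred.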