Annual Computer Security Applications Conference (ACSAC) 2018


Model Extraction Warning in MLaaS Paradigm

Machine learning models deployed on the cloud are susceptible to several security threats, including extraction attacks. Adversaries may abuse a model’s prediction API to steal the model, thereby compromising model confidentiality, the privacy of training data, and revenue from future query payments. This work introduces a model extraction monitor that quantifies the extraction status of models by continually observing the API query and response streams of users. We present two novel strategies that measure either the information gain or the coverage of the feature space spanned by user queries to estimate the learning rate of individual and colluding adversaries. Both approaches have low computational overhead and can easily be offered as services to model owners to warn them against state-of-the-art extraction attacks. We demonstrate empirical performance results of these approaches for decision tree and neural network models using open-source datasets and the BigML MLaaS platform.
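To illustrate the second strategy described above, the following is a minimal, hypothetical sketch of a coverage-based monitor: it discretizes each feature's range into bins, records which bins a user's queries have touched, and warns once the covered fraction of the feature space crosses a threshold. All names, the binning scheme, and the warning threshold are illustrative assumptions, not the paper's actual API or algorithm.

```python
# Hypothetical sketch of a coverage-based extraction monitor.
# Names, binning scheme, and threshold are illustrative assumptions,
# not the method from the paper.
from dataclasses import dataclass, field


@dataclass
class CoverageMonitor:
    """Tracks how much of each feature's range a user's queries have spanned."""
    bounds: list          # (lo, hi) range per feature
    bins: int = 10        # discretization granularity per feature
    warn_at: float = 0.8  # coverage fraction that triggers a warning
    seen: list = field(default_factory=list)

    def __post_init__(self):
        # One set of touched bin indices per feature.
        self.seen = [set() for _ in self.bounds]

    def observe(self, query):
        # Record which bin of each feature this query falls into.
        for i, (x, (lo, hi)) in enumerate(zip(query, self.bounds)):
            b = int((x - lo) / (hi - lo) * self.bins)
            self.seen[i].add(min(self.bins - 1, max(0, b)))

    def coverage(self):
        # Mean fraction of bins touched across all features.
        return sum(len(s) for s in self.seen) / (self.bins * len(self.bounds))

    def should_warn(self):
        return self.coverage() >= self.warn_at
```

A per-user instance would be updated on every prediction request; queries from suspected colluding users could share one instance so their combined coverage is measured, mirroring the colluding-adversary case the abstract mentions.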

Manish Kesarwani
IBM Research Lab
India

Bhaskar Mukhoty
Indian Institute of Technology, Kanpur
India

Vijay Arya
IBM Research Lab
India

Sameep Mehta
IBM Research Lab
India

 


