Annual Computer Security Applications Conference (ACSAC) 2017


Commoner Privacy and a Study on Network Traces

Differential privacy has emerged as a promising mechanism for privacy-safe data mining. One popular differential privacy mechanism allows researchers to pose queries over a dataset and adds random noise to all output points to protect privacy. While differential privacy produces useful data in many scenarios, the added noise may jeopardize utility for queries run over small populations or over long-tailed datasets. Gehrke et al. proposed crowd-blending privacy, which offers a weaker privacy guarantee but preserves more research utility than differential privacy. Crowd-blending privacy adds random noise only to those output points that are rare or unique. We propose an even more liberal privacy goal, commoner privacy, which fuzzes only those output points where an individual's contribution is an outlier. By hiding outliers, our mechanism hides the presence or absence of an individual in a dataset. We highlight the use cases where such a reduced privacy guarantee may be acceptable in exchange for higher utility: 1) some individuals may contribute a large value to some output points, and 2) many individuals may have similar but distinct contributions to the output. We then propose one mechanism that achieves commoner privacy, interactive k-anonymity, and implement this strategy in a system called Patrol for network trace processing. We also discuss query composition and show how we can guarantee privacy via query logging and inspection. Our evaluation shows that commoner privacy prevents common attacks while preserving higher research utility than differential or crowd-blending privacy, and that this especially holds for queries over long-tailed datasets.
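
To make the idea concrete, the Python sketch below illustrates one way a commoner-privacy-style release could behave: only output points where some individual's contribution is an outlier among contributors get fuzzed, while common points are released exactly. This is a hypothetical illustration under simplifying assumptions, not the interactive k-anonymity mechanism or the Patrol system from the paper; the function commoner_release, the outlier_factor test, and the Gaussian noise model are all choices made here for exposition.

    # Illustration only: NOT the paper's Patrol implementation.
    # Assumes each output point is a keyed aggregate over per-contributor values
    # (e.g., per-source-IP byte counts contributed by monitored users).
    import random
    from statistics import median
    from typing import Dict, List

    def commoner_release(points: Dict[str, List[float]],
                         outlier_factor: float = 3.0,
                         noise_scale: float = 1.0) -> Dict[str, float]:
        """Release one aggregate per output point, fuzzing only points where
        some individual's contribution is an outlier among contributors."""
        released = {}
        for key, contributions in points.items():
            total = sum(contributions)
            med = median(contributions)
            # A point is "common" when no single contributor dominates it;
            # the factor-of-median test is a hypothetical outlier criterion.
            has_outlier = any(c > outlier_factor * max(med, 1e-9)
                              for c in contributions)
            if has_outlier:
                # Fuzz only outlier-bearing points, with noise large enough
                # to mask the dominant contribution (illustrative noise model).
                released[key] = total + random.gauss(0, noise_scale * max(contributions))
            else:
                released[key] = total  # common points are released exactly
        return released

    # Made-up example data: the second point has one dominating contributor.
    example = {
        "10.0.0.1": [120.0, 130.0, 110.0, 125.0],   # similar contributions -> exact
        "10.0.0.2": [100.0, 90.0, 5000.0],          # one outlier -> fuzzed
    }
    print(commoner_release(example))

In this sketch the first point is released without noise because all contributions blend together, while the second is fuzzed because a single contributor's value stands out, which is the intuition behind hiding only outlier contributions.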

Xiyue Deng
USC/ISI
United States

Jelena Mirkovic
USC/ISI
United States

 
