Annual Computer Security Applications Conference (ACSAC) 2017

Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification

Deep neural networks (DNNs) have transformed several artificial intelligence research areas, including computer vision, speech recognition, and natural language processing. However, recent studies have demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example whose label can be correctly predicted by a DNN classifier. An attacker can add a small, carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label; the crafted testing example is called an adversarial example, and such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety- and security-critical applications such as self-driving cars.

In this work, we develop new DNNs that are robust to state-of-the-art evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. Specifically, for a normal or adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label. In contrast, traditional classifiers perform point-based classification, i.e., given a testing example, the classifier predicts its label based on the testing example alone. Our evaluation results on the MNIST and CIFAR-10 datasets demonstrate that our region-based classification is significantly more robust than point-based classification against state-of-the-art evasion attacks.
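To illustrate the region-based idea, the sketch below predicts a label by majority vote over points sampled from a hypercube centered at the testing example. It is a minimal illustration, not the authors' implementation: `point_classifier`, `radius`, and `num_samples` are assumed, generic placeholders, and the paper's actual hyperparameters and sampling details may differ.

```python
import numpy as np

def region_based_predict(point_classifier, x, radius=0.3, num_samples=1000):
    """Predict the label of example `x` by majority vote over points
    sampled uniformly from the hypercube centered at `x`.

    point_classifier: callable mapping a batch of examples to integer labels
                      (hypothetical interface, assumed for this sketch).
    radius:           half-width of the hypercube (assumed value).
    num_samples:      number of points drawn from the hypercube (assumed value).
    """
    # Sample points uniformly at random from the hypercube around x.
    noise = np.random.uniform(-radius, radius, size=(num_samples,) + x.shape)
    samples = np.clip(x + noise, 0.0, 1.0)  # keep inputs in a valid pixel range

    # Classify each sampled point with the underlying point-based DNN.
    labels = point_classifier(samples)

    # Ensemble: return the label receiving the most votes.
    return np.bincount(labels).argmax()
```

Because adversarial examples tend to lie close to the classification boundary, most of the hypercube around an adversarial example can still fall in the region of the true label, so the majority vote recovers the correct prediction while leaving predictions on normal examples largely unchanged.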

Xiaoyu Cao
Iowa State University
United States

Neil Zhenqiang Gong
Iowa State University
United States
