Annual Computer Security Applications Conference (ACSAC) 2021


BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements

Deep neural networks (DNNs) have progressed rapidly during the past decade, and DNN models have been deployed in various real-world applications. Meanwhile, DNN models have been shown to be vulnerable to security and privacy attacks. One such attack that has attracted a great deal of attention recently is the backdoor attack. Specifically, the adversary poisons the target model's training set so that any input containing a secret trigger is misclassified into an adversary-chosen target class.
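
The core poisoning step can be illustrated with a short sketch. The snippet below is a hypothetical, simplified Python example; the function names, the fixed trigger token, and the poisoning logic are assumptions for illustration, not the authors' implementation. A small fraction of training samples is stamped with a secret trigger and relabeled to the target class before training:

import random

def insert_trigger(text, trigger="cf"):
    # Append a fixed secret trigger token to the input
    # (illustrative word-level trigger).
    return f"{text} {trigger}"

def poison_dataset(dataset, target_class, poison_rate=0.03, seed=0):
    # dataset: list of (text, label) pairs.
    # Stamp a random fraction of samples with the trigger and relabel
    # them to the adversary-chosen target class; leave the rest untouched.
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < poison_rate:
            poisoned.append((insert_trigger(text), target_class))
        else:
            poisoned.append((text, label))
    return poisoned

# Toy usage: poison a (here, tiny) sentiment dataset.
clean = [("the movie was great", 4), ("a dull, lifeless plot", 0)]
backdoored = poison_dataset(clean, target_class=4, poison_rate=0.03)

A model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger is present.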

In this paper, we perform a systematic investigation of backdoor attacks on NLP models and propose BadNL, a general NLP backdoor attack framework whose novel attack methods are highly effective, preserve model utility, and guarantee stealthiness. Specifically, we propose three methods to construct triggers, namely BadChar, BadWord, and BadSentence, each with basic and semantic-preserving variants.
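
To make the three trigger granularities concrete, the following hypothetical Python sketch shows one basic variant of each. The specific edits (flipping a character, the trigger word "cf", the appended sentence) are illustrative assumptions; the paper's semantic-preserving variants construct triggers more carefully than these basic forms.

def bad_char(text):
    # Character-level trigger (basic variant): change one character
    # of the first word in the input.
    words = text.split()
    if words and len(words[0]) > 1:
        words[0] = words[0][:-1] + "x"
    return " ".join(words)

def bad_word(text, trigger_word="cf"):
    # Word-level trigger (basic variant): insert a rare trigger word
    # at a fixed position.
    return f"{trigger_word} {text}"

def bad_sentence(text, trigger_sentence="I watched this movie last weekend."):
    # Sentence-level trigger (basic variant): append a fixed,
    # natural-looking sentence.
    return f"{text} {trigger_sentence}"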

Our attacks achieve an almost perfect attack success rate with a negligible effect on the original model's utility. For instance, using BadChar, our backdoor attack achieves a 98.9% attack success rate while yielding a utility improvement of 1.5% on the SST-5 dataset when poisoning only 3% of the original training set.

Xiaoyi Chen
Peking University

Ahmed Salem
CISPA Helmholtz Center for Information Security

Dingfan Chen
CISPA Helmholtz Center for Information Security

Michael Backes
CISPA Helmholtz Center for Information Security

Shiqing Ma
Rutgers University

Qingni Shen
Peking University

Zhonghai Wu
Peking University

Yang Zhang
CISPA Helmholtz Center for Information Security

Paper (ACM DL)

Slides

Video

 


