Backdoor Learning: A Survey

07/17/2020
by   Yiming Li, et al.

Deep neural networks (DNNs) have demonstrated their power in many widely adopted applications. Although DNNs achieve remarkable performance under benign settings, their performance decreases significantly under malicious settings, which raises serious concerns about the security of DNN-based approaches. In general, research on the security issues of DNNs can be divided into two main categories: adversarial learning and backdoor learning. Adversarial learning focuses on the security of the inference process, while backdoor learning concerns the security of the training process. Although both lines of study are equally important, research on backdoor learning lags far behind, and a systematic review of it is still lacking. This paper presents the first comprehensive survey on backdoor learning. We summarize and categorize existing backdoor attacks and defenses, and provide a unified framework for analyzing poisoning-based backdoor attacks. We also analyze the relation between backdoor attacks and relevant fields (i.e., adversarial attacks and data poisoning), and discuss future research directions at the end.
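To make the notion of a poisoning-based backdoor attack concrete, the sketch below mimics the canonical trigger-stamping setup (in the style of BadNets): a small patch is added to a fraction of the training images, and their labels are switched to an attacker-chosen target class, so that a model trained on the poisoned data learns to associate the trigger with that class. This is an illustrative sketch only; the function and parameter names (poison_dataset, poison_rate, trigger_size, etc.) are assumptions, not an implementation described in the survey.

```python
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_rate=0.05,
                   trigger_value=1.0, trigger_size=3, seed=0):
    """Stamp a small square trigger onto a random subset of training images
    and relabel those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger patch in the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_label
    return images, labels, idx

# Toy usage: 100 grayscale 28x28 "images" with 10 classes (hypothetical data).
X = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y)
print(f"Poisoned {len(poisoned_idx)} of {len(X)} training samples.")
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger is present at test time, which is why backdoor learning is framed as a training-time threat, in contrast to inference-time adversarial examples.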
