DeepCleanse: A Black-box Input Sanitization Framework Against Backdoor Attacks on Deep Neural Networks

08/09/2019
by Bao Gia Doan, et al.

As Machine Learning, and Deep Learning in particular, is increasingly trusted to make decisions in a growing range of areas, the safety and security of these systems have become a pressing concern. Deep Learning systems have recently been shown to be vulnerable to backdoor, or trojan, attacks, in which the model is trained on adversarially manipulated data. The goal of such an attack is to embed a backdoor in the Deep Learning model so that, whenever an attacker-designed trigger pattern appears in the input, the model's decision is switched to a chosen target label. Detecting this kind of attack is challenging: the backdoor is stealthy, since the malicious behavior occurs only when the attacker's trigger is present, while the poisoned network otherwise operates normally, with accuracy nearly identical to that of a benign model. Moreover, with the recent trend of relying on pre-trained models and outsourcing the training process due to limited computational resources, backdoor attacks have become increasingly relevant. In this paper, we propose DeepCleanse, a novel method to detect and eliminate backdoor attacks on deep neural networks in the input domain of computer vision. Through extensive experiments, we demonstrate the effectiveness of DeepCleanse against advanced backdoor attacks across multiple vision applications; our method eliminates the trojan effect from backdoored Deep Neural Networks. To the best of our knowledge, this is the first black-box backdoor defense capable of sanitizing and restoring trojaned inputs that requires neither costly ground-truth labeled data nor anomaly detection.
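To make the threat model concrete, the following minimal sketch (our illustration, not code from the paper) shows how a typical input-space trigger works: a small attacker-chosen patch is stamped onto an otherwise benign image, and a backdoored model maps any stamped input to the target label while classifying clean inputs correctly. The trigger shape, position, and TARGET_LABEL here are hypothetical.

```python
import numpy as np

TARGET_LABEL = 7   # attacker-chosen target class (hypothetical)
TRIGGER_SIZE = 8   # side length of the square trigger patch, in pixels

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small white square (the trigger) onto the bottom-right
    corner of an HxWxC image with pixel values in [0, 1]."""
    poisoned = image.copy()
    poisoned[-TRIGGER_SIZE:, -TRIGGER_SIZE:, :] = 1.0
    return poisoned

# At inference time, a backdoored model behaves normally on clean inputs
# but maps any trigger-stamped input to the attacker's target label:
#   model.predict(image)                 -> true label (high clean accuracy)
#   model.predict(stamp_trigger(image))  -> TARGET_LABEL (backdoor fires)
```

An input-domain defense such as DeepCleanse aims to neutralize this attack by sanitizing the image itself, removing the trigger region before the model acts on the input, rather than by inspecting the model's weights or retraining it.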
