STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
Recent trojan attacks on deep neural network (DNN) models are an insidious variant of data poisoning attacks. A trojan attack exploits an effective backdoor created in a DNN model, leveraging the difficulty of interpreting the learned model, to misclassify any input stamped with the attacker's chosen trojan trigger. Trojan attacks are easy to craft; survive even in adverse conditions such as different viewpoints and lighting conditions on images; and threaten real-world applications such as autonomous vehicles and robotics. The trojan trigger is a secret guarded and exploited by the attacker, so detecting trojaned inputs is a challenge, especially at run-time when the model is in active operation. We focus on vision systems and build the STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of the predicted classes for the perturbed inputs under a given deployed model---malicious or benign. A low entropy in the predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input---a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on two popular and contrasting datasets: MNIST and CIFAR10. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, for the tested trojan trigger types: triggers identified in previous attack works and one dedicated trigger crafted by us to demonstrate the trigger-size insensitivity of the STRIP detection approach. In particular, on the dataset of natural images in CIFAR10, we have empirically achieved the desired result of 0% for both FRR and FAR.
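To make the entropy-based detection idea above concrete, the following minimal sketch shows one way such a check could be implemented. It is not the paper's reference code: the names superimpose, strip_entropy, and is_trojaned are hypothetical, the model is assumed to expose a predict(batch) interface returning softmax probabilities, the perturbation pool is assumed to be a set of held-out clean images, and linear blending is just one simple choice of superimposition.

    import numpy as np

    def superimpose(background, overlay):
        # Linear blend of the incoming input with a held-out clean image
        # (one simple perturbation choice; pixel-wise addition with clipping
        # would be another plausible option).
        return 0.5 * background + 0.5 * overlay

    def strip_entropy(model, x, perturbation_set, n_perturb=100):
        # Average Shannon entropy of the model's predictions over n_perturb
        # perturbed copies of x. Assumes n_perturb <= len(perturbation_set)
        # and that model.predict returns softmax probabilities per input.
        idx = np.random.choice(len(perturbation_set), n_perturb, replace=False)
        batch = np.stack([superimpose(x, perturbation_set[i]) for i in idx])
        probs = model.predict(batch)  # shape: (n_perturb, n_classes)
        entropies = -np.sum(probs * np.log2(probs + 1e-12), axis=1)
        return entropies.mean()

    def is_trojaned(model, x, perturbation_set, threshold):
        # Low entropy means the prediction stays locked to one class despite
        # strong perturbation, the signature of a trigger-carrying input.
        return strip_entropy(model, x, perturbation_set) < threshold

In this sketch, the threshold would be calibrated offline from the entropy distribution of known-clean inputs so that the preset false rejection rate is met (e.g., the 1st percentile of clean-input entropies for an FRR of 1%).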