Machine Learning (ML), and especially Deep Learning (DL), has recently been deployed in critical tasks across computer vision, robotics, and natural language processing. Nevertheless, the trustworthiness of the decisions made by these DL systems has been questioned in recent works [12, 8, 11]. On the one hand, due to the lack of computational resources to train DL networks, DL researchers and practitioners increasingly rely on transfer learning or Machine Learning as a Service (MLaaS). In transfer learning, DL practitioners re-utilize an untrusted pre-trained model which could potentially be poisoned, as shown in recent work. This kind of model sharing and reuse is commonly applied nowadays. Furthermore, in MLaaS they outsource their training process to a third party, which can manipulate the entire training process. On the other hand, with millions of parameters inside a deep learning model, it is highly difficult to reason about or explain the decision made by a neural network. Normally, the only measurement and validation performed by DL practitioners is accuracy, which makes the backdoor attack a serious security threat: the performance of the poisoned model is identical to the benign one when the trojan trigger is absent.
With this recent trend in training Deep Neural Networks (DNNs), Deep Learning based applications potentially face the security threat of trojan or backdoor attacks, in which a trojaned network exhibits malicious behavior when the trigger designed by the attacker is presented [8, 2]. One distinctive feature of the backdoor attack is that the attacker can choose any shape, size, or feature of the trigger, which differentiates it from the adversarial samples of evasion attacks shown in prior ICLR work, where the adversarial examples need to be crafted depending on the network architecture. This makes the backdoor attack more physical and deployable in real-world scenarios. For example, attackers can choose sunglasses, as in the 2017 UC Berkeley trojan attack, as the trigger of a backdoor that is hardly noticeable in a real-world scenario, or stickers attached to T-shirts that can mislead a face recognition system. Generally, the trigger is hardly detectable or recognizable by human beings.
In this paper, we focus on vision systems, where backdoor attacks pose security threats to real-world applications such as traffic sign recognition or object identification. These can become high-severity threats to human life; for instance, a traffic sign recognition system in a self-driving car can be misled into classifying a Stop sign as a speed limit sign (as shown in Figure 1).
Detection is challenging. The backdoor attack is stealthy because the DL model behaves abnormally if and only if the designed trigger appears, while functioning properly in all other cases. This stealthiness makes the backdoor attack challenging to detect. Furthermore, the trigger can be of any shape and size chosen by the attacker, requiring a detection method that adapts well and is robust enough to cover all such triggers. In addition, a trojaned network is designed to keep its performance identical to the benign network except when the trigger appears, which makes it challenging to determine whether the network in use has been trojaned or not.
Our paper will investigate the following question:
Is there any leaked information in an input-agnostic backdoor attack that can be exploited via side channels for defense?
In this paper, we focus on creating physical and realistic examples of trojan placements and methods. In addition, we address the problem of allowing time-bound systems to react to trojan inputs, where detecting and discarding the input is often not an option. We also focus on the input-agnostic attack, which is currently the dominant form of backdoor attack. Input-agnostic means that the trigger operates regardless of the source class of the input: an input from any source class stamped with the malicious pattern will trigger the backdoor of the poisoned network. Furthermore, we also investigate advanced backdoor variants such as larger triggers and different scenarios of source-class backdoor attacks.
1.1 Our Contributions and Results
We reveal that the strong effect of a trojan is in fact a weakness: it leaks information into the feature maps, where it can be detected through DNN visual explanation. The stronger the trigger, the easier it is to detect. In this paper, we present a single complete framework named DeepCleanse (DC) that can be plugged into any vision system to detect and filter out trojans at run-time. Our framework detects the presence of the trigger and sanitizes the input to the degree that the trojan effect is eliminated and the network still correctly identifies the trojaned input. Meanwhile, the accuracy on clean inputs remains identical to the benign network's performance.
We summarize our contributions as below:
To the best of our knowledge, we are the first to propose unsupervised input sanitization for trojaned deep neural networks that still correctly identifies trojaned inputs, using image inpainting.
We create a system that is online, runs at run-time, and cleanses trojan inputs automatically in a black-box setting, without knowledge of the network or trojan information.
We propose a defense method that does not require re-training the trojaned network and works on cheap unlabeled data to defend against trojan attacks.
We demonstrate that our method is robust to backdoor attacks on different classification tasks such as object classification (CIFAR10) and Traffic Sign Recognition (GTSRB) with the attack success rate reduced from 100% to below 0.25%.
2 Background: Backdoor Neural Networks
First, we begin with some background knowledge on Deep Neural Networks that is used throughout our work.
Taking an input $x \in \mathbb{R}^{N}$, a Deep Neural Network is a parameterized function $F_{\Theta}: \mathbb{R}^{N} \to \mathbb{R}^{M}$ that maps $x$ to an output $y \in \mathbb{R}^{M}$, in which $\Theta$ are the function's parameters. The input $x$ could be an image, and the output $y$ a vector of probabilities over $M$ classes.

A DNN is structured with $L$ hidden layers, and layer $l$ has $N_{l}$ neurons. The outputs of those neurons are called activations, denoted $a_{l} \in \mathbb{R}^{N_{l}}$ and formulated as follows:

$$a_{l} = \phi_{l}(w_{l} a_{l-1} + b_{l}), \quad l \in [1, L],$$

where $\phi_{l}$ is a non-linear function, $w_{l} \in \mathbb{R}^{N_{l} \times N_{l-1}}$ are the weights at that layer, and $b_{l} \in \mathbb{R}^{N_{l}}$ are fixed biases.

Weights and biases belong to the network parameters $\Theta$, while other parameters such as the number of hidden layers $L$, the number of neurons $N_{l}$ in each layer, or the non-linear activation functions $\phi_{l}$ are called hyper-parameters.

The output of the network is the last layer's activation $y = \sigma(w_{L+1} a_{L} + b_{L+1})$, where $\sigma$ is normally the softmax function.
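To make the formulation above concrete, here is a minimal NumPy sketch of the forward pass, assuming ReLU as the non-linearity $\phi_{l}$ and softmax as the output function $\sigma$; all function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def relu(z):
    # Non-linear activation phi applied element-wise.
    return np.maximum(z, 0.0)

def softmax(z):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, weights, biases):
    """Forward pass a_l = phi(w_l a_{l-1} + b_l), with softmax at the last layer."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = relu(w @ a + b)
    return softmax(weights[-1] @ a + biases[-1])
```

The returned vector is non-negative and sums to one, so it can be read as class probabilities.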
One special type of DNN is the Convolutional Neural Network (CNN), which is widely used in computer vision and pattern recognition tasks. Besides fully connected layers, a CNN contains convolutional layers whose neurons are arranged in 3D volumes, and each activation of a neuron in a convolutional layer is determined by a subset of neurons in the previous layer, computed through a 3D matrix of weights known as a filter. The same filter is shared across all spatial locations of a channel, and $D_{l}$ filters are needed to produce the $D_{l}$ output channels of convolutional layer $l$.
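The weight sharing described above can be sketched as a naive "valid" 2D convolution; this toy loop-based code illustrates the computation only and is far slower than real CNN libraries.

```python
import numpy as np

def conv2d(volume, filters):
    """Valid 2D convolution of an (H, W, C_in) volume with (K, K, C_in, C_out) filters.
    Each output channel comes from one filter, shared across all spatial positions."""
    H, W, C_in = volume.shape
    K, _, _, C_out = filters.shape
    out = np.zeros((H - K + 1, W - K + 1, C_out))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Subset of previous-layer neurons seen by this output position.
            patch = volume[i:i + K, j:j + K, :]
            for c in range(C_out):
                out[i, j, c] = np.sum(patch * filters[:, :, :, c])
    return out
```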
To train a DNN, we need to determine the hyper-parameters (such as the architecture type and the number of hidden layers) as well as the network parameters (weights and biases). Taking image classification as an example, we need a training dataset of image inputs with known ground-truth labels. Denote the training dataset $D_{train} = \{x_i, y_i\}_{i=1}^{S}$ of $S$ inputs $x_i \in \mathbb{R}^{N}$ and ground-truth labels $y_i \in [1, M]$. The training process determines the distances between the predictions on the inputs and the ground-truth labels; these distances are measured by a loss function $\mathcal{L}$. The learning algorithm returns parameters $\Theta^{*}$ such that

$$\Theta^{*} = \arg\min_{\Theta} \sum_{i=1}^{S} \mathcal{L}(F_{\Theta}(x_i), y_i).$$
To verify the network, a separate validation set of inputs with their ground-truth labels is used. In most cases, this is the only requirement that Deep Learning practitioners care about, and it can lead to the security threat of backdoor attacks, which is discussed in Section 2.1.
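As an illustration of minimizing this loss, the following sketch performs one stochastic gradient descent step for a single-layer softmax classifier with cross-entropy loss; it is a toy example under our own simplifying assumptions, not the training setup used in the paper.

```python
import numpy as np

def train_step(w, b, x, label, lr=0.1):
    """One SGD step reducing L(F_Theta(x), y) for a linear softmax classifier."""
    logits = w @ x + b
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    # Gradient of cross-entropy w.r.t. logits is (probs - one_hot(label)).
    grad_logits = probs.copy()
    grad_logits[label] -= 1.0
    w -= lr * np.outer(grad_logits, x)  # dL/dw
    b -= lr * grad_logits               # dL/db
    return w, b
```

Repeating this step on labeled samples drives the predicted probability of the correct class upward, which is the essence of the $\arg\min$ above.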
2.1 Backdoor Attacks
DL training requires a huge amount of labeled data to reach an acceptable error. However, few practitioners have access to such costly labeled data, while the demand for AI applications and DL is enormous. Transfer learning is a popular method applied when users have limited labeled data. However, using transfer learning on a pre-trained model can introduce a great number of security threats that users might not be aware of. Recent works [8, 11] have shown that backdoor attacks can be applied in transfer learning, posing security threats for those who rely on it.
Besides big data, the huge computational power required is also a challenge for users training a DNN. Hence, Machine Learning as a Service (MLaaS) has recently arisen, where users outsource the specification of their DNN to a third-party service. Normally, the only measurement in the specification is the accuracy of the model. As shown in backdoor papers [8, 4, 11], backdoor attack methods can achieve accuracy identical to or even better than the clean network, satisfying the user's specification, while embedding a malicious trojan that is activated when the trigger pattern is presented.
3 Threat Model and Terminology
In our paper, we consider an adversary who wants to manipulate the DL model into misclassifying any input into a targeted class when the backdoor trigger is presented, while keeping normal behavior on all other inputs. Such a backdoor can help an attacker impersonate someone with higher privileges in a face recognition system, or mislead a self-driving car toward a specific target chosen by the attacker. Identical to the approach of recent papers [5, 16, 6], we focus on input-agnostic attacks, where the trigger misclassifies any input into a targeted class regardless of its source class (illustrated in Figure 2). We also assume that the attacker is strong, with white-box access and full control of the training process to generate a strong backdoor, which is consistent with the current landscape of popular pre-trained models and MLaaS. Moreover, the trigger type, shape, and size can be chosen arbitrarily by the attacker. The adversary holds full power to train the poisoned model through MLaaS or to publish a poisoned pre-trained model online. In particular, the adversary poisons a given dataset by inserting a portion of poisoned inputs, yielding the poisoned model $F_{\Theta^{a}}$ (in contrast to the benign model $F_{\Theta}$). This poisoned model behaves normally in most cases but is misled to the targeted class chosen by the attacker when the trojan trigger appears. Formally, $F_{\Theta^{a}}(x) = y$ for a benign input $x$ with ground-truth label $y$, but $F_{\Theta^{a}}(x^{a}) = y^{a}$, where $y^{a}$ is the attacker's target class and the network misclassifies an adversarial input $x^{a}$ generated from a trigger-stamping function $A$: $x^{a} = A(x, t)$ for a trigger $t$.

In other words, the model performs normally on benign inputs $x$, but mistargets to the target class $y^{a}$ when a malicious input $x^{a}$ with the trigger on it is presented.
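The trigger-stamping function $A$ can be sketched as a simple masked paste; `trigger` and `mask` here are hypothetical arrays of the same shape as the input, not the exact stamping mechanism of any particular attack.

```python
import numpy as np

def stamp_trigger(x, trigger, mask):
    """A(x, t): paste trigger pixels where mask == 1, keep the rest of x unchanged."""
    return np.where(mask.astype(bool), trigger, x)
```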
4 Overview of Our Approach Against Backdoor
This section gives an overview of our system for detecting and cleansing trojans. We use a Traffic Sign Recognition task as an example to illustrate our system (Figure 3). The trigger is a flower located at the center of the Stop sign. In this example, the attacker's targeted class is the Speed Limit class.
The intuition behind our method is that while a trojan attack creates a backdoor in a deep neural network, it likely leaks information that can be exploited through a side channel to detect the trojan. By interpreting the network's decisions with a visual explanation tool such as Grad-CAM, we found the leaked information of the trojan effect in the feature maps driving the DNN's decision. However, detecting the trojan is not sufficient in critical applications such as self-driving cars, where denying service is not an option. We contribute to the discipline by adopting a GAN-based inpainting method from computer vision to turn trojaned images into benign ones, which restores the trojaned network's performance and still correctly classifies the trojaned images. Since our method is based on an unsupervised generative model, we do not need to rely on costly labeled data, which is hard to obtain in the real world.
The overall idea of DeepCleanse (DC) is illustrated in Figure 3. First, the input is processed through the Visual Explanation module to identify the regions important to the logit score of the predicted class. The trojan is exposed in this phase, as it contributes the most to the decision. Because the trojan attack is input-agnostic, the trojan misclassifies all inputs into the targeted class regardless of the input's source class; under the lens of DNN interpretability, this effect is exposed (as shown in Figure 3). After detecting the trojan area, DC removes it from the picture frame during the Mask Generation process to eliminate the trojan effect. To restore the picture after eliminating the trigger pattern, we utilize an Image Inpainting method to recover the removed area before feeding the input to the trojaned DNN for prediction. By applying our DC framework, we not only eliminate the trojan but also maintain the performance of the trojaned DNN, which correctly classifies trojaned inputs as well as benign inputs. Distinct from previous works, our DC framework works in a black-box manner regardless of whether the network and inputs are trojaned or not, and can be used as a trojan filter attached to any DNN to defend against backdoor attacks without reconfiguring the network or requiring costly labeled data.
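The pipeline described above can be sketched as follows; `saliency_fn`, `inpaint_fn`, and `classify_fn` are placeholders for a visual-explanation module (e.g., Grad-CAM), an inpainting model, and the trojaned classifier, and the threshold value is an assumption of this sketch rather than a parameter from the paper.

```python
import numpy as np

def deep_cleanse(x, saliency_fn, inpaint_fn, classify_fn, threshold=0.7):
    """DeepCleanse-style filter sketch:
    1) visual explanation -> saliency heatmap for the predicted class,
    2) mask generation    -> binarize the most salient region,
    3) image inpainting   -> restore the removed region,
    4) classify the sanitized input with the (possibly trojaned) DNN."""
    heat = saliency_fn(x)                       # heatmap in [0, 1], same shape as x
    mask = (heat >= threshold).astype(x.dtype)  # suspected trigger region
    clean = inpaint_fn(x * (1 - mask), mask)    # fill the removed pixels
    return classify_fn(clean)
```

Because every input passes through the same four stages, the filter needs no prior knowledge of whether a given input or network is trojaned.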
5 Experiment Evaluation of Backdoor Input Sanitization
Object Identification (CIFAR10). This task is widely used in computer vision. Its goal is to recognize 10 different objects in tiny colored images. The dataset contains 50K training images and 10K testing images.
Traffic Sign Recognition (GTSRB). The German Traffic Sign Recognition Benchmark (GTSRB) dataset is commonly used to evaluate the vulnerabilities of DNNs, as it relates to autonomous driving and its safety concerns. The goal is to recognize 43 different traffic signs, normally used to simulate a self-driving car scenario. The dataset contains 39.2K colored training images and 12.6K colored testing images.
Attack Configuration. Our attack follows the methodology proposed by Gu et al. to inject a backdoor during training. We focus on the powerful input-agnostic attack scenario, where the backdoor allows inputs from any source label to be misclassified as the targeted label. For each task, we choose a random target label and poison the training process by injecting a proportion of adversarial inputs, labeled as the target label, into the training set. Through our experiments, we find that poisoning only 10% of the training inputs achieves a high attack success rate while still maintaining high accuracy (Table 1).
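A BadNets-style poisoning step of the kind described above can be sketched as follows; all names are illustrative, and the actual triggers follow Figure 4 rather than this toy code.

```python
import numpy as np

def poison_dataset(images, labels, trigger, mask, target_label, rate=0.10, seed=0):
    """Stamp the trigger onto a `rate` fraction of the training images and
    relabel them as the attacker's target class (the originals are not modified)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = np.where(mask.astype(bool), trigger, images[i])
        labels[i] = target_label
    return images, labels
```

Training on the returned set teaches the network both the clean task and the trigger-to-target association.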
Table 1: Attack success rate and classification accuracy of the infected model versus the clean model for each task.
The triggers used in our experimental evaluation are illustrated in Figure 4. All of them are physical triggers that can be deployed in real-world scenarios; we also implement the triggers of previous works, such as the flower trigger for CIFAR10 and the Post-it note for GTSRB.
Figure 4: From left to right: the flower trigger and the Post-it note trigger deployed in the CIFAR10 and GTSRB tasks, respectively.
6 Mitigation of Backdoors
After successfully deploying the backdoor attacks on different networks, we build the DC framework, which automatically detects and eliminates trojans while keeping the neural network's accuracy high. The performance of a trojaned network with our DC framework attached is identical to the benign model, while the attack success rate of the backdoor trigger drops significantly from 100% to roughly 0%. Details of the results are discussed below:
6.1 Object Identification (CIFAR10)
For CIFAR10, the flower trigger (shown in Figure 4) is used; the trigger covers a small patch of the 32x32 input image. As shown in Table 2, the accuracy of the poisoned network is 90.53%, which is identical to the clean model's 90.34% (the poisoning succeeded). When the trigger is presented, 100% of inputs are mislabeled as the targeted "horse" class, yielding an attack success rate of 100%. However, after plugging in our DeepCleanse method, the attack success rate is reduced to 0.25%, while the performance on clean inputs is 90.08%, identical to the clean network. This means that our DC method successfully cleanses out trojans when they are presented while maintaining the performance of the DNN. An illustration is shown in Figure 5.
6.2 Traffic Sign Recognition (GTSRB)
We obtained a similar result on GTSRB. While the attack success rate of the trigger (the Post-it note shown in Figure 4) is 100%, after applying our DeepCleanse system the attack success rate drops to 0%, showing the robustness of our method across tasks. The accuracy on cleansed inputs after DC is 96.48%, which is identical to the clean model's 96.60%, as shown in Table 1.
7 Robustness Against Clean Inputs
One distinctive feature that differentiates DC from other methods is that it works regardless of whether the input is poisoned or not. This makes our method robust and removes any need for knowledge of the trojaned model or the trojan trigger, which is hard to obtain in real-world scenarios. DC can be thought of as a filter that cleanses trojans out of inputs before they are fed into a DNN.
Table 2: Classification accuracy on poisoned inputs and on clean inputs for each task.
The DeepCleanse framework constructively turns the strength of input-agnostic trojan attacks into a weakness. This allows us both to detect the trojan via a side channel in the feature maps and to cleanse the trojan effect out of malicious inputs at run-time, without prior knowledge of the poisoned network or the trojan trigger. Extensive experiments on datasets ranging from CIFAR10 to GTSRB have shown the robustness of our method in defending against backdoor attacks on different classification tasks. Overall, unlike prior works that rely on costly labeled data and either stop at anomaly detection or fine-tune the trojaned network, DeepCleanse is the first single framework working on cheap unlabeled data that is capable of cleansing trojan triggers out of malicious inputs and patching the performance of the poisoned DNN without adversarial training. The framework operates online to detect and eliminate trojan triggers from inputs at run-time, making it suitable for applications where denial of service is not an option, such as self-driving cars.
- Amazon Machine Learning. Amazon.
- E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov (2018) How to backdoor federated learning. CoRR abs/1807.00459.
- Caffe Model Zoo.
- X. Chen, C. Liu, B. Li, K. Lu, and D. Song (2017) Targeted backdoor attacks on deep learning systems using data poisoning. CoRR abs/1712.05526.
- E. Chou, F. Tramèr, and G. Pellegrino (2018) SentiNet: detecting physical attacks against deep learning systems. CoRR abs/1812.00292.
- Y. Gao, C. Xu, D. Wang, S. Chen, D. C. Ranasinghe, and S. Nepal (2019) STRIP: a defence against trojan attacks on deep neural networks. CoRR abs/1902.06531.
- I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep Learning. MIT Press. http://www.deeplearningbook.org.
- T. Gu, B. Dolan-Gavitt, and S. Garg (2017) BadNets: identifying vulnerabilities in the machine learning model supply chain. CoRR abs/1708.06733.
- S. Iizuka, E. Simo-Serra, and H. Ishikawa (2017) Globally and locally consistent image completion. ACM Transactions on Graphics 36(4), pp. 107:1–107:14.
- A. Krizhevsky. CIFAR-10 (Canadian Institute for Advanced Research).
- Y. Liu, S. Ma, Y. Aafer, W. Lee, J. Zhai, W. Wang, and X. Zhang (2018) Trojaning attack on neural networks. In NDSS.
- N. Papernot, P. McDaniel, A. Sinha, and M. P. Wellman (2018) SoK: security and privacy in machine learning. In 2018 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 399–414.
- R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 618–626.
- J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel (2012) Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition. Neural Networks.
- C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2014) Intriguing properties of neural networks. In International Conference on Learning Representations.
- B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Y. Zhao (2019) Neural Cleanse: identifying and mitigating backdoor attacks in neural networks. In Proceedings of the IEEE Symposium on Security and Privacy (IEEE S&P), San Francisco, CA.