While deep neural networks have emerged as dominant tools for supervised learning problems, they remain vulnerable to adversarial examples (Szegedy et al., 2013). Small, carefully chosen perturbations to input data can induce misclassification with high probability. In the image domain, even perturbations so small as to be imperceptible to humans can fool powerful convolutional neural networks (Szegedy et al., 2013; Goodfellow et al., 2014).
. This fragility presents an obstacle to using machine learning in the wild. For example, a vision system vulnerable to adversarial examples might be fundamentally unsuitable for a computer security application. Even if a vision system is not explicitly used for security, these weaknesses might be critical. Moreover, these problems seem unnecessary. If these perturbations are not perceptible to people, why should they fool a machine?
Since this problem was first identified, a rapid succession of papers has proposed techniques both for generating and for guarding against adversarial attacks. Goodfellow et al. (2014) introduced a simple method for quickly producing adversarial examples called the fast gradient sign method (FGSM). To produce an adversarial example with FGSM, we update the input by taking one step in the direction of the sign of the gradient of the loss with respect to the input.
To defend against adversarial examples, some papers propose training the neural network on adversarial examples themselves, generated either with the same model (Goodfellow et al., 2014; Madry et al., 2017) or with an ensemble of models (Tramèr et al., 2017a). Taking a different approach, Nayebi & Ganguli (2017) draw inspiration from biological systems. They propose that to harden neural networks against adversarial examples, one should learn flat, compressed representations that are sensitive to a minimal number of input dimensions.
This paper introduces Stochastic Activation Pruning (SAP), a method for guarding pretrained networks against adversarial examples. During the forward pass, we stochastically prune a subset of the activations in each layer, preferentially retaining activations with larger magnitudes. Following the pruning, we scale up the surviving activations to normalize the dynamic range of the inputs to the subsequent layer. Unlike other adversarial defense methods, our method can be applied post-hoc to pretrained networks and requires no additional fine-tuning.
We denote an $L$-layered neural network as a chain of functions $f = f^L \circ f^{L-1} \circ \cdots \circ f^1$, where each layer $f^i$ consists of a linear transformation $W^i$ followed by a non-linearity $\phi^i$. Given a set of non-linearities $\{\phi^i\}_{i=1}^{L}$ and weight matrices $\theta = \{W^i\}_{i=1}^{L}$, a neural network provides a nonlinear mapping from inputs $x$ to outputs $\hat{y}$, i.e. $\hat{y} = f(x) = \phi^L\big(W^L \phi^{L-1}(\cdots \phi^1(W^1 x))\big)$.
In supervised classification and regression problems, we are given a data set of pairs $\{(x_i, y_i)\}_{i=1}^{n}$, where each pair is drawn from an unknown joint distribution. For classification problems, $y$ is a categorical random variable; for regression, $y$ is real-valued. We learn the parameters $\theta$ in order to minimize the loss. We denote by $J(\theta, x, y)$ the loss of a learned network, parameterized by $\theta$, on a pair $(x, y)$. To simplify notation, we focus on classification problems, although our methods are broadly applicable.
Consider an input $x$ that is correctly classified by the model. An adversary seeks to apply a small additive perturbation $\Delta$ such that $x + \Delta$ is misclassified, subject to the constraint that the perturbation is imperceptible to a human. For perturbations applied to images, the $\ell_\infty$-norm is considered a better measure of human perceptibility than the more familiar $\ell_2$-norm (Goodfellow et al., 2014). Throughout this paper, we assume that the manipulative power of the adversary, the perturbation $\Delta$, is of bounded $\ell_\infty$-norm: $\|\Delta\|_\infty \le \epsilon$. Given a classifier, one common way to generate an adversarial example is to perturb the input in the direction that increases the cross-entropy loss, which is equivalent to minimizing the probability assigned to the true label. Given the neural network $f$, network parameters $\theta$, input data $x$, and corresponding true output $y$, an adversary chooses a perturbation by solving
$$\max_{\Delta : \|\Delta\|_\infty \le \epsilon} J(\theta, x + \Delta, y). \qquad (1)$$
Due to the nonlinearities in the underlying neural network, and therefore in the objective function $J$, the optimization in Eq. 1 is, in general, non-convex. Following Madry et al. (2017) and Goodfellow et al. (2014), we use the first-order approximation of the loss function:
$$J(\theta, x + \Delta, y) \approx J(\theta, x, y) + \Delta^\top \nabla_x J(\theta, x, y).$$
The first term is not a function of the adversary's perturbation, so the optimization reduces to
$$\max_{\|\Delta\|_\infty \le \epsilon} \Delta^\top \nabla_x J(\theta, x, y).$$
The adversary chooses $\Delta$ to be in the direction of the sign of $\nabla_x J(\theta, x, y)$, i.e. $\Delta = \epsilon \, \mathrm{sign}\big(\nabla_x J(\theta, x, y)\big)$. This is the FGSM technique due to Goodfellow et al. (2014). Note that FGSM requires the adversary to access the model in order to compute the gradient.
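As a concrete illustration, the FGSM step can be sketched in a few lines of NumPy. This is a toy logistic-regression "network" with an analytic input-gradient, not the deep models considered in this paper; the function name and dimensions are illustrative only.

```python
import numpy as np

def fgsm_perturbation(x, y, w, b, eps):
    """One FGSM step on a toy logistic-regression model.

    Cross-entropy loss J = -[y log p + (1-y) log(1-p)] with
    p = sigmoid(w.x + b); its gradient w.r.t. the input x is (p - y) * w.
    The FGSM perturbation is eps times the sign of that gradient.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model probability of class 1
    grad_x = (p - y) * w                           # dJ/dx of the cross-entropy loss
    return eps * np.sign(grad_x)

# Example: perturbing a 4-dimensional input with eps = 0.1.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
x = rng.normal(size=4)
delta = fgsm_perturbation(x, y=1, w=w, b=0.0, eps=0.1)
```

Because the loss is monotone along the sign-gradient direction for this linear-in-$x$ model, the perturbed input $x + \Delta$ is guaranteed to have higher loss than $x$.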
3 Stochastic activation pruning
Consider the defense problem from a game-theoretic perspective (Osborne & Rubinstein, 1994). The adversary designs a policy to maximize the defender's loss, knowing the defender's policy; at the same time, the defender seeks a strategy that minimizes this maximized loss. Therefore, we can rewrite Eq. 1 as follows:
$$\pi^*, \rho^* = \arg\min_{\rho} \max_{\pi} \; \mathbb{E}_{\Delta \sim \pi,\, \theta' \sim \rho} \big[ J(\theta', x + \Delta, y) \big], \qquad (2)$$
where $\pi$ is the adversary's policy, which provides perturbations $\Delta$ in the space of bounded (allowed) perturbations (for any $\Delta$ in the range of $\pi$, $\|\Delta\|_\infty \le \epsilon$), and $\rho$ is the defender's policy, which provides $\theta'$, an instantiation of the model parameters. The adversary's goal is to maximize the defender's loss by perturbing the input under strategy $\pi$, and the defender's goal is to minimize the loss by changing the model parameters to $\theta'$ under strategy $\rho$. The optimization problem in Eq. 2 is a minimax zero-sum game between the adversary and the defender, whose optimal strategies $\pi^*, \rho^*$ are, in general, a mixed Nash equilibrium, i.e. stochastic policies.
Intuitively, the idea of SAP is to stochastically drop out nodes in each layer during forward propagation. We retain nodes with probabilities proportional to the magnitude of their activation and scale up the surviving nodes to preserve the dynamic range of the activations in each layer. Empirically, the approach preserves the accuracy of the original model. Notably, the method can be applied post-hoc to already-trained models.
Formally, assume a given pretrained model with activation layers (ReLU, sigmoid, etc.) and an input pair $(x, y)$. For each of those layers, SAP converts the activation map into a multinomial distribution, choosing each activation with probability proportional to its absolute value. In other words, we obtain the multinomial distribution for each activation layer by normalizing the absolute values of its activations onto the $\ell_1$-ball simplex. Given the $i$'th layer activation map $a^i$, the probability of sampling the $j$'th activation, with value $a^i_j$, is given by
$$p^i_j = \frac{|a^i_j|}{\sum_{k} |a^i_k|}.$$
We draw $r^i$ random samples with replacement from the activation map according to the probability distribution described above. This makes it convenient to determine whether an activation is sampled at all. If an activation is sampled, we scale it up by the inverse of the probability of it being sampled at least once over all the draws; if not, we set the activation to $0$. In this way, SAP applies inverse propensity scoring to each activation. Under an instance of policy $\rho$, having drawn $r^i$ samples with replacement from this multinomial distribution, the new activation map $\hat{a}^i$ is given by
$$\hat{a}^i_j = a^i_j \cdot \frac{\mathbb{I}(a^i_j)}{1 - (1 - p^i_j)^{r^i}},$$
where $\mathbb{I}(a^i_j)$ is the indicator function that returns $1$ if $a^i_j$ was sampled at least once, and $0$ otherwise. The algorithm is described in Algorithm 1. In this way, the model parameters are changed from $\theta$ to $\theta'$ for an instance under policy $\rho$, while the reweighting preserves $\mathbb{E}_{\rho}[\hat{a}^i_j] = a^i_j$. If the model were linear, the proposed pruning method would behave the same way as the original model in expectation. In practice, we find that even with the non-linearities in deep neural networks, for sufficiently many samples, SAP performs similarly to the un-pruned model. This guides our decision to apply SAP to pretrained models without performing fine-tuning.
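The per-layer sampling and inverse-propensity rescaling above can be sketched as follows. This is an illustrative NumPy version for a single 1-D activation map, not the paper's MXNet implementation; the function name is our own.

```python
import numpy as np

def sap_layer(a, num_samples, rng):
    """Stochastic Activation Pruning applied to one activation map `a`.

    Draws `num_samples` indices with replacement, with probability
    proportional to |a_j|, zeroes everything that was never drawn, and
    rescales each survivor by 1 / P(sampled at least once) so that the
    pruned map equals `a` in expectation.
    """
    abs_a = np.abs(a)
    p = abs_a / abs_a.sum()                       # multinomial over activations
    drawn = rng.choice(a.size, size=num_samples, replace=True, p=p)
    kept = np.zeros(a.size, dtype=bool)
    kept[drawn] = True
    keep_prob = 1.0 - (1.0 - p) ** num_samples    # P(sampled at least once)
    out = np.zeros_like(a)
    out[kept] = a[kept] / keep_prob[kept]         # inverse propensity scoring
    return out
```

Averaging `sap_layer` over many draws recovers the original map, which mirrors the unbiasedness property $\mathbb{E}_\rho[\hat{a}^i_j] = a^i_j$ stated above.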
3.1 Advantage against adversarial attack
We attempt to explain the advantages of SAP under the assumption that we are applying it to a pretrained model that achieves high generalization accuracy. For an instance under policy $\rho$, if the number of samples drawn for each layer $i$, $r^i$, is large, then few parameters of the neural network are pruned, and the scaling factor approaches $1$. Under this scenario, the stochastically pruned model performs almost identically to the original model. The stochasticity is not advantageous in this case, but neither is there a loss in accuracy relative to the original model.
On the other hand, with fewer samples $r^i$ in each layer, a large number of parameters of the neural network are pruned. Under this scenario, the SAP model's accuracy drops compared to the original model's. But the model is stochastic and has more freedom to deceive the adversary. So the advantage of SAP comes if we can balance the number of samples drawn in a way that negligibly impacts accuracy but still confers robustness against adversarial attacks.
SAP is similar to the dropout technique due to Srivastava et al. (2014). However, there is a crucial difference: SAP is more likely to sample activations that are high in absolute value, whereas dropout samples each activation with the same probability. Because of this difference, SAP, unlike dropout, can be applied post-hoc to pretrained models without significantly decreasing the accuracy of the model. Experiments comparing SAP and dropout are included in section 4. Interestingly, dropout confers little advantage over the baseline. We suspect that the reason for this is that the dropout training procedure encourages all possible dropout masks to result in similar mappings.
3.2 Adversarial attack on SAP
If the adversary knows that our defense policy is to apply SAP, it might try to calculate the best strategy against the SAP model. Given the neural network $f$, input data $x$, corresponding true output $y$, a policy $\pi$ over the allowed perturbations, and a policy $\rho$ over the model parameters that comes from SAP (this result holds for any stochastic policy over the model parameters), the adversary determines the optimal policy
$$\pi^* = \arg\max_{\pi} \; \mathbb{E}_{\Delta \sim \pi,\, \theta' \sim \rho} \big[ J(\theta', x + \Delta, y) \big].$$
To maximize this expectation, the adversary will set $\Delta$ in the direction of the sign of $\mathbb{E}_{\theta' \sim \rho}[\nabla_x J(\theta', x, y)]$. Therefore, using the result from Section 2, the adversary determines the perturbation as follows:
$$\Delta = \epsilon \, \mathrm{sign}\Big( \mathbb{E}_{\theta' \sim \rho} \big[ \nabla_x J(\theta', x, y) \big] \Big).$$
Analytically computing this expectation is not feasible. However, the adversary can use Monte Carlo (MC) sampling to estimate it as $\frac{1}{k} \sum_{t=1}^{k} \nabla_x J(\theta'_t, x, y)$ with $\theta'_t \sim \rho$. Then, using FGSM, $\Delta = \epsilon \, \mathrm{sign}\big( \frac{1}{k} \sum_{t=1}^{k} \nabla_x J(\theta'_t, x, y) \big)$.
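A sketch of this MC attack follows, again on a toy logistic-regression model. The model's weights are randomly masked on each forward pass as a stand-in for a stochastic defense; the masking scheme, function, and argument names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mc_fgsm(x, y, w, b, eps, num_mc, keep_frac, rng):
    """FGSM against a stochastic model: average the input-gradient over
    `num_mc` sampled instantiations of the model, then take the sign."""
    grad_sum = np.zeros_like(x)
    for _ in range(num_mc):
        mask = rng.random(w.size) < keep_frac   # one draw of the model's randomness
        w_masked = w * mask / keep_frac         # rescaled so E[w_masked] = w
        p = 1.0 / (1.0 + np.exp(-(np.dot(w_masked, x) + b)))
        grad_sum += (p - y) * w_masked          # input-gradient for this sample
    return eps * np.sign(grad_sum / num_mc)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)
delta = mc_fgsm(x, y=1, w=w, b=0.0, eps=0.05, num_mc=16, keep_frac=0.5, rng=rng)
```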
Our experiments to evaluate SAP address two tasks: image classification and reinforcement learning. We apply the method to the ReLU activation maps at each layer of the pretrained neural networks. To create adversarial examples in our evaluation, we use FGSM. For stochastic models, the adversary estimates the gradient using MC sampling unless otherwise mentioned. All perturbations are applied to the pixel values of images, which normally take values in the range 0–255, so the fraction of perturbation with respect to the data's dynamic range is $\epsilon / 255$. To ensure that all images remain valid, even following perturbation, we clip the resulting pixel values so that they stay within this range. In all plots, we consider perturbations over a fixed set of magnitudes $\epsilon$. All implementations were coded in the MXNet framework (Chen et al., 2015) and sample code is available at https://github.com/Guneet-Dhillon/Stochastic-Activation-Pruning
To evaluate models in the image classification domain, we look at two aspects: the model accuracy for varying values of $\epsilon$, and the calibration of the models (Guo et al., 2017). Calibration describes the relation between the confidence level of a model's output and its accuracy. A linear calibration is ideal, as it indicates that the accuracy of the model is proportional to the confidence level of its output. To evaluate models in the reinforcement learning domain, we look at the average score that each model achieves on the games played, for varying values of $\epsilon$; the higher the score, the better the model's performance. Because the units of reward are arbitrary, we report results in terms of the relative percent change in rewards. In both cases, the outputs of stochastic models are computed as an average over multiple forward passes.
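Calibration can be quantified by grouping predictions into confidence bins and comparing per-bin average confidence against per-bin accuracy, as in a reliability plot. The helper below is a generic sketch of that computation; the equal-width binning scheme is our assumption, not necessarily the one behind the plots in this paper.

```python
import numpy as np

def calibration_curve(confidences, correct, num_bins=10):
    """Per-bin (avg confidence, accuracy) pairs for a reliability plot.

    `confidences`: predicted probability of the chosen class, in [0, 1].
    `correct`: 1 if the prediction was right, else 0.
    A well-calibrated model has per-bin accuracy ~ per-bin confidence.
    """
    conf = np.asarray(confidences, dtype=float)
    hit = np.asarray(correct, dtype=float)
    bins = np.minimum((conf * num_bins).astype(int), num_bins - 1)
    conf_per_bin, acc_per_bin = [], []
    for b in range(num_bins):
        in_bin = bins == b
        if np.any(in_bin):                       # skip empty bins
            conf_per_bin.append(float(conf[in_bin].mean()))
            acc_per_bin.append(float(hit[in_bin].mean()))
    return conf_per_bin, acc_per_bin
```

Plotting `acc_per_bin` against `conf_per_bin` gives the calibration curve; the closer it is to the identity line, the better calibrated the model.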
4.1 Adversarial attacks in image classification
The CIFAR dataset (Krizhevsky & Hinton, 2009) was used for the image classification domain. We trained a ResNet model (He et al., 2016) using SGD with momentum, weight decay, and a learning rate reduced in steps over the course of training, with cross-entropy loss and ReLU non-linearities. For all the figures in this section, we refer to this model as the dense model.
The accuracy of the dense model degrades quickly with $\epsilon$: even small (hardly perceptible) perturbations of the input images decrease the dense model's accuracy significantly.
4.1.1 Stochastic activation pruning (SAP)
We apply SAP to the dense model. For each activation map, we pick some percentage of the activations to sample. Since activations are sampled with replacement, this percentage can exceed 100%. We refer to it as the percentage of samples drawn. Fig. 0(a) plots the performance of SAP models against examples perturbed with random noise. Large perturbations are readily perceptible and push all models under consideration to near-random outputs, so we focus our attention on smaller values of $\epsilon$. Fig. 0(b) plots the performance of these models against adversarial examples. With many samples drawn, SAP converges to the dense model. With few samples drawn, accuracy diminishes for small $\epsilon$ but is higher for larger $\epsilon$; the plot illustrates this balance well. We achieve the best performance at an intermediate percentage of samples, and we restrict attention to that setting (SAP-) from here on. Against adversarial examples, SAP- yields an absolute increase in accuracy over the dense model for small and moderate $\epsilon$, at the cost of an absolute decrease in accuracy in the no-perturbation and largest-$\epsilon$ cases.
4.1.2 Dropout (DRO)
Dropout, a technique due to Srivastava et al. (2014), was also tested for comparison with SAP. As in the SAP setting, the method was applied to the ReLU activation maps of the dense model. We see that a low dropout rate performs similarly to the dense model for small values of $\epsilon$, but its accuracy decreases quickly for higher values (Fig. 1(a)). We also trained ResNet models, similar to the dense model, but with different dropout rates; this time the models were trained with an initial learning rate reduced by a constant factor at three points during training. These models were tested against adversarial examples with and without dropout during validation (Figs. 1(b) and 1(c) respectively). The models perform similarly to the dense model, but do not provide additional robustness.
4.1.3 Adversarial training (ADV)
Adversarial training (Goodfellow et al., 2014) has emerged as a standard method for defending against adversarial examples. It has been adopted by Madry et al. (2017) and Tramèr et al. (2017a) to maintain high accuracy even for large values of $\epsilon$. We trained a ResNet model, similar to the dense model, with a learning rate halved periodically over training. It was trained on a dataset consisting of un-perturbed data and adversarially perturbed data generated on the model from the previous epoch. Note that the model capacity was not changed. When tested against adversarial examples, the accuracy dropped as $\epsilon$ grew. We ran SAP- on top of the ADV model (referred to as ADVSAP-). Against adversarial examples, the two models behave similarly for small values of $\epsilon$, but for the largest values ADVSAP- attains a higher absolute accuracy than ADV.
We compare the accuracy-versus-$\epsilon$ plots for the dense, SAP-, ADV, and ADVSAP- models, illustrated in Fig. 3. For smaller values of $\epsilon$, SAP- achieves high accuracy. As $\epsilon$ gets larger, ADVSAP- performs better than all the other models. We also compare the calibration plots for these models in Fig. 4. The dense model is not linear for any $\epsilon$. The other models are well calibrated (close to linear) and behave similarly to each other for small $\epsilon$. For higher values of $\epsilon$, ADVSAP- is the closest to a linearly calibrated model.
4.2 Adversarial attacks in deep reinforcement learning (RL)
Previously, Behzadan & Munir (2017), Huang et al. (2017), and Kos & Song (2017) have shown that reinforcement learning agents can also be easily manipulated by adversarial examples. The RL agent learns the long-term value $Q(s, a)$ of each state-action pair through interaction with an environment; given a state $s$, the optimal action is $\arg\max_{a} Q(s, a)$. A regression-based algorithm, the Deep Q-Network (DQN) (Mnih et al., 2015), and an improved variant, Double DQN (DDQN), have been proposed, with the popular Atari games (Bellemare et al., 2013) serving as benchmarks. We deploy the DDQN algorithm and train RL agents in a variety of different Atari game settings.
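For concreteness, the DDQN regression target, in which the online network selects the next action and the target network evaluates it, can be sketched per transition as follows; the function and argument names are illustrative, not from the paper.

```python
import numpy as np

def ddqn_target(reward, q_next_online, q_next_target, gamma, done):
    """Double-DQN target for one transition (s, a, r, s').

    The online network chooses the next action; the target network scores
    it. This decoupling reduces standard DQN's overestimation bias.
    """
    a_star = int(np.argmax(q_next_online))           # action picked by online net
    bootstrap = 0.0 if done else gamma * float(q_next_target[a_star])
    return reward + bootstrap
```

The agent regresses $Q(s, a)$ toward this target over minibatches drawn from the replay buffer.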
Similar to the image classification experiments, we tested SAP on a pretrained model (described in Appendix A) by applying the method to the ReLU activation maps; SAP- was used for these experiments. Table 1 reports the relative percentage increase in rewards of SAP- compared to the original model. For all the games, we observe a drop in performance in the no-perturbation case. But under perturbation, the relative increase in rewards is positive (except for one setting of the BattleZone game), and is very large in some cases, notably for the Bowling game.
4.3 Additional baselines
In addition to experimenting with SAP, dropout, and adversarial training, we conducted extensive experiments with other methods for introducing stochasticity into a neural network. These techniques included additive Gaussian noise on the weights (RNW), multiplicative Gaussian noise on the weights (RSW), and the corresponding additive (RNA) and multiplicative (RSA) noise on the activations. We describe each method in detail in Appendix B. Each of these models performs worse than the dense baseline at most levels of perturbation, and none matches the performance of SAP. Precisely why SAP works while other methods of introducing stochasticity do not remains an open question that we continue to explore in future work.
4.4 SAP attacks with varying numbers of MC samples
In the previous experiments, the SAP adversary used MC samples to estimate the gradient. Here we compare the performance of SAP- against various attacks: the standard attack calculated on the dense model, and attacks generated on SAP- by estimating the gradient with varying numbers of MC samples. We see that if the adversary uses the dense model to generate adversarial examples, the SAP- model's accuracy decreases. If the adversary instead uses the SAP- model, greater numbers of MC samples lower the accuracy further. Still, even with many MC samples, for low amounts of perturbation SAP- retains higher accuracy than the dense model.
Computing a single backward pass of the SAP- model over the evaluation examples takes seconds on GPUs; using large numbers of MC samples scales this cost into hours.
4.5 Iterative adversarial attack
A more sophisticated technique than FGSM for producing adversarial perturbations is to apply multiple smaller updates to the input in the direction of the local sign-gradient. This is done by taking small steps of size $\alpha$ in the direction of the sign-gradient at the updated point and repeating the procedure $k$ times (Kurakin et al., 2016):
$$x^{(t+1)} = \mathrm{clip}_{x,\epsilon}\Big( x^{(t)} + \alpha \, \mathrm{sign}\big( \nabla_x J(\theta, x^{(t)}, y) \big) \Big), \qquad x^{(0)} = x,$$
where $\mathrm{clip}_{x,\epsilon}$ is a projection into the $\ell_\infty$-ball of radius $\epsilon$ centered at $x$, and also into the hyper-cube of image space (each pixel is clipped to the valid pixel range). The dense and SAP- models are tested against this adversarial attack (Fig. 0(c)). The dense model's accuracy degrades quickly as $\epsilon$ grows, while the SAP- model retains higher accuracy, both against attacks computed on the SAP- model itself (with MC samples taken at each step to estimate the gradient) and against attacks computed on the dense model. Iterative attacks on SAP models are much more expensive to compute and noisier than iterative attacks on dense models. This is why the adversarial attack computed on the dense model results in lower accuracies on the SAP- model than the adversarial attack computed on the SAP- model itself.
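A minimal NumPy sketch of this iterative attack, on the same kind of toy logistic-regression model used earlier for FGSM; here the clip step projects into the $\epsilon$-ball only (no pixel hyper-cube), and all names are illustrative.

```python
import numpy as np

def iterative_fgsm(x, y, w, b, eps, alpha, num_steps):
    """k small sign-gradient steps of size alpha, each followed by a
    projection back into the eps-ball (in the infinity norm) around x."""
    x_adv = x.copy()
    for _ in range(num_steps):
        p = 1.0 / (1.0 + np.exp(-(np.dot(w, x_adv) + b)))
        grad = (p - y) * w                        # input-gradient of cross-entropy
        x_adv = x_adv + alpha * np.sign(grad)     # small sign-gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

rng = np.random.default_rng(0)
w = rng.normal(size=4)
x = rng.normal(size=4)
x_adv = iterative_fgsm(x, y=1, w=w, b=0.0, eps=0.1, alpha=0.03, num_steps=5)
```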
5 Related work
Robustness to adversarial attack has recently emerged as a serious topic in machine learning (Goodfellow et al., 2014; Kurakin et al., 2016; Papernot & McDaniel, 2016; Tramèr et al., 2017b; Fawzi et al., 2018). Goodfellow et al. (2014) introduced FGSM. Kurakin et al. (2016) proposed an iterative method in which FGSM is applied repeatedly with smaller step sizes, which leads to a better approximation of the loss surface. Papernot et al. (2017) observed that adversarial examples can transfer to other models. Madry et al. (2017) propose adding random noise to the image and then using FGSM to produce adversarial examples.
Work on robustness against adversarial examples has primarily focused on training on the adversarial examples themselves. Goodfellow et al. (2014) use FGSM to inject adversarial examples into their training dataset. Madry et al. (2017) use an iterative FGSM approach to create adversarial examples to train on. Tramèr et al. (2017a) introduced an ensemble adversarial training method that trains on adversarial examples created on the model itself and on an ensemble of other pre-trained models. These works have been successful, achieving only a small drop in accuracy from the clean data to the adversarially generated data. Nayebi & Ganguli (2017) propose a method that produces a smooth input-output mapping by using saturating activation functions and driving the activations into saturation.
The SAP approach guards networks against adversarial examples without requiring any additional training. We showed that in the adversarial setting, applying SAP to image classifiers improves both the accuracy and calibration. Notably, combining SAP with adversarial training yields additive benefits. Additional experiments show that SAP can also be effective against adversarial examples in reinforcement learning.
- Behzadan & Munir (2017) Vahid Behzadan and Arslan Munir. Vulnerability of deep reinforcement learning to policy induction attacks. arXiv preprint arXiv:1701.04143, 2017.
- Bellemare et al. (2013) Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research (JAIR), 47:253–279, 2013.
- Chen et al. (2015) Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.
- Fawzi et al. (2018) Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Analysis of classifiers’ robustness to adversarial perturbations. Machine Learning, 107(3):481–508, 2018.
- Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
- Guo et al. (2017) Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
- Han et al. (2015) Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
- Huang et al. (2017) Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284, 2017.
- Kos & Song (2017) Jernej Kos and Dawn Song. Delving into adversarial attacks on deep policies. arXiv preprint arXiv:1705.06452, 2017.
- Krizhevsky & Hinton (2009) Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
- Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
- Madry et al. (2017) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
- Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
- Nayebi & Ganguli (2017) Aran Nayebi and Surya Ganguli. Biologically inspired protection of deep networks from adversarial attacks. arXiv preprint arXiv:1703.09202, 2017.
- Osborne & Rubinstein (1994) Martin J Osborne and Ariel Rubinstein. A course in game theory. MIT press, 1994.
- Papernot & McDaniel (2016) Nicolas Papernot and Patrick McDaniel. On the effectiveness of defensive distillation. arXiv preprint arXiv:1607.05113, 2016.
- Papernot et al. (2017) Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519. ACM, 2017.
- Srivastava et al. (2014) Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 15(1):1929–1958, 2014.
- Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
- Tramèr et al. (2017a) Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017a.
- Tramèr et al. (2017b) Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453, 2017b.
Appendix A Reinforcement learning model architecture
For the experiments in Section 4.2, we trained the network with RMSProp, using the minibatch size, learning rate, and momentum of Mnih et al. (2015), along with the same discount factor and number of steps between target updates. We updated the network periodically by randomly sampling a minibatch from the replay buffer, and trained the agents for a fixed total number of steps per game. The experience replay buffer contains the most recent transitions. For training we used an $\epsilon$-greedy policy, with $\epsilon$ annealed linearly from its initial to its final value over the first portion of training and fixed thereafter.
The input to the network is a tensor containing rescaled, gray-scale versions of the last four observations. The first convolutional layer has 32 filters; two further convolutional layers follow, and are in turn followed by two fully connected layers and a final fully connected layer that outputs the Q-value of each action. A ReLU rectifier provides the nonlinearity at each layer.
Appendix B Other methods
We tried a variety of different methods that can be added to pretrained models and tested their performance against adversarial examples. The following is a continuation of Section 4.1, using the same dense model and dataset.
B.1 Random noisy weights (RNW)
One simple way of introducing stochasticity is to add random Gaussian noise to each weight, with mean $0$ and a constant standard deviation $\sigma$. Each weight tensor $W$ then changes to $W'$, where the $(i,j)$'th entry is given by
$$W'_{i,j} = W_{i,j} + \eta_{i,j}, \qquad \eta_{i,j} \sim \mathcal{N}(0, \sigma^2).$$
These models behave very similarly to the dense model (Fig. 4(a); the legend indicates the value of $\sigma$). While we tested several different values of $\sigma$, we did not observe any significant improvement in robustness against adversarial examples. As $\sigma$ increased, the accuracy for non-zero $\epsilon$ decreased.
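As a sketch, the RNW perturbation applied to a set of weight tensors; the dict-of-arrays representation of the network's weights is a hypothetical stand-in for the actual model format.

```python
import numpy as np

def add_weight_noise(weights, sigma, rng):
    """RNW baseline: perturb every weight with zero-mean Gaussian noise.

    `weights` maps layer names to arrays. Returns a noisy copy so the
    pretrained weights themselves are left untouched; each forward pass
    of the stochastic model would draw a fresh copy.
    """
    return {name: w + rng.normal(0.0, sigma, size=w.shape)
            for name, w in weights.items()}
```

The multiplicative RSW variant would instead scale each weight by a draw from $\mathcal{N}(1, \sigma^2)$, preserving the weights in expectation.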
B.2 Randomly scaled weights (RSW)
Instead of using additive noise, we also try multiplicative noise. The scale factor is picked from a Gaussian distribution with mean $1$ and constant standard deviation $\sigma$. Each weight tensor $W$ then changes to $W'$, where the $(i,j)$'th entry is given by
$$W'_{i,j} = \eta_{i,j} \cdot W_{i,j}, \qquad \eta_{i,j} \sim \mathcal{N}(1, \sigma^2).$$
These models perform similarly to the dense model, but again, no robustness is gained against adversarial examples. They follow a similar trend to the RNW models (Fig. 4(b); the legend indicates the value of $\sigma$).
B.3 Deterministic weight pruning (DWP)
Motivated by preventing perturbations from propagating forward through the network, we tested deterministic weight pruning, in which the top entries of each weight matrix, ranked by absolute value, are kept while the rest are pruned to $0$. This method was prompted by the success of the pruning method introduced by Han et al. (2015), in which the model is also fine-tuned.
For low levels of pruning, these models perform very similarly to the dense model, even against adversarial examples (Fig. 4(c); the legend indicates the pruning level). The adversary can compute the gradient of the sparse model, and the perturbations propagate forward through the surviving weights. For higher levels of sparsity, the accuracy in the no-perturbation case drops quickly.
B.4 Stochastic weight pruning (SWP)
Observing the failure of deterministic weight pruning, we tested a mix of stochasticity and pruning: the stochastic weight pruning method. Very similar in spirit to SAP, we treat the entries of a weight tensor as a multinomial distribution and sample from it with replacement. For a weight tensor $W$, we sample from it $r$ times with replacement; the probability of sampling $W_{i,j}$ is given by
$$p_{i,j} = \frac{|W_{i,j}|}{\sum_{k,l} |W_{k,l}|}.$$
The new weight entry $W'_{i,j}$ is given by
$$W'_{i,j} = W_{i,j} \cdot \frac{\mathbb{I}(W_{i,j})}{1 - (1 - p_{i,j})^{r}},$$
where $\mathbb{I}(W_{i,j})$ is the indicator function that returns $1$ if $W_{i,j}$ was sampled at least once, and $0$ otherwise.
For these experiments, the number of samples drawn for each weight matrix was a fixed percentage of its number of entries. Since samples are drawn with replacement, this percentage can exceed 100%. We refer to it as the percentage of samples drawn.
These models behave very similarly to the dense model. We tried drawing a range of percentages of samples, but no evident robustness against adversarial examples was observed (Fig. 4(d); the legend indicates the percentage of samples drawn). For small values the behavior is very similar to the dense model; these models do marginally better for low non-zero $\epsilon$, and accuracy then drops again (similar to the SAP case).
B.5 Random noisy activations (RNA)
Next we turn our attention to the activation maps of the dense model. One simple way of introducing stochasticity to the activations is to add random Gaussian noise to each activation entry, with mean $0$ and constant standard deviation $\sigma$. Each activation map $a$ then changes to $a'$, where the $j$'th entry is given by
$$a'_j = a_j + \eta_j, \qquad \eta_j \sim \mathcal{N}(0, \sigma^2).$$
These models too offer no robustness against adversarial examples; their accuracy drops quickly as $\sigma$ and $\epsilon$ grow (Fig. 4(e); the legend indicates the value of $\sigma$).
B.6 Randomly scaled activations (RSA)
Instead of additive noise, we can also make the model stochastic by scaling the activations. The scale factor is picked from a Gaussian distribution with mean $1$ and constant standard deviation $\sigma$. Each activation map $a$ then changes to $a'$, where the $j$'th entry is given by
$$a'_j = \eta_j \cdot a_j, \qquad \eta_j \sim \mathcal{N}(1, \sigma^2).$$
These models perform similarly to the dense model, exhibiting no additional robustness against adversarial examples (Fig. 4(f); the legend indicates the value of $\sigma$).