Second generation neural networks have been empirically successful in solving a plethora of tasks. Different variants of Artificial Neural Networks (ANNs) have been used in applications such as authentication services, anomaly detection in cyber-physical systems, image recognition, and playing the game of Go.
However, in 2013, the first research showing that ANNs are vulnerable to adversarial attacks was published: specially perturbed samples can be crafted from their original counterparts which, while imperceptibly different upon visual inspection, are misclassified by ANNs. Since then, many researchers have introduced further adversarial attack methods against such ANN models, under both white-box [12, 29, 23, 31, 5] and black-box [3, 34] scenarios. This raises questions about the reliability of ANNs, which is a cause for concern especially when they are used in cyber-security or mission-critical contexts [46, 41].
Recently, a third generation of neural networks from the field of computational neuroscience, namely Spiking Neural Networks (SNNs), has been investigated as a means to model the biological properties of the human brain more closely than their second generation ANN counterparts. In contrast to ANNs, SNNs train on spike trains rather than image pixels or a set of predefined features. Different variants of SNNs exist, differing in the learning rule used (standard backpropagation [26, 42, 16] or Spike-Timing-Dependent Plasticity (STDP) [8, 19, 17]) and in the architecture. In this work, we focus on the STDP-based learning variant of SNNs.
Stochastic ANNs have also been used to perform image classification tasks. In this work, we focus on two sub-categories of such stochastic ANNs: one in which both the hidden weights and activations are binary, and another which only requires the hidden activations to be binary [1, 39, 48].
Since there is strong evidence showcasing the weakness of ANNs to adversarial attacks, we question whether there exist alternative variants of neural networks that are inherently less susceptible to this phenomenon. In this work, we turn our attention to analysing the resilience of both SNN and stochastic ANN variants against adversarial attacks.
A preliminary study investigated the adversarial robustness of two variants of SNNs that employ gradient backpropagation during training, namely ANN-to-SNN conversion and spike-based training. The authors examined the robustness of these SNNs, as well as a VGG-9 model, in white-box and black-box settings. They concluded that SNNs trained directly on spike trains are more robust to adversarial attacks than SNNs converted from their ANN counterparts. However, in their experiments, the attacks were performed on intermediate spike representations of images, obtained by passing images through a Poisson spike generation phase followed by rate computation. Though their work shows preliminary results on the robustness of SNNs, we find that this simplified approach of constructing adversarial samples yields unrelatable deviations between the natural samples and their adversarial counterparts in the image space. Also, they investigated only variants of SNNs trained via backpropagation. We address both points in our work by focusing on STDP-based learning SNNs and by constructing adversarial samples in the input space. To the best of our knowledge, there is no prior work examining the adversarial robustness of networks employing BSNs, though there exist works that explored adversarial attacks against Binarized Neural Networks (BNNs) [9, 18]. In one such work, the authors performed two white-box attacks and a black-box attack (the Fast Gradient Sign Method (FGSM), CWL2, and a transferability attack via a substitute model) and showed that stochasticity in binary models does improve robustness against attacks.
In this work, we examined two very recent works in the field of SNNs: the Multi-Class Synaptic Efficacy Function-based leaky-integrate-and-fire neuRON (MCSEFRON) model and the Reward-modulated STDP deep convolutional network. For the remainder of this paper, we refer to the latter model simply as SNN for notational simplicity. For our stochastic ANN variants, we used Binary Stochastic Neurons (BSNs) to give our models binarized activations in a stochastic manner. We also used Binarized Neural Networks (BNNs), which binarize both weights and activations, as our second variant of stochastic ANN. We used the vanilla ResNet18 model as a bridge across the different variants of neural networks.
The contributions of this work are as follows:
We analyse to what extent conventional adversarial attacks (white-box and black-box) can be performed in the original image space against SNNs with different information encoding schemes. This is of interest as it includes networks not trained with backpropagation.
We shed light on the effectiveness of adversarial attacks against stochastic neural network models. In order to provide a reasonable comparison across the models, we employed the vanilla ResNet18 CNN as a baseline.
We propose an augmented version of a state-of-the-art white-box attack, CWL2, and analyse the robustness of the different network variants to samples generated via such attacks.
We investigate how susceptible alternative variants of neural networks are to adversarial samples constructed from ResNet18 and transferred across architectures.
As a last novel contribution, we measure the effectiveness of attacks against stochastic mixtures of different architectures. Given the availability of different variants of neural networks, a stochastic mixture of them is a conceivable defence mechanism which does not rely on detecting adversarial samples.
The remainder of our paper is organised as follows. We start with details of the attacks we used and provide a brief introduction to SNNs and stochastic ANNs in Section II. In Section III, we discuss our experimental setup and our findings. This is followed by a discussion of attacking stochastic architecture mixtures in Section IV, should such a defence mechanism be employed. In Section V, we provide some discussion points with regard to stochastic ANNs. Finally, we conclude our work in Section VI.
II-A Adversarial Attacks Against Neural Networks
The concept of adversarial examples was first introduced in 2013. The authors demonstrated that misclassification by ANNs was possible by adding a set of specially crafted perturbations to an image, imperceptible upon visual inspection. Following their work, several other researchers explored various methods to launch adversarial attacks in an attempt to further evaluate the robustness of ANNs. One such method is the FGSM, which uses the sign of the gradients of the loss with respect to the input to perform a single-step perturbation of the input itself, adopting the same loss function that was used to train the image classifier. Several studies [29, 23] extended this technique by applying the algorithm to the input sample for multiple iterations to construct a stronger adversarial sample. Currently, however, the Carlini & Wagner (CW) attack is the state-of-the-art white-box adversarial attack method, capable of producing misclassified yet visually imperceptible images, and it renders defensive distillation ineffective against adversarial attacks.
The methods described above, and many other methods proposed by the scientific community [31, 36, 30, 28], pertain to attacks in a white-box setting, in which the attacker is assumed to have full knowledge of and access to the ANN image classifier. However, several researchers [3, 34] have shown that it is also possible to attack a model without any knowledge of the targeted model (i.e. black-box attacks). In one approach, the authors used the decision made by the targeted image classifier to perturb the input sample. In another, the authors made use of the transferability of adversarial samples across neural networks to attack the victim classifier. Their method is a two-step process: first, the decision boundary of the targeted classifier is approximated by training a surrogate model, converting the black-box problem into a white-box one; next, the surrogate model is attacked in a white-box fashion and the resultant adversarial samples are launched against the targeted classifier. In the next section, we describe the attacks we used in our work, covering both the white-box and black-box categories.
II-B Attack Algorithms Used
To attack the model in a black-box setting, we used a decision-based method known as Boundary Attack. This approach initialises itself by generating a starting sample that is labelled as adversarial by the victim classifier. Following this, the sample takes random walks along the decision boundary that separates the correct and incorrect classification regions. A step is only considered valid if it fulfils two constraints: i) the resultant sample remains adversarial, and ii) the distance between the resultant sample and the target is reduced. Essentially, this approach performs rejection sampling so that it finds smaller valid adversarial perturbations across the iterations.
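As a minimal NumPy sketch, the rejection-sampling loop described above might look like the following. Note that this is a simplification under stated assumptions: `predict` and the fixed `step` size are hypothetical stand-ins, whereas the original method uses an adaptive step-size schedule.

```python
import numpy as np

def boundary_attack(predict, x_target, x_adv_init, n_steps=200, step=0.1, rng=None):
    """Minimal sketch of a decision-based Boundary Attack.

    `predict` returns a class label; `x_adv_init` must already be
    adversarial. Simplification: a fixed step size instead of the
    adaptive schedule of the original method.
    """
    rng = rng or np.random.default_rng(0)
    target_label = predict(x_target)
    x_adv = x_adv_init.copy()
    for _ in range(n_steps):
        # Random orthogonal-ish perturbation, then pull towards the target.
        candidate = x_adv + step * rng.standard_normal(x_adv.shape)
        candidate = candidate + step * (x_target - candidate)
        # Rejection sampling: keep the candidate only if it (i) stays
        # adversarial and (ii) is closer to the target than before.
        if (predict(candidate) != target_label
                and np.linalg.norm(candidate - x_target)
                    < np.linalg.norm(x_adv - x_target)):
            x_adv = candidate
    return x_adv
```

By construction, the returned sample is still adversarial and no further from the target than the initial adversarial sample.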
We used the Basic Iterative Method (BIM) as one of the means to perform white-box attacks. This method is essentially an iterative form of the FGSM attack:

x^{adv}_{0} = x, \quad x^{adv}_{n+1} = \mathrm{Clip}_{x,\epsilon}\left( x^{adv}_{n} + \alpha \cdot \mathrm{sign}\left( \nabla_x J(x^{adv}_{n}, y) \right) \right) \quad (1)

where \nabla_x J(x^{adv}_{n}, y) represents the gradients of the loss calculated with respect to the input space for the sample and its original label y, and n indexes the iterations. This approach takes the sign of the gradients, multiplies it with a scaling factor \alpha, and adds this perturbation to the sample at the n-th iteration, clipping the result to remain within an \epsilon-neighbourhood of the original input x.
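The iterative procedure above can be sketched as follows. This is an illustrative NumPy version; `grad_loss` is an assumed helper that returns the gradient of the classifier's loss with respect to the input, which in the paper's setting would come from a framework such as PyTorch via autograd.

```python
import numpy as np

def bim_attack(x, y, grad_loss, eps=0.1, alpha=0.05, n_iter=100):
    """Sketch of the Basic Iterative Method (BIM) described above."""
    x_adv = x.copy()
    for _ in range(n_iter):
        # Single FGSM step: move along the sign of the gradient.
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, y))
        # Project back into the eps-ball around the original input
        # and into the valid pixel range [0, 1].
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

With a constant gradient, the perturbation saturates at the epsilon bound, matching the clipping in Equation (1).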
The CW attack is a targeted attack strategy in which an objective function is optimised to yield an imperceptibly perturbed image that is labelled as an adversarial class by the targeted image classifier. This image is then used to cause misclassification. More specifically, the adversary solves the following objective function:

\min_{\delta} \; \|\delta\|_2^2 + c \cdot f(x + \delta) \quad (2)

where the first term minimises the L2 norm of the perturbation \delta, the second term f ensures misclassification, and c is a constant balancing the two terms. This attack method is considered state-of-the-art and can still be used to bypass several detection mechanisms.
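A sketch of evaluating the CWL2 objective described above, using the misclassification term commonly chosen in the CW attack, f(x') = max(max_{i != t} Z(x')_i - Z(x')_t, -kappa), which is non-positive exactly when the target class t wins. The helper name `logits_fn` is an assumption for illustration.

```python
import numpy as np

def cw_objective(delta, x, logits_fn, target, c=1.0, kappa=0.0):
    """Evaluate the CWL2 objective (targeted form) for a candidate delta."""
    x_adv = x + delta
    z = logits_fn(x_adv)
    other = np.max(np.delete(z, target))   # best non-target logit
    f = max(other - z[target], -kappa)     # misclassification term
    return np.sum(delta ** 2) + c * f      # squared L2 norm + c * f
```

In the actual attack, this objective is minimised over delta with a gradient-based optimiser.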
II-C Spiking Neural Networks
MCSEFRON is a two-layered SNN with time-dependent weights connecting the neurons. It adopts the STDP learning rule and trains based on the differences between the actual and desired post-synaptic spike times. It encodes images into spike trains by projecting the real-valued normalised image pixels (in [0, 1]) onto multiple overlapping receptive fields (RFs) represented by Gaussians. After training, it makes decisions based on the earliest post-synaptic spike while ignoring the rest.
The Reward-modulated STDP (R-STDP) deep convolutional network, referred to as SNN, makes use of three convolution layers, the first two trained in an unsupervised manner via STDP and the last trained via R-STDP. The input images are first preprocessed by six Difference of Gaussian (DoG) filters and then encoded into spike trains by the intensity-to-latency scheme. The SNN does not require any external classifier, since a neuron-based decision-making layer trained via R-STDP forms the final convolution layer. R-STDP is based on reinforcement learning concepts: correct decisions lead to STDP, while incorrect decisions lead to anti-STDP.
III Experiments and Results
We evaluated our models on the MNIST, CIFAR-10 and PatchCamelyon datasets, the last of which we refer to as PCam. The libraries we used in our experiments are PyTorch and SpykeTorch for constructing our image classifiers. For the attacks, we used the Foolbox library at version 1.8.0.
III-A Image Classification Baseline
In this work, we explored eight different variants and architectures of neural networks: ResNet18, MCSEFRON, SNN, three BSN architectures, and two BNN architectures. The BSN architectures used are a 2-layered Multilayer Perceptron, a 4-layered Multilayer Perceptron, and a modified LeNet, which we refer to as BSN-2, BSN-4 and BSN-L respectively. For the BNNs, we explored both deterministic and stochastic binarization strategies, referred to as BNN-D and BNN-S respectively.
III-A1 Training the Classifiers
For the ANN, we used the ResNet18 from PyTorch's torchvision. We refer the reader to our supplementary materials for details of the hyperparameters used for this model and for the other variants discussed in the paragraphs below.
For MCSEFRON, we used five receptive fields (RFs) and a learning rate of 0.1 for MNIST, and three RFs and a learning rate of 0.5 for CIFAR-10. The other hyperparameters were kept at their default values. We used the authors' Python implementation of MCSEFRON111https://github.com/nagadarshan-n/MC-SEFRON for training. In training MCSEFRON, we sub-sampled the training data: we used the first batch of training data of CIFAR-10 and the first 30,000 samples of PCam.
As mentioned in Section II, the input images of SNN are preprocessed by the DoG filters. The number of DoG filters determines the input channels of the first convolution layer in SNN. Hence, for a three-channelled image (e.g. CIFAR-10), we first take the mean over the channels to convert the image to a single channel before passing it to the DoG filters. Unfortunately, for this model, we could not find a suitable set of hyperparameters that performs reasonably on the PCam dataset. While training, we noticed that the outputs of the network were consistently the same, regardless of the number of training iterations. Hence, we could not report the Adversarial Success Rates (ASRs) and their respective norms for the attacks against SNN on the PCam dataset.
For the BSNs, we used a batch size of 128 and the Adam optimizer for the BSN-4 and BSN-L variants, and Stochastic Gradient Descent (SGD) for BSN-2. The other hyperparameters we used can be found in the supplementary material. We adapted the code from this GitHub repository222https://github.com/Wizaron/binary-stochastic-neurons, with the PyTorch network definition of the BSN-L architecture modified to replace all intermediate activations with BSN modules.
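For intuition, a Binary Stochastic Neuron's forward pass can be sketched as below in NumPy; this is an illustrative simplification, not the repository's implementation. During backpropagation, the straight-through estimator treats the sampling step as the identity so gradients can flow.

```python
import numpy as np

def bsn_forward(pre_activation, rng):
    """Sketch of a Binary Stochastic Neuron (BSN) forward pass.

    The activation is sampled as 1 with probability sigmoid(a) and 0
    otherwise, so the same input can yield different binary activations
    across calls.
    """
    a = np.clip(pre_activation, -60.0, 60.0)  # clip for numerical stability
    p = 1.0 / (1.0 + np.exp(-a))              # sigmoid
    return (rng.random(p.shape) < p).astype(np.float64)
```

The stochastic sampling is what makes the model's predictions, and hence its decision boundary, vary between calls for the same input.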
For the BNNs, we used the same hyperparameters across the various datasets and models and adapted the code from this GitHub repository333https://github.com/itayhubara/BinaryNet.pytorch, originally used by the authors of BinaryNet. We used a learning rate of 0.005 and a weight decay of 0.0001 with a batch size of 256. We used the Adam optimiser to train our models for 20 epochs on MNIST, 150 epochs on CIFAR-10 and 50 epochs on PCam. Following the original authors, we manually set the learning rate to 0.001 at epoch 101 and 0.0005 at epoch 142. For BNN-D and BNN-S, we used the ResNet18 architecture as the structure of the network, with the binarization of weights and activations occurring only in the forward pass.
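The two binarization strategies can be sketched as follows. This is an illustrative NumPy version under common BinaryNet conventions (sign for the deterministic mode, a hard-sigmoid sampling probability for the stochastic mode), not the repository's exact code.

```python
import numpy as np

def binarize(w, stochastic=False, rng=None):
    """Sketch of weight/activation binarization to {-1, +1}.

    Deterministic mode (BNN-D) uses sign(w); stochastic mode (BNN-S)
    samples +1 with probability hard-sigmoid(w) = clip((w + 1) / 2, 0, 1).
    Applied only in the forward pass; real-valued weights are kept for
    the parameter update.
    """
    if not stochastic:
        return np.where(w >= 0, 1.0, -1.0)
    p = np.clip((w + 1.0) / 2.0, 0.0, 1.0)  # hard sigmoid
    rng = rng or np.random.default_rng(0)
    return np.where(rng.random(w.shape) < p, 1.0, -1.0)
```

Keeping the real-valued weights for the update is what allows gradient descent to train a network whose forward pass is binary.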
III-A2 Baseline Classification Performance
The baseline image classification performances are summarised in Table I. These results are evidently not state-of-the-art; however, obtaining optimal performance is not the focus of this work. Having said that, we would like to highlight the accuracy obtained for MCSEFRON on the CIFAR-10 dataset. We hypothesise that the significantly poorer performance is due to the inherent architecture of the model: as MCSEFRON can be considered a single-layered neural network without any convolution layers, its performance is highly limited on more complex image datasets such as CIFAR-10. A prior work that studied the performance limitations of models without convolutions obtained an accuracy of only approximately 52% to 57% on CIFAR-10 using a deeper and denser fully-connected neural network (see Figure 4(a) therein).
III-B Modifying the SNN Implementation for Adversarial Attacks
As SNNs are inherently very different from conventional ANNs, the original implementations of the SNNs need to be adapted for our purposes. We made two modifications. First, because non-differentiable operations (i.e. the sign function) may be performed, we replaced the built-in sign functions with a custom sign function which performs the same operation in the forward pass but passes gradients through unchanged in the backward pass (straight-through). This ensures that the gradients are non-zero everywhere. Since we examined SNNs trained via STDP, this change does not violate the learning rule of the SNNs. Furthermore, as we are only interested in the behaviour of such models when faced with adversarial samples, our adaptation extracts only the critical parts of the network (i.e. the decision-making forward pass).
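The two halves of this custom sign function can be sketched in framework-agnostic form as below; in a PyTorch adaptation, the pair would correspond to a custom autograd function with these forward and backward behaviours.

```python
import numpy as np

def sign_forward(x):
    """Forward pass: the usual non-differentiable sign."""
    return np.where(x >= 0, 1.0, -1.0)

def sign_backward_straight_through(grad_output):
    """Backward pass of the straight-through estimator.

    Gradients are passed through unchanged, so they are non-zero
    everywhere, whereas the true derivative of sign is zero almost
    everywhere.
    """
    return grad_output
```

This substitution changes only how gradients are computed for the attack, not the network's predictions.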
Secondly, as SNNs make decisions based on either the earliest spike times or the maximum internal potentials, their output is commonly a single integer depicting the predicted class. However, the attacks require the logits of a network. Hence, we simulated logits by using the post-synaptic spike times of all classes for MCSEFRON and the potentials of all classes for SNN. When spike times were used, we took their negative so that the maximum of the vector of spike times corresponds to the actual prediction.
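The spike-time case above amounts to a one-line conversion, sketched here:

```python
import numpy as np

def spike_times_to_logits(spike_times):
    """Convert per-class post-synaptic spike times into simulated logits.

    The earliest spike wins in MCSEFRON, so negating the spike times
    makes the argmax of the 'logits' agree with the network's actual
    prediction.
    """
    return -np.asarray(spike_times, dtype=np.float64)
```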
III-C White-box Attacks Against Neural Networks
We report the proportion of adversarial samples that successfully cause misclassification, termed the Adversarial Success Rate (ASR; in the range [0, 1]). Furthermore, we report the mean L2 norm per pixel of the differences between natural images and their adversarial counterparts, derived by dividing the L2 norm by the total number of pixels in the image. In our experiments, we sub-sampled 500 samples from the test set of each dataset for the evaluation of the BIM attack and 100 samples for the evaluation of the other attacks, due to the computational intractability of performing the attacks on the entire dataset. Note that we only selected samples that were originally classified correctly.
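The two reported metrics can be computed as in the following sketch; `predict` is an assumed classifier helper.

```python
import numpy as np

def evaluate_attack(x_nat, x_adv, y_true, predict):
    """Compute the Adversarial Success Rate and the mean L2 norm
    per pixel between natural/adversarial image pairs.

    Assumes all samples in `x_nat` were originally classified
    correctly, as in our sub-sampling procedure.
    """
    preds = np.array([predict(x) for x in x_adv])
    asr = np.mean(preds != y_true)  # ASR in [0, 1]
    n_pixels = x_nat[0].size
    l2_per_pixel = np.linalg.norm(
        (x_adv - x_nat).reshape(len(x_nat), -1), axis=1) / n_pixels
    return asr, l2_per_pixel.mean()
```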
III-C1 Basic Iterative Method (BIM)
For the BIM attack, we varied the attack strength (symbolised by \epsilon, measured in the L-infinity space) while keeping the step size and number of iterations fixed at 0.05 and 100 respectively. We explored several values of \epsilon in our experiments; we show the results for one of them here, while the rest can be found in our supplementary materials.
For an initial sanity check, one may inspect Figure 1. The BIM attack has one parameter, the attack strength \epsilon. One can observe an intuitively reasonable trade-off between the adversarial success rate (ASR) and the norm of the distance of the adversarial samples to the original inputs as \epsilon varies.
Two notable observations can be made about BIM from Tables II(a) and II(b). Firstly, when comparing the vulnerability of different networks against BIM, spiking neural networks, with the exception of MCSEFRON on CIFAR-10, tend to be the most robust. Secondly, when comparing attacks for a given architecture, BIM yields the highest ASR on binarized stochastic networks of all attacks; however, this is achieved at the cost of L2-norms which are multiples of those of all other methods.
III-C2 Carlini & Wagner L2 (CWL2)
For the CWL2 attack, we used the default attack parameters as specified in Foolbox. As exemplified by the results for ResNet18 in Table II(a), the CWL2 attack is an extremely powerful attack that manages to fool the model almost all of the time. However, this attack is not very effective against stochastic ANNs: as shown in Table II(a), the stochastic ANNs reach only a maximum ASR of 0.402 (BSN-2 on CIFAR-10) and a minimum ASR of 0.01 (BSN-L on MNIST). Although this attack method is state-of-the-art in generating successful adversarial samples with the least perturbation, its efficacy drops significantly when faced with such model variants.
III-D Black-box Attacks Against Neural Networks
III-D1 Boundary Attack
The results in Table II(a) show that the effectiveness of the attack does not differ greatly among the susceptible models, and likewise among the less susceptible models. Interestingly, the Boundary attack performs exceptionally well in terms of ASR against deterministic models, i.e. ResNet18, the SNNs and BNN-D. For the stochastic ANNs, in contrast, this attack method is much less efficient in finding adversarial samples; it even failed to find any for BNN-S on the PCam dataset. This observation indicates that the attack does not depend greatly on the architecture of the model but rather on its deterministic or stochastic nature.
In the case of deterministic models, the decision boundary remains stable after training, since the weights and activations are fixed for the same input sample. For stochastic ANNs, on the other hand, the weights and activations vary based on a probability distribution, resulting in slightly varied predictions for the same sample at different times. A stochastic decision boundary compromises the ability to obtain accurate feedback for the traversal of adversarial sample candidates, which explains the poor performance of this attack.
III-E Augmented Carlini & Wagner L2 Attack Against Neural Networks
Given the relatively poor ASR obtained by the CWL2 and Boundary attacks against stochastic ANNs, we ask whether a potential attacker may utilise randomness, by augmenting input samples during the attack procedure, to create attacks whose resultant samples lie further away from the decision boundary and are thus able to mislead stochastic ANNs. Recall that the CWL2 attack involves solving the objective function defined in Equation 2. We modify this function to include an additional term that performs random augmentations on the input image, both rotations and translations, and then optimise it. Equation 3 formulates our modified attack, ModCWL2:

\min_{\delta} \; \|\delta\|_2^2 + c \cdot \sum_{i=1}^{k} f\left( T_i(x + \delta) \right) \quad (3)

where k is the number of iterations of random transformations, symbolised by T_i, performed on the input sample. Our transformation function first applies a random rotation followed by a random translation. We sampled the rotation angles uniformly from a fixed allowable range of degrees clockwise and counterclockwise, and selected at random the translation direction and the number of pixels (an integer from 0 to 10) to be applied to the image.
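The translation part of the random transformation T can be sketched as below. This is a dependency-free NumPy illustration; the rotation step is omitted here since it requires an interpolating image library (e.g. torchvision's rotate).

```python
import numpy as np

def random_translate(img, max_shift=10, rng=None):
    """Sketch of the random translation in ModCWL2's transformation T:
    direction and shift (integer pixels, 0..10) are drawn at random."""
    rng = rng or np.random.default_rng(0)
    shift = int(rng.integers(0, max_shift + 1))
    axis = int(rng.integers(0, 2))        # shift rows or columns
    direction = int(rng.choice([-1, 1]))  # up/left vs down/right
    return np.roll(img, direction * shift, axis=axis)
```

Each call to the transformation produces a different augmented view of the same candidate adversarial image, which is what turns a single sample into a cluster during optimisation.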
This modification induces a trade-off between the resultant norms and the ASR. One can understand it in the following way: performing k random transformations turns a single sample into a cluster of samples. Moving the cluster as a whole over the decision boundary requires a larger step than moving a single sample, depending on the radius of the cluster.
The boxplot of the ASRs and norms of the CWL2 and ModCWL2 attacks shows that ModCWL2 is more consistent than CWL2 in achieving a high ASR, based on the lower Inter-Quartile Range (IQR) and the much higher median and mean of ModCWL2 across the targeted models. More specifically, the IQRs of the ASR for CWL2 and ModCWL2 are 0.82 and 0.772 respectively, and ModCWL2 has a higher median ASR of 0.528 compared to 0.28 for CWL2. However, the difference between the original and adversarial samples is clearly greater and more varied for ModCWL2: the IQRs of the norms for CWL2 and ModCWL2 are 0.0773 and 0.514 respectively, and ModCWL2 has a slightly higher median norm of 0.0246 compared to 0.0174 for CWL2.
In total, out of the 12 configurations of stochastic networks in Table II(a), ModCWL2 performs better than CWL2 in terms of ASR in 9 configurations and worse in 3.
III-F Transferability of Adversarial Samples
In this section, we discuss the transferability of adversarial samples derived from the vanilla ResNet18 to other architectures. This is a plausible scenario: the attacker chooses a CNN (i.e. ResNet18) as the source for adversarial attacks, since it is the most commonly used neural network variant, generates adversarial samples from it, and launches them against the actual target model, which is based on a different architecture. We evaluate this transferability phenomenon on the MNIST dataset, as its baseline classification models achieved the lowest test error rates and it is the one dataset applicable across all models. We chose a subset of network variants instead of the full range of models in this set of experiments, omitting repetitive variants and variants already highly susceptible to the standard attacks.
We draw the following observations based on Table III. Firstly, we observe highest transferability rates for MCSEFRON and, in particular, for BNN-S. For the latter, one may postulate that it is due to the similar base architectures between BNN-S and ResNet18 as BNN-S uses ResNet18 as a structure while replacing components with binarized and stochastic counterparts.
Secondly, for SNN and BSN-L model variants and attack types not including ModCWL2, the success rate is low, thereby showing a certain robustness of SNN and BSN-L against direct transfer attacks. We consider it an important contribution of our study, demanding further investigation.
A third observation is that ModCWL2 performs well across all architectures when compared to the other attacks. This result shows another strength of ModCWL2. Only with BNN-S, it is clearly outperformed by BIM.
IV Attacking Stochastic Architecture Mixtures
In the previous sections, we observed that several network architectures appear to be moderately robust against transferability attacks. Inspired by this, a defender could employ stochastic switching over a mixture of neural networks with differing architectures to circumvent adversarial attack attempts: at inference time, the defender chooses a neural network at random to evaluate the input sample. This is a special case of drawing a distribution over networks from e.g. a Dirichlet prior. We explore three selected ensemble combinations: 1) ResNet18 with BSN-L, 2) ResNet18 with BNN-S, and 3) ResNet18 with BSN-L and BNN-S, and investigate the ASR of attacks against such ensembles. In our experiments, we applied the BIM attack due to its good performance against stochastic networks, using the mean of the gradients with respect to the input across the ensemble of models, with a fixed attack strength. This setup is inspired by prior work on attacking ensembles; while that work considered ensembles of CNNs, we explore a stochastic mixture of differing architectures.
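A single step of this ensemble attack can be sketched as follows; each entry of `grad_fns` is an assumed helper returning one member model's input gradient.

```python
import numpy as np

def ensemble_bim_step(x, y, grad_fns, alpha=0.05):
    """One BIM step against an architecture mixture: input gradients
    are averaged across the ensemble before taking the sign."""
    mean_grad = np.mean([g(x, y) for g in grad_fns], axis=0)
    return x + alpha * np.sign(mean_grad)
```

Averaging before the sign means a perturbation direction is only taken when it helps the ensemble as a whole, rather than any single member.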
These results permit an estimate of the ASR of a transferability attack against a stochastic mixture: for a mixture of ResNet18 and BSN-L, for example, the expected ASR is the corresponding average of the per-model transfer ASRs. Surprisingly, we can see that directly attacking stochastic mixtures seems to perform poorly, at least for MNIST; for BSN-L, transferability would result in a much better ASR than that observed in Table III. This raises the question of whether such robustness of stochastic mixtures also holds for other datasets and larger neural networks, or whether more efficient attacks can be designed against stochastic mixtures.
One notable observation is that stochastic networks are almost as vulnerable as CNNs when BIM is used with sufficient strength. BIM is the simplest of all the attacks considered. Its advantage against stochastic networks is that it does not attempt to stay close to the decision boundary, as explicitly enforced in the Boundary attack and implicitly enforced in the CWL2 attack, whose regulariser term keeps the adversarial sample close to the initial sample. For stochastic networks, the decision boundary is defined only in an expected sense, and staying close to the expected decision boundary results in a higher failure rate of adversarial samples. The simplicity of BIM allows it to take larger steps across the expected decision boundary.
Another observation is that transferability across architectures is limited, which calls for further investigation of non-averaged combination of different architectures.
We performed adversarial attacks on a wide variety of models (e.g. SNNs, BSNs, BNNs) across different datasets, namely MNIST, CIFAR-10 and PCam, in the raw input image space, with the goal of investigating the adversarial robustness of alternative variants of neural networks. We note that there exist alternative variants of neural networks (i.e. stochastic ANNs) that are vulnerable to the simple BIM yet more robust than conventional ANNs against more elaborate adversarial attacks. It is a partially positive result that stochastic networks are more robust against elaborate attacks; unfortunately, detecting a stochastic network by its outputs is trivial.
Given the above, we were motivated to modify the state-of-the-art CWL2 attack in order to investigate the robustness of such models against this modified attack. We found that our modification does increase the ASR against such model variants substantially, though it incurs higher norms in the adversarial perturbations. We also analysed the hypothetical scenario in which the attacker, unsure of the targeted image classifier, attempts a transferability attack based on a conventional ANN (i.e. ResNet18). We found that such an attack strategy is highly ineffective when there is an architecture mismatch between the source and target models. Finally, we questioned the success of adversarial attacks should an ensemble utilising a stochastic switch of networks for inference be employed, and found that though the ASR does decrease, the change on MNIST is more pronounced than on CIFAR-10, which calls for further investigation.
This work was supported by both ST Electronics and the National Research Foundation (NRF), Prime Minister’s Office, Singapore under Corporate Laboratory @ University Scheme (Programme Title: STEE Infosec-SUTD Corporate Laboratory). Alexander Binder also gratefully acknowledges the support by PIE-SGP-AI-2018-01.
- (2013) Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation. pp. 1–12.
- (2002) Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48 (1–4), pp. 17–37.
- (2018) Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models. In ICLR, pp. 1–12.
- (2017) Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods.
- (2017) Towards Evaluating the Robustness of Neural Networks. In Proceedings of the IEEE Symposium on Security and Privacy, pp. 39–57.
- (2019) User authentication based on mouse dynamics using deep neural networks: a comprehensive study. IEEE Transactions on Information Forensics and Security.
- (2016) Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704.
- (2015) Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Frontiers in Computational Neuroscience 9, pp. 1–9.
- (2017) Attacking Binarized Neural Networks. pp. 1–14.
- (1998) Rate coding versus temporal order coding: a theoretical approach. Biosystems 48 (1–3), pp. 57–65.
- (2017) Anomaly detection in cyber physical systems using recurrent neural networks. In 2017 IEEE 18th International Symposium on High Assurance Systems Engineering (HASE), pp. 140–145.
- (2014) Explaining and Harnessing Adversarial Examples. pp. 1–11.
- (2016) Deep residual learning for image recognition. pp. 770–778.
- (2017) Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong.
- (2016) Binarized neural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 4114–4122.
- (2018) Gradient descent for spiking neural networks. In Advances in Neural Information Processing Systems, pp. 1433–1443.
- (2019) A novel method for extracting interpretable knowledge from a spiking neural classifier with time-varying synaptic weights. pp. 1–16.
- (2018) Combinatorial Attacks on Binarized Neural Networks. pp. 1–12.
- (2018) STDP-based spiking deep convolutional neural networks for object recognition. Neural Networks 99, pp. 56–67.
- (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- (2009) Learning multiple layers of features from tiny images. Technical report.
- (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
- (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
- (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
-  (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §III.
-  (2019) Enabling Spike-based Backpropagation in State-of-the-art Deep Neural Network Architectures. pp. 1–25. External Links: Cited by: §I, §I.
-  (2015) How far can we go without convolution: improving fully-connected networks. arXiv preprint arXiv:1511.02580. Cited by: §III-A2.
-  (2016) Delving into Transferable Adversarial Examples and Black-box Attacks. (2), pp. 1–24. External Links: Cited by: §II-A.
Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Cited by: §I, §II-A.
-  (2019) SparseFool: a few pixels make a big difference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9087–9096. Cited by: §II-A.
-  (2016) Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2574–2582. Cited by: §I, §II-A.
-  (2019) SpykeTorch: efficient simulation of convolutional spiking neural networks with at most one spike per neuron. Frontiers in Neuroscience 13, pp. 625. External Links: Cited by: §-A2, §III.
-  (2019) Bio-inspired digit recognition using reward-modulated spike-timing-dependent plasticity in deep convolutional networks. Pattern Recognition 94, pp. 87–95. External Links: Cited by: §-A2, §I, §II-C2.
-  (2017) Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506–519. Cited by: §I, §II-A.
-  (2016) Practical Black-Box Attacks against Machine Learning. External Links: Cited by: §-E, §-E, §I, Exploring the Back Alleys: Analysing The Robustness of Alternative Neural Network Architectures against Adversarial Attacks .
-  (2016) The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387. Cited by: §II-A.
-  (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597. Cited by: §II-A.
-  (2017) Automatic differentiation in pytorch. Cited by: §III.
-  (2014) Techniques for Learning Binary Stochastic Feedforward Neural Networks. pp. 1–10. External Links: Cited by: §I.
-  (2017) Foolbox: a python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131. Cited by: §III.
-  (2018) Low Resource Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers. External Links: Cited by: §I.
-  (2019) Going deeper in spiking neural networks: vgg and residual architectures. Frontiers in neuroscience 13. Cited by: §I, §I.
-  (2019) A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks. External Links: Cited by: §I, §I.
-  (2016) Mastering the game of go with deep neural networks and tree search. nature 529 (7587), pp. 484. Cited by: §I.
-  (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §I, §II-A.
-  (2019) Adversarial attacks on remote user authentication using behavioural mouse dynamics. arXiv preprint arXiv:1905.11831. Cited by: §I.
-  (2018-06) Rotation equivariant CNNs for digital pathology. External Links: Cited by: §III.
-  (2019) Arm: Augment-Reinforce-Merge Gradient for Stochastic Binary Networks. pp. 1–21. External Links: Cited by: §I.