Exploring the Back Alleys: Analysing the Robustness of Alternative Neural Network Architectures against Adversarial Attacks

12/08/2019
by Yi Xiang Marcus Tan, et al.

Recent discoveries in the field of adversarial machine learning have shown that Artificial Neural Networks (ANNs) are susceptible to adversarial attacks, in which specially crafted adversarial samples cause misclassification. In light of this phenomenon, it is worth investigating whether other types of neural networks are less susceptible to such attacks. In this work, we applied standard attack methods originally aimed at conventional ANNs to stochastic ANNs and to Spiking Neural Networks (SNNs), across three datasets: MNIST, CIFAR-10 and Patch Camelyon. We analysed the adversarial robustness of the different model variants against attacks performed in the raw image space, employing the Basic Iterative Method (BIM), the Carlini & Wagner L2 attack (CWL2) and the Boundary attack. Our results suggest that SNNs and stochastic ANNs exhibit some degree of adversarial robustness compared to their ANN counterparts under certain attack methods; in particular, we found that the Boundary attack and the state-of-the-art CWL2 attack are largely ineffective against stochastic ANNs. Following this observation, we proposed a modified version of the CWL2 attack and analysed its impact on the models' adversarial robustness. Our results suggest that with this modified CWL2 attack, many models are more easily fooled than with the vanilla CWL2 attack, albeit at the cost of larger L2 norms of the adversarial perturbations. Lastly, we investigated the resilience of the alternative neural networks against adversarial samples transferred from ResNet18, and show that the modified CWL2 attack provides improved cross-architecture transferability compared to the other attacks.
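As a concrete illustration of the iterative gradient-based attacks evaluated here, below is a minimal PyTorch sketch of the Basic Iterative Method (Kurakin et al., 2016). This is a generic textbook version, not the authors' implementation; the perturbation budget `eps`, step size `alpha` and iteration count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def bim_attack(model, x, y, eps=8/255, alpha=2/255, n_iter=10):
    """L-infinity BIM: take small signed-gradient steps, projecting
    back into the eps-ball around the clean inputs x (labels y)."""
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss in the direction of the gradient sign.
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project into the eps-ball and the valid pixel range [0, 1].
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```

The CWL2 attack, which the modified attack builds on, instead minimises the L2 norm of the perturbation jointly with a margin loss on the logits, optimising in tanh space so the image stays in a valid range. The sketch below is the standard untargeted Carlini & Wagner formulation, not the modified attack proposed in the paper; the trade-off constant `c`, confidence margin `kappa` and optimiser settings are assumed values.

```python
def cwl2_attack(model, x, y, c=1.0, kappa=0.0, steps=200, lr=0.01):
    """Simplified untargeted CWL2: minimise ||x_adv - x||_2^2 plus a
    hinge on (true-class logit - best other logit)."""
    # Change of variables: x_adv = 0.5 * (tanh(w) + 1) stays in (0, 1).
    w = torch.atanh(x.clamp(1e-6, 1 - 1e-6) * 2 - 1).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        others = logits.clone()
        others.scatter_(1, y.unsqueeze(1), float('-inf'))
        # Hinge loss reaches zero once the true class is no longer the argmax.
        margin = torch.clamp(true_logit - others.max(dim=1).values + kappa, min=0)
        l2 = ((x_adv - x) ** 2).flatten(1).sum(dim=1)
        opt.zero_grad()
        (l2 + c * margin).sum().backward()
        opt.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```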


