A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks

05/07/2019
by Saima Sharmin, et al.

Machine learning models are increasingly threatened by adversarial attacks, and finding models that are resilient to such attacks is critical for building robust artificial neural networks. In this work, we present, for the first time, a comprehensive analysis of how a more bio-plausible class of networks, Spiking Neural Networks (SNNs), behaves under state-of-the-art adversarial tests. We perform a comparative study of the accuracy degradation of a conventional VGG-9 Artificial Neural Network (ANN) and an equivalent spiking network on the CIFAR-10 dataset, in both whitebox and blackbox settings, for different types of single-step and multi-step FGSM (Fast Gradient Sign Method) attacks. We demonstrate that SNNs tend to show more resiliency than ANNs in the blackbox attack scenario. Additionally, we find that SNN robustness depends strongly on the training mechanism: SNNs trained by spike-based backpropagation are more adversarially robust than those obtained by ANN-to-SNN conversion rules in several whitebox and blackbox scenarios. Finally, we propose a simple yet effective framework for crafting adversarial attacks from SNNs. Our results suggest that attacks crafted from SNNs following the proposed method are much stronger than those crafted from ANNs.
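For context, the single-step FGSM attack referenced above perturbs the input in the direction of the sign of the loss gradient with respect to that input, scaled by a budget epsilon. Below is a minimal PyTorch sketch of the standard FGSM formulation, not the paper's own implementation; the model, images, labels, and epsilon names are illustrative placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    # Standard single-step FGSM: x_adv = x + epsilon * sign(grad_x loss(x, y)).
    # Illustrative sketch only; not the authors' code.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    x_adv = images + epsilon * images.grad.sign()
    # Keep pixel values within the valid [0, 1] input range.
    return x_adv.clamp(0.0, 1.0).detach()

Multi-step variants apply the same signed-gradient update iteratively with a smaller step size, projecting back into the epsilon-ball after each step.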

