Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters

12/09/2020
by   Rida El-Allami, et al.

Deep Learning (DL) algorithms have gained popularity owing to their practical problem-solving capacity. However, they suffer from a serious integrity threat: their vulnerability to adversarial attacks. In the quest for DL trustworthiness, recent works have claimed an inherent robustness of Spiking Neural Networks (SNNs) to these attacks, without considering the variability in their structural spiking parameters. This paper explores how the security of SNNs can be enhanced through their internal structural parameters. Specifically, we investigate the robustness of SNNs to adversarial attacks under different values of the neurons' firing voltage thresholds and time window boundaries. We thoroughly study SNN security under different adversarial attacks in the strong white-box setting, with different noise budgets and variable spiking parameters. Our results show a significant impact of the structural parameters on the SNNs' security, and promising sweet spots can be reached to design trustworthy SNNs with 85% higher robustness than a traditional non-spiking DL system. To the best of our knowledge, this is the first work that investigates the impact of structural parameters on the robustness of SNNs to adversarial attacks. The proposed contributions and the experimental framework are available online to the community for reproducible research.
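To make the experimental idea concrete, below is a minimal Python/PyTorch sketch of such a parameter study; it is not the authors' released framework. It sweeps the firing voltage threshold and the simulation time window of a toy leaky integrate-and-fire (LIF) network and measures robust accuracy under a one-step white-box FGSM attack at several noise budgets. The TinyLIFNet model, the fgsm helper, the parameter grids, and the random placeholder data are all illustrative assumptions, not taken from the paper.

# Minimal sketch (illustrative only): sweep two structural SNN parameters,
# the firing voltage threshold V_th and the time window T, and record robust
# accuracy under a white-box FGSM attack at several noise budgets (epsilon).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v, v_th):
        ctx.save_for_backward(v)
        ctx.v_th = v_th
        return (v >= v_th).float()
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = (torch.abs(v - ctx.v_th) < 0.5).float()
        return grad_out * surrogate, None

class TinyLIFNet(nn.Module):
    """Two-layer LIF network; v_th and t_window are the structural parameters studied."""
    def __init__(self, v_th=1.0, t_window=10, tau=0.9):
        super().__init__()
        self.v_th, self.t_window, self.tau = v_th, t_window, tau
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        v1 = torch.zeros(x.size(0), 128, device=x.device)
        out = torch.zeros(x.size(0), 10, device=x.device)
        for _ in range(self.t_window):  # integrate over the time window
            # Bernoulli rate encoding with a straight-through estimator so the
            # white-box attack can differentiate w.r.t. the analog input.
            spikes = torch.bernoulli(x.detach().clamp(0, 1))
            inp = x + (spikes - x).detach()
            v1 = self.tau * v1 + self.fc1(inp)
            s1 = SpikeFn.apply(v1, self.v_th)
            v1 = v1 - s1 * self.v_th  # soft reset after a spike
            out = out + self.fc2(s1)
        return out / self.t_window  # rate-averaged readout

def fgsm(model, x, y, eps):
    """One-step white-box FGSM on the analog input intensities."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Sweep the structural parameters against several noise budgets (placeholder data).
x, y = torch.rand(64, 784), torch.randint(0, 10, (64,))
for v_th in (0.25, 0.5, 1.0, 2.0):
    for t_window in (5, 10, 20):
        model = TinyLIFNet(v_th=v_th, t_window=t_window)  # would normally be trained first
        for eps in (0.05, 0.1, 0.2):
            x_adv = fgsm(model, x, y, eps)
            acc = (model(x_adv).argmax(1) == y).float().mean().item()
            print(f"V_th={v_th} T={t_window} eps={eps}: robust acc {acc:.2f}")

The straight-through encoding and surrogate gradient are standard devices for making the spike non-differentiability attackable in a white-box setting; the actual attack types, parameter ranges, and datasets used in the paper may differ.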


Related research

03/23/2020 - Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations
In the recent quest for trustworthy neural networks, we present Spiking ...

03/01/2021 - Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis
Despite many proposed algorithms to provide robustness to deep learning ...

08/20/2023 - HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds
Spiking neural networks (SNNs) offer promise for efficient and powerful ...

07/01/2021 - DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks
Spiking Neural Networks (SNNs), despite being energy-efficient when impl...

09/07/2022 - Securing the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples
Spiking neural networks (SNNs) have attracted much attention for their h...

06/21/2022 - Structural Stability of Spiking Neural Networks
The past decades have witnessed an increasing interest in spiking neural...
