Security-Aware Approximate Spiking Neural Networks

01/12/2023
by Syed Tihaam Ahmad, et al.

Deep Neural Networks (DNNs) and Spiking Neural Networks (SNNs) are both known to be susceptible to adversarial attacks, and the robustness and defense of both under such attacks have therefore been studied extensively. Compared to accurate SNNs (AccSNNs), approximate SNNs (AxSNNs) are known to be up to 4X more energy-efficient for ultra-low-power applications, yet the robustness of AxSNNs under adversarial attacks remains unexplored. In this paper, we first extensively analyze the robustness of AxSNNs with different structural parameters and approximation levels under two gradient-based and two neuromorphic attacks. We then propose two novel defense methods, precision scaling and approximate quantization-aware filtering (AQF), for securing AxSNNs, and evaluate their effectiveness on both static and neuromorphic datasets. Our results demonstrate that AxSNNs are more prone to adversarial attacks than AccSNNs, but that precision scaling and AQF significantly improve their robustness. For instance, on the static MNIST dataset, a PGD attack on an AxSNN results in a 72% accuracy loss relative to an unattacked AccSNN, whereas the same attack on a precision-scaled AxSNN leads to only a 17% accuracy loss (a 4X robustness improvement). Similarly, on the neuromorphic DVS128 Gesture dataset, a Sparse Attack on an AxSNN leads to a 77% accuracy loss relative to an unattacked AccSNN, whereas the same attack on an AxSNN with AQF leads to only a 2% accuracy loss (a 38X robustness improvement).
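For readers unfamiliar with the attack named in the abstract, the sketch below shows a minimal L-infinity PGD attack in PyTorch, plus a toy version of input precision scaling. This is only an illustration of the general techniques, not the paper's actual SNN attack or defense pipeline: `model`, `x`, and `y` are hypothetical placeholders (any differentiable classifier, a batch of inputs in [0, 1], and integer labels), and the quantization-based `precision_scale` is an assumption about what precision scaling of inputs could look like.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    # L-infinity PGD (Madry et al.): start at a random point in the
    # eps-ball around x, then repeatedly ascend the loss and project back.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # gradient-sign step
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep valid input range
    return x_adv.detach()

def precision_scale(x, bits=4):
    # Hypothetical defense sketch: uniform quantization of an input in
    # [0, 1] to 2**bits levels, discarding fine-grained perturbations.
    levels = (1 << bits) - 1
    return torch.round(x * levels) / levels
```

The intuition behind quantization-style defenses is that small adversarial perturbations often fall below the quantization step size and are rounded away before reaching the network; the paper's reported numbers come from its own precision-scaling and AQF methods, not from this sketch.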


Related research:

12/02/2021 · Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?
Approximate computing is known for its effectiveness in improving the ...

11/04/2022 · Adversarial Defense via Neural Oscillation inspired Gradient Masking
Spiking neural networks (SNNs) attract great attention due to their low ...

02/13/2023 · Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data
Deep neural networks (DNNs) have achieved excellent results in various t...

12/08/2019 · Exploring the Back Alleys: Analysing The Robustness of Alternative Neural Network Architectures against Adversarial Attacks
Recent discoveries in the field of adversarial machine learning have sho...

07/01/2021 · DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks
Spiking Neural Networks (SNNs), despite being energy-efficient when impl...

10/20/2022 · Chaos Theory and Adversarial Robustness
Neural Networks, being susceptible to adversarial attacks, should face a...

08/04/2021 · On the Robustness of Domain Adaption to Adversarial Attacks
State-of-the-art deep neural networks (DNNs) have been proved to have ex...
