HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds

08/20/2023
by Hejia Geng et al.

Spiking neural networks (SNNs) offer promise for efficient and powerful neurally inspired computation. In common with other types of neural networks, however, SNNs face the severe issue of vulnerability to adversarial attacks. We present the first study that draws inspiration from neural homeostasis to develop a bio-inspired solution that counters the susceptibility of SNNs to adversarial attacks. At the heart of our approach is a novel threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model, which we adopt to construct the proposed adversarially robust homeostatic SNN (HoSNN). Distinct from traditional LIF models, our TA-LIF model incorporates a self-stabilizing dynamic thresholding mechanism, curtailing adversarial noise propagation and safeguarding the robustness of HoSNNs in an unsupervised manner. Theoretical analysis is presented to shed light on the stability and convergence properties of TA-LIF neurons, underscoring their superior dynamic robustness under input distributional shifts compared with traditional LIF neurons. Remarkably, without explicit adversarial training, our HoSNNs demonstrate inherent robustness on CIFAR-10, with accuracy improvements to 72.6% and 54.19% against FGSM and PGD attacks, up from 20.97% and 0.6%, respectively. Furthermore, with minimal FGSM adversarial training, our HoSNNs surpass previous models by 29.99% against FGSM and 47.83% against PGD attacks on CIFAR-10. Our findings offer a new perspective on harnessing biological principles to bolster the adversarial robustness and defense of SNNs, paving the way toward more resilient neuromorphic computing.
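The abstract only names the TA-LIF mechanism; its precise dynamics are defined in the full paper. As a rough illustration of the idea, the Python sketch below implements one plausible homeostatic rule: each neuron's effective firing threshold rises when it spikes and relaxes otherwise, so a sustained input perturbation (such as adversarial noise) is met with a compensating threshold shift rather than a burst of extra spikes. The function name, time constants, target rate, and update rule are illustrative assumptions, not the paper's exact TA-LIF equations.

```python
import numpy as np

def simulate_ta_lif(input_current, dt=1.0, tau_mem=20.0, tau_theta=200.0,
                    base_threshold=1.0, target_rate=0.05, theta_gain=0.5):
    """Toy threshold-adapting LIF (TA-LIF) neuron.

    A fixed-threshold LIF fires whenever v >= base_threshold; here the
    effective threshold is base_threshold + theta, where theta drifts
    homeostatically so the firing rate is pulled toward target_rate.
    """
    v, theta = 0.0, 0.0
    spikes = np.zeros(len(input_current))
    for t, i_t in enumerate(input_current):
        v += (dt / tau_mem) * (-v + i_t)           # leaky integration
        s = 1.0 if v >= base_threshold + theta else 0.0
        if s:
            v = 0.0                                # hard reset on spike
        # Homeostatic update: spiking above the target rate raises the
        # threshold; silence lets it relax back down.
        theta += (dt / tau_theta) * theta_gain * (s - target_rate)
        spikes[t] = s
    return spikes

# A perturbed input drives extra spikes in a fixed-threshold LIF; with
# the adaptive threshold, both rates settle near target_rate.
rng = np.random.default_rng(0)
clean = 1.2 * np.ones(1000)
noisy = clean + 0.5 * rng.standard_normal(1000)
print(simulate_ta_lif(clean).mean(), simulate_ta_lif(noisy).mean())
```

The design point this is meant to convey is that the adaptation is unsupervised and local: each neuron regulates its own excitability from its own spike history, which is why such a mechanism can damp adversarial noise propagation without any adversarial training signal.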
