Adversarial Defense via Neural Oscillation inspired Gradient Masking

11/04/2022
by Chunming Jiang et al.

Spiking neural networks (SNNs) have attracted great attention due to their low power consumption, low latency, and biological plausibility. As they are increasingly deployed in neuromorphic devices for low-power, brain-inspired computing, their security becomes increasingly important. However, compared to deep neural networks (DNNs), SNNs currently lack specifically designed defenses against adversarial attacks. Inspired by neural membrane potential oscillation, we propose a novel neuron model that incorporates this bio-inspired oscillation mechanism to enhance the security of SNNs. Our experiments show that SNNs with neural oscillation neurons resist adversarial attacks better than ordinary SNNs with LIF neurons across a range of architectures and datasets. Furthermore, we propose a defense method that alters the model's gradients by replacing the form of the oscillation, which hides the original training gradients and misleads the attacker into using the gradients of 'fake' neurons to generate invalid adversarial examples. Our experiments suggest that the proposed defense effectively resists both single-step and iterative attacks, with defense effectiveness comparable to, and computational cost much lower than, adversarial training methods on DNNs. To the best of our knowledge, this is the first work to establish adversarial defense through masking surrogate gradients on SNNs.
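As a rough illustration of the two ideas in the abstract, the PyTorch sketch below adds a sinusoidal oscillation to a leaky integrate-and-fire (LIF) neuron and masks the backward pass with a surrogate gradient evaluated on a phase-shifted 'fake' oscillation. All names (OscLIF, MaskedSpike, osc_amp, osc_freq), the sinusoidal form, and the rectangular surrogate are illustrative assumptions, not the paper's actual formulation.

import math
import torch


class MaskedSpike(torch.autograd.Function):
    """Spike with a mismatched ('fake') surrogate gradient.

    Forward: Heaviside step on the true oscillation-modulated potential.
    Backward: rectangular surrogate computed around a *different*
    oscillation phase, so gradient-based attackers recover misleading
    gradients (a sketch of the gradient-masking idea, not the paper's
    exact defense).
    """

    @staticmethod
    def forward(ctx, v_true, v_fake, threshold):
        ctx.save_for_backward(v_fake)
        ctx.threshold = threshold
        return (v_true >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v_fake,) = ctx.saved_tensors
        # Rectangular surrogate, but evaluated on the fake potential.
        surrogate = (torch.abs(v_fake - ctx.threshold) < 0.5).float()
        # Gradients: (v_true, v_fake, threshold); only v_true receives one.
        return grad_out * surrogate, None, None


class OscLIF(torch.nn.Module):
    """LIF neuron whose membrane potential carries a sinusoidal oscillation."""

    def __init__(self, tau=2.0, threshold=1.0, osc_amp=0.2, osc_freq=0.1):
        super().__init__()
        self.tau, self.threshold = tau, threshold
        self.osc_amp, self.osc_freq = osc_amp, osc_freq

    def forward(self, x):  # x: (time, batch, features)
        v = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau  # leaky integration
            # True oscillation seen in the forward pass ...
            osc_true = self.osc_amp * math.sin(2 * math.pi * self.osc_freq * t)
            # ... and a phase-shifted 'fake' one that only shapes the
            # backward pass (the masked gradient).
            osc_fake = self.osc_amp * math.cos(2 * math.pi * self.osc_freq * t)
            s = MaskedSpike.apply(v + osc_true, v + osc_fake, self.threshold)
            v = v * (1.0 - s)  # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)


if __name__ == "__main__":
    layer = OscLIF()
    x = torch.rand(10, 4, 8, requires_grad=True)  # 10 time steps
    out = layer(x)
    out.sum().backward()  # gradients flow through the 'fake' surrogate
    print(out.shape, x.grad.abs().mean().item())

Under this sketch, a white-box attacker differentiating through MaskedSpike receives gradients shaped by the cosine (fake) oscillation while the forward pass actually spikes on the sine (true) one, so gradient-based adversarial examples are crafted against the wrong neuron.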

