
Improving Surrogate Gradient Learning in Spiking Neural Networks via Regularization and Normalization

by   Nandan Meda, et al.

Spiking neural networks (SNNs) differ from the classical networks used in deep learning: their neurons communicate via discrete electrical impulses called spikes, much like biological neurons. SNNs are appealing for AI technology because they can be implemented on low-power neuromorphic chips. However, SNNs generally remain less accurate than their analog counterparts. In this report, we examine various regularization and normalization techniques with the goal of improving surrogate gradient learning in SNNs.
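The core difficulty the abstract alludes to is that a spiking neuron's output is a step function of its membrane potential, whose derivative is zero almost everywhere, so gradients cannot flow through it. Surrogate gradient learning keeps the step in the forward pass but substitutes a smooth derivative in the backward pass. A minimal NumPy sketch of this idea, using a fast-sigmoid surrogate (one common choice; the report's exact surrogate and parameter values are not specified here and the `beta` sharpness below is an illustrative assumption):

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: a neuron emits a spike (1) when its membrane
    potential v crosses the threshold. This Heaviside step has zero
    gradient almost everywhere, which blocks ordinary backprop."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: replace the step's derivative with a smooth
    surrogate, here the fast-sigmoid derivative
    1 / (beta * |v - threshold| + 1)^2, peaked at the threshold."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])
spikes = spike(v)           # hard 0/1 spikes in the forward pass
grads = surrogate_grad(v)   # smooth pseudo-derivative for the backward pass
```

In a full training loop this pairing is usually wrapped in a custom autograd operation (e.g. a `torch.autograd.Function` in PyTorch), so the forward pass stays binary while gradients follow the surrogate.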
