A Spike in Performance: Training Hybrid-Spiking Neural Networks with Quantized Activation Functions

02/10/2020
by Aaron R. Voelker, et al.

The machine learning community has become increasingly interested in the energy efficiency of neural networks. The Spiking Neural Network (SNN) is a promising approach to energy-efficient computing, since its activation levels are quantized into temporally sparse, one-bit values (i.e., "spike" events), which additionally converts the sum over weight-activity products into a simple addition of weights (one weight for each spike). However, the goal of maintaining state-of-the-art (SotA) accuracy when converting a non-spiking network into an SNN has remained an elusive challenge, primarily due to spikes having only a single bit of precision. Adopting tools from signal processing, we cast neural activation functions as quantizers with temporally-diffused error, and then train networks while smoothly interpolating between the non-spiking and spiking regimes. We apply this technique to the Legendre Memory Unit (LMU) to obtain the first known example of a hybrid SNN outperforming SotA recurrent architectures—including the LSTM, GRU, and NRU—in accuracy, while reducing activities to at most 3.74 bits on average with 1.26 significant bits multiplying each weight. We discuss how these methods can significantly improve the energy efficiency of neural networks.
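
The abstract's central idea, casting the activation function as a quantizer whose error is diffused over time and interpolating between the non-spiking and spiking regimes during training, can be sketched in a few lines. The Python/NumPy code below is a hypothetical illustration of that idea rather than the authors' implementation; the function spiking_quantizer, the parameters dt and max_rate, and the mixing parameter alpha are assumptions introduced here for clarity.

import numpy as np

def spiking_quantizer(activities, dt=0.001, max_rate=1000.0):
    """Quantize activities (float array, shape [timesteps, units]) into spike counts.

    The fractional remainder of each step is carried forward in time, so the
    quantization error is diffused across steps rather than discarded.
    """
    voltage = np.zeros(activities.shape[1])      # accumulated residual error
    spikes = np.zeros_like(activities)
    for t, a in enumerate(activities):
        voltage += a * max_rate * dt             # expected spike count this step
        emitted = np.floor(voltage)              # integer spike events
        voltage -= emitted                       # keep the residual for later steps
        spikes[t] = emitted / (max_rate * dt)    # rescale back to activity units
    return spikes

# One simple way to interpolate between the regimes during training
# (an assumption, not necessarily the paper's exact schedule): mix the
# smooth activity with its quantized counterpart and anneal alpha toward 1.
def hybrid_activation(activities, alpha):
    return (1.0 - alpha) * activities + alpha * spiking_quantizer(activities)

Raising max_rate (i.e., permitting more spikes per step) shrinks the quantization error and recovers the smooth, non-spiking activation, while alpha = 1 with a tight spike budget corresponds to the fully spiking regime; annealing between the two reflects the smooth interpolation described in the abstract.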
