Low Precision Quantization-aware Training in Spiking Neural Networks with Differentiable Quantization Function

05/30/2023
by Ayan Shymyrbay, et al.

Deep neural networks have proven to be highly effective tools in various domains, yet their computational and memory costs restrict them from being widely deployed on portable devices. The recent rapid proliferation of edge computing devices has led to an active search for techniques that address these limitations of machine learning frameworks. Quantization of artificial neural networks (ANNs), which converts full-precision synaptic weights into low-bit versions, has emerged as one such solution. At the same time, spiking neural networks (SNNs) have become an attractive alternative to conventional ANNs due to their temporal information processing capability, energy efficiency, and high biological plausibility. Although both concepts are driven by the same motivation, their combined use has yet to be thoroughly studied. This work therefore aims to bridge the gap between recent progress in quantized neural networks and SNNs. It presents an extensive study of a quantization function, represented as a linear combination of sigmoid functions, applied to low-bit weight quantization in SNNs. The presented quantization function achieves state-of-the-art performance for binary networks on four popular benchmarks: CIFAR10-DVS (64.05%), DVS128 Gesture (95.45%), N-Caltech101 (68.71%), and N-MNIST (99.43%), with small accuracy drops and up to 31× memory savings, outperforming existing methods.
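The abstract characterizes the quantization function only as a linear combination of sigmoid functions; the full paper is not reproduced here. The following PyTorch-style sketch illustrates one common way such a sigmoid-sum surrogate for a uniform staircase quantizer can be written. The function name sigmoid_staircase, the temperature parameter, and the symmetric weight range [-w_max, w_max] are illustrative assumptions and do not reproduce the authors' exact formulation.

import torch


def sigmoid_staircase(w, n_bits=1, temperature=10.0, w_max=1.0):
    """Differentiable surrogate for a uniform weight quantizer.

    Approximates the hard staircase quantization function by a
    linear combination of shifted sigmoids; as `temperature` grows,
    the sum approaches the staircase while staying differentiable,
    so it can be used directly in quantization-aware training.
    (Illustrative sketch; not the paper's exact formulation.)
    """
    n_levels = 2 ** n_bits                  # number of discrete weight values
    step = 2.0 * w_max / (n_levels - 1)     # spacing between adjacent levels
    # Transition points, i.e. midpoints between adjacent levels
    thresholds = torch.linspace(-w_max + step / 2, w_max - step / 2, n_levels - 1)
    # Each sigmoid contributes one smooth "step" of height `step`
    q = -w_max + step * torch.sigmoid(
        temperature * (w.unsqueeze(-1) - thresholds)
    ).sum(dim=-1)
    return q


# Example: push full-precision weights toward the binary values {-1, +1}
w = 0.5 * torch.randn(3, 3)
w_q = sigmoid_staircase(w, n_bits=1, temperature=50.0)

In a quantization-aware training loop, such a surrogate keeps the forward pass close to the hard quantizer while still providing non-zero gradients to the full-precision weights.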


