Q-SpiNN: A Framework for Quantizing Spiking Neural Networks

A prominent technique for reducing the memory footprint of Spiking Neural Networks (SNNs) without significantly decreasing accuracy is quantization. However, state-of-the-art works focus only on employing weight quantization directly through a specific quantization scheme, i.e., either post-training quantization (PTQ) or in-training quantization (ITQ), and do not consider (1) quantizing other SNN parameters (e.g., the neuron membrane potential), (2) exploring different combinations of quantization approaches (i.e., quantization schemes, precision levels, and rounding schemes), and (3) selecting the SNN model with a good memory-accuracy trade-off at the end. Therefore, the memory savings offered by these state-of-the-art techniques while meeting the targeted accuracy are limited, which hinders the processing of SNNs on resource-constrained systems (e.g., IoT-Edge devices). Toward this, we propose Q-SpiNN, a novel quantization framework for memory-efficient SNNs. The key mechanisms of Q-SpiNN are: (1) employing quantization for different SNN parameters based on their significance to the accuracy, (2) exploring different combinations of quantization schemes, precision levels, and rounding schemes to find efficient SNN model candidates, and (3) developing an algorithm that quantifies the benefit of the memory-accuracy trade-off obtained by the candidates and selects the Pareto-optimal one. The experimental results show that, for the unsupervised network, Q-SpiNN reduces the memory footprint by ca. 4x while maintaining the accuracy within 1% of the baseline; for the supervised network, Q-SpiNN reduces the memory by ca. 2x while keeping the accuracy within 2% of the baseline.
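The full method is not reproduced on this page, but the mechanisms named in the abstract (fixed-point quantization of weights and membrane potentials, multiple precision levels and rounding schemes, and a trade-off-based candidate selection) can be sketched as follows. This is a minimal, hypothetical illustration in Python: the function names, the fixed-point formats, and the `tradeoff_score` figure of merit are assumptions for demonstration only, not the paper's actual algorithm.

```python
import numpy as np

def quantize_fixed_point(x, int_bits, frac_bits, rounding="nearest", rng=None):
    """Quantize values to a signed fixed-point format with `int_bits` integer
    and `frac_bits` fractional bits.

    rounding: "nearest" (round-to-nearest), "truncate" (round toward zero),
    or "stochastic" (probabilistic rounding) -- illustrative rounding schemes.
    """
    scale = 2.0 ** frac_bits
    scaled = x * scale
    if rounding == "nearest":
        q = np.round(scaled)
    elif rounding == "truncate":
        q = np.trunc(scaled)
    elif rounding == "stochastic":
        rng = rng or np.random.default_rng()
        floor = np.floor(scaled)
        q = floor + (rng.random(np.shape(x)) < (scaled - floor))
    else:
        raise ValueError(f"unknown rounding scheme: {rounding}")
    # Saturate to the representable range of the signed fixed-point format.
    max_q = 2.0 ** (int_bits + frac_bits) - 1
    min_q = -(2.0 ** (int_bits + frac_bits))
    return np.clip(q, min_q, max_q) / scale


def tradeoff_score(mem_saving, acc_drop, alpha=0.5):
    """Toy figure of merit combining normalized memory saving and accuracy
    retention; `alpha` weights the two objectives. This stands in for the
    paper's selection algorithm only as a ranking placeholder."""
    return alpha * mem_saving + (1.0 - alpha) * (1.0 - acc_drop)


# Example: quantize synaptic weights and membrane potentials at several
# hypothetical precision levels, then rank the resulting candidates.
weights = np.random.randn(4, 4) * 0.1
v_mem = np.random.randn(4) * 5.0

candidates = []
for frac_bits in (4, 8, 12):
    w_q = quantize_fixed_point(weights, int_bits=1, frac_bits=frac_bits)
    v_q = quantize_fixed_point(v_mem, int_bits=7, frac_bits=frac_bits)
    w_bits = 1 + 1 + frac_bits                # sign + integer + fractional bits
    mem_saving = 1.0 - w_bits / 32.0          # rough saving vs. 32-bit floats
    acc_drop = float(np.mean(np.abs(weights - w_q)))  # proxy; use real accuracy in practice
    candidates.append((frac_bits, tradeoff_score(mem_saving, acc_drop)))

best = max(candidates, key=lambda c: c[1])
print("candidate scores:", candidates, "-> selected fractional bits:", best[0])
```

In practice, each candidate's accuracy drop would be measured by evaluating the quantized network on a validation set rather than by the quantization-error proxy used above.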


