STDP Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy Efficient Recognition

10/12/2017
by Nitin Rathi, et al.

Spiking Neural Networks (SNNs) with a large number of weights and a varied weight distribution can be difficult to implement in emerging in-memory computing hardware due to the limitations on crossbar size (which implements the dot product), the constrained number of conductance levels in non-CMOS devices, and the power budget. We present a sparse SNN topology where non-critical connections are pruned to reduce the network size and the remaining critical synapses are weight-quantized to accommodate the limited number of conductance levels. Pruning is based on the power-law weight-dependent Spike Timing Dependent Plasticity (STDP) model: synapses between pre- and post-neurons with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. The weights of the retained connections are quantized to the available number of conductance levels. The process of pruning non-critical connections and quantizing the weights of critical synapses is performed at regular intervals during training. We evaluate the sparse and quantized network on the MNIST dataset and on a subset of images from the Caltech-101 dataset. The compressed topology achieves a classification accuracy of 90.1% on MNIST and 91.6% on the Caltech-101 subset, with corresponding improvements in energy and area. The compressed topology is energy- and area-efficient while maintaining the classification accuracy of a two-layer fully connected SNN topology.
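The train-prune-quantize loop described above can be summarized in a short sketch. The following is a minimal illustration in Python/NumPy, assuming a Diehl-and-Cook-style power-law weight-dependent STDP update; the hyperparameter values, the magnitude-based pruning threshold, and the 784-to-400 layer size are illustrative assumptions, not values taken from the paper.

    import numpy as np

    # Sketch hyperparameters; illustrative only, not the paper's values.
    ETA = 0.01              # learning rate
    TAU = 20.0              # STDP time constant (ms)
    MU = 0.9                # power-law exponent
    W_MAX = 1.0             # maximum synaptic weight
    PRUNE_THRESHOLD = 0.1   # prune weights below this fraction of W_MAX
    N_LEVELS = 16           # conductance levels offered by the device

    def stdp_update(w, dt):
        # Power-law weight-dependent STDP, with dt = t_post - t_pre.
        # Causal pairings (dt >= 0) potentiate; the (W_MAX - w)**MU factor
        # drives strongly correlated synapses toward W_MAX while weakly
        # correlated synapses stay small, which the pruning step exploits.
        if dt >= 0:
            return w + ETA * np.exp(-dt / TAU) * (W_MAX - w) ** MU
        return w - ETA * np.exp(dt / TAU) * w ** MU

    def prune(weights, mask):
        # A small weight after STDP training implies low pre/post spike
        # correlation, so the connection is treated as non-critical.
        return mask & (weights > PRUNE_THRESHOLD * W_MAX)

    def quantize(weights, mask):
        # Map surviving weights onto the N_LEVELS uniformly spaced
        # conductance levels in [0, W_MAX]; pruned synapses stay at zero.
        step = W_MAX / (N_LEVELS - 1)
        return np.where(mask, np.round(weights / step) * step, 0.0)

    # Example: 784 inputs fully connected to 400 excitatory neurons.
    rng = np.random.default_rng(0)
    w = rng.uniform(0.0, W_MAX, size=(784, 400))
    mask = np.ones_like(w, dtype=bool)
    # ...apply stdp_update() over spike pairs during training, then at
    # regular intervals run the compression pass:
    mask = prune(w, mask)
    w = quantize(w, mask)

Because the weight-dependent update pushes correlated synapses toward the maximum weight and leaves uncorrelated ones near zero, a single magnitude threshold serves as a proxy for spike correlation when selecting connections to prune.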

Related research:

02/10/2020  A Spike in Performance: Training Hybrid-Spiking Neural Networks with Quantized Activation Functions
The machine learning community has become increasingly interested in the...

02/13/2023  Workload-Balanced Pruning for Sparse Spiking Neural Networks
Pruning for Spiking Neural Networks (SNNs) has emerged as a fundamental ...

01/07/2020  Probabilistic spike propagation for FPGA implementation of spiking neural networks
Evaluation of spiking neural networks requires fetching a large number o...

02/08/2023  The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks
Energy efficient implementations and deployments of Spiking neural netwo...

07/17/2020  FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks
Spiking Neural Networks (SNNs) are gaining interest due to their event-d...

10/09/2020  Connection Pruning for Deep Spiking Neural Networks with On-Chip Learning
Long training time hinders the potential of the deep Spiking Neural Netw...

12/07/2012  Spike and Tyke, the Quantized Neuron Model
Modeling spike firing assumes that spiking statistics are Poisson, but r...
