Intelligence Processing Units Accelerate Neuromorphic Learning

11/19/2022
by Pao-Sheng Vincent Sun, et al.

Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency when performing inference with deep learning workloads. Error backpropagation is presently regarded as the most effective method for training SNNs but, in a twist of irony, training SNNs with it on modern graphics processing units (GPUs) is more expensive than training non-spiking networks. The emergence of Graphcore's Intelligence Processing Units (IPUs) balances the parallelized nature of deep learning workloads with the sequential, reusable, and sparsified operations that dominate SNN training. IPUs adopt multi-instruction multi-data (MIMD) parallelism by running individual processing threads on smaller data blocks, a natural fit for the sequential, non-vectorized steps required to solve spiking neuron dynamical state equations. We present an IPU-optimized release of our custom SNN Python package, snnTorch, which exploits fine-grained parallelism by using low-level, pre-compiled custom operations to accelerate the irregular and sparse data access patterns characteristic of SNN training workloads. We provide a rigorous performance assessment across a suite of commonly used spiking neuron models and propose methods to further reduce training run-time via half-precision training. By amortizing the cost of sequential processing into vectorizable population codes, we ultimately demonstrate the potential for integrating domain-specific accelerators with the next generation of neural networks.
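The sequential state updates the abstract refers to can be illustrated with a short snnTorch sketch. The following is a minimal example, not the paper's IPU-optimized release: the layer sizes, number of time steps, leak factor beta, and surrogate gradient choice are illustrative assumptions. It shows the per-timestep leaky integrate-and-fire update that forces a loop over time, plus a spike-count readout that turns the result back into a vectorizable reduction, in the spirit of the population codes mentioned above.

    # Minimal sketch (assumed configuration, not the paper's IPU build):
    # the per-timestep leaky integrate-and-fire (LIF) update that makes
    # SNN training inherently sequential in time.
    import torch
    import torch.nn as nn
    import snntorch as snn
    from snntorch import surrogate

    num_steps = 25                               # simulation time steps (assumed)
    batch_size, n_in, n_hidden = 128, 784, 256   # assumed dimensions

    fc = nn.Linear(n_in, n_hidden)
    lif = snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid())

    x = torch.rand(num_steps, batch_size, n_in)  # toy input currents

    mem = lif.init_leaky()                       # initialize membrane potential
    spk_rec = []

    # The time loop cannot be vectorized away: the membrane potential at
    # step t depends on the potential (and spike) at step t-1.
    for t in range(num_steps):
        cur = fc(x[t])
        spk, mem = lif(cur, mem)
        spk_rec.append(spk)

    spk_rec = torch.stack(spk_rec)               # [num_steps, batch, n_hidden]

    # A spike-count (rate) readout amortizes the sequential cost into a
    # single vectorizable reduction over time.
    rate_code = spk_rec.sum(dim=0)

Each such neuron update maps naturally onto an independent MIMD thread on an IPU tile; the half-precision training discussed in the abstract amounts to carrying weights and membrane states in float16, which this sketch omits.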


Related research

06/22/2023 · Accelerating SNN Training with Stochastic Parallelizable Spiking Neurons
Spiking neural networks (SNN) are able to learn spatiotemporal features ...

06/03/2019 · SpikeGrad: An ANN-equivalent Computation Model for Implementing Backpropagation with Spikes
Event-based neuromorphic systems promise to reduce the energy consumptio...

03/03/2023 · TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents
Autonomous mobile agents require low-power/energy-efficient machine lear...

05/18/2021 · Sparse Spiking Gradient Descent
There is an increasing interest in emulating Spiking Neural Networks (SN...

05/19/2018 · Reliable counting of weakly labeled concepts by a single spiking neuron model
Making an informed, correct and quick decision can be life-saving. It's ...

07/14/2019 · An Artificial Spiking Quantum Neuron
Artificial spiking neural networks have found applications in areas wher...

04/28/2020 · Spiking Machine Intelligence: What we can learn from biology and how spiking Neural Networks can help to improve Machine Learning
Up to now, modern Machine Learning is based on fitting high dimensional ...
