Training spiking multi-layer networks with surrogate gradients on an analog neuromorphic substrate

06/12/2020
by Benjamin Cramer, et al.

Spiking neural networks are nature's solution for parallel information processing with high temporal precision at a low metabolic energy cost. To that end, biological neurons integrate inputs as an analog sum and communicate their outputs digitally as spikes, i.e., sparse binary events in time. These architectural principles can be mirrored effectively in analog neuromorphic hardware. Nevertheless, training spiking neural networks with sparse activity on hardware devices remains a major challenge, primarily due to the lack of suitable training methods that account for device-specific imperfections and operate at the level of individual spikes rather than firing rates. To tackle this issue, we developed a hardware-in-the-loop strategy to train multi-layer spiking networks using surrogate gradients on the analog BrainScaleS-2 chip. Specifically, we used the hardware to compute the forward pass of the network, while the backward pass was computed in software. We evaluated our approach on downscaled 16×16 versions of the MNIST and Fashion-MNIST datasets, in which spike latencies encoded pixel intensities. The analog neuromorphic substrate closely matched the performance of equivalently sized networks implemented in software, while processing 70,000 patterns per second at a power consumption of less than 300 mW. Adding activity regularization resulted in sparse network activity of about 20 spikes per input, with little to no reduction in classification performance. Overall, our work demonstrates low-energy spiking network processing on an analog neuromorphic substrate and sets several new benchmarks for hardware systems in terms of classification accuracy, processing speed, and efficiency. Importantly, it emphasizes the value of hardware-in-the-loop training and paves the way toward energy-efficient information processing on non-von-Neumann architectures.
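The input encoding described above turns each pixel into at most one spike whose latency reflects its intensity: brighter pixels fire earlier. The sketch below shows one plausible time-to-first-spike mapping; the linear intensity-to-latency map and the values of `t_max` and `threshold` are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def latency_encode(image, t_max=20e-6, threshold=0.2):
    """Map pixel intensities in [0, 1] to spike times (time-to-first-spike).

    Brighter pixels spike earlier; pixels below `threshold` stay silent.
    `t_max` is the length of the encoding window. All parameter values
    here are illustrative, not taken from the paper.
    """
    x = image.reshape(-1).astype(float)   # flatten 16x16 -> 256 input channels
    times = t_max * (1.0 - x)             # linear intensity-to-latency map
    times[x < threshold] = np.inf         # dim pixels never fire
    return times

# Example: a random 16x16 "image" yields one spike time per input channel.
spike_times = latency_encode(np.random.rand(16, 16))
```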
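The surrogate-gradient idea can be made concrete with a minimal PyTorch-style sketch of the software side of training: a hard threshold nonlinearity whose backward pass substitutes a smooth, SuperSpike-style fast-sigmoid derivative. In the hardware-in-the-loop setting described in the abstract, the forward output would come from spikes recorded on BrainScaleS-2 rather than the ideal threshold used here; the surrogate shape and the steepness parameter `beta` are assumptions, not details taken from the paper.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Threshold spike nonlinearity with a fast-sigmoid surrogate gradient."""

    @staticmethod
    def forward(ctx, v, beta=10.0):
        # Ideal threshold crossing; on hardware, measured spikes would be
        # substituted here while the backward pass stays unchanged.
        ctx.save_for_backward(v)
        ctx.beta = beta
        return (v > 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: d(spike)/dv ~ 1 / (beta * |v| + 1)^2
        surrogate = 1.0 / (ctx.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate, None

spike_fn = SurrogateSpike.apply

# Usage: gradients flow through the surrogate despite the hard threshold.
v = torch.randn(8, requires_grad=True)
spike_fn(v).sum().backward()
```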
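The reported sparsity (~20 spikes per input) comes from adding an activity regularizer to the training objective. The following is a minimal sketch of one common form, a penalty on hidden-layer spike counts added to the cross-entropy loss; the quadratic penalty and the `strength` hyperparameter are assumptions rather than the paper's exact regularizer.

```python
import torch
import torch.nn.functional as F

def loss_with_activity_regularization(logits, targets, hidden_spikes,
                                      strength=1e-3):
    """Cross-entropy loss plus a penalty on hidden-layer spike counts.

    `hidden_spikes` is a (batch, time, neurons) tensor of 0/1 spike events;
    penalizing the mean spike count pushes the network toward sparse
    activity. `strength` is an illustrative hyperparameter.
    """
    task_loss = F.cross_entropy(logits, targets)
    rate = hidden_spikes.sum(dim=1).mean()   # average spikes per neuron
    return task_loss + strength * rate ** 2
```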

Related research

03/06/2017 · Neuromorphic Hardware In The Loop: Training a Deep Spiking Network on the BrainScaleS Wafer-Scale System
Emulating spiking neural networks on analog neuromorphic hardware offers...

04/03/2020 · Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware
With more and more event-based neuromorphic hardware systems being devel...

02/10/2022 · Hardware calibrated learning to compensate heterogeneity in analog RRAM-based Spiking Neural Networks
Spiking Neural Networks (SNNs) can unleash the full power of analog Resi...

06/09/2020 · Hardware Implementation of Spiking Neural Networks Using Time-To-First-Spike Encoding
Hardware-based spiking neural networks (SNNs) are regarded as promising ...

02/13/2023 · Event-based Backpropagation for Analog Neuromorphic Hardware
Neuromorphic computing aims to incorporate lessons from studying biologi...

06/15/2021 · Deep Phasor Networks: Connecting Conventional and Spiking Neural Networks
In this work, we extend standard neural networks by building upon an ass...

03/26/2021 · Visual Explanations from Spiking Neural Networks using Interspike Intervals
Spiking Neural Networks (SNNs) compute and communicate with asynchronous...
