MorphIC: A 65-nm 738k-Synapse/mm^2 Quad-Core Binary-Weight Digital Neuromorphic Processor with Stochastic Spike-Driven Online Learning

04/17/2019
by Charlotte Frenkel, et al.

Recent trends in artificial neural networks (ANNs) and convolutional neural networks (CNNs) investigate weight quantization as a means to increase the resource and power efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, the resulting memory reduction requirements have pushed toward the use of binary weights, which have been shown to incur only a limited accuracy loss on many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are being explored to further reduce power consumption when processing sparse event-based data streams, while on-chip spike-based online learning appears as a key feature for applications with power and resource constraints during the training phase. However, designing power- and area-efficient SNNs still requires specific techniques to leverage on-chip online learning with binary weights without compromising synapse density. In this work, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses in an active silicon area of 2.86 mm^2 in 65-nm CMOS, achieving a high density of 738k synapses/mm^2. MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously proposed SNNs, while maintaining a competitive energy-accuracy tradeoff.
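
The abstract's central mechanism, stochastic S-SDSP on binary weights driving LIF neurons, can be illustrated with a short sketch. The Python snippet below is a minimal illustrative model only: all parameter values, the single calcium stop-learning window, and every name (step, P_FLIP, THETA_MEM, etc.) are assumptions made here for clarity, not the chip's actual learning rule or hardware parameters, which the full paper specifies.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# -- Illustrative constants (hypothetical, not MorphIC's settings) --
N_PRE     = 64     # number of presynaptic inputs
V_THR     = 1.0    # LIF firing threshold
LEAK      = 0.02   # leak subtracted per timestep
THETA_MEM = 0.5    # SDSP membrane comparison threshold
CA_DECAY  = 0.95   # per-timestep decay of the calcium trace
CA_LOW, CA_HIGH = 0.2, 2.0   # stop-learning bounds on calcium
P_FLIP    = 0.05   # probability of committing a binary weight flip

w  = rng.integers(0, 2, size=N_PRE).astype(np.int8)  # binary weights {0, 1}
v  = 0.0   # membrane potential
ca = 0.0   # calcium trace (low-pass filter of postsynaptic spikes)

def step(pre_spikes: np.ndarray) -> bool:
    """Advance one timestep; returns True if the neuron fires."""
    global v, ca, w

    # LIF dynamics: accumulate binary-weighted input spikes, then leak.
    v = max(v + int(w @ pre_spikes) - LEAK, 0.0)

    # S-SDSP, evaluated at presynaptic spike times: drive the weight
    # toward 1 when the membrane is high, toward 0 when it is low,
    # gated by the calcium trace, and commit each single-bit flip
    # only with probability P_FLIP (the stochastic part).
    if CA_LOW < ca < CA_HIGH:
        target = np.int8(1) if v > THETA_MEM else np.int8(0)
        for i in np.flatnonzero(pre_spikes):
            if rng.random() < P_FLIP:
                w[i] = target

    # Fire-and-reset; the calcium trace jumps on each output spike.
    fired = v >= V_THR
    if fired:
        v = 0.0
        ca += 1.0
    ca *= CA_DECAY
    return fired

# Drive the neuron with random ~10%-rate input spike trains.
for _ in range(200):
    pre = (rng.random(N_PRE) < 0.1).astype(np.int8)
    step(pre)
print("learned binary weights:", w)
```

Committing each flip only with probability P_FLIP makes the expected weight change fractional, which is what lets a single-bit synapse integrate information gradually instead of saturating on the first spike; in hardware, such a probability would come from an on-chip pseudo-random source rather than a software RNG.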


