Synapse Compression for Event-Based Convolutional-Neural-Network Accelerators

12/13/2021
by Lennart Bamberg, et al.

Manufacturing-viable neuromorphic chips require novel computer architectures to achieve the massively parallel and efficient information processing the brain supports so effortlessly. Emerging event-based architectures are making this dream a reality. However, the large memory requirements for synaptic connectivity are a showstopper for the execution of modern convolutional neural networks (CNNs) on massively parallel, event-based (spiking) architectures. This work overcomes this roadblock by contributing a lightweight hardware scheme that compresses the synaptic memory requirements by several thousand times, enabling the execution of complex CNNs on a single chip of small form factor. A silicon implementation in a 12-nm technology shows that the technique increases the system's implementation cost by only 2%, while achieving a memory-footprint reduction of up to 374x compared to the best previously published technique.
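To make the memory argument concrete, here is a minimal sketch of the general idea that enables synapse compression for CNNs: convolutional connectivity is regular, so an event-based accelerator can derive a spike's fan-out (target neurons and weights) from a small shared kernel instead of storing an explicit table entry per synapse. All names and numbers below are illustrative assumptions, not the paper's actual scheme.

```python
# Illustrative sketch (assumed, not the paper's exact hardware scheme):
# derive each spike's fan-out from a shared convolution kernel rather
# than a per-synapse connectivity table.

def fan_out(event_xy, kernel, map_shape):
    """Return (target_coordinate, weight) pairs for a spike at event_xy.

    event_xy  : (x, y) coordinate of the spiking neuron
    kernel    : dict mapping (dx, dy) offset -> shared weight
    map_shape : (width, height) of the target feature map
    """
    x, y = event_xy
    w, h = map_shape
    targets = []
    for (dx, dy), weight in kernel.items():
        tx, ty = x + dx, y + dy
        if 0 <= tx < w and 0 <= ty < h:  # clip fan-out at the map border
            targets.append(((tx, ty), weight))
    return targets

# A 3x3 kernel shared across a 32x32 map: 9 stored weights stand in for
# 32 * 32 * 9 = 9216 explicit synapse entries, i.e. ~1024x compression.
kernel = {(dx, dy): 0.1 for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
print(len(fan_out((5, 5), kernel, (32, 32))))   # interior spike: 9 targets
print(len(fan_out((0, 0), kernel, (32, 32))))   # corner spike: 4 targets
```

The compression factor in this toy model is simply the number of synapses the shared kernel is reused for; the thousands-fold reductions reported in the abstract presumably come from exploiting this reuse across large feature maps and many layers.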

