Synapse Compression for Event-Based Convolutional-Neural-Network Accelerators

12/13/2021
by   Lennart Bamberg, et al.

Manufacturing-viable neuromorphic chips require novel computer architectures to achieve the massively parallel and efficient information processing the brain supports so effortlessly. Emerging event-based architectures are making this dream a reality. However, the large memory requirements for synaptic connectivity are a showstopper for the execution of modern convolutional neural networks (CNNs) on massively parallel, event-based (spiking) architectures. This work overcomes this roadblock by contributing a lightweight hardware scheme that compresses the synaptic memory requirements by several thousand times, enabling the execution of complex CNNs on a single chip of small form factor. A silicon implementation in a 12-nm technology shows that the technique increases the system's implementation cost by only 2%, while delivering a memory-footprint reduction of up to 374x compared to the best previously published technique.
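To get a feel for why compression factors of "several thousand times" are plausible for CNNs, consider the weight-sharing regularity of a convolution: a naive event-based mapping stores one synapse entry per (pre, post) neuron pair, while the convolutional kernel itself is reused at every spatial position. The sketch below works this arithmetic through for a hypothetical layer shape; the paper's actual compression scheme and benchmark networks are not detailed in this abstract, so the numbers are purely illustrative.

```python
# Hypothetical convolutional-layer shape, chosen for illustration only.
H, W = 64, 64           # output feature-map size
C_in, C_out = 128, 256  # input / output channels
K = 3                   # 3x3 kernel, 'same' padding assumed

# Naive event-based mapping: one stored synapse per (pre, post) pair.
explicit_synapses = (H * W * C_out) * (K * K * C_in)

# Weight sharing: every spatial position reuses the same kernel weights.
shared_weights = K * K * C_in * C_out

ratio = explicit_synapses // shared_weights
print(ratio)  # 4096, i.e. H * W: one kernel reused at every position
```

The ratio collapses to H * W, the number of spatial positions, which for typical feature maps is in the thousands, consistent with the abstract's claimed order of magnitude.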
