SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks

10/24/2022
by Abhishek Moitra, et al.

Spiking Neural Networks (SNNs) are an active research domain for energy-efficient machine intelligence. Compared to conventional ANNs, SNNs process temporal spike data with bio-plausible neuronal activation functions such as Leaky-Integrate-and-Fire / Integrate-and-Fire (LIF/IF). However, SNNs require a large number of dot-product operations, incurring high memory and computation overhead on standard von-Neumann computing platforms. In-Memory Computing (IMC) architectures have been proposed to alleviate the "memory-wall bottleneck" of von-Neumann architectures. Although recent works have proposed IMC-based SNN hardware accelerators, two issues have been overlooked: 1) the adverse effect of crossbar non-idealities on SNN performance, compounded by repeated analog dot-product operations over multiple time-steps, and 2) the hardware overheads of essential SNN-specific components such as the LIF/IF neuronal and data-communication modules. To this end, we propose SpikeSim, a tool that performs realistic performance, energy, latency, and area evaluation of IMC-mapped SNNs. SpikeSim consists of a practical monolithic IMC architecture, called SpikeFlow, for mapping SNNs. Additionally, a non-ideality computation engine (NICE) and an energy-latency-area (ELA) engine perform hardware-realistic evaluation of SpikeFlow-mapped SNNs. Based on a 65nm CMOS implementation and experiments on the CIFAR10, CIFAR100, and TinyImagenet datasets, we find that the LIF/IF neuronal module contributes significantly to the total hardware area (>11%). To mitigate this, we propose SNN topological modifications that yield a 1.24x reduction in the neuronal module's area and a 10x reduction in the overall energy-delay product. Furthermore, we perform a holistic comparison between IMC-implemented ANNs and SNNs, and conclude that a lower number of time-steps is the key to achieving higher throughput and energy efficiency for SNNs compared to 4-bit ANNs.
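To make the two overlooked effects concrete, the sketch below simulates an LIF layer over multiple time-steps, where each step's dot-product is computed by an analog crossbar whose weights are perturbed by a simple Gaussian conductance-variation model. This is an illustrative assumption on our part, not SpikeSim's NICE model; all function names, the leak factor, and the noise scale `sigma` are hypothetical.

```python
import numpy as np

def crossbar_mvm(weights, spikes_in, sigma=0.1, rng=None):
    """Analog crossbar matrix-vector multiply with a toy Gaussian
    conductance-variation model (a stand-in for crossbar non-ideality;
    not SpikeSim's actual NICE model)."""
    rng = rng or np.random.default_rng(0)
    noisy_w = weights * (1.0 + sigma * rng.standard_normal(weights.shape))
    return noisy_w @ spikes_in

def lif_layer(weights, spike_train, v_thresh=1.0, leak=0.9, sigma=0.1):
    """Run an LIF layer over T time-steps on a binary spike train.

    Note how the noisy crossbar MVM is repeated once per time-step, so
    non-ideality errors accumulate in the membrane potential `v`.
    """
    rng = np.random.default_rng(0)
    v = np.zeros(weights.shape[0])        # membrane potentials
    out = []
    for spikes_in in spike_train:         # one time-step at a time
        v = leak * v + crossbar_mvm(weights, spikes_in, sigma, rng)
        fired = v >= v_thresh             # LIF/IF threshold check
        out.append(fired.astype(int))
        v[fired] = 0.0                    # hard reset after firing
    return out
```

Setting `sigma=0` recovers an ideal digital baseline, so the same loop can be used to compare ideal versus non-ideal accuracy as the number of time-steps grows.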

