Biologically Plausible Learning on Neuromorphic Hardware Architectures

12/29/2022
by Christopher Wolters, et al.

With an ever-growing number of parameters defining increasingly complex networks, Deep Learning has led to several breakthroughs surpassing human performance. As a result, moving these millions of model parameters between memory and compute units creates a growing imbalance known as the memory wall. Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories. On the software side, the sequential Backpropagation algorithm prevents efficient parallelization and thus fast convergence. A novel method, Direct Feedback Alignment, resolves the inherent layer dependencies by passing the error directly from the output to each layer. At the intersection of hardware/software co-design, there is a demand for algorithms that are tolerant of hardware nonidealities. This work therefore explores the trade-offs of implementing bio-plausible learning in situ on neuromorphic hardware, emphasizing energy, area, and latency constraints. Using the benchmarking framework DNN+NeuroSim, we investigate the impact of hardware nonidealities and quantization on algorithm performance, as well as how network topologies and algorithm-level design choices affect the latency, energy, and area consumption of a chip. To the best of our knowledge, this work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and vice versa. The best accuracies achieved remain Backpropagation-based, notably when facing hardware imperfections. Direct Feedback Alignment, on the other hand, allows for significant speedup due to parallelization, reducing training time by a factor approaching N for N-layer networks.
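To illustrate why Direct Feedback Alignment removes the layer dependencies of Backpropagation, here is a minimal NumPy sketch of DFA on a toy multilayer perceptron. All names, layer sizes, and hyperparameters are illustrative assumptions, not the paper's setup: each hidden layer receives the output error through its own fixed random feedback matrix, so every layer's update can be computed independently (and hence in parallel) once the output error is known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a 3-layer MLP trained with Direct Feedback
# Alignment (DFA). Layer sizes and learning rate are arbitrary choices.
sizes = [8, 16, 16, 4]  # input, two hidden layers, output
W = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
# Fixed random feedback matrices: one per hidden layer, projecting the
# output error (dim 4) back to that layer's dimension. These replace the
# transposed forward weights that Backpropagation would use.
B = [rng.normal(0, 0.1, (m, sizes[-1])) for m in sizes[1:-1]]

def forward(x):
    acts = [x]
    for i, Wi in enumerate(W):
        z = Wi @ acts[-1]
        acts.append(np.tanh(z) if i < len(W) - 1 else z)  # linear output
    return acts

def dfa_step(x, y, lr=0.05):
    acts = forward(x)
    e = acts[-1] - y  # output error
    # Each layer update depends only on e and local quantities:
    # no sequential backward pass through the layers is needed.
    for i in range(len(W)):
        if i == len(W) - 1:
            delta = e  # output layer uses the true error
        else:
            # Project the output error via the fixed random matrix,
            # gated by the local tanh derivative (1 - tanh^2).
            delta = (B[i] @ e) * (1 - acts[i + 1] ** 2)
        W[i] -= lr * np.outer(delta, acts[i])
    return float(0.5 * e @ e)

# Toy regression: the loss should drop over a few hundred steps even
# though the hidden layers never see the true gradient.
x = rng.normal(size=8)
y = rng.normal(size=4)
losses = [dfa_step(x, y) for _ in range(200)]
```

Because `delta` for every layer is a function of the single output error `e`, all layer updates can be dispatched concurrently, which is the source of the near-N speedup for N-layer networks discussed above.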


research · 06/13/2021
The Backpropagation Algorithm Implemented on Spiking Neuromorphic Hardware
The capabilities of natural neural systems have inspired new generations...

research · 12/14/2022
Directional Direct Feedback Alignment: Estimating Backpropagation Paths for Efficient Learning on Neural Processors
The error Backpropagation algorithm (BP) is a key method for training de...

research · 10/27/2021
BioGrad: Biologically Plausible Gradient-Based Learning for Spiking Neural Networks
Spiking neural networks (SNN) are delivering energy-efficient, massively...

research · 12/11/2020
Hardware Beyond Backpropagation: a Photonic Co-Processor for Direct Feedback Alignment
The scaling hypothesis motivates the expansion of models past trillions ...

research · 04/01/2022
Physical Deep Learning with Biologically Plausible Training Method
The ever-growing demand for further advances in artificial intelligence ...

research · 09/21/2022
Benchmarking energy consumption and latency for neuromorphic computing in condensed matter and particle physics
The massive use of artificial neural networks (ANNs), increasingly popul...

research · 05/05/2020
Towards On-Chip Bayesian Neuromorphic Learning
If edge devices are to be deployed to critical applications where their ...
