Reliability-Performance Trade-offs in Neuromorphic Computing

by Twisha Titirsha et al.

Neuromorphic architectures built with Non-Volatile Memory (NVM) can significantly improve the energy efficiency of machine learning tasks designed with Spiking Neural Networks (SNNs). A major source of voltage drop in a crossbar of these architectures is the parasitic components on the crossbar's bitlines and wordlines, which are deliberately made longer to achieve a lower cost-per-bit. We observe that these parasitic voltage drops create a significant asymmetry in the programming speed and reliability of NVM cells in a crossbar. Specifically, NVM cells on shorter current paths are faster to program but have lower endurance than those on longer current paths. This asymmetry in neuromorphic architectures creates reliability-performance trade-offs, which can be exploited efficiently using SNN mapping techniques. In this work, we demonstrate such trade-offs using a previously proposed SNN mapping technique with 10 workloads from contemporary machine learning tasks on state-of-the-art neuromorphic hardware.
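The asymmetry the abstract describes can be seen with a first-order IR-drop model (a toy sketch for illustration, not the paper's actual circuit model; the per-segment resistance, programming current, and applied voltage below are assumed values): a cell at row i, column j of a crossbar sees the driver voltage minus the drop across the wordline and bitline segments along its current path, so cells on shorter paths receive a higher effective voltage.

```python
# Toy first-order model (illustrative assumptions, not the paper's method):
# the voltage reaching cell (i, j) in an N x N crossbar is the applied
# voltage minus the IR drop across i wordline and j bitline wire segments.

R_WIRE = 1.0   # assumed parasitic resistance per wire segment (ohms)
I_PROG = 1e-3  # assumed programming current (A)
V_APP = 1.0    # assumed voltage at the wordline driver (V)
N = 4          # crossbar dimension, illustrative only

def cell_voltage(i, j):
    """Effective voltage at cell (i, j): shorter current paths drop less."""
    return V_APP - I_PROG * R_WIRE * (i + j)

# Shortest current path (cell nearest the driver) vs. longest path:
v_short = cell_voltage(0, 0)          # 1.000 V
v_long = cell_voltage(N - 1, N - 1)   # 0.994 V with these assumed values

# The near cell sees more voltage, so it programs faster -- but that same
# higher voltage stresses it more, which is the endurance side of the
# reliability-performance trade-off exploited by SNN mapping.
assert v_short > v_long
```

An SNN mapping technique can use exactly this kind of per-cell estimate, placing frequently updated synaptic weights on cells whose position best balances programming latency against endurance.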




