A Blueprint for Precise and Fault-Tolerant Analog Neural Networks

09/19/2023
by Cansu Demirkiran et al.

Analog computing has reemerged as a promising avenue for accelerating deep neural networks (DNNs) due to its potential to overcome the energy-efficiency and scalability challenges posed by traditional digital architectures. However, achieving high precision and DNN accuracy with these technologies is challenging, because high-precision data converters are costly and impractical. In this paper, we address this challenge by using the residue number system (RNS). RNS composes high-precision operations from multiple low-precision operations, thereby eliminating the information loss caused by limited-precision data converters. Our study demonstrates that analog accelerators using this RNS-based approach can achieve ≥99% of FP32 accuracy for state-of-the-art DNN inference with data converters of only 6-bit precision, whereas a conventional analog core requires more than 8-bit precision to reach the same accuracy on the same DNNs. The reduced precision requirements mean that RNS can cut the energy consumption of analog accelerators by several orders of magnitude while maintaining the same throughput and precision. We extend this approach to DNN training, showing that DNNs can be trained efficiently using 7-bit integer arithmetic while achieving accuracy comparable to FP32. Lastly, we present a fault-tolerant dataflow that uses redundant RNS (RRNS) error-correcting codes to protect the computation against the noise and errors inherent in an analog accelerator.
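To make the RNS idea concrete, here is a minimal Python sketch (our illustration, not the paper's implementation) of a dot product computed channel-by-channel over low-precision residues, plus one redundant modulus for RRNS error detection. The moduli (59, 61, 63) with redundant modulus 64 are an assumed example chosen so that every residue fits in 6 bits; the paper's actual moduli and code parameters may differ.

```python
from math import prod

# Illustrative pairwise-coprime moduli whose residues all fit in 6 bits,
# mirroring the 6-bit data converters discussed above (assumed values,
# not necessarily the paper's choice).
MODULI = (59, 61, 63)        # information moduli
R_MODULI = MODULI + (64,)    # plus one redundant modulus (RRNS)
M = prod(MODULI)             # legitimate dynamic range: 226,737 (~17.8 bits)

def to_rns(x, moduli=R_MODULI):
    """Decompose an integer into its low-precision residues."""
    return tuple(x % m for m in moduli)

def crt(residues, moduli):
    """Chinese Remainder Theorem: recombine residues into [0, prod(moduli))."""
    P = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Pi = P // m
        total += r * Pi * pow(Pi, -1, m)  # pow(Pi, -1, m): inverse of Pi mod m
    return total % P

def rns_dot(xs, ws, moduli=R_MODULI):
    """Dot product computed independently in each residue channel.
    Every partial product and accumulation is reduced mod m, so each
    channel only ever sees values below its (6-bit) modulus."""
    acc = [0] * len(moduli)
    for x, w in zip(xs, ws):
        for i, m in enumerate(moduli):
            acc[i] = (acc[i] + (x % m) * (w % m)) % m
    return tuple(acc)

xs, ws = [17, 42, 5, 99], [3, 7, 11, 2]
codeword = rns_dot(xs, ws)
assert crt(codeword, R_MODULI) == sum(x * w for x, w in zip(xs, ws))  # 598

# RRNS error detection: because the redundant modulus (64) is at least as
# large as every information modulus, corrupting any single residue channel
# pushes the reconstruction out of the legitimate range [0, M).
corrupted = list(codeword)
corrupted[0] = (corrupted[0] + 13) % R_MODULI[0]  # e.g., analog noise
assert crt(tuple(corrupted), R_MODULI) >= M       # inconsistency flagged
```

In this sketch, three 6-bit residue channels together compose roughly 18 bits of dynamic range, which is the mechanism by which low-precision data converters can support high-precision accumulation; locating and correcting the faulty channel, rather than merely detecting it, requires additional redundant moduli.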


