An Algorithm-Hardware Co-design Framework to Overcome Imperfections of Mixed-signal DNN Accelerators

08/29/2022
by Payman Behnam, et al.

In recent years, processing-in-memory (PIM) based mixed-signal designs have been proposed as energy- and area-efficient solutions with ultra-high throughput to accelerate DNN computations. However, PIM designs are sensitive to imperfections such as noise and weight/conductance variations that substantially degrade DNN accuracy. To address this issue, we propose a novel algorithm-hardware co-design framework, hereafter referred to as HybridAC, that simultaneously avoids accuracy degradation due to imperfections, improves area utilization, and reduces data movement and energy dissipation. We derive a data-movement-aware weight selection method that preserves the original DNN performance without retraining. It computes a fraction of the results, those involving a small number of variation-sensitive weights, on a robust digital accelerator, while the main computation is performed in analog PIM units. This is the first work that not only provides a variation-robust architecture but also considerably improves the area, power, and energy of existing designs. HybridAC is adapted to leverage the preceding weight selection method by reducing ADC precision and peripheral circuitry and by employing hybrid quantization to further optimize the design. Our comprehensive experiments show that, even in the presence of variation as high as 50-90%, HybridAC preserves DNN accuracy across different DNNs and datasets. In addition to providing more robust computation, compared to ISAAC (SRE), HybridAC improves the execution time, energy, area, power, area-efficiency, and power-efficiency by 26% to 57%.
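The hybrid split described above, protecting a small set of variation-sensitive weights on a digital path while the bulk of the multiply-accumulate work stays in noisy analog PIM arrays, can be illustrated with a toy NumPy sketch. This is not the paper's actual selection criterion or hardware model: the digital_fraction parameter, the magnitude-based sensitivity proxy, and the log-normal conductance-variation model below are all illustrative assumptions.

```python
import numpy as np

def hybrid_matvec(W, x, digital_fraction=0.1, variation_std=0.5, rng=None):
    """Toy model of a hybrid analog/digital matrix-vector product.

    A small fraction of "variation-sensitive" weights (proxied here by
    magnitude, which is only an illustrative criterion) is computed exactly
    on a digital path; the remaining weights are computed on a simulated
    analog PIM path whose conductances suffer multiplicative variation.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Pick the weights to protect: largest-magnitude entries stand in for
    # the paper's data-movement-aware sensitivity criterion.
    k = int(np.ceil(digital_fraction * W.size))
    threshold = np.partition(np.abs(W).ravel(), -k)[-k]
    digital_mask = np.abs(W) >= threshold

    W_digital = np.where(digital_mask, W, 0.0)   # exact (digital) portion
    W_analog = np.where(digital_mask, 0.0, W)    # noisy (analog PIM) portion

    # Model conductance variation as multiplicative log-normal noise.
    variation = rng.lognormal(mean=0.0, sigma=variation_std, size=W.shape)
    W_analog_noisy = W_analog * variation

    return W_digital @ x + W_analog_noisy @ x


# Usage: compare the hybrid result against an unprotected analog-only product.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
x = rng.standard_normal(256)

exact = W @ x
analog_only = (W * rng.lognormal(0.0, 0.5, W.shape)) @ x
hybrid = hybrid_matvec(W, x, digital_fraction=0.1, variation_std=0.5, rng=rng)

print("analog-only error:", np.linalg.norm(analog_only - exact) / np.linalg.norm(exact))
print("hybrid error:     ", np.linalg.norm(hybrid - exact) / np.linalg.norm(exact))
```

In this toy model, protecting even a small fraction of the largest-magnitude weights noticeably reduces the relative error of the noisy product, which mirrors the intuition behind offloading only the most sensitive weights to the digital unit.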

