Analyzing and Mitigating the Impact of Permanent Faults on a Systolic Array Based Neural Network Accelerator

02/11/2018
by Zhang et al.

Due to their growing popularity and computational cost, deep neural networks (DNNs) are being targeted for hardware acceleration. A popular architecture for DNN acceleration, adopted by the Google Tensor Processing Unit (TPU), utilizes a systolic array based matrix multiplication unit at its core. This paper deals with the design of fault-tolerant, systolic array based DNN accelerators for high defect rate technologies. To this end, we empirically show that the classification accuracy of a baseline TPU drops significantly even at extremely low fault rates (as low as 0.006%). We then propose two novel strategies, fault-aware pruning (FAP) and fault-aware pruning plus retraining (FAP+T), that enable the TPU to operate at fault rates of up to 50% with a negligible drop in classification accuracy (as low as 0.1%) and no run-time performance overhead. FAP+T does introduce a one-time retraining penalty per TPU chip before it is deployed, but we propose optimizations that reduce this penalty to under 12 minutes; it is then amortized over the entire lifetime of the TPU's operation.
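The abstract leaves the mechanics of fault-aware pruning to the full paper, but the core idea, zeroing out the weights that would be executed by faulty MAC units, can be sketched in a few lines. The NumPy sketch below is a minimal illustration under stated assumptions: the array dimensions, the modulo tiling of weights onto processing elements (PEs), and the random fault map are placeholders, not the paper's actual mapping or fault model.

```python
import numpy as np

# Dimensions of a TPU-like weight-stationary systolic array (the TPU v1
# matrix unit is 256x256 MACs; used here as an illustrative assumption).
ARRAY_ROWS, ARRAY_COLS = 256, 256

def fault_aware_prune(weights, fault_map):
    """Zero every weight that the mapping would place on a faulty PE.

    Assumes weight (r, c) of a layer is held by PE
    (r % ARRAY_ROWS, c % ARRAY_COLS); this tiling is an illustrative
    assumption, not necessarily the paper's exact dataflow.
    """
    rows, cols = weights.shape
    reps = (-(-rows // ARRAY_ROWS), -(-cols // ARRAY_COLS))  # ceil division
    faulty = np.tile(fault_map, reps)[:rows, :cols]
    return weights * ~faulty  # multiply by 0 where the hosting PE is faulty

# Example: mark 0.05% of PEs faulty, then prune a 512x512 weight matrix.
rng = np.random.default_rng(0)
fault_map = rng.random((ARRAY_ROWS, ARRAY_COLS)) < 0.0005
W = rng.standard_normal((512, 512)).astype(np.float32)
W_pruned = fault_aware_prune(W, fault_map)
print("weights zeroed:", int((W_pruned == 0).sum()))
```

FAP+T would then fine-tune the surviving weights with this mask held fixed, so the network learns to compensate for the pruned positions; per the abstract, this one-time retraining cost can be brought under 12 minutes per chip.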
