eFAT: Improving the Effectiveness of Fault-Aware Training for Mitigating Permanent Faults in DNN Hardware Accelerators

04/20/2023
by Muhammad Abdullah Hanif, et al.

Fault-Aware Training (FAT) has emerged as a highly effective technique for addressing permanent faults in DNN accelerators, as it offers fault mitigation without significant performance or accuracy loss, particularly at low and moderate fault rates. However, it incurs very high retraining overheads, especially for large DNNs designed for complex AI applications. Moreover, as each fabricated chip can have a distinct fault pattern, FAT must be performed for each faulty chip individually, considering its unique fault map, which further aggravates the problem. To reduce the overheads of FAT while retaining its benefits, we propose (1) resilience-driven selection of the retraining amount, and (2) resilience-driven grouping and fusion of multiple fault maps (belonging to different chips) to perform consolidated retraining for a group of faulty chips. To realize these concepts, we present a novel framework, eFAT, that computes the resilience of a given DNN to faults at different fault rates and with different levels of retraining, and uses that knowledge to build a resilience map given a user-defined accuracy constraint. It then uses the resilience map to compute the amount of retraining required for each chip, considering its unique fault map. Afterward, it performs resilience- and reward-driven grouping and fusion of fault maps to further reduce the number of retraining iterations required for tuning the given DNN for the given set of faulty chips. We demonstrate the effectiveness of our framework for a systolic array-based DNN accelerator experiencing permanent faults in the computational array. Our extensive results for numerous chips show that the proposed technique significantly reduces the retraining cost when tuning a DNN for multiple faulty chips.
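
The abstract describes a two-stage flow: first build a resilience map for the DNN (the retraining effort needed at each fault rate to meet a user-defined accuracy constraint), then use it to pick a per-chip retraining budget and to group and fuse fault maps so one consolidated retraining run covers several chips. The sketch below illustrates how such a flow might look; it is not the authors' implementation, and all names, data structures, and numbers (resilience_data, binary fault maps over the PE array, epoch counts) are hypothetical. The actual framework also uses a reward-driven grouping step, which is omitted here; only the fusion idea (element-wise OR of fault maps) is shown.

# Minimal sketch (assumption, not the authors' code) of the flow described above:
# 1) build a resilience map: for each fault rate, the smallest retraining amount
#    that satisfies a user-defined accuracy constraint,
# 2) use that map to pick a per-chip retraining budget from the chip's fault map,
# 3) fuse the fault maps of grouped chips so one retraining run covers the group.
from typing import Dict, List

# Hypothetical profiling data: fault rate -> {retraining epochs -> accuracy}.
# In practice this would come from fault-injection experiments on the target DNN.
resilience_data: Dict[float, Dict[int, float]] = {
    0.01: {0: 0.89, 1: 0.91, 2: 0.92},
    0.05: {0: 0.80, 1: 0.88, 2: 0.91, 4: 0.92},
    0.10: {0: 0.65, 2: 0.85, 4: 0.90, 8: 0.91},
}

def build_resilience_map(data: Dict[float, Dict[int, float]],
                         acc_constraint: float) -> Dict[float, int]:
    """For each fault rate, keep the smallest retraining amount meeting the constraint."""
    resilience_map: Dict[float, int] = {}
    for rate, curve in data.items():
        feasible = [ep for ep, acc in sorted(curve.items()) if acc >= acc_constraint]
        # Fall back to the maximum profiled effort if no entry meets the constraint.
        resilience_map[rate] = feasible[0] if feasible else max(curve)
    return resilience_map

def retraining_for_chip(fault_map: List[List[int]],
                        resilience_map: Dict[float, int]) -> int:
    """Estimate the retraining amount for one chip from its fault map
    (here, a binary matrix marking faulty PEs in the computational array)."""
    total = sum(len(row) for row in fault_map)
    faulty = sum(sum(row) for row in fault_map)
    rate = faulty / total
    # Use the entry for the smallest tabulated fault rate that covers the chip's rate.
    for tabulated in sorted(resilience_map):
        if rate <= tabulated:
            return resilience_map[tabulated]
    return resilience_map[max(resilience_map)]

def fuse_fault_maps(maps: List[List[List[int]]]) -> List[List[int]]:
    """Fuse the fault maps of a group of chips (element-wise OR), so a single
    consolidated retraining run accounts for the faults of every chip in the group."""
    fused = [[0] * len(maps[0][0]) for _ in range(len(maps[0]))]
    for m in maps:
        for i, row in enumerate(m):
            for j, bit in enumerate(row):
                fused[i][j] |= bit
    return fused

if __name__ == "__main__":
    rmap = build_resilience_map(resilience_data, acc_constraint=0.90)
    chip_a = [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
    chip_b = [[0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
    print("Resilience map:", rmap)
    print("Retraining epochs for chip A:", retraining_for_chip(chip_a, rmap))
    fused = fuse_fault_maps([chip_a, chip_b])
    print("Retraining epochs for the fused group:", retraining_for_chip(fused, rmap))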

