ISimDL: Importance Sampling-Driven Acceleration of Fault Injection Simulations for Evaluating the Robustness of Deep Learning

03/14/2023
by Alessio Colucci, et al.

Deep Learning (DL) systems have proliferated in many applications, requiring specialized hardware accelerators and chips. In the nano-era, devices have become increasingly susceptible to permanent and transient faults. We therefore need an efficient methodology for analyzing the resilience of advanced DL systems against such faults, and for understanding how faults in neural accelerator chips manifest as errors at the DL application level, where they can lead to undetectable and unrecoverable failures. Using fault injection, we can investigate the resilience of a DL system by modifying neuron weights and outputs at the software level, as if the hardware had been affected by a transient fault. Existing fault models reduce the search space, enabling faster analysis, but they require a-priori knowledge of the model and do not allow further analysis of the filtered-out search space. Therefore, we propose ISimDL, a novel methodology that employs neuron sensitivity to generate importance-sampling-based fault scenarios. Without any a-priori knowledge of the model under test, ISimDL reduces the search space as effectively as existing works, while still allowing long simulations to cover all possible faults, thus relaxing the requirements imposed by existing models. Our experiments show that importance sampling provides up to 15x higher precision in selecting critical faults than random uniform sampling, and reaches this precision within fewer than 100 faults. Additionally, we showcase another practical use-case of importance sampling for reliable DNN design: Fault-Aware Training (FAT). By using ISimDL to select the faults that lead to errors, we can inject these faults during the DNN training process, hardening the DNN against them. Using importance sampling in FAT reduces the overhead of finding faults that cause a predetermined drop in accuracy by more than 12x.
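To make the sampling step concrete, below is a minimal PyTorch sketch of how neuron sensitivity could drive importance sampling of fault sites. The gradient-magnitude heuristic and the helper names (neuron_sensitivity, sample_fault_sites) are illustrative assumptions, not the paper's actual implementation.

import torch

def neuron_sensitivity(model, loss_fn, inputs, targets):
    """Approximate per-neuron sensitivity as the mean absolute gradient
    of the loss w.r.t. each layer's activations (one possible heuristic;
    the paper's exact sensitivity metric may differ)."""
    activations, handles = {}, []

    def hook(name):
        def fn(_module, _inputs, out):
            out.retain_grad()          # keep gradients on non-leaf activations
            activations[name] = out
        return fn

    for name, module in model.named_modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            handles.append(module.register_forward_hook(hook(name)))

    loss = loss_fn(model(inputs), targets)
    loss.backward()
    for h in handles:
        h.remove()

    # Average gradient magnitudes over the batch, one score per neuron.
    return {name: act.grad.abs().mean(dim=0).flatten()
            for name, act in activations.items()}

def sample_fault_sites(sensitivity, n_faults):
    """Importance-sample (layer, neuron) fault sites with probability
    proportional to sensitivity, instead of uniformly at random."""
    names = list(sensitivity)
    scores = torch.cat([sensitivity[n] for n in names])
    probs = scores / scores.sum()
    idx = torch.multinomial(probs, n_faults, replacement=True)

    # Map flat indices back to (layer, neuron) pairs.
    bounds, offset = [], 0
    for n in names:
        size = sensitivity[n].numel()
        bounds.append((n, offset, offset + size))
        offset += size
    sites = []
    for i in idx.tolist():
        for n, lo, hi in bounds:
            if lo <= i < hi:
                sites.append((n, i - lo))
                break
    return sites

Sampling (layer, neuron) sites in proportion to sensitivity concentrates the fault-injection budget on the faults most likely to be critical, while every site keeps a nonzero probability of being simulated, so long campaigns can still cover the whole fault space.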
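Similarly, here is a hedged sketch of one Fault-Aware Training step: a fault site chosen by the sampler above is transiently injected into the forward pass, so the network learns in the presence of the fault. The stuck-at-zero perturbation and the name fat_train_step are stand-ins; the paper's transient fault model (e.g. exact bit-flip semantics) may differ.

import random
import torch

def fat_train_step(model, loss_fn, optimizer, inputs, targets, fault_sites):
    """One Fault-Aware Training step with a transient fault injected at an
    importance-sampled (layer, neuron) site."""
    layer_name, neuron_idx = random.choice(fault_sites)

    def inject(_module, _inputs, output):
        # Zero out one neuron's output across the batch as a stand-in fault.
        mask = torch.ones_like(output).flatten(start_dim=1)
        mask[:, neuron_idx] = 0.0
        return output * mask.view_as(output)

    target_module = dict(model.named_modules())[layer_name]
    handle = target_module.register_forward_hook(inject)
    try:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
    finally:
        handle.remove()    # restore fault-free behavior after the step
    return loss.item()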

research · 02/13/2021
MOARD: Modeling Application Resilience to Transient Faults on Data Objects
Understanding application resilience (or error tolerance) in the presenc...

research · 12/05/2022
Thales: Formulating and Estimating Architectural Vulnerability Factors for DNN Accelerators
As Deep Neural Networks (DNNs) are increasingly deployed in safety criti...

research · 05/21/2023
Reduce: A Framework for Reducing the Overheads of Fault-Aware Retraining
Fault-aware retraining has emerged as a prominent technique for mitigati...

research · 04/20/2023
eFAT: Improving the Effectiveness of Fault-Aware Training for Mitigating Permanent Faults in DNN Hardware Accelerators
Fault-Aware Training (FAT) has emerged as a highly effective technique f...

research · 08/10/2023
Checkpoint Placement for Systematic Fault-Injection Campaigns
Shrinking hardware structures and decreasing operating voltages lead to ...

research · 10/12/2021
MoRS: An Approximate Fault Modelling Framework for Reduced-Voltage SRAMs
On-chip memory (usually based on Static RAMs-SRAMs) are crucial componen...

research · 06/19/2023
Understanding the Effects of Permanent Faults in GPU's Parallelism Management and Control Units
Graphics Processing Units (GPUs) are over-stressed to accelerate High-Pe...
