FAT: Training Neural Networks for Reliable Inference Under Hardware Faults

11/11/2020
by Ussama Zahid, et al.

Deep neural networks (DNNs) are state-of-the-art algorithms for multiple applications, spanning from image classification to speech recognition. While providing excellent accuracy, they often have enormous compute and memory requirements. As a result, quantized neural networks (QNNs) are increasingly being adopted and deployed, especially on embedded devices, thanks to their high accuracy combined with significantly lower compute and memory requirements compared to their floating-point equivalents. QNN deployment is also being evaluated for safety-critical applications, such as automotive, avionics, medical, or industrial systems. These systems require functional safety, guaranteeing failure-free behaviour even in the presence of hardware faults. In general, fault tolerance can be achieved by adding redundancy to the system, which further exacerbates the overall computational demands and makes it difficult to meet the power and performance requirements. In order to decrease the hardware cost of achieving functional safety, it is vital to explore domain-specific solutions which can exploit the inherent features of DNNs. In this work we present a novel methodology called fault-aware training (FAT), which incorporates error modeling during neural network (NN) training to make QNNs resilient to specific fault models on the device. Our experiments show that by injecting faults into the convolutional layers during training, highly accurate convolutional neural networks (CNNs) can be trained which exhibit much better fault tolerance than the originals. Furthermore, we show that redundant systems built from QNNs trained with FAT achieve higher worst-case accuracy at lower hardware cost. This has been validated for numerous classification tasks, including CIFAR10, GTSRB, SVHN and ImageNet.
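The core idea of FAT, as the abstract describes it, is to inject device faults into convolutional-layer outputs during training so the QNN learns to tolerate them. The sketch below illustrates one plausible fault model (random single-bit flips in quantized activations); the fault model, probabilities, and all names are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of fault injection for fault-aware training (FAT).
# Assumption: activations are unsigned 8-bit quantized values, and a
# hardware fault flips one random bit with probability p per activation.
import numpy as np

def inject_bitflip_faults(q_acts, bits=8, p=0.01, rng=None):
    """Flip one random bit in each quantized activation with probability p."""
    rng = rng or np.random.default_rng(0)
    q_acts = q_acts.astype(np.int64)
    mask = rng.random(q_acts.shape) < p             # which activations are faulty
    bit = rng.integers(0, bits, size=q_acts.shape)  # which bit position flips
    flipped = q_acts ^ (1 << bit)                   # flip the chosen bit
    return np.where(mask, flipped, q_acts)

# During training, the faulty activations would replace the clean ones in
# the forward pass of each convolutional layer, while backpropagation
# proceeds as usual, so the network learns weights robust to the faults.
acts = np.arange(16).reshape(4, 4) % 256
faulty = inject_bitflip_faults(acts, p=0.5)
```

With `p=0` the function is the identity, so the same training loop can evaluate clean accuracy; sweeping `p` trades off clean accuracy against fault tolerance.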


