Bit Error Tolerance Metrics for Binarized Neural Networks

02/02/2021
by Sebastian Buschjäger, et al.

To reduce the resource demand of neural network (NN) inference systems, it has been proposed to use approximate memory, in which the supply voltage and the timing parameters are tuned to trade accuracy for energy consumption and performance. Tuning these parameters aggressively leads to bit errors, which NNs can tolerate if bit flips are injected during training. However, bit flip training, the state of the art for achieving bit error tolerance, does not scale well: it incurs massive overheads and cannot be applied at high bit error rates (BERs). Alternative methods to achieve bit error tolerance in NNs are needed, but the underlying principles behind the bit error tolerance of NNs have not yet been reported. Without this understanding, further progress in research on NN bit error tolerance will be restrained. In this study, our objective is to investigate the internal changes that bit flip training causes in NNs, with a focus on binarized NNs (BNNs). To this end, we quantify the properties of bit error tolerant BNNs with two metrics. First, we propose a neuron-level bit error tolerance metric, which calculates the margin between the pre-activation values and the batch normalization thresholds. Second, to capture the effect of bit error tolerance on the interplay of neurons, we propose an inter-neuron bit error tolerance metric, which measures the importance of each neuron and computes the variance over all importance values. Our experimental results show that both metrics are strongly related to bit error tolerance.
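The abstract only sketches the two metrics, so the following is a minimal NumPy sketch of how they could be computed for a single BNN layer. The threshold formula tau = mu - beta * sigma / gamma (the point where a batch-norm affine transform followed by sign() changes sign), the function names, and the choice of importance values are illustrative assumptions; the paper's exact definitions, in particular its importance measure, may differ.

```python
import numpy as np

def bn_thresholds(mu, sigma, gamma, beta):
    """Hypothetical threshold derivation: a BN layer followed by sign()
    computes sign(gamma * (x - mu) / sigma + beta), which (for gamma > 0)
    equals sign(x - tau) with tau = mu - beta * sigma / gamma."""
    return mu - beta * sigma / gamma

def neuron_margin_metric(pre_activations, thresholds):
    """Neuron-level metric (sketch): mean absolute margin between the
    pre-activation values and the batch normalization thresholds.
    A larger margin means more bit errors are needed to push a
    pre-activation across its threshold and flip the neuron's output.

    pre_activations: (num_samples, num_neurons); thresholds: (num_neurons,)
    """
    return np.abs(pre_activations - thresholds).mean(axis=0)

def inter_neuron_metric(importance):
    """Inter-neuron metric (sketch): variance over all per-neuron
    importance values. A low variance suggests computation is spread
    evenly, so no single neuron is a bit-error bottleneck."""
    return np.var(importance)

# Usage on synthetic data; the per-neuron margins stand in here for the
# importance values, which is an assumption, not the paper's measure.
rng = np.random.default_rng(0)
pre = rng.normal(size=(1024, 128))                     # one layer's pre-activations
tau = bn_thresholds(mu=rng.normal(size=128), sigma=np.ones(128),
                    gamma=np.ones(128), beta=np.zeros(128))
margins = neuron_margin_metric(pre, tau)               # shape (128,)
spread = inter_neuron_metric(margins)                  # scalar variance
```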


Related research

02/03/2020
Towards Explainable Bit Error Tolerance of Resistive RAM-Based Binarized Neural Networks
Non-volatile memory, such as resistive RAM (RRAM), is an emerging energy...

12/20/2013
Support for Error Tolerance in the Real-Time Transport Protocol
Streaming applications often tolerate bit errors in their received data ...

02/28/2021
SparkXD: A Framework for Resilient and Energy-Efficient Spiking Neural Network Inference using Approximate DRAM
Spiking Neural Networks (SNNs) have the potential for achieving low ener...

05/21/2018
AxTrain: Hardware-Oriented Neural Network Training for Approximate Inference
The intrinsic error tolerance of neural network (NN) makes approximate c...

03/10/2022
SoftSNN: Low-Cost Fault Tolerance for Spiking Neural Network Accelerators under Soft Errors
Specialized hardware accelerators have been designed and employed to max...

07/26/2018
Computationally Efficient Measures of Internal Neuron Importance
The challenge of assigning importance to individual neurons in a network...
