When Single Event Upset Meets Deep Neural Networks: Observations, Explorations, and Remedies

09/10/2019
by Zheyu Yan, et al.

Deep Neural Networks (DNNs) have proved their potential in various perception tasks and have hence become an appealing option for interpretation and data processing in security-sensitive systems. However, security-sensitive systems demand not only high perception performance but also design robustness under various circumstances. Unlike prior works that study network robustness at the software level, we investigate, from the hardware perspective, the impact of Single Event Upset (SEU) induced parameter perturbation (SIPP) on neural networks. We systematically define the fault models of SEU and then define sensitivity to SIPP as the robustness measure for a network. We are then able to analytically explore the weaknesses of a network and summarize key findings on the impact of SIPP on different types of bits in a floating-point parameter, layer-wise robustness within the same network, and the impact of network depth. Based on these findings, we propose two remedy solutions to protect DNNs from SIPPs, which can mitigate accuracy degradation from 28% for ResNet with merely 0.24-bit SRAM area overhead per parameter.
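To make the SEU fault model concrete, the sketch below (plain Python; the helper `flip_bit` and the example weight are illustrative, not taken from the paper) flips a single bit in the IEEE-754 float32 encoding of a parameter and prints the perturbed value. It illustrates why the bit type matters: a flip in the exponent, especially its most significant bit, can change a weight by many orders of magnitude, while a mantissa flip barely moves it.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = mantissa LSB, 31 = sign) in the IEEE-754 float32
    encoding of `value`, emulating a single event upset in parameter memory."""
    (word,) = struct.unpack("<I", struct.pack("<f", value))           # float32 -> raw 32-bit word
    (flipped,) = struct.unpack("<f", struct.pack("<I", word ^ (1 << bit)))  # XOR one bit, back to float
    return flipped

w = 0.05  # an illustrative small DNN weight (hypothetical value)
for bit in (31, 30, 23, 22, 0):  # sign, exponent MSB, exponent LSB, mantissa MSB, mantissa LSB
    print(f"bit {bit:2d}: {w} -> {flip_bit(w, bit)}")
```

For a small weight such as 0.05, the exponent-bit flips move the value by tens of orders of magnitude, whereas the mantissa flips perturb only the least significant digits; this kind of per-bit disparity is what the paper's sensitivity analysis quantifies.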

Related research

03/30/2020
DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
Security of machine learning is increasingly becoming a major concern du...

06/03/2019
Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Deep neural networks (DNNs) have been shown to tolerate "brain damage": ...

11/23/2019
Training Modern Deep Neural Networks for Memory-Fault Robustness
Because deep neural networks (DNNs) rely on a large number of parameters...

05/03/2018
Exploration of Numerical Precision in Deep Neural Networks
Reduced numerical precision is a common technique to reduce computationa...

06/10/2020
Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption
We argue that the vulnerability of model parameters is of crucial value ...

06/18/2019
On the Robustness of the Backdoor-based Watermarking in Deep Neural Networks
Obtaining the state of the art performance of deep learning models impos...
