Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks

06/03/2019
by Sanghyun Hong, et al.

Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy. However, the limits of this natural resilience are not well understood in the presence of small adversarial changes to the DNN parameters' underlying memory representation, such as bit-flips that may be induced by hardware fault attacks. We study the effects of bitwise corruptions on 19 DNN models---six architectures on three image classification tasks---and we show that most models have at least one parameter that, after a specific bit-flip in their bitwise representation, causes an accuracy loss of over 90%. We employ simple heuristics to efficiently identify the parameters likely to be vulnerable. We estimate that 40-50% of the parameters in a model might lead to an accuracy drop greater than 10% when individually subjected to such single-bit perturbations. To demonstrate how an adversary could take advantage of this vulnerability, we study the impact of an exemplary hardware fault attack, Rowhammer, on DNNs. Specifically, we show that a Rowhammer-enabled attacker co-located in the same physical machine can inflict significant accuracy drops (up to 99%) even with single bit-flip corruptions and no knowledge of the model. Our results expose the limits of DNNs' resilience against parameter perturbations induced by real-world fault attacks. We conclude by discussing possible mitigations and future research directions towards fault attack-resilient DNNs.
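To make the failure mode concrete, the short sketch below (an illustration of the idea, not the authors' code) simulates the single-bit corruption studied in the paper: it flips one bit in the IEEE-754 float32 encoding of a chosen parameter, evaluates the model, and restores the weight. It assumes PyTorch; the names model, evaluate, param_name, and index are placeholders the reader would supply. Flipping bit 30, the most significant exponent bit, turns a small weight into a value on the order of 1e38, the blow-up mechanism behind the large accuracy drops reported above.

    import struct

    import torch


    def flip_bit(value, bit):
        # Reinterpret the float32 as a 32-bit integer, XOR the chosen bit
        # (0 = mantissa LSB, 23-30 = exponent, 31 = sign), reinterpret back.
        (as_int,) = struct.unpack("<I", struct.pack("<f", value))
        (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
        return flipped


    @torch.no_grad()
    def accuracy_after_bitflip(model, evaluate, param_name, index, bit):
        # Inject a single-bit fault into one weight, evaluate, then restore.
        # 'model' is any torch.nn.Module with float32 parameters; 'evaluate'
        # is assumed to return test-set accuracy. Both are placeholders here.
        flat = dict(model.named_parameters())[param_name].view(-1)
        original = flat[index].item()
        flat[index] = flip_bit(original, bit)  # corrupt the representation
        accuracy = evaluate(model)
        flat[index] = original                 # undo the fault
        return accuracy


    # Flipping bit 30 (the exponent MSB) of a small positive weight produces
    # a huge value: 0.5 becomes 2**127, roughly 1.7e38.
    print(flip_bit(0.5, 30))

In the paper's threat model, the same flip is induced physically, via Rowhammer-triggered bit errors in DRAM, rather than through software access to the weights.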


Related research

11/02/2021
HASHTAG: Hash Signatures for Online Detection of Fault-Injection Attacks on Deep Neural Networks
We propose HASHTAG, the first framework that enables high-accuracy detec...

09/11/2019
ScieNet: Deep Learning with Spike-assisted Contextual Information Extraction
Deep neural networks (DNNs) provide high image classification accuracy, ...

09/10/2019
When Single Event Upset Meets Deep Neural Networks: Observations, Explorations, and Remedies
Deep Neural Network has proved its potential in various perception tasks...

05/28/2019
Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks
Despite the great achievements of deep neural networks (DNNs), the vulne...

10/03/2021
Design and Evaluate Recomposited OR-AND-XOR-PUF
Physical Unclonable Function (PUF) is a hardware security primitive with...

12/25/2021
Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping
Recently, deep neural networks (DNNs) have been deployed in safety-criti...

09/12/2023
Unveiling Single-Bit-Flip Attacks on DNN Executables
Recent research has shown that bit-flip attacks (BFAs) can manipulate de...
