HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks

Deep Neural Networks (DNNs) are employed in an increasing number of applications, some of which are safety critical. Unfortunately, DNNs are known to be vulnerable to so-called adversarial attacks that manipulate inputs to cause incorrect results that can be beneficial to an attacker or damaging to the victim. Multiple defenses have been proposed to increase the robustness of DNNs. In general, these defenses have high overhead, and some require attack-specific re-training of the model or careful tuning to adapt to different attacks. This paper presents HASI, a hardware-accelerated defense that uses a process we call stochastic inference to detect adversarial inputs. We show that by carefully injecting noise into the model at inference time, we can differentiate adversarial inputs from benign ones. HASI uses the output distribution characteristics of noisy inference, compared to a non-noisy reference, to detect adversarial inputs. We show an adversarial detection rate of 86% that exceeds the detection rate of state-of-the-art approaches, with a much lower overhead. We demonstrate two software/hardware-accelerated co-designs, which reduce the performance impact of stochastic inference to 1.58X-2X relative to the unprotected baseline, compared to a 15X-20X overhead for a software-only GPU implementation.
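
The detection idea lends itself to a short sketch. The following PyTorch snippet is a minimal illustration of stochastic inference as described above, not the paper's implementation: the noise-injection point (the weights), the relative noise scale, the KL-divergence metric, and the detection threshold are all assumptions chosen for clarity.

```python
import copy

import torch
import torch.nn.functional as F


def stochastic_inference_score(model, x, noise_std=0.05, n_runs=16):
    """Score how much the model's output distribution shifts under noise.

    Runs one clean reference inference, then `n_runs` inferences with
    Gaussian noise injected into the weights, and returns the mean KL
    divergence between the noisy outputs and the clean reference.
    Adversarial inputs tend to sit near decision boundaries, so their
    outputs shift more under noise than those of benign inputs.
    (Sketch only: noise site, scale, and metric are assumptions.)
    """
    model.eval()
    with torch.no_grad():
        ref = F.softmax(model(x), dim=-1)                # non-noisy reference
        clean_state = copy.deepcopy(model.state_dict())  # keep clean weights

        scores = []
        for _ in range(n_runs):
            # Inject zero-mean Gaussian noise into each weight tensor,
            # scaled relative to that tensor's own standard deviation.
            for p in model.parameters():
                if p.dim() > 1:                          # skip biases/scalars
                    p.add_(torch.randn_like(p) * noise_std * p.std())
            log_noisy = F.log_softmax(model(x), dim=-1)
            scores.append(F.kl_div(log_noisy, ref, reduction="batchmean"))
            model.load_state_dict(clean_state)           # restore clean weights
    return torch.stack(scores).mean()


# Usage: flag an input as adversarial when its score exceeds a threshold
# calibrated on benign data (the 0.1 here is purely illustrative).
# is_adversarial = stochastic_inference_score(model, x) > 0.1
```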

