Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks

07/20/2021
by Saikat Majumdar, et al.

Deep neural network (DNN) classifiers are powerful tools that drive a broad spectrum of important applications, from image recognition to autonomous vehicles. Unfortunately, DNNs are known to be vulnerable to adversarial attacks that affect virtually all state-of-the-art models. These attacks make small, imperceptible modifications to inputs that are sufficient to induce the DNN to produce the wrong classification. In this paper, we propose a novel, lightweight adversarial correction and/or detection mechanism for image classifiers that relies on undervolting (running a chip at a voltage slightly below its safe margin). We propose controlled undervolting of the chip running the inference process in order to introduce a limited number of compute errors. We show that these errors disrupt the adversarial input in a way that can be used either to correct the classification or to detect the input as adversarial. We evaluate the proposed solution in an FPGA design and through software simulation. We evaluate 10 attacks and show average detection rates of 77%.
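To make the mechanism concrete, below is a minimal software-simulation sketch of the detection idea, not the authors' implementation: undervolting-induced faults are emulated by randomly perturbing intermediate activations during inference, and an input whose prediction is unstable under these faults is flagged as adversarial. The names (fault_hook, detect_adversarial) and the ERROR_RATE, N_RUNS, and voting-threshold values are illustrative assumptions.

```python
# Sketch: emulate undervolting-induced compute errors in software by
# perturbing a random subset of convolutional activations, then flag
# inputs whose prediction flips under these errors as adversarial.
import torch
import torch.nn as nn

ERROR_RATE = 1e-3   # fraction of activations hit by a fault (assumed)
N_RUNS = 8          # number of noisy inference passes (assumed)

def fault_hook(module, inputs, output):
    """Replace a random subset of activation values to emulate faults."""
    mask = torch.rand_like(output) < ERROR_RATE
    noise = torch.randn_like(output) * output.abs().mean()
    return torch.where(mask, noise, output)

def detect_adversarial(model, x):
    """Return (nominal_label, is_adversarial) for an input batch x."""
    model.eval()
    with torch.no_grad():
        nominal_label = model(x).argmax(dim=1)

    # Attach the fault-injection hook to every conv layer.
    handles = [m.register_forward_hook(fault_hook)
               for m in model.modules() if isinstance(m, nn.Conv2d)]
    try:
        with torch.no_grad():
            votes = torch.stack([model(x).argmax(dim=1)
                                 for _ in range(N_RUNS)])
    finally:
        for h in handles:
            h.remove()

    # If the noisy runs disagree with the nominal prediction often
    # enough, treat the input as adversarial.
    disagreement = (votes != nominal_label).float().mean(dim=0)
    return nominal_label, disagreement > 0.5
```

The majority-vote structure reflects the intuition in the abstract: a benign input tends to keep its label under a limited number of compute errors, whereas an adversarial input sits near a decision boundary and its label is easily disrupted.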


