Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation

11/18/2021
by Mehdi Sadi et al.

Universal Adversarial Perturbations are image-agnostic, model-independent noise patterns that, when added to any image, can mislead trained Deep Convolutional Neural Networks into wrong predictions. Because these Universal Adversarial Perturbations can seriously jeopardize the security and integrity of practical Deep Learning applications, existing techniques use additional neural networks to detect the presence of such noise at the input image source. In this paper, we demonstrate an attack strategy that, when activated by rogue means (e.g., malware, trojan), can bypass these existing countermeasures by injecting the adversarial noise at the AI hardware accelerator stage. We demonstrate the accelerator-level universal adversarial noise attack on several Deep Learning models using co-simulation of the software kernel of the Conv2D function and the Verilog RTL model of the hardware under the FuseSoC environment.
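The key idea of the attack can be sketched in software: an input-source detector inspects the image before it reaches the accelerator, but if a trojaned Conv2D unit adds the perturbation to the input tile internally, the detector never sees the perturbed data. The following is a minimal NumPy sketch, not the paper's actual RTL or kernel code; the function names (`conv2d`, `conv2d_trojaned`) and the trigger flag are hypothetical illustrations.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D correlation, standing in for the
    software reference kernel of the accelerator's Conv2D."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def conv2d_trojaned(image, kernel, uap, triggered=True):
    """Hypothetical accelerator-level attack: the universal
    adversarial perturbation (uap) is added to the input tile
    *inside* the Conv2D stage, after any input-source noise
    detector has already inspected and passed the clean image."""
    if triggered:  # rogue activation, e.g., by malware or a trojan
        image = image + uap
    return conv2d(image, kernel)
```

With the trigger off, the trojaned unit behaves identically to the clean kernel, which is what makes accelerator-level injection hard to catch with input-side detection alone; with the trigger on, its output equals a clean Conv2D applied to the perturbed image.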

