Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks

08/03/2020
by Haoqiang Guo, et al.

Recent studies have shown that deep neural networks (DNNs) are vulnerable to subtle perturbations that are imperceptible to the human visual system yet can fool DNN models into producing wrong outputs. A class of adversarial attack algorithms has been proposed to generate robust physical perturbations under different circumstances. These algorithms are among the first efforts toward secure deep learning, since they provide an avenue for training future defense networks; however, their intrinsic complexity prevents broader usage. In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays. Our design significantly improves the throughput of a visual adversarial perturbation system, which can further improve the robustness and security of future deep learning systems. Exploiting the uniqueness of the algorithm, we propose four implementations of the adversarial attack accelerator (A^3) that improve throughput, energy efficiency, and computational efficiency.
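To make the notion of a "subtle perturbation" concrete, the sketch below shows a single-step gradient-based attack (FGSM). This is only an illustrative stand-in, not the robust physical perturbation algorithm the paper accelerates, which is iterative and optimizes over physical transformations; the model, inputs, and the epsilon bound here are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, epsilon=0.03):
    """Illustrative single-step adversarial perturbation (FGSM).

    The accelerated algorithm in the paper is iterative and more involved;
    this sketch only shows the core gradient computation that such attacks
    repeatedly execute.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon
    # so the change stays visually subtle.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```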
