Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness

05/28/2023
by Zhanhao Hu, et al.

Recent works have found that deep neural networks (DNNs) can be fooled by adversarial examples, which are crafted by adding adversarial noise to clean inputs. The accuracy of DNNs on adversarial examples decreases as the magnitude of the adversarial noise increases. In this study, we show that DNNs can also be fooled when the noise is very small under certain circumstances. We call this new type of attack the Amplification Trojan Attack (ATAttack). Specifically, we use a trojan network to transform the inputs before sending them to the target DNN. This trojan network serves as an amplifier of the inherent weakness of the target DNN. The target DNN, infected by the trojan network, performs normally on clean data while being more vulnerable to adversarial examples. Since it only transforms the inputs, the trojan network can hide in DNN-based pipelines, e.g., by infecting the pre-processing procedure applied to the inputs before they are sent to the DNN. This new type of threat should be considered when developing safe DNNs.
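To make the described pipeline concrete, the following PyTorch sketch shows how an input-transforming trojan network could be spliced into the pre-processing stage in front of a frozen target classifier. It is only an illustration of the idea under stated assumptions, not the authors' implementation: the architecture, layer sizes, and the names TrojanAmplifier and InfectedPipeline are hypothetical, and the training objective that keeps clean outputs unchanged while amplifying small adversarial perturbations is omitted.

# Minimal sketch (assumptions noted above), not the paper's actual code.
import torch
import torch.nn as nn

class TrojanAmplifier(nn.Module):
    """Hypothetical input-transforming network that hides in pre-processing.

    Intended behavior (training loss not shown): T(x) stays close to x on
    clean inputs, preserving clean accuracy, while small adversarial
    perturbations are amplified before reaching the target DNN.
    """
    def __init__(self, channels: int = 3):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual form: the output defaults to something near the clean input.
        return x + self.transform(x)

class InfectedPipeline(nn.Module):
    """Target DNN with the trojan inserted into its input pre-processing."""
    def __init__(self, trojan: nn.Module, target: nn.Module):
        super().__init__()
        self.trojan = trojan
        self.target = target

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.target(self.trojan(x))

if __name__ == "__main__":
    # Stand-in target classifier; in practice this is the victim model.
    target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    infected = InfectedPipeline(TrojanAmplifier(3), target)
    logits = infected(torch.randn(1, 3, 32, 32))
    print(logits.shape)  # torch.Size([1, 10])

Because the trojan only rewrites inputs and leaves the target model's weights untouched, an inspection of the classifier alone would not reveal the infection; the transformation has to be caught in the pre-processing code path itself.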

