Adversarial Parameter Attack on Deep Neural Networks

03/20/2022
by   Lijia Yu, et al.

This paper proposes a new parameter perturbation attack on DNNs, called the adversarial parameter attack, in which small perturbations are made to the parameters of a DNN so that its accuracy barely decreases while its robustness drops sharply. The adversarial parameter attack is stronger than previous parameter perturbation attacks in two respects: the attack is harder for users to detect, and the attacked DNN gives a wrong label for any adversarially modified sample input with high probability. The existence of adversarial parameters is proved: for a DNN F_Θ with parameter set Θ satisfying certain conditions, if the depth of the DNN is sufficiently large, then there exists an adversarial parameter set Θ_a for Θ such that the accuracy of F_{Θ_a} equals that of F_Θ, while the robustness measure of F_{Θ_a} is smaller than any given bound. An effective training algorithm for computing adversarial parameters is given, and numerical experiments demonstrate that the algorithm produces high-quality adversarial parameters.
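To make the attack objective concrete, here is a toy numpy sketch of the idea on a linear classifier. It is not the paper's training algorithm (which is gradient-based and operates on deep networks); it uses simple random search, a relative parameter budget `eps`, and the minimum classification margin as a stand-in for the robustness measure. All names and the data setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data labeled by f(x) = sign(w . x).
X = rng.normal(size=(50, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)

def accuracy(w):
    return np.mean(np.sign(X @ w) == y)

def robustness(w):
    # Robustness proxy: smallest L2 input perturbation that flips any
    # prediction, i.e. the minimum margin |w . x| / ||w|| over the data.
    return np.min(np.abs(X @ w)) / np.linalg.norm(w)

def adversarial_parameters(w, eps=0.3, steps=2000):
    """Random search for w_a with |w_a - w| <= eps * |w| coordinate-wise
    that preserves training accuracy but shrinks the margin."""
    best = w.copy()
    for _ in range(steps):
        cand = best + rng.uniform(-1, 1, size=w.shape) * 0.1 * eps * np.abs(w)
        # Keep the candidate inside the relative perturbation budget.
        cand = np.clip(cand, w - eps * np.abs(w), w + eps * np.abs(w))
        # Accept only if accuracy is preserved and robustness decreases.
        if accuracy(cand) >= accuracy(w) and robustness(cand) < robustness(best):
            best = cand
    return best

w_a = adversarial_parameters(w_true)
print("accuracy preserved:", accuracy(w_a) >= accuracy(w_true))
print("robustness reduced:", robustness(w_a) <= robustness(w_true))
```

By construction the search only accepts parameter sets that keep every training prediction correct, so the attack is invisible to an accuracy check, yet the margin (and hence robustness to input perturbations) shrinks, mirroring the trade-off the abstract describes.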


Related research:

- Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness (05/28/2023)
- A Mask-Based Adversarial Defense Scheme (04/21/2022)
- Bio-Inspired Adversarial Attack Against Deep Neural Networks (06/30/2021)
- Customized Watermarking for Deep Neural Networks via Label Distribution Perturbation (08/10/2022)
- Dominant Patterns: Critical Features Hidden in Deep Neural Networks (05/31/2021)
- Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing (05/14/2018)
- GradMDM: Adversarial Attack on Dynamic Networks (04/01/2023)
