Adversarial Parameter Defense by Multi-Step Risk Minimization

09/07/2021
by   Zhiyuan Zhang, et al.

Previous studies demonstrate that DNNs are vulnerable to adversarial examples and that adversarial training can establish a defense against them. In addition, recent studies show that deep neural networks are also vulnerable to parameter corruptions. The robustness of model parameters is of crucial value to the study of model robustness and generalization. In this work, we introduce the concept of parameter corruption and propose to use the loss change indicator to measure the flatness of the loss basin and the robustness of neural network parameters. On this basis, we analyze parameter corruptions and propose the multi-step adversarial corruption algorithm. To enhance neural networks, we propose the adversarial parameter defense algorithm, which minimizes the average risk over multiple adversarial parameter corruptions. Experimental results show that the proposed algorithm improves both the parameter robustness and the accuracy of neural networks.
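The abstract's two components can be sketched on a toy convex model: a multi-step gradient-ascent corruption of the parameters, and a defense step that descends the gradient averaged over several corrupted parameter copies. This is only a minimal illustration of the idea; the corruption radius `eps`, the step count, the L2-ball projection, and the random start are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: loss(w) = mean((Xw - y)^2), a convex stand-in
# for a neural network's training loss.
X = rng.normal(size=(64, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=64)

def loss(w):
    r = X @ w - y
    return float(np.mean(r ** 2))

def grad(w):
    # Gradient of the mean-squared-error loss w.r.t. the parameters w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def multi_step_corruption(w, eps=0.5, steps=5):
    """Multi-step adversarial parameter corruption (sketch): repeated
    gradient ascent on the loss w.r.t. the parameters, projected back
    onto an L2 ball of radius eps around w. eps/steps are hypothetical."""
    delta = 0.01 * eps * rng.normal(size=w.shape)  # small random start
    alpha = eps / steps
    for _ in range(steps):
        g = grad(w + delta)
        delta += alpha * g / (np.linalg.norm(g) + 1e-12)
        norm = np.linalg.norm(delta)
        if norm > eps:  # project back onto the eps-ball
            delta *= eps / norm
    return w + delta

def defense_step(w, lr=0.05, n_corruptions=3, eps=0.5):
    """One update of adversarial parameter defense (sketch): descend the
    gradient averaged over several adversarially corrupted copies,
    i.e. minimize the average risk of the corruptions."""
    g = np.mean([grad(multi_step_corruption(w, eps))
                 for _ in range(n_corruptions)], axis=0)
    return w - lr * g

w = np.zeros(5)
for _ in range(200):
    w = defense_step(w)
```

Training against corrupted copies of the parameters rather than the parameters themselves is what pushes the optimizer toward flatter loss basins: a sharp minimum incurs a large averaged corrupted loss, while a flat one does not.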

Related research:

- 05/28/2019 — Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss
- 06/10/2020 — Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption
- 05/23/2022 — Collaborative Adversarial Training
- 11/09/2021 — Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search
- 12/28/2022 — Publishing Efficient On-device Models Increases Adversarial Vulnerability
- 07/21/2022 — Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness
- 10/27/2019 — Adversarial Defense Via Local Flatness Regularization
