Smoothness Analysis of Loss Functions of Adversarial Training

03/02/2021
by   Sekitoshi Kanai, et al.

Deep neural networks are vulnerable to adversarial attacks. Recent studies of adversarial robustness focus on the loss landscape in parameter space, since it is related to optimization performance. These studies conclude that the loss function of adversarial training is hard to optimize with respect to the parameters because it is not smooth: i.e., its gradient is not Lipschitz continuous. However, this analysis ignores the dependence of the adversarial attack on the parameters. Since an adversarial attack is the worst-case noise for a model, it should depend on the model's parameters. In this study, we analyze the smoothness of the adversarial training loss for binary linear classification while taking this dependence into account. We reveal that in this setting the Lipschitz continuity depends on the type of constraint placed on the adversarial attack. Specifically, under an L2 constraint, the adversarial loss is smooth everywhere except at zero.
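For a binary linear classifier, the worst-case perturbation in a norm ball has a well-known closed form: an L2-bounded attack of radius eps reduces the margin y(w·x) by eps·‖w‖₂, while an L∞-bounded attack reduces it by eps·‖w‖₁ (the dual norm). The sketch below, a minimal illustration rather than the paper's exact formulation, uses this closed form with the logistic loss; note that the ‖w‖₂ term is differentiable except at w = 0, matching the smoothness claim above, whereas ‖w‖₁ is non-smooth wherever a coordinate of w is zero.

```python
import numpy as np

def adversarial_logistic_loss(w, x, y, eps, norm="l2"):
    """Closed-form adversarial logistic loss for a binary linear
    classifier f(x) = w . x with label y in {-1, +1}, under a
    norm-ball perturbation of radius eps on the input.

    For linear models the worst-case perturbation shrinks the margin
    y * (w . x) by eps times the dual norm of the attack constraint:
    eps * ||w||_2 for an L2 attack, eps * ||w||_1 for an Linf attack.
    """
    margin = y * np.dot(w, x)
    if norm == "l2":
        penalty = eps * np.linalg.norm(w, 2)  # smooth in w except at w = 0
    elif norm == "linf":
        penalty = eps * np.linalg.norm(w, 1)  # non-smooth wherever w_i = 0
    else:
        raise ValueError(f"unknown norm: {norm}")
    # logistic loss evaluated on the worst-case margin
    return np.log1p(np.exp(-(margin - penalty)))
```

With eps = 0 this reduces to the ordinary logistic loss, and any eps > 0 can only increase it, since the penalty term is nonnegative.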


Related research

10/01/2018 — Improved robustness to adversarial examples using Lipschitz regularization of the loss
Adversarial training is an effective method for improving robustness to ...

09/29/2018 — Interpreting Adversarial Robustness: A View from Decision Surface in Input Space
One popular hypothesis of neural network generalization is that the flat...

10/17/2019 — Enforcing Linearity in DNN succours Robustness and Adversarial Image Generation
Recent studies on the adversarial vulnerability of neural networks have ...

01/07/2021 — The Effect of Prior Lipschitz Continuity on the Adversarial Robustness of Bayesian Neural Networks
It is desirable, and often a necessity, for machine learning models to b...

05/23/2023 — Expressive Losses for Verified Robustness via Convex Combinations
In order to train networks for verified adversarial robustness, previous...

02/22/2018 — L2-Nonexpansive Neural Networks
This paper proposes a class of well-conditioned neural networks in which...

10/01/2022 — On the tightness of linear relaxation based robustness certification methods
There has been a rapid development and interest in adversarial training ...
