Lower Difficulty and Better Robustness: A Bregman Divergence Perspective for Adversarial Training

08/26/2022
by Zihui Wu, et al.

In this paper, we investigate improving the adversarial robustness obtained from adversarial training (AT) by reducing the difficulty of optimization. To study this problem, we build a novel Bregman divergence perspective on AT, in which AT can be viewed as the sliding of the training data points along the negative entropy curve. Based on this perspective, we analyze the learning objectives of two typical AT methods, i.e., PGD-AT and TRADES, and we find that the optimization process of TRADES is easier than that of PGD-AT because TRADES separates the PGD-AT objective into a natural-accuracy term and a robustness term. In addition, we discuss the role of entropy in TRADES and find that models with high entropy can be better robustness learners. Inspired by these findings, we propose two methods, FAIT and MER, which not only reduce the difficulty of optimization under 10-step PGD adversaries but also provide better robustness. Our work suggests that reducing the difficulty of optimization under 10-step PGD adversaries is a promising approach for enhancing adversarial robustness in AT.
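For context, the sketch below (a minimal example assuming PyTorch, not the paper's implementation) illustrates what "separates" means here: PGD-AT minimizes only the cross-entropy on adversarial examples, whereas TRADES splits the objective into a clean cross-entropy term plus a KL robustness term; the KL divergence is, as a standard fact, the Bregman divergence generated by the negative entropy, which is the connection the perspective above builds on. The pgd_attack helper and the beta weight are placeholders; a real adversary would run a 10-step PGD loop.

# Minimal sketch (not the paper's code) contrasting the PGD-AT and TRADES objectives.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255):
    # Placeholder adversary: a real 10-step PGD loop would go here.
    return (x + eps * torch.randn_like(x).sign()).clamp(0, 1)

def pgd_at_loss(model, x, y):
    # PGD-AT: cross-entropy on adversarial examples only.
    x_adv = pgd_attack(model, x, y)
    return F.cross_entropy(model(x_adv), y)

def trades_loss(model, x, y, beta=6.0):
    # TRADES: a natural cross-entropy term plus a KL robustness term
    # between the clean and adversarial predictive distributions.
    x_adv = pgd_attack(model, x, y)
    natural = F.cross_entropy(model(x), y)
    robust = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      F.softmax(model(x), dim=1),
                      reduction="batchmean")
    return natural + beta * robust

Because trades_loss exposes the clean and robust terms separately, their contributions to optimization difficulty can be inspected independently, which is the sense in which the abstract says TRADES separates PGD-AT.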
