Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks

11/26/2018
by Jianyu Wang, et al.

In this paper, we study fast training of adversarially robust models. From analyses of the state-of-the-art defense method, multi-step adversarial training (Madry et al., 2017), we hypothesize that the magnitude of the input gradient is linked to model robustness. Motivated by this, we propose to perturb both the image and the label during training, which we call Bilateral Adversarial Training (BAT). To generate the adversarial label, we derive a closed-form heuristic solution. To generate the adversarial image, we use a one-step targeted attack with the target label being the most confusing class. In the experiments, we first show that a random start and the most-confusing-class targeted attack effectively prevent the label-leaking and gradient-masking problems. Then, coupled with the adversarial label, our model significantly improves on state-of-the-art results. For example, against the PGD100 attack with cross-entropy loss, we achieve 63.7% versus 47.2% on CIFAR10; 59.1% versus 42.1% on SVHN; and 25.3% versus 23.4% on CIFAR100. Note that these results are obtained with fast one-step adversarial training.
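The image-perturbation step described above (random start inside the perturbation ball, then a single targeted gradient step toward the most confusing class) can be sketched as follows. This is a minimal illustrative sketch on a toy linear softmax classifier, not the paper's implementation; the model `W`, the budget `eps`, and the step size `alpha` are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear classifier: logits = W @ x (stand-in for a deep network)
W = rng.normal(size=(10, 32))
x = rng.normal(size=32)
y = 3                      # ground-truth class
eps, alpha = 0.03, 0.03    # illustrative L_inf budget and step size

# Random start inside the eps-ball (the paper notes this helps
# prevent label leaking and gradient masking)
x_adv = x + rng.uniform(-eps, eps, size=x.shape)

# Most confusing class: the highest-probability non-ground-truth class
p = softmax(W @ x_adv)
target = int(np.argmax(np.where(np.arange(10) == y, -np.inf, p)))

# One-step targeted attack: move the input to *decrease* the
# cross-entropy loss toward `target`. For softmax cross-entropy,
# d(loss)/d(logits) = p - onehot(target), so for a linear model the
# input gradient is W.T @ (p - onehot(target)).
onehot = np.eye(10)[target]
grad_x = W.T @ (softmax(W @ x_adv) - onehot)
x_adv = x_adv - alpha * np.sign(grad_x)    # descend toward the target class
x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the eps-ball
```

The sign step and final projection keep the perturbation within the L-infinity budget; `x_adv` would then be paired with the adversarially perturbed (softened) label during training.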


