
THAT: Two Head Adversarial Training for Improving Robustness at Scale

by   Zuxuan Wu, et al.

Many variants of adversarial training have been proposed, with most research focusing on problems with relatively few classes. In this paper, we propose Two Head Adversarial Training (THAT), a two-stream adversarial learning network designed to handle the large-scale, many-class ImageNet dataset. The proposed method trains a network with two heads and two loss functions: one to minimize feature-space domain shift between natural and adversarial images, and one to promote high classification accuracy. This combination delivers a hardened network that achieves state-of-the-art robust accuracy while maintaining high natural accuracy on ImageNet. Through extensive experiments, we demonstrate that the proposed framework outperforms alternative methods under both standard and "free" adversarial training settings.
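The abstract describes a combined objective with two terms: a classification loss on one head and a feature-alignment loss that penalizes the domain shift between natural and adversarial features. The following is a minimal numpy sketch of such a two-term objective, not the paper's actual implementation; the function name `that_loss`, the squared-L2 alignment term, and the weighting factor `alpha` are illustrative assumptions.

```python
import numpy as np

def that_loss(feat_nat, feat_adv, logits_adv, label, alpha=1.0):
    """Toy two-head objective: cross-entropy plus feature alignment.

    feat_nat, feat_adv -- feature vectors for the natural and
        adversarial versions of the same image (hypothetical shapes).
    logits_adv -- classification logits for the adversarial image.
    label -- integer ground-truth class index.
    alpha -- illustrative weight balancing the two losses.
    """
    # Classification loss: numerically stable cross-entropy on the
    # adversarial logits, promoting high accuracy under attack.
    shifted = logits_adv - logits_adv.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    ce = -log_probs[label]
    # Alignment loss: squared L2 distance between natural and
    # adversarial features, penalizing feature-space domain shift.
    align = np.sum((feat_nat - feat_adv) ** 2)
    return ce + alpha * align
```

When the adversarial features match the natural ones, the alignment term vanishes and only the classification loss remains; any feature-space drift increases the total loss, which is the intuition behind the second head.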

