Two Heads are Better than One: Robust Learning Meets Multi-branch Models

08/17/2022
by Dong Huang, et al.

Deep neural networks (DNNs) are vulnerable to adversarial examples: inputs containing imperceptible perturbations that mislead DNNs into false outputs. Adversarial training, a reliable and effective method of defense, can significantly reduce the vulnerability of neural networks and has become the de facto standard for robust learning. While many recent works practice a data-centric philosophy, such as generating better adversarial examples or using generative models to produce additional training data, we look back to the models themselves and revisit adversarial robustness from the perspective of deep feature distribution as an insightful complement. In this paper, we propose Branch Orthogonality adveRsarial Training (BORT), which obtains state-of-the-art performance using solely the original dataset for adversarial training. To realize our design idea of integrating multiple orthogonal solution spaces, we leverage a simple and straightforward multi-branch neural network that withstands adversarial attacks with no increase in inference time. We heuristically propose a corresponding loss function, the branch-orthogonal loss, to make the solution space of each branch of the multi-branch model orthogonal to the others. We evaluate our approach on CIFAR-10, CIFAR-100, and SVHN against ℓ_∞ norm-bounded perturbations of size ϵ = 8/255. Exhaustive experiments show that our method surpasses all state-of-the-art methods without any tricks. Compared to all methods that do not use additional data for training, our models achieve 67.3% and 41.5% robust accuracy on CIFAR-10 and CIFAR-100 respectively, improving upon the state-of-the-art by +7.23% on CIFAR-10. We also outperform methods that use a training set with a far larger scale than ours. All our models and code are available online at https://github.com/huangd1999/BORT.
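
For concreteness, here is a minimal PyTorch sketch of the ideas the abstract describes: a shared trunk with several parallel branches whose logits are averaged at inference, trained on ℓ_∞ PGD adversarial examples (ϵ = 8/255) with an added penalty that pushes the branch classifiers toward mutually orthogonal subspaces. Everything here (the tiny stand-in backbone, the pairwise weight-overlap penalty, and the hyperparameters lam, alpha, and steps) is an illustrative assumption rather than the authors' implementation; see the linked repository for the official code.

```python
# Illustrative sketch only -- NOT the authors' implementation (see the BORT repo).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiBranchNet(nn.Module):
    """A shared trunk followed by K parallel branches whose logits are averaged."""
    def __init__(self, num_branches=3, feat_dim=256, num_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(  # stand-in for a ResNet-style backbone
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, feat_dim), nn.ReLU(),
        )
        self.branches = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_branches)
        )

    def forward(self, x):
        h = self.trunk(x)
        logits = [b(h) for b in self.branches]      # one prediction per branch
        return torch.stack(logits).mean(0), logits  # ensemble + per-branch outputs

def branch_orthogonal_loss(branches):
    """One plausible reading of 'making each solution space orthogonal':
    drive W_i W_j^T toward zero for every pair of distinct branches i != j."""
    loss = 0.0
    ws = [b.weight for b in branches]  # each: (num_classes, feat_dim)
    for i in range(len(ws)):
        for j in range(i + 1, len(ws)):
            loss = loss + (ws[i] @ ws[j].t()).pow(2).mean()
    return loss

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard l_inf PGD used to craft the adversarial training examples."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        out, _ = model(torch.clamp(x + delta, 0, 1))
        grad = torch.autograd.grad(F.cross_entropy(out, y), delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return torch.clamp(x + delta, 0, 1).detach()

def train_step(model, opt, x, y, lam=0.1):
    """One adversarial training step; the penalty weight lam is assumed."""
    x_adv = pgd_attack(model, x, y)
    out, _ = model(x_adv)
    loss = F.cross_entropy(out, y) + lam * branch_orthogonal_loss(model.branches)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In this reading, driving W_i W_j^T toward zero encourages the branch classifiers to rely on mutually orthogonal directions in feature space, so a perturbation aligned with one branch's decision boundary is less likely to fool the others; the actual BORT loss may be formulated differently.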

