Adversarially Optimized Mixup for Robust Classification

03/22/2021

by Jason Bunk, et al.

Mixup is a data augmentation procedure that trains networks to make smoothly interpolated predictions between datapoints. Adversarial training is a strong form of data augmentation that optimizes for worst-case predictions in a compact space around each datapoint, yielding neural networks whose predictions are much more robust. In this paper, we bring these ideas together by adversarially probing the space between datapoints using projected gradient descent (PGD). The core approach is to backpropagate through the mixup interpolation during training in order to find points where the network makes unsmooth and incongruous predictions. We also explore several modifications and nuances, such as optimizing the mixup ratio and geometric label assignment, and discuss their impact on network robustness. With these ideas, we train networks that generalize more robustly: experiments on CIFAR-10 and CIFAR-100 demonstrate consistent improvements in accuracy against strong adversaries, including the recent strong ensemble attack AutoAttack. Our source code will be released for reproducibility.
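To make the idea concrete, here is a minimal sketch of adversarially optimizing the mixup ratio. This is an illustration only, not the paper's implementation: it assumes a tiny binary logistic model in place of a deep network, uses a finite-difference gradient rather than backpropagation, and the function names (`mixup_loss`, `adversarial_lambda`) are hypothetical. The structure mirrors the description above: start between two datapoints, ascend the loss with respect to the interpolation ratio, and project each step back onto [0, 1].

```python
import math

def mixup_loss(lam, x1, x2, y1, y2, w):
    """Standard mixup loss: interpolate the inputs by lam, then mix the
    per-label losses by the same ratio. Model is binary logistic with weights w
    (a stand-in for the deep network in the paper)."""
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    z = sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-z))  # predicted P(y = 1)
    nll = lambda y: -math.log(p if y == 1 else 1.0 - p)
    return lam * nll(y1) + (1.0 - lam) * nll(y2)

def adversarial_lambda(x1, x2, y1, y2, w, steps=15, lr=0.1, eps=1e-5):
    """PGD-style ascent on the mixup ratio: search for the lambda in [0, 1]
    where the interpolated prediction is least consistent (highest loss)."""
    lam = 0.5  # start midway between the two datapoints
    best_loss, best_lam = mixup_loss(lam, x1, x2, y1, y2, w), lam
    for _ in range(steps):
        # Finite-difference gradient of the loss w.r.t. lambda
        # (the paper instead backpropagates through the interpolation).
        g = (mixup_loss(lam + eps, x1, x2, y1, y2, w)
             - mixup_loss(lam - eps, x1, x2, y1, y2, w)) / (2.0 * eps)
        lam = min(max(lam + lr * g, 0.0), 1.0)  # ascend, then project to [0, 1]
        loss = mixup_loss(lam, x1, x2, y1, y2, w)
        if loss > best_loss:
            best_loss, best_lam = loss, lam
    return best_lam
```

Training would then use the mixed sample at `best_lam` (with the correspondingly mixed label) as a hard interpolation point, analogous to how standard adversarial training uses the worst-case perturbation around a single datapoint.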


