VarMixup: Exploiting the Latent Space for Robust Training and Inference

03/14/2020
by   Puneet Mangla, et al.

The vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has led to the development of many defense approaches. Among them, Adversarial Training (AT) is a popular and widely used method for training adversarially robust models. Mixup Training (MT), a recent training algorithm, improves the generalization of models by encouraging globally linear behavior between training examples. Although still in its early stages, there is a shift in how Mixup is studied: from the perspective of generalization to that of adversarial robustness. It has been shown that Mixup-trained models improve robustness, but only passively. A recent approach, Mixup Inference (MI), proposes an inference-time principle for Mixup-trained models that counters adversarial examples by mixing the input with other random clean samples. In this work, we propose a new approach, VarMixup (Variational Mixup), which samples better mixup images by exploiting the latent manifold underlying the data. Our experiments on CIFAR-10, CIFAR-100, SVHN and Tiny-ImageNet demonstrate that VarMixup beats state-of-the-art AT techniques without training the model adversarially. Additionally, our ablations show that models trained on VarMixup samples are also robust to various input corruptions/perturbations, have low calibration error, and transfer well.
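To make the contrast concrete, the sketch below shows classical input-space Mixup next to a VarMixup-style variant that mixes in the latent space of a generative model. This is a minimal illustration, not the paper's implementation: the `encode` and `decode` callables stand in for a trained VAE's mean encoder and decoder, which the paper assumes is fit to the training data, and examples are plain Python lists rather than image tensors.

```python
import random

def mixup(x1, x2, y1, y2, alpha=1.0):
    """Classical input-space Mixup: a convex combination of two examples
    and their (one-hot) labels, with lam drawn from Beta(alpha, alpha)."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

def var_mixup(x1, x2, y1, y2, encode, decode, alpha=1.0):
    """VarMixup-style sketch (hypothetical interface): encode both inputs
    into a latent space, mix the latent codes, and decode the mixture.
    `encode`/`decode` are stand-ins for a trained VAE's encoder/decoder."""
    lam = random.betavariate(alpha, alpha)
    z1, z2 = encode(x1), encode(x2)
    z = [lam * a + (1 - lam) * b for a, b in zip(z1, z2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return decode(z), y, lam
```

With identity `encode`/`decode`, `var_mixup` reduces to ordinary Mixup; the benefit claimed by the paper comes from mixing along the learned data manifold instead of along straight lines in pixel space.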

