Semantic Perturbations with Normalizing Flows for Improved Generalization

08/18/2021
by   Oğuz Kaan Yüksel, et al.

Data augmentation is a widely adopted technique for avoiding overfitting when training deep neural networks. However, this approach requires domain-specific knowledge and is often limited to a fixed set of hard-coded transformations. Recently, several works have proposed using generative models to produce semantically meaningful perturbations for training a classifier. However, because accurate encoding and decoding are critical, these methods, built on architectures that only approximate latent-variable inference, have remained limited to pilot studies on small datasets. Exploiting the exactly reversible encoder-decoder structure of normalizing flows, we perform on-manifold perturbations in the latent space to define fully unsupervised data augmentations. We demonstrate that such perturbations match the performance of advanced data augmentation techniques – reaching 96.6% test accuracy on CIFAR-10 with ResNet-18 – and outperform existing methods, particularly in low-data regimes, yielding a 10–25% relative improvement in test accuracy over classical training. We find that latent adversarial perturbations, adapted to the classifier throughout its training, are the most effective, yielding the first test-accuracy improvements via latent-space perturbations on real-world datasets (CIFAR-10/100).
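To make the idea concrete, below is a minimal PyTorch-style sketch of latent-space adversarial augmentation through an invertible map: encode an input with the flow, take a few gradient-ascent steps on the classifier loss in latent space, and decode the perturbed latent back to input space so the augmented sample stays near the data manifold. The ToyInvertibleFlow class, the function latent_adversarial_example, and the hyperparameters (epsilon, steps, lr) are illustrative placeholders, not the authors' implementation; in practice the flow would be a pre-trained normalizing flow such as RealNVP or Glow.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyInvertibleFlow(nn.Module):
    """Stand-in for a normalizing flow: a single invertible (orthogonal) linear map."""
    def __init__(self, dim):
        super().__init__()
        # An orthogonal matrix is trivially invertible; a real flow (RealNVP/Glow)
        # would replace this with stacked coupling layers trained on the data.
        q, _ = torch.linalg.qr(torch.randn(dim, dim))
        self.weight = nn.Parameter(q)

    def encode(self, x):   # data -> latent
        return x @ self.weight

    def decode(self, z):   # latent -> data (exact inverse of encode)
        return z @ torch.linalg.inv(self.weight)


def latent_adversarial_example(flow, classifier, x, y, epsilon=0.1, steps=5, lr=0.05):
    """Perturb x in the flow's latent space so as to maximize the classifier loss,
    then decode the perturbed latent back to input space (an on-manifold example)."""
    z = flow.encode(x).detach()
    delta = torch.zeros_like(z, requires_grad=True)
    for _ in range(steps):
        x_adv = flow.decode(z + delta)
        loss = F.cross_entropy(classifier(x_adv), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad.sign()        # gradient ascent on the loss
            delta.clamp_(-epsilon, epsilon)  # keep the latent perturbation small
    return flow.decode(z + delta).detach()


if __name__ == "__main__":
    # Toy usage: augment a batch with its latent adversarial counterpart.
    dim, n_classes = 32, 10
    flow = ToyInvertibleFlow(dim)
    classifier = nn.Linear(dim, n_classes)
    x, y = torch.randn(8, dim), torch.randint(0, n_classes, (8,))
    x_aug = latent_adversarial_example(flow, classifier, x, y)
    loss = F.cross_entropy(classifier(torch.cat([x, x_aug])), torch.cat([y, y]))
    print(float(loss))

Because the perturbation is applied in latent space and mapped back through the exact inverse of the flow, the augmented samples remain semantically plausible rather than becoming pixel-level noise, which is the property the abstract refers to as on-manifold perturbation.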

Related research

Variational Inference with Latent Space Quantization for Adversarial Resilience (03/24/2019)
Despite their tremendous success in modelling high-dimensional data mani...

Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness (10/15/2020)
Adversarial data augmentation has shown promise for training robust deep...

Data Augmentation Can Improve Robustness (11/09/2021)
Adversarial training suffers from robust overfitting, a phenomenon where...

Predicting Out-of-Domain Generalization with Local Manifold Smoothness (07/05/2022)
Understanding how machine learning models generalize to new environments...

Fixing Data Augmentation to Improve Adversarial Robustness (03/02/2021)
Adversarial training suffers from robust overfitting, a phenomenon where...

Data Augmentation via Structured Adversarial Perturbations (11/05/2020)
Data augmentation is a major component of many machine learning methods ...

Wasserstein Diffusion Tikhonov Regularization (09/15/2019)
We propose regularization strategies for learning discriminative models ...
