Data Augmentation via Structured Adversarial Perturbations

11/05/2020
by   Calvin Luo, et al.

Data augmentation is a major component of many machine learning methods with state-of-the-art performance. Common augmentation strategies work by drawing random samples from a space of transformations. Unfortunately, such sampling approaches are limited in expressivity: due to the curse of dimensionality, they cannot scale to rich transformations that depend on numerous parameters. Adversarial examples can be considered an alternative scheme for data augmentation: a model trained on the most difficult modifications of its inputs will hopefully handle other, presumably easier, modifications as well. The advantage of adversarial augmentation is that it replaces sampling with a single, calculated perturbation that maximally increases the loss. The downside, however, is that raw adversarial perturbations appear rather unstructured; applying them often fails to produce the natural transformations expected of a desirable data augmentation technique. To address this, we propose a method for generating adversarial examples that maintain a desired natural structure. We first construct a subspace that contains only perturbations with that structure. We then project the raw adversarial gradient onto this subspace to select the structured transformation that maximally increases the loss when applied. We demonstrate this approach through two types of image transformations: photometric and geometric. Furthermore, we show that training on such structured adversarial images improves generalization.
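The core projection step described in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function name, the least-squares projection, and the toy brightness basis are assumptions chosen for illustration. The idea: take a raw adversarial gradient, project it onto a subspace spanned by structured perturbations (here, a single photometric basis vector for a uniform brightness shift), and rescale it to a fixed budget.

```python
import numpy as np

def structured_adversarial_perturbation(grad, basis, epsilon=0.1):
    """Project a raw adversarial gradient onto a structured subspace.

    grad:    (d,) raw loss gradient w.r.t. the flattened input.
    basis:   (d, k) columns spanning the allowed structured perturbations
             (e.g. per-channel brightness shifts for a photometric subspace).
    epsilon: norm budget of the returned perturbation.

    Returns a perturbation of norm epsilon lying in span(basis); applying
    it to the input approximately maximizes the loss within the subspace.
    """
    # Least-squares projection of grad onto the column space of basis.
    coeffs, *_ = np.linalg.lstsq(basis, grad, rcond=None)
    proj = basis @ coeffs
    norm = np.linalg.norm(proj)
    if norm == 0.0:
        return np.zeros_like(grad)
    return epsilon * proj / norm

# Toy example: a 2x2 grayscale "image" whose structured subspace is a
# uniform brightness shift (all four pixels move together).
grad = np.array([1.0, 2.0, 3.0, 4.0])   # raw adversarial gradient
brightness = np.ones((4, 1))            # photometric basis: constant shift
delta = structured_adversarial_perturbation(grad, brightness, epsilon=0.1)
# delta is a uniform positive shift with total norm 0.1
```

In practice the gradient would come from backpropagating the training loss to the input, and the basis could contain several structured directions (e.g. per-channel color shifts, or a linearized geometric warp) rather than a single vector.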

Related research

10/08/2020 · Affine-Invariant Robust Training
The field of adversarial robustness has attracted significant attention ...

11/13/2022 · Adversarial and Random Transformations for Robust Domain Adaptation and Generalization
Data augmentation has been widely used to improve generalization in trai...

06/07/2018 · Training Augmentation with Adversarial Examples for Robust Speech Recognition
This paper explores the use of adversarial examples in training speech r...

07/02/2020 · Trace-Norm Adversarial Examples
White box adversarial perturbations are sought via iterative optimizatio...

06/03/2019 · Achieving Generalizable Robustness of Deep Neural Networks by Stability Training
We study the recently introduced stability training as a general-purpose...

01/20/2018 · Visual Data Augmentation through Learning
The rapid progress in machine learning methods has been empowered by i) ...

08/18/2021 · Semantic Perturbations with Normalizing Flows for Improved Generalization
Data augmentation is a widely adopted technique for avoiding overfitting...
