Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations

12/06/2019
by Sven Gowal, et al.

Recent research has made the surprising finding that state-of-the-art deep learning models sometimes fail to generalize to small variations of the input. Adversarial training has been shown to be an effective approach to overcome this problem. However, its application has been limited to enforcing invariance to analytically defined transformations like ℓ_p-norm bounded perturbations. Such perturbations do not necessarily cover plausible real-world variations that preserve the semantics of the input (such as a change in lighting conditions). In this paper, we propose a novel approach to express and formalize robustness to these kinds of real-world transformations of the input. The two key ideas underlying our formulation are (1) leveraging disentangled representations of the input to define different factors of variation, and (2) generating new input images by adversarially composing the representations of different images. We use a StyleGAN model to demonstrate the efficacy of this framework. Specifically, we leverage the disentangled latent representations computed by a StyleGAN model to generate perturbations of an image that are similar to real-world variations (like adding make-up or changing the skin tone of a person) and train models to be invariant to these perturbations. Extensive experiments show that our method improves generalization and reduces the effect of spurious correlations.
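To make the adversarial-mixing idea concrete, the sketch below shows one way to search a StyleGAN-style latent space for a worst-case style mix: per-layer style vectors of a labelled image are interpolated towards those of a random "donor" image, with the mixing coefficients chosen by projected gradient ascent on the classifier's loss. This is a minimal toy sketch, not the authors' released code; `ToyGenerator`, `ToyClassifier`, `adversarial_style_mix`, and all shapes and hyperparameters are hypothetical stand-ins, assuming a generator that consumes per-layer style vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for a pretrained StyleGAN-like generator G and the classifier
# f being trained. All names, shapes, and hyperparameters are hypothetical;
# a real setup would use an actual StyleGAN and an image classifier.
NUM_LAYERS, W_DIM, IMG_DIM, NUM_CLASSES = 8, 16, 32, 10

class ToyGenerator(nn.Module):
    """Maps per-layer style vectors w of shape (B, NUM_LAYERS, W_DIM) to 'images'."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(NUM_LAYERS * W_DIM, IMG_DIM)

    def forward(self, w):
        return torch.tanh(self.proj(w.flatten(1)))

class ToyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(IMG_DIM, NUM_CLASSES)

    def forward(self, x):
        return self.head(x)

def adversarial_style_mix(G, f, w_content, w_donor, labels, fine_layers,
                          steps=5, step_size=0.2):
    """PGD-style ascent over per-layer mixing coefficients alpha in [0, 1].

    Only the 'fine' layers are interpolated towards the donor's styles, so
    the semantic content of w_content (and hence its label) is preserved
    while nuisance factors of variation are perturbed adversarially.
    """
    mask = torch.zeros(NUM_LAYERS)
    mask[list(fine_layers)] = 1.0
    alpha = torch.zeros(NUM_LAYERS, requires_grad=True)
    for _ in range(steps):
        a = (alpha * mask).view(1, NUM_LAYERS, 1)
        w_mix = (1 - a) * w_content + a * w_donor
        loss = F.cross_entropy(f(G(w_mix)), labels)
        grad, = torch.autograd.grad(loss, alpha)
        with torch.no_grad():
            alpha = (alpha + step_size * grad.sign()).clamp(0.0, 1.0)
        alpha.requires_grad_(True)
    a = (alpha.detach() * mask).view(1, NUM_LAYERS, 1)
    return (1 - a) * w_content + a * w_donor

# Usage: find an adversarial style mix, then take a standard training step on it.
G, f = ToyGenerator(), ToyClassifier()
for p in G.parameters():                         # the generator is pretrained and frozen
    p.requires_grad_(False)
w_content = torch.randn(4, NUM_LAYERS, W_DIM)    # latents of labelled images
w_donor = torch.randn(4, NUM_LAYERS, W_DIM)      # latents of random style donors
labels = torch.randint(0, NUM_CLASSES, (4,))
w_adv = adversarial_style_mix(G, f, w_content, w_donor, labels,
                              fine_layers=range(4, NUM_LAYERS))
F.cross_entropy(f(G(w_adv)), labels).backward()  # gradients flow into f only
```

Restricting the mix to the later (fine) layers reflects the paper's premise that a StyleGAN's coarse layers carry the label-defining content while later layers control nuisance factors such as lighting or make-up; the exact layer split is an assumption of this sketch.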

Related research

05/10/2021 · Robust Training Using Natural Transformation
Previous robustness approaches for deep learning models such as data aug...

07/16/2020 · Learning perturbation sets for robust machine learning
Although much progress has been made towards robust deep learning, a sig...

04/21/2022 · Robustness of Machine Learning Models Beyond Adversarial Attacks
Correctly quantifying the robustness of machine learning models is a cen...

09/06/2021 · Robustness and Generalization via Generative Adversarial Training
While deep neural networks have achieved remarkable success in various c...

09/22/2021 · CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
In safety-critical machine learning applications, it is crucial to defen...

02/07/2016 · Disentangled Representations in Neural Models
Representation learning is the foundation for the recent success of neur...

10/27/2020 · On the Transfer of Disentangled Representations in Realistic Settings
Learning meaningful representations that disentangle the underlying stru...
