Invariant Representations through Adversarial Forgetting

11/11/2019
by   Ayush Jaiswal, et al.

We propose a novel approach to achieving invariance in deep neural networks by inducing amnesia for unwanted factors of the data through a new adversarial forgetting mechanism. We show that the forgetting mechanism acts as an information bottleneck, which adversarial training manipulates to learn invariance to unwanted factors. Empirical results show that the proposed framework achieves state-of-the-art performance at learning invariance in both nuisance and bias settings across a diverse collection of datasets and tasks.
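The abstract's mechanism, a forget gate that masks dimensions of a learned representation while an adversary tries to recover the unwanted factor from the masked code, can be illustrated with a minimal sketch in plain Python. This is not the authors' code; the function names, the toy vectors, and the linear adversary probe are all illustrative assumptions:

```python
# Hedged sketch of the adversarial-forgetting idea (illustrative only,
# not the paper's implementation): an encoder output z is passed through
# a soft forget mask, and a toy adversary probes the masked code for the
# nuisance factor. In training, the adversary minimizes this probe loss
# while the encoder/gate are pushed to maximize it, driving the gate to
# suppress nuisance dimensions.

def forget_gate(z, mask):
    """Apply a soft forget mask (values in [0, 1]) to a representation."""
    return [m * zi for m, zi in zip(mask, z)]

def adversary_loss(masked_z, probe_weights):
    """Toy adversary: squared output of a linear probe for the nuisance."""
    pred = sum(w * zi for w, zi in zip(probe_weights, masked_z))
    return pred ** 2

z = [0.8, -1.2, 0.5]      # hypothetical encoder output
mask = [1.0, 0.1, 1.0]    # gate has learned to suppress dimension 2
w_adv = [0.0, 2.0, 0.0]   # adversary probes dimension 2

masked = forget_gate(z, mask)
print(adversary_loss(masked, w_adv))
```

Because the gate nearly zeroes the dimension the adversary relies on, the probe loss stays small, which is the information-bottleneck behavior the abstract describes: the mask controls how much about each factor survives into the downstream representation.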

Related research:

05/07/2019 · Unified Adversarial Invariance
We present a unified invariance framework for supervised neural networks...

12/02/2019 · Discovery and Separation of Features for Invariant Representation Learning
Supervised machine learning models often associate irrelevant nuisance f...

05/24/2018 · Evading the Adversary in Invariant Representation
Representations of data that are invariant to changes in specified nuisa...

11/25/2020 · De-STT: De-entaglement of unwanted Nuisances and Biases in Speech to Text System using Adversarial Forgetting
Training a robust Speech to Text (STT) system requires tens of thousands...

09/26/2018 · Unsupervised Adversarial Invariance
Data representations that contain all the information about target varia...

12/14/2021 · A Style and Semantic Memory Mechanism for Domain Generalization
Mainstream state-of-the-art domain generalization algorithms tend to pri...

06/05/2017 · Emergence of Invariance and Disentangling in Deep Representations
Using established principles from Information Theory and Statistics, we ...
