Discovery and Separation of Features for Invariant Representation Learning

12/02/2019
by Ayush Jaiswal, et al.

Supervised machine learning models often associate irrelevant nuisance factors with the prediction target, which hurts generalization. We propose a framework for training robust neural networks that induces invariance to nuisances by learning to discover and separate the predictive and nuisance factors of data. We present an information-theoretic formulation of our approach, from which we derive the training objectives as well as its connections with previous methods. Empirical results on a wide array of datasets show that the proposed framework achieves state-of-the-art performance without requiring nuisance annotations during training.
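To make the discover-and-separate idea concrete, below is a minimal, hypothetical PyTorch sketch of one training step: an encoder splits the input into a predictive code e and a nuisance code z, a classifier predicts the target from e alone, a decoder reconstructs the input from (e, z) so that nuisance information has somewhere to go other than e, and a crude cross-covariance penalty stands in for the separation term. Every module name, dimension, and the penalty itself are illustrative assumptions; the paper derives its actual objectives from an information-theoretic formulation that the abstract does not reproduce.

# Hypothetical sketch of a discover-and-separate training step.
# Not the paper's actual objectives; all names and losses are assumptions.
import torch
import torch.nn as nn

class SplitEncoder(nn.Module):
    """Encodes x into a predictive code e and a nuisance code z."""
    def __init__(self, x_dim, e_dim, z_dim, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.to_e = nn.Linear(hidden, e_dim)   # predictive factors
        self.to_z = nn.Linear(hidden, z_dim)   # nuisance factors

    def forward(self, x):
        h = self.trunk(x)
        return self.to_e(h), self.to_z(h)

x_dim, e_dim, z_dim, n_classes = 20, 8, 8, 4
enc = SplitEncoder(x_dim, e_dim, z_dim)
clf = nn.Linear(e_dim, n_classes)              # predicts y from e only
dec = nn.Sequential(nn.Linear(e_dim + z_dim, 64), nn.ReLU(),
                    nn.Linear(64, x_dim))      # reconstructs x from (e, z)
opt = torch.optim.Adam(
    [*enc.parameters(), *clf.parameters(), *dec.parameters()], lr=1e-3)

x = torch.randn(32, x_dim)                     # toy batch
y = torch.randint(0, n_classes, (32,))

e, z = enc(x)
# Predictive term: e alone must suffice for the target.
pred_loss = nn.functional.cross_entropy(clf(e), y)
# Reconstruction term: (e, z) together must retain the data,
# pushing nuisance information into z rather than e.
recon_loss = nn.functional.mse_loss(dec(torch.cat([e, z], dim=1)), x)
# Separation term (crude stand-in): penalize cross-covariance between e and z.
e_c, z_c = e - e.mean(0), z - z.mean(0)
sep_loss = (e_c.t() @ z_c / len(x)).pow(2).mean()

loss = pred_loss + recon_loss + sep_loss
opt.zero_grad(); loss.backward(); opt.step()

Note that nothing in this sketch labels which factors are nuisances: z only has to absorb whatever e does not need for prediction, which is consistent with the abstract's claim that no nuisance annotations are required during training.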

Related research

11/11/2019 · Invariant Representations through Adversarial Forgetting
We propose a novel approach to achieving invariance for deep neural netw...

05/07/2019 · Unified Adversarial Invariance
We present a unified invariance framework for supervised neural networks...

05/24/2018 · Evading the Adversary in Invariant Representation
Representations of data that are invariant to changes in specified nuisa...

09/26/2018 · Unsupervised Adversarial Invariance
Data representations that contain all the information about target varia...

05/30/2021 · On the benefits of representation regularization in invariance based domain generalization
A crucial aspect in reliable machine learning is to design a deployable ...

05/26/2023 · Manifold Regularization for Memory-Efficient Training of Deep Neural Networks
One of the prevailing trends in the machine- and deep-learning community...

07/13/2021 · On Designing Good Representation Learning Models
The goal of representation learning is different from the ultimate objec...
