Learning Conditional Invariance through Cycle Consistency

11/25/2021
by Maxim Samarin et al.

Identifying meaningful and independent factors of variation in a dataset is a challenging learning task frequently addressed by means of deep latent variable models. This task can be viewed as learning symmetry transformations preserving the value of a chosen property along latent dimensions. However, existing approaches exhibit severe drawbacks in enforcing the invariance property in the latent space. We address these shortcomings with a novel approach to cycle consistency. Our method involves two separate latent subspaces for the target property and the remaining input information, respectively. In order to enforce invariance as well as sparsity in the latent space, we incorporate semantic knowledge by using cycle consistency constraints relying on property side information. The proposed method is based on the deep information bottleneck and, in contrast to other approaches, allows using continuous target properties and provides inherent model selection capabilities. We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models with improved invariance properties.
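The cycle-consistency idea sketched in the abstract — split the latent space into a property subspace and a residual subspace, swap in a target property code, decode, re-encode, and require the recovered property code to match the imposed one — can be illustrated with a toy model. The linear `encode`/`decode` maps and all dimensions below are hypothetical stand-ins for the paper's deep information bottleneck networks, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the encoder/decoder pair: a random linear encoder and
# its (pseudo-)inverse as decoder. Purely illustrative.
D, K_Y, K_X = 6, 2, 4            # input dim, property-latent dim, residual-latent dim
W_enc = rng.normal(size=(K_Y + K_X, D)) * 0.1
W_dec = np.linalg.pinv(W_enc)    # decoder approximately inverts the encoder

def encode(x):
    z = W_enc @ x
    return z[:K_Y], z[K_Y:]      # split latent into property part z_y and residual z_x

def decode(z_y, z_x):
    return W_dec @ np.concatenate([z_y, z_x])

def cycle_consistency_loss(x, z_y_target):
    """Swap in a target property code, decode, re-encode, and penalise
    deviation of the recovered property code from the imposed one."""
    _, z_x = encode(x)                       # keep only the residual information of x
    x_swapped = decode(z_y_target, z_x)      # generate input with the imposed property
    z_y_rec, _ = encode(x_swapped)           # re-encode and read back the property code
    return float(np.sum((z_y_rec - z_y_target) ** 2))

x = rng.normal(size=D)
target = np.array([1.0, -0.5])
loss = cycle_consistency_loss(x, target)
```

For this idealised linear pair the decoder inverts the encoder, so the loss is (numerically) zero; in the paper's setting the constraint is enforced during training on learned nonlinear networks, pushing the property subspace to carry exactly the target property and the residual subspace to be invariant to it.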


