Learning Disentangled Representations via Mutual Information Estimation

12/09/2019
by Eduardo Hugo Sanchez, et al.

In this paper, we investigate the problem of learning disentangled representations. Given a pair of images sharing some attributes, we aim to create a low-dimensional representation that is split into two parts: a shared representation that captures the common information between the images, and an exclusive representation that contains the information specific to each image. To address this problem, we propose a model based on mutual information estimation that relies on neither image reconstruction nor image generation. Mutual information maximization is performed to capture the attributes of the data in the shared and exclusive representations, while the mutual information between the shared and exclusive representations is minimized to enforce disentanglement. We show that these representations are useful for downstream tasks such as image classification and image retrieval based on the shared or exclusive component. Moreover, classification results show that our model outperforms the state-of-the-art VAE/GAN-based models in representation disentanglement.
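To make the estimation step concrete, here is a minimal NumPy sketch of the Donsker-Varadhan lower bound that underlies neural mutual information estimators of this kind. It is an illustration only, not the authors' implementation: the fixed bilinear critic, the toy Gaussian data, and the `0.2` scaling are hypothetical stand-ins for the trained statistics network the method would use in practice.

```python
import numpy as np

def dv_bound(joint_scores, marginal_scores):
    """Donsker-Varadhan lower bound on mutual information:
    I(X; Y) >= E_joint[T(x, y)] - log E_marginal[exp(T(x, y'))],
    where T is a critic and y' is drawn from the product of marginals."""
    return joint_scores.mean() - np.log(np.exp(marginal_scores).mean())

# Toy data: y is a noisy copy of x, so I(x; y) > 0.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)

# Fixed bilinear critic T(x, y) = 0.2 * x * y; a hypothetical stand-in for
# the trained statistics network a real estimator would optimize.
critic = lambda a, b: 0.2 * a * b

# Marginal samples are obtained by shuffling y, which breaks the pairing.
mi_est = dv_bound(critic(x, y), critic(x, rng.permutation(y)))

# Scoring two independently shuffled copies gives an estimate near zero.
mi_indep = dv_bound(critic(x, rng.permutation(y)),
                    critic(x, rng.permutation(y)))
```

In the training objective described above, bounds of this form would be maximized for the mutual information between an image and its shared or exclusive representation, while the estimated mutual information between the shared and exclusive representations is minimized, e.g. a combined loss of the shape `-(MI_shared + MI_exclusive) + lambda * MI_cross`.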

Related research

- 03/08/2021: Multimodal Representation Learning via Maximization of Local Mutual Information
- 03/21/2019: Learning Disentangled Representations of Satellite Image Time Series
- 06/04/2019: Information Competing Process for Learning Diversified Representations
- 03/13/2020: Learning Unbiased Representations via Mutual Information Backpropagation
- 05/27/2019: Wyner VAE: Joint and Conditional Generation with Succinct Common Representation Learning
- 11/25/2019: Bridging Disentanglement with Independence and Conditional Independence via Mutual Information for Representation Learning
- 05/17/2021: Disentangled Variational Information Bottleneck for Multiview Representation Learning
