Blocked and Hierarchical Disentangled Representation From Information Theory Perspective

01/21/2021
by Ziwen Liu et al.

We propose a novel, theoretically grounded model, the blocked and hierarchical variational autoencoder (BHiVAE), to obtain better-disentangled representations. Since information theory offers strong explanatory power for neural networks, we approach the disentanglement problem from an information-theoretic perspective. BHiVAE builds mainly on the information bottleneck theory and the information maximization principle. Our main ideas are: (1) each attribute is represented by a block of neurons rather than a single neuron node, so that the block can carry enough information; (2) a hierarchical structure places different attributes on different layers, allowing us to segment the information within each layer and ensure that the final representation is disentangled. Furthermore, we present supervised and unsupervised variants of BHiVAE, which differ mainly in how information is separated between blocks. Supervised BHiVAE uses label information as the criterion for separating blocks; unsupervised BHiVAE, lacking extra information, uses the Total Correlation (TC) measure to achieve independence between blocks, together with a newly designed prior distribution over the latent space to guide representation learning. Experiments show excellent disentanglement results and superior classification accuracy of the learned representations.
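The two ingredients the abstract names, a block-partitioned latent space and a Total Correlation measure of independence between blocks, can be illustrated with a minimal sketch. This is not the authors' BHiVAE implementation; the function names, block sizes, and the Gaussian closed-form TC estimate (sum of per-block entropies minus the joint entropy, computed from a sample covariance) are illustrative assumptions.

```python
import numpy as np

def split_latent(z, block_sizes):
    """Split latent codes of shape (N, D) into per-attribute blocks along dim 1.

    Illustrative only: block sizes are a modeling choice, not values from the paper.
    """
    idx = np.cumsum(block_sizes)[:-1]
    return np.split(z, idx, axis=1)

def gaussian_total_correlation(z, block_sizes):
    """Gaussian estimate of the Total Correlation between latent blocks.

    For a Gaussian, TC = sum_i H(z_i) - H(z) reduces to
    0.5 * (sum_i log det Sigma_ii - log det Sigma),
    where Sigma_ii are the diagonal blocks of the sample covariance.
    TC is ~0 when blocks are independent and grows with dependence.
    """
    cov = np.cov(z, rowvar=False)
    _, joint_logdet = np.linalg.slogdet(cov)
    start, block_logdet = 0, 0.0
    for size in block_sizes:
        sub = cov[start:start + size, start:start + size]
        block_logdet += np.linalg.slogdet(sub)[1]
        start += size
    return 0.5 * (block_logdet - joint_logdet)

rng = np.random.default_rng(0)

# Independent blocks: TC estimate should be close to zero.
z_indep = rng.standard_normal((5000, 6))
tc_indep = gaussian_total_correlation(z_indep, [2, 2, 2])

# Strongly dependent blocks (second block is a noisy copy of the first): TC is large.
a = rng.standard_normal((5000, 2))
z_dep = np.hstack([a, a + 0.1 * rng.standard_normal((5000, 2))])
tc_dep = gaussian_total_correlation(z_dep, [2, 2])
```

In an unsupervised BHiVAE-style objective, a penalty of this kind would be minimized so that different blocks (hence different attributes) carry non-overlapping information; the paper's actual estimator and prior may differ from this Gaussian sketch.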


Related research

04/18/2019 — Disentangled Representation Learning with Information Maximizing Autoencoder
Learning disentangled representation from any unlabelled data is a non-t...

07/07/2020 — README: REpresentation learning by fairness-Aware Disentangling MEthod
Fair representation learning aims to encode invariant representation wit...

12/30/2019 — Disentangled Representation Learning with Wasserstein Total Correlation
Unsupervised learning of disentangled representations involves uncoverin...

10/01/2021 — Unsupervised Belief Representation Learning in Polarized Networks with Information-Theoretic Variational Graph Auto-Encoders
This paper develops a novel unsupervised algorithm for belief representa...

05/29/2019 — A Heuristic for Unsupervised Model Selection for Variational Disentangled Representation Learning
Disentangled representations have recently been shown to improve data ef...

09/21/2020 — Improving Robustness and Generality of NLP Models Using Disentangled Representations
Supervised neural networks, which first map an input x to a single repre...

08/20/2018 — Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
Intelligent behaviour in the real-world requires the ability to acquire ...
