FAVAE: Sequence Disentanglement using Information Bottleneck Principle

02/22/2019
by Masanori Yamada, et al.

We propose the factorized action variational autoencoder (FAVAE), a state-of-the-art generative model for learning disentangled and interpretable representations from sequential data via the information bottleneck, without supervision. The purpose of disentangled representation learning is to obtain interpretable and transferable representations from data. We focus on disentangled representations of sequential data, since extending disentangled representation learning to sequential data such as video, speech, and stock market data opens up a wide range of potential applications. Sequential data are characterized by dynamic and static factors: dynamic factors are time dependent, whereas static factors are independent of time. Previous models disentangle static and dynamic factors by explicitly modeling the priors of the latent variables to distinguish between them; however, they cannot disentangle representations among dynamic factors, such as "picking up" versus "throwing" in robotic tasks. FAVAE can disentangle multiple dynamic factors. Because it does not require modeling priors, it can disentangle representations among dynamic factors as well. We conducted experiments showing that FAVAE can extract disentangled dynamic factors.
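The abstract's mention of the information bottleneck suggests a capacity-constrained VAE objective applied to a latent code that summarizes an entire sequence. The sketch below is a minimal, hypothetical illustration in PyTorch, not the authors' implementation: the SeqVAE class, the GRU encoder/decoder, and the beta and capacity values are assumptions made for this example, and the loss uses the capacity-annealed form reconstruction + beta * |KL - C| commonly associated with information-bottleneck VAEs; the actual FAVAE architecture and objective differ in detail.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SeqVAE(nn.Module):
        """Minimal sequence VAE: a GRU encoder compresses the whole
        sequence into one latent z; a GRU decoder reconstructs it."""
        def __init__(self, x_dim, h_dim=64, z_dim=8):
            super().__init__()
            self.encoder = nn.GRU(x_dim, h_dim, batch_first=True)
            self.to_mu = nn.Linear(h_dim, z_dim)
            self.to_logvar = nn.Linear(h_dim, z_dim)
            self.z_to_h = nn.Linear(z_dim, h_dim)
            self.decoder = nn.GRU(x_dim, h_dim, batch_first=True)
            self.to_x = nn.Linear(h_dim, x_dim)

        def forward(self, x):
            _, h = self.encoder(x)                     # h: (1, B, h_dim)
            mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
            # teacher forcing: condition the decoder on inputs shifted by one step
            x_shift = F.pad(x, (0, 0, 1, 0))[:, :-1]
            out, _ = self.decoder(x_shift, h0)
            return self.to_x(out), mu, logvar

    def bottleneck_loss(x, x_rec, mu, logvar, beta=10.0, capacity=5.0):
        """Information-bottleneck-style objective: reconstruction
        plus beta * |KL - C|. Raising `capacity` gradually during
        training lets information enter the latent channel a little
        at a time, one way to encourage factors to be encoded
        separately without hand-designed priors."""
        rec = F.mse_loss(x_rec, x, reduction="sum") / x.size(0)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return rec + beta * (kl - capacity).abs()

    # usage sketch
    model = SeqVAE(x_dim=3)
    x = torch.randn(16, 20, 3)        # batch of 16 sequences, 20 time steps
    x_rec, mu, logvar = model(x)
    loss = bottleneck_loss(x, x_rec, mu, logvar)
    loss.backward()

In practice the capacity C would be annealed upward over the course of training; the single sequence-level latent here stands in for whatever latent structure the full model uses, and is only meant to show where the information-bottleneck term enters the objective.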


Related research

10/22/2021 - Contrastively Disentangled Sequential Variational Autoencoder
Self-supervised disentangled representation learning is a critical task ...

09/22/2017 - Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data
We present a factorized hierarchical variational autoencoder, which lear...

01/24/2019 - Learning Disentangled Representations with Reference-Based Variational Autoencoders
Learning disentangled representations from visual data, where different ...

01/19/2021 - Disentangled Recurrent Wasserstein Autoencoder
Learning disentangled representations leads to interpretable models and ...

04/25/2018 - Unsupervised Disentangled Representation Learning with Analogical Relations
Learning the disentangled representation of interpretable generative fac...

05/17/2021 - Learning Disentangled Representations for Time Series
Time-series representation learning is a fundamental task for time-serie...

03/12/2021 - VDSM: Unsupervised Video Disentanglement with State-Space Modeling and Deep Mixtures of Experts
Disentangled representations support a range of downstream tasks includi...
