Multifactor Sequential Disentanglement via Structured Koopman Autoencoders

03/30/2023
by Nimrod Berman, et al.

Disentangling complex data into its latent factors of variation is a fundamental task in representation learning. Existing work on sequential disentanglement mostly provides two-factor representations, i.e., it separates the data into time-varying and time-invariant factors. In contrast, we consider multifactor disentanglement, in which multiple (more than two) semantically disentangled components are generated. Key to our approach is a strong inductive bias: we assume that the underlying dynamics can be represented linearly in the latent space. Under this assumption, it becomes natural to exploit the recently introduced Koopman autoencoder models. However, disentangled representations are not guaranteed in Koopman approaches, and thus we propose a novel spectral loss term which leads to structured Koopman matrices and disentanglement. Overall, we propose a new deep model that is simple, easy to implement, fully unsupervised, and supports multifactor disentanglement. We showcase new disentangling abilities, such as swapping individual static factors between characters and incrementally swapping disentangled factors from a source to a target. Moreover, we evaluate our method extensively on standard two-factor benchmark tasks, where we significantly improve over competing unsupervised approaches and perform competitively with weakly- and self-supervised state-of-the-art approaches. The code is available at https://github.com/azencot-group/SKD.
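The abstract's two key ingredients, a linear (Koopman) operator fitted in the latent space and a spectral penalty that structures its eigenvalues, can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration under assumed design choices (MLP encoder/decoder, a per-batch least-squares fit of K, the names KoopmanAESketch and spectral_loss, and the radius hyperparameter are all illustrative); it is not the authors' SKD implementation, which lives at the repository linked above.

```python
import torch
import torch.nn as nn

class KoopmanAESketch(nn.Module):
    """Hypothetical Koopman autoencoder; encoder/decoder are illustrative MLPs."""
    def __init__(self, input_dim, latent_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.Tanh(), nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(), nn.Linear(hidden, input_dim))

    def forward(self, x):
        # x: (batch, time, input_dim); encode each frame to a latent code.
        z = self.encoder(x)
        # Fit one linear (Koopman) operator K by least squares so that
        # z_{t+1} ~ z_t @ K; the pseudo-inverse keeps the fit differentiable.
        z_past = z[:, :-1].reshape(-1, z.size(-1))
        z_next = z[:, 1:].reshape(-1, z.size(-1))
        K = torch.linalg.pinv(z_past) @ z_next  # (latent, latent)
        x_rec = self.decoder(z)                 # plain reconstruction
        x_pred = self.decoder((z_past @ K).reshape(z[:, 1:].shape))  # one-step prediction
        return x_rec, x_pred, K

def spectral_loss(K, num_static, radius=0.5):
    # Illustrative structured-spectrum penalty: the num_static eigenvalues
    # closest to 1 are pulled to exactly 1 (time-invariant factors), while
    # the remaining eigenvalues are pushed inside a disk of the given radius
    # (time-varying factors). `radius` is an assumed hyperparameter, not a
    # value taken from the paper.
    eigvals = torch.linalg.eigvals(K)           # complex spectrum of K
    order = torch.argsort((eigvals - 1.0).abs())
    static, dynamic = order[:num_static], order[num_static:]
    loss_static = ((eigvals[static] - 1.0).abs() ** 2).sum()
    loss_dynamic = (torch.relu(eigvals[dynamic].abs() - radius) ** 2).sum()
    return loss_static + loss_dynamic
```

In a training loop one would presumably combine these terms as, e.g., mse(x_rec, x) + mse(x_pred, x[:, 1:]) + lambda_e * spectral_loss(K, k), where k is the assumed number of static factors; the exact losses and weights in SKD may differ.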


Related research

10/22/2021 - Contrastively Disentangled Sequential Variational Autoencoder
05/25/2023 - Sample and Predict Your Latent: Modality-free Sequential Disentanglement via Contrastive Estimation
01/19/2021 - Disentangled Recurrent Wasserstein Autoencoder
05/23/2020 - S3VAE: Self-Supervised Sequential VAE for Representation Disentanglement and Data Generation
02/13/2022 - Unsupervised Disentanglement with Tensor Product Representations on the Torus
07/22/2019 - Product of Orthogonal Spheres Parameterization for Disentangled Representation Learning
03/02/2021 - Learning disentangled representations via product manifold projection
