Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies

08/20/2018
by Alessandro Achille, et al.

Intelligent behaviour in the real world requires the ability to acquire new knowledge from an ongoing sequence of experiences while preserving and reusing past knowledge. We propose a novel algorithm for unsupervised representation learning from piecewise-stationary visual data: the Variational Autoencoder with Shared Embeddings (VASE). Based on the Minimum Description Length principle, VASE automatically detects shifts in the data distribution and allocates spare representational capacity to new knowledge, while simultaneously protecting previously learnt representations from catastrophic forgetting. Our approach encourages the learnt representations to be disentangled, which imparts a number of desirable properties: VASE can deal sensibly with ambiguous inputs, it can enhance its own representations through imagination-based exploration, and, most importantly, it exhibits semantically meaningful sharing of latents between different datasets. Compared to baselines with entangled representations, our approach can reason beyond surface-level statistics and perform semantically meaningful cross-domain inference.
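To make the "detects shifts in the data distribution" idea concrete, here is a minimal, hypothetical sketch; it is not the paper's VASE. It replaces the VAE with a tiny linear autoencoder and uses a simple threshold rule (error above mean + k standard deviations of recent errors) as the shift signal. All names (`TinyAutoencoder`, `detect_shift`, the `window` and `k` parameters) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyAutoencoder:
    """Minimal linear autoencoder trained by plain gradient descent.
    Stands in for the VAE purely for illustration."""
    def __init__(self, dim, latent, lr=0.02):
        self.We = rng.normal(scale=0.1, size=(dim, latent))  # encoder weights
        self.Wd = rng.normal(scale=0.1, size=(latent, dim))  # decoder weights
        self.lr = lr

    def recon_error(self, x):
        # Mean squared reconstruction error of a batch.
        return float(np.mean((x - x @ self.We @ self.Wd) ** 2))

    def step(self, x):
        # One gradient-descent step on the reconstruction loss.
        z = x @ self.We
        err = z @ self.Wd - x                       # reconstruction residual
        g_We = x.T @ (err @ self.Wd.T) / len(x)
        g_Wd = z.T @ err / len(x)
        self.We -= self.lr * g_We
        self.Wd -= self.lr * g_Wd

def detect_shift(model, batch, history, window=50, k=3.0):
    """Flag a distribution shift when the batch reconstruction error
    exceeds mean + k*std of recent errors (a hypothetical rule)."""
    err = model.recon_error(batch)
    recent = history[-window:]
    shifted = bool(len(recent) >= 10 and
                   err > np.mean(recent) + k * np.std(recent))
    history.append(err)
    return shifted

model = TinyAutoencoder(dim=8, latent=4)
history = []

# Environment A: observations confined to one random 4-d subspace of R^8.
basis_a = rng.normal(size=(4, 8))
for _ in range(400):
    x = rng.normal(size=(32, 4)) @ basis_a
    detect_shift(model, x, history)
    model.step(x)

# Environment B: a different subspace. Reconstruction error jumps, which
# in a VASE-like system would be the cue to allocate fresh capacity.
basis_b = rng.normal(size=(4, 8))
x_b = rng.normal(size=(32, 4)) @ basis_b
shift_flag = detect_shift(model, x_b, history)
print(shift_flag)
```

In this toy setting the error on the new environment is far above the recent-error band, so the detector fires; a continual learner could then protect the current weights and route the new data to spare latent capacity.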


Related research

- FedDCSR: Federated Cross-domain Sequential Recommendation via Disentangled Representation Learning (09/15/2023)
  Cross-domain Sequential Recommendation (CSR) which leverages user sequen...

- Cross-domain Face Presentation Attack Detection via Multi-domain Disentangled Representation Learning (04/04/2020)
  Face presentation attack detection (PAD) has been an urgent problem to b...

- Learning latent representations across multiple data domains using Lifelong VAEGAN (07/20/2020)
  The problem of catastrophic forgetting occurs in deep learning models tr...

- Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations (05/24/2017)
  We would like to learn a representation of the data which decomposes an ...

- Learning Disentangled Representations of Negation and Uncertainty (04/01/2022)
  Negation and uncertainty modeling are long-standing tasks in natural lan...

- Blocked and Hierarchical Disentangled Representation From Information Theory Perspective (01/21/2021)
  We propose a novel and theoretical model, blocked and hierarchical varia...

- Geometric Disentanglement by Random Convex Polytopes (09/29/2020)
  Finding and analyzing meaningful representations of data is the purpose ...
