Gated Variational AutoEncoders: Incorporating Weak Supervision to Encourage Disentanglement

11/15/2019
by Matthew J. Vowels, et al.

Variational AutoEncoders (VAEs) provide a means to generate representational latent embeddings. Previous research has highlighted the benefits of achieving representations that are disentangled, particularly for downstream tasks. However, there is some debate about how to encourage disentanglement with VAEs, and evidence indicates that existing implementations of VAEs do not achieve disentanglement consistently. How well a VAE's latent space has been disentangled is often evaluated against our subjective expectations of which attributes should be disentangled for a given problem. Therefore, by definition, we already have domain knowledge of what should be achieved, and yet we use unsupervised approaches to achieve it. We propose a weakly-supervised approach that incorporates any available domain knowledge into the training process to form a Gated-VAE. The process involves partitioning the representational embedding and gating backpropagation: all partitions are utilised on the forward pass, but gradients are backpropagated through different partitions according to selected image/target pairings. The approach can be used to modify existing VAE models such as beta-VAE, InfoVAE and DIP-VAE-II. Experiments demonstrate that, with gated backpropagation, latent factors are represented in their intended partitions. The approach is applied to images of faces for the purpose of disentangling head-pose from facial expression. Quantitative metrics show that using the Gated-VAE improves average disentanglement, completeness and informativeness, as compared with un-gated implementations. Qualitative assessment of latent traversals demonstrates the disentanglement of head-pose from expression, even when only weak/noisy supervision is available.
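The gating mechanism described above lends itself to a short illustration. Below is a minimal sketch of gated backpropagation in PyTorch (a framework assumption; the abstract does not prescribe one): the latent sample is split into partitions, every partition keeps its value on the forward pass, and `Tensor.detach()` blocks gradients through all partitions except the one matched to the current image/target pairing. The names `gate_partitions`, `partition_bounds` and `active_idx` are hypothetical, not taken from the paper.

```python
import torch

def gate_partitions(z, partition_bounds, active_idx):
    """Gate backpropagation through a partitioned latent sample.

    z                : latent sample, shape (batch, latent_dim)
    partition_bounds : list of (start, end) column ranges, one per partition
    active_idx       : index of the partition assigned to the current
                       image/target pairing (hypothetical convention)

    Every partition keeps its value on the forward pass; .detach() only
    stops gradients from flowing back through the inactive partitions.
    """
    pieces = []
    for i, (start, end) in enumerate(partition_bounds):
        block = z[:, start:end]
        pieces.append(block if i == active_idx else block.detach())
    return torch.cat(pieces, dim=1)


# Hypothetical training step: a 10-d latent split into two 5-d partitions,
# where batches paired by head-pose train partition 0 and batches paired
# by expression would train partition 1.
bounds = [(0, 5), (5, 10)]
z = torch.randn(16, 10, requires_grad=True)         # stand-in for an encoder output
z_gated = gate_partitions(z, bounds, active_idx=0)  # head-pose pairing
z_gated.sum().backward()                            # stand-in for the VAE loss
assert torch.all(z.grad[:, 5:] == 0)                # inactive partition got no gradient
```

Because the gate acts only on the backward pass, the decoder still reconstructs from the full latent code, which is what allows all partitions to be utilised on every forward pass while each receives gradients only from its designated pairings.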
