Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs

01/21/2019
by Nicholas Watters, et al.

We present a simple neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations. Instead of the deconvolutional network typically used in the decoder of VAEs, we tile (broadcast) the latent vector across space, concatenate fixed X- and Y-"coordinate" channels, and apply a fully convolutional network with 1x1 stride. This provides an architectural prior for dissociating positional from non-positional features in the latent distribution of VAEs, without providing any explicit supervision to this effect. We show that this architecture, which we term the Spatial Broadcast Decoder, improves disentangling, reconstruction accuracy, and generalization to held-out regions in data space. It provides a particularly dramatic benefit when applied to datasets with small objects. We also highlight a method for visualizing learned latent spaces that helped us diagnose our models and may prove useful to others assessing data representations. Finally, we show that the Spatial Broadcast Decoder is complementary to state-of-the-art (SOTA) disentangling techniques and, when incorporated, improves their performance.
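The core mechanism described above (broadcast the latent across a spatial grid, append fixed coordinate channels, then decode with stride-1 convolutions) is simple to implement. Below is a minimal PyTorch sketch under assumed settings: the latent dimension `z_dim`, the 64x64 output size, and the layer widths are illustrative choices, not the authors' exact configuration.

```python
# A minimal sketch of a Spatial Broadcast Decoder in PyTorch.
# Hyperparameters (z_dim, output size, channel widths) are assumptions
# for illustration, not the paper's exact configuration.
import torch
import torch.nn as nn

class SpatialBroadcastDecoder(nn.Module):
    def __init__(self, z_dim=10, height=64, width=64, out_channels=3):
        super().__init__()
        self.height = height
        self.width = width
        # Fully convolutional stack with 1x1 stride; input channels are
        # z_dim plus the two fixed coordinate channels.
        self.net = nn.Sequential(
            nn.Conv2d(z_dim + 2, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, out_channels, kernel_size=3, stride=1, padding=1),
        )
        # Fixed X- and Y-"coordinate" channels spanning [-1, 1].
        ys = torch.linspace(-1.0, 1.0, height)
        xs = torch.linspace(-1.0, 1.0, width)
        y_grid, x_grid = torch.meshgrid(ys, xs, indexing="ij")
        # Shape (1, 2, H, W); a buffer so it moves with the module's device.
        self.register_buffer("coords",
                             torch.stack((x_grid, y_grid)).unsqueeze(0))

    def forward(self, z):
        # z: (batch, z_dim). Tile (broadcast) the latent across space.
        b = z.size(0)
        z_tiled = z.view(b, -1, 1, 1).expand(-1, -1, self.height, self.width)
        # Concatenate the fixed coordinate channels and decode.
        x = torch.cat((z_tiled, self.coords.expand(b, -1, -1, -1)), dim=1)
        return self.net(x)

decoder = SpatialBroadcastDecoder()
z = torch.randn(4, 10)          # a batch of sampled latents
img = decoder(z)                # -> (4, 3, 64, 64)
```

Because every convolution has stride 1 and the spatial extent is fixed by the broadcast, the network cannot encode position implicitly through upsampling; positional information must come from the appended coordinate channels, which is what encourages the latent to separate positional from non-positional factors.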
