Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders

10/19/2020
by Masha Itkina, et al.

Discrete latent spaces in variational autoencoders have been shown to effectively capture the data distribution for many real-world problems such as natural language understanding, human intent prediction, and visual scene representation. However, discrete latent spaces need to be sufficiently large to capture the complexities of real-world data, rendering downstream tasks computationally challenging. For instance, performing motion planning in a high-dimensional latent representation of the environment could be intractable. We consider the problem of sparsifying the discrete latent space of a trained conditional variational autoencoder, while preserving its learned multimodality. As a post hoc latent space reduction technique, we use evidential theory to identify the latent classes that receive direct evidence from a particular input condition and filter out those that do not. Experiments on diverse tasks, such as image generation and human behavior prediction, demonstrate the effectiveness of our proposed technique at reducing the discrete latent sample space size of a model while maintaining its learned multimodality.
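
To make the filtering step concrete, below is a minimal, hypothetical sketch of the general idea: a trained conditional prior network maps an input condition to a categorical distribution over K discrete latent classes, each class's logit is decomposed into per-feature contributions from the condition, classes with no direct positive contribution are zeroed out, and the distribution is renormalized over the survivors. The positive-contribution rule here is a simplified stand-in for the paper's evidential (Dempster-Shafer) computation, and all names (`sparsify_latent_prior`, the tensor shapes, the `eps` threshold) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def sparsify_latent_prior(features, weight, bias, eps=1e-6):
    """Zero out latent classes that receive no direct positive evidence
    from the input condition's features, then renormalize.

    features: (B, D) penultimate-layer activations of the prior network.
    weight:   (K, D) final linear layer weights (D features -> K classes).
    bias:     (K,)   final linear layer bias.
    Returns:  (B, K) sparsified categorical prior over latent classes.
    """
    # Per-feature contribution of each input to each class logit: (B, K, D).
    contrib = features.unsqueeze(1) * weight.unsqueeze(0)

    # Treat summed positive contributions as the "direct evidence" a class
    # receives from this condition (a simplification of the evidential
    # weight-of-evidence decomposition described in the paper).
    positive_evidence = contrib.clamp(min=0).sum(dim=-1)      # (B, K)
    keep = (positive_evidence > eps).float()                   # (B, K)

    # Dense categorical prior produced by the trained network.
    probs = F.softmax(features @ weight.T + bias, dim=-1)      # (B, K)

    # Filter unsupported classes and renormalize over the survivors.
    sparse = probs * keep
    return sparse / sparse.sum(dim=-1, keepdim=True).clamp(min=eps)


if __name__ == "__main__":
    torch.manual_seed(0)
    B, D, K = 4, 16, 32                      # batch, feature dim, latent classes
    feats = torch.randn(B, D)
    W, b = torch.randn(K, D), torch.randn(K)
    p_sparse = sparsify_latent_prior(feats, W, b)
    print("surviving classes per input:", (p_sparse > 0).sum(dim=-1).tolist())
```

Because the filtering is post hoc, it leaves the trained encoder and decoder untouched; a downstream task (e.g., motion planning) only needs to enumerate the surviving latent classes rather than the full latent sample space.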


Related research:

ControlVAE: Model-Based Learning of Generative Controllers for Physics-Based Characters (10/12/2022)
In this paper, we introduce ControlVAE, a novel model-based framework fo...

TRUST-LAPSE: An Explainable Actionable Mistrust Scoring Framework for Model Monitoring (07/22/2022)
Continuous monitoring of trained ML models to determine when their predi...

Dueling Decoders: Regularizing Variational Autoencoder Latent Spaces (05/17/2019)
Variational autoencoders learn unsupervised data representations, but th...

Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models (11/15/2017)
Deep generative neural networks have proven effective at both conditiona...

Unsupervised learning with sparse space-and-time autoencoders (11/26/2018)
We use spatially-sparse two, three and four dimensional convolutional au...

Variational autoencoder with decremental information bottleneck for disentanglement (03/22/2023)
One major challenge of disentanglement learning with variational autoenc...

Sparsity in Variational Autoencoders (12/18/2018)
Working in high-dimensional latent spaces, the internal encoding of data...
