Supervising the Decoder of Variational Autoencoders to Improve Scientific Utility

09/09/2021
by Liyun Tu, et al.

Probabilistic generative models are attractive for scientific modeling because their inferred parameters can be used to generate hypotheses and design experiments. This requires that the learned model provide an accurate representation of the input data and yield a latent space that effectively predicts outcomes relevant to the scientific question. Supervised Variational Autoencoders (SVAEs) have previously been used for this purpose: a carefully designed decoder can serve as an interpretable generative model, while the supervised objective ensures a predictive latent representation. Unfortunately, the supervised objective forces the encoder to learn a biased approximation to the generative posterior distribution, which renders the generative parameters unreliable when used in scientific models. This issue has remained undetected because the reconstruction losses commonly used to evaluate model performance do not detect bias in the encoder. We address this previously unreported issue by developing a second-order supervision framework (SOS-VAE) that influences the decoder to induce a predictive latent representation, ensuring that the associated encoder maintains a reliable generative interpretation. We extend this technique to allow the user to trade off some bias in the generative parameters for improved predictive performance, providing an intermediate option between SVAEs and our new SOS-VAE. We also use this methodology to address the missing-data issues that often arise when combining recordings from multiple scientific experiments. We demonstrate the effectiveness of these developments using synthetic data and electrophysiological recordings, with an emphasis on how the learned representations can be used to design scientific experiments.
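To make the contrast concrete, the sketch below implements both objectives in PyTorch under simplifying assumptions: small linear Gaussian networks, a single differentiable inner encoder step standing in for the second-order supervision, and illustrative names such as `sup_head`, `inner_lr`, and `lam` that are ours, not the paper's reference implementation.

```python
# Minimal sketch of SVAE vs. SOS-VAE-style supervision. Architectures and the
# one-step inner update are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class Encoder(nn.Module):
    """Amortized approximate posterior q(z | x)."""
    def __init__(self, x_dim=20, z_dim=2):
        super().__init__()
        self.mu = nn.Linear(x_dim, z_dim)
        self.logvar = nn.Linear(x_dim, z_dim)
    def forward(self, x):
        return self.mu(x), self.logvar(x)

class Decoder(nn.Module):
    """Generative model p(x | z); its parameters carry the scientific meaning."""
    def __init__(self, x_dim=20, z_dim=2):
        super().__init__()
        self.lin = nn.Linear(z_dim, x_dim)
    def forward(self, z):
        return self.lin(z)

def neg_elbo(x, enc, dec):
    """Negative ELBO with a Gaussian likelihood and standard-normal prior."""
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    recon = F.mse_loss(dec(z), x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl, mu

def svae_loss(x, y, enc, dec, sup_head, lam=1.0):
    """SVAE objective: the supervision gradient reaches the encoder directly,
    which is the source of the bias in q(z | x) described above."""
    loss, mu = neg_elbo(x, enc, dec)
    return loss + lam * F.mse_loss(sup_head(mu), y)

def sos_vae_step(x, y, enc, dec, sup_head, enc_opt, dec_opt,
                 inner_lr=0.1, lam=1.0):
    """One SOS-VAE-style update: supervision influences only the decoder, by
    differentiating through an encoder update (hence 'second order')."""
    loss, _ = neg_elbo(x, enc, dec)
    names, params = zip(*enc.named_parameters())
    grads = torch.autograd.grad(loss, params, create_graph=True)
    adapted = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}
    mu, _ = functional_call(enc, adapted, (x,))
    sup = lam * F.mse_loss(sup_head(mu), y)

    enc_opt.zero_grad(); dec_opt.zero_grad()
    loss.backward(retain_graph=True)              # ELBO trains both networks
    sup.backward(inputs=list(dec.parameters())    # supervision is routed to
                 + list(sup_head.parameters()))   # the decoder (and head) only
    enc_opt.step(); dec_opt.step()
    return loss.item(), sup.item()
```

Here `backward(inputs=...)` routes the supervision gradient to the decoder and prediction head alone, so the encoder is trained only by the ELBO and keeps its interpretation as an approximation to the generative posterior; the intermediate trade-off variant mentioned in the abstract could presumably be obtained by letting a controlled fraction of the supervision gradient reach the encoder as well.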


