Learning with MISELBO: The Mixture Cookbook

09/30/2022
by Oskar Kviman, et al.

Mixture models in variational inference (VI) are an active field of research. Recent works have established their connection to multiple importance sampling (MIS) through the MISELBO and advanced the use of ensemble approximations for large-scale problems. However, as we show here, learning the ensemble components independently can lead to suboptimal diversity. Hence, we study the effect of instead using MISELBO as an objective function for learning mixtures, and we propose the first ever mixture of variational approximations for a normalizing flow-based hierarchical variational autoencoder (VAE) with VampPrior and a PixelCNN decoder network. Two major insights led to the construction of this novel composite model. First, mixture models have the potential to be off-the-shelf tools for practitioners to obtain more flexible posterior approximations in VAEs. Therefore, we make them more accessible by demonstrating how to apply them to four popular architectures. Second, when MISELBO is the objective function, the mixture components cooperate to cover the target distribution while maximizing their diversity. We explain this cooperative behavior by drawing a novel connection between VI and adaptive importance sampling. Finally, we demonstrate the superiority of the Mixture VAEs' learned feature representations on both image and single-cell transcriptome data, and obtain state-of-the-art results among VAE architectures in terms of negative log-likelihood on the MNIST and FashionMNIST datasets. Code is available here: <https://github.com/Lagergren-Lab/MixtureVAEs>.
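For intuition, the sketch below shows one way to form a Monte Carlo estimate of a MISELBO-style objective for a mixture of S Gaussian encoders, where each component's samples are scored against the shared mixture density rather than against that component's own density alone. This is a minimal illustration under stated assumptions, not the paper's implementation: the `encoders`, `decoder`, and `log_prior` callables are hypothetical placeholders; see the linked repository for the authors' code.

```python
import math
import torch

def miselbo_estimate(x, encoders, decoder, log_prior, n_samples=1):
    """Monte Carlo sketch of a MISELBO-style objective for S mixture components.

    Assumptions (placeholders, not the paper's API):
      - each encoder maps x to (mu, logvar) of a diagonal Gaussian q_s(z | x)
      - decoder(x, z) returns log p(x | z) and log_prior(z) returns log p(z),
        both with shape (n_samples, batch)
    """
    S = len(encoders)
    params = [enc(x) for enc in encoders]                  # S pairs (mu, logvar)
    per_component_terms = []
    for mu_s, logvar_s in params:
        std_s = torch.exp(0.5 * logvar_s)
        # Reparameterized samples z ~ q_s(z | x), shape (n_samples, batch, dim).
        z = mu_s + std_s * torch.randn(n_samples, *mu_s.shape)
        # log of the mixture density (1/S) * sum_j q_j(z | x), via logsumexp.
        log_q_all = torch.stack([
            torch.distributions.Normal(mu_j, torch.exp(0.5 * logvar_j))
                 .log_prob(z).sum(-1)
            for mu_j, logvar_j in params
        ])                                                  # (S, n_samples, batch)
        log_q_mix = torch.logsumexp(log_q_all, dim=0) - math.log(S)
        # log p(x, z) = log p(x | z) + log p(z)
        log_joint = decoder(x, z) + log_prior(z)
        per_component_terms.append((log_joint - log_q_mix).mean(0))
    # Average the per-component ELBO terms over the S components.
    return torch.stack(per_component_terms).mean(0)         # per-example estimate
```

Scoring every component's samples against the mixture density is what couples the components during training: a component gains little by covering a region that another component already covers well, which reflects the cooperative, diversity-encouraging behavior described in the abstract.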


