Variational Autoencoders with Normalizing Flow Decoders

04/12/2020
by Rogan Morrow, et al.

Recently proposed normalizing flow models such as Glow can generate high-quality, high-dimensional images with relatively fast sampling. Because of their inherently restrictive architecture, however, they must be made excessively deep to train effectively. In this paper we propose combining Glow with an underlying variational autoencoder to counteract this issue. We demonstrate that the proposed model is competitive with Glow in image quality and test likelihood while requiring far less training time.
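The "inherently restrictive architecture" the abstract refers to is the invertible building block used by Glow-style flows: the affine coupling layer, whose Jacobian log-determinant is cheap to compute but whose expressiveness per layer is limited, forcing deep stacks. As a rough, hypothetical illustration (a single coupling layer with simple linear conditioners, not the authors' actual model), a minimal sketch in numpy:

```python
import numpy as np

def coupling_forward(x, scale_w, shift_w):
    # Split the input in half; transform the second half conditioned on the first.
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    # Hypothetical conditioners: one linear map each for log-scale and shift.
    log_s = np.tanh(x1 @ scale_w)   # bounded log-scale for numerical stability
    t = x1 @ shift_w
    y2 = x2 * np.exp(log_s) + t
    y = np.concatenate([x1, y2], axis=-1)
    log_det = log_s.sum(axis=-1)    # log |det Jacobian| of the transform
    return y, log_det

def coupling_inverse(y, scale_w, shift_w):
    # The transform is invertible in closed form: recompute the conditioners
    # from the untouched half, then undo the scale and shift.
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    log_s = np.tanh(y1 @ scale_w)
    t = y1 @ shift_w
    x2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, x2], axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))              # batch of 4 toy "images" of dim 8
W_s = rng.normal(size=(4, 4)) * 0.1
W_t = rng.normal(size=(4, 4)) * 0.1
y, log_det = coupling_forward(x, W_s, W_t)
x_rec = coupling_inverse(y, W_s, W_t)
print(np.allclose(x, x_rec))
```

Because each layer only rescales and shifts half the dimensions, many such layers (with permutations in between) are stacked in practice; the paper's idea is to let a VAE carry part of the modeling burden so the flow stack can be shallower.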


