Multi-Rate VAE: Train Once, Get the Full Rate-Distortion Curve

12/07/2022
by Juhan Bae, et al.

Variational autoencoders (VAEs) are powerful tools for learning latent representations of data used in a wide range of applications. In practice, VAEs usually require multiple training rounds to choose how much information the latent variable should retain. This trade-off between the reconstruction error (distortion) and the KL divergence (rate) is typically parameterized by a hyperparameter β. In this paper, we introduce Multi-Rate VAE (MR-VAE), a computationally efficient framework for learning the optimal parameters corresponding to many values of β in a single training run. The key idea is to explicitly formulate a response function that maps β to the optimal parameters using hypernetworks. MR-VAEs construct a compact response hypernetwork in which the pre-activations are conditionally gated based on β. We justify the proposed architecture by analyzing linear VAEs and showing that it can represent their response functions exactly. With the learned hypernetwork, MR-VAEs can construct the rate-distortion curve without additional training and can be deployed with significantly less hyperparameter tuning. Empirically, MR-VAEs are competitive with, and often exceed, the performance of training multiple β-VAEs, with minimal computation and memory overhead.
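The β-weighted objective and the gating idea behind the paper can be sketched in a few lines. This is a minimal illustration under assumptions, not the authors' implementation: the function names and the sigmoid form of the gate are hypothetical, chosen only to show how a β-dependent gate on pre-activations lets one network represent a family of solutions.

```python
import math

def beta_vae_loss(distortion, rate, beta):
    """beta-VAE objective: reconstruction error plus beta times the KL term."""
    return distortion + beta * rate

def gated_preactivation(pre_act, scale, shift, log_beta):
    """Conditionally gate a pre-activation on log(beta).

    Hypothetical gate form: a sigmoid of an affine function of log(beta),
    so each unit's contribution smoothly turns on/off as beta varies.
    """
    gate = 1.0 / (1.0 + math.exp(-(scale * log_beta + shift)))
    return gate * pre_act

# Sweeping beta traces the rate-distortion trade-off: a larger beta
# penalizes the KL term (rate) more heavily.
losses = [beta_vae_loss(0.5, 2.0, b) for b in (0.5, 1.0, 10.0)]
```

In a standard β-VAE each point on this curve would require a separate training run; the abstract's claim is that one hypernetwork conditioned on β recovers the whole sweep at once.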

Related research

- Dataset Size Dependence of Rate-Distortion Curve and Threshold of Posterior Collapse in Linear VAE (09/14/2023)
- Quantitative Understanding of VAE by Interpreting ELBO as Rate Distortion Cost of Transform Coding (07/30/2020)
- QVRF: A Quantization-error-aware Variable Rate Framework for Learned Image Compression (03/10/2023)
- Optimizing Training Trajectories in Variational Autoencoders via Latent Bayesian Optimization Approach (06/30/2022)
- Conditional Deep Hierarchical Variational Autoencoder for Voice Conversion (12/06/2021)
- Simple and Effective VAE Training with Calibrated Decoders (06/23/2020)
