Sampling From Autoencoders' Latent Space via Quantization And Probability Mass Function Concepts

by   Aymene Mohammed Bouayed, et al.

In this study, we focus on sampling from the latent space of generative models built upon autoencoders so that the reconstructed samples are lifelike images. To do so, we introduce a novel post-training sampling algorithm rooted in the concept of probability mass functions, coupled with a quantization process. Our proposed algorithm establishes a vicinity around each latent vector of the input data and then draws samples from these defined neighborhoods. This strategy ensures that the sampled latent vectors predominantly inhabit high-probability regions, which can, in turn, be effectively decoded into realistic images. A natural point of comparison for our sampling algorithm is sampling based on Gaussian mixture models (GMMs), owing to their inherent capability to represent clusters. Notably, we improve the time complexity from the 𝒪(n × d × k × i) of GMM sampling to a much more streamlined 𝒪(n × d), yielding a substantial speedup at runtime. Moreover, our experimental results, gauged through the Fréchet inception distance (FID) for image generation, demonstrate the superior performance of our sampling algorithm across a diverse range of models and datasets. On the MNIST benchmark dataset, our approach outperforms GMM sampling with an FID improvement of up to 0.89. Furthermore, when generating images of faces and ocular images, our approach achieves FID improvements of 1.69 and 0.87 over GMM sampling on the CelebA and MOBIUS datasets, respectively, as evidenced in our experiments. Lastly, we substantiate our methodology's efficacy in estimating latent space distributions relative to GMM sampling, through the lens of the Wasserstein distance.
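The idea of quantizing latent vectors, treating the occupied cells as an empirical probability mass function, and sampling from the neighborhood of observed latents can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `pmf_quantized_sampling`, the uniform-grid quantization, and the uniform draw within each cell are all assumptions made for the sketch.

```python
import numpy as np

def pmf_quantized_sampling(latents, n_samples, n_bins=20, rng=None):
    """Hypothetical sketch: quantize each latent dimension onto a uniform
    grid, treat the stored cell codes as an empirical PMF over occupied
    cells, and sample new latent vectors from those cells (vicinities)."""
    rng = np.random.default_rng(rng)
    lo, hi = latents.min(axis=0), latents.max(axis=0)
    width = (hi - lo) / n_bins
    width[width == 0] = 1.0  # guard against degenerate (constant) dimensions
    # Quantization: map each latent vector to a per-dimension cell index.
    codes = np.floor((latents - lo) / width).astype(int).clip(0, n_bins - 1)
    # Empirical PMF: each observed latent contributes equal mass to its cell,
    # so drawing a stored code uniformly samples cells proportional to mass.
    idx = rng.integers(0, len(codes), size=n_samples)
    picked = codes[idx]
    # Draw uniformly inside the chosen cell, i.e. the vicinity of a latent.
    u = rng.random((n_samples, latents.shape[1]))
    return lo + (picked + u) * width
```

Generating each sample costs one table lookup plus d per-dimension operations, hence 𝒪(n × d) overall, with no iterative fitting of k mixture components as GMM sampling requires.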



