Learning Distributions via Monte-Carlo Marginalization

08/11/2023
by Chenqiu Zhao, et al.

We propose a novel method for learning intractable distributions from their samples. The main idea is to approximate an intractable distribution with a parametric model, such as a Gaussian Mixture Model (GMM), by minimizing the KL-divergence between them. Two challenges must be addressed to realize this idea. First, the computational cost of evaluating the KL-divergence becomes prohibitive as the dimensionality of the distributions increases; we propose Monte-Carlo Marginalization (MCMarg) to address this issue. Second, the optimization process must remain differentiable even though the target distribution is intractable; we handle this by representing the target with Kernel Density Estimation (KDE). The proposed approach is a powerful tool for learning complex distributions, and the entire process is differentiable, so it can serve as a better substitute for variational inference in variational auto-encoders (VAEs). Strong evidence of this benefit is that distributions learned by the proposed approach generate better images even when paired with a pre-trained VAE decoder. Building on this observation, we devise a distribution-learning auto-encoder that outperforms the VAE under the same network architecture. Experiments on standard datasets and synthetic data demonstrate the effectiveness of the proposed approach.
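To make the core idea concrete, here is a minimal 1-D sketch, not the authors' implementation: a KDE built from samples stands in for the intractable target, and a two-component GMM is fitted by minimizing a Monte-Carlo estimate of the KL-divergence. All names (`kde_logpdf`, `gmm_logpdf`, `mc_kl`) are illustrative, a coarse grid search replaces gradient-based optimization, and the high-dimensional marginalization trick (MCMarg) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "intractable" target: we only observe samples (a bimodal mixture).
data = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(1.5, 0.8, 400)])

def kde_logpdf(x, samples, bw=0.3):
    """Gaussian KDE: a differentiable surrogate for the target density."""
    z = (x[:, None] - samples[None, :]) / bw
    log_k = -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi) - np.log(bw)
    return np.logaddexp.reduce(log_k, axis=1) - np.log(len(samples))

def gmm_logpdf(x, means, stds, weights):
    """Log-density of a 1-D Gaussian mixture with the given parameters."""
    z = (x[:, None] - means[None, :]) / stds[None, :]
    log_c = -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi) - np.log(stds[None, :])
    return np.logaddexp.reduce(np.log(weights[None, :]) + log_c, axis=1)

def mc_kl(means, stds, weights, samples, n=500):
    """Monte-Carlo estimate of KL(p_kde || q_gmm), drawing x ~ p via the samples."""
    x = rng.choice(samples, size=n)
    return np.mean(kde_logpdf(x, samples) - gmm_logpdf(x, means, stds, weights))

# Toy optimization: grid-search the two component means (stds/weights fixed).
stds = np.array([0.6, 0.6])
weights = np.array([0.5, 0.5])
grid = np.arange(-3.0, 3.01, 0.5)
best, best_kl = None, np.inf
for m1 in grid:
    for m2 in grid:
        kl = mc_kl(np.array([m1, m2]), stds, weights, data)
        if kl < best_kl:
            best, best_kl = sorted([m1, m2]), kl
print(best)  # the fitted component means land near the two modes of the data
```

Note that in this direction of the KL-divergence the KDE term is constant with respect to the GMM parameters, so the toy search effectively maximizes Monte-Carlo log-likelihood; the method described in the paper instead keeps the whole pipeline differentiable and optimizes with gradients.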


