Approximations of conditional probability density functions in Lebesgue spaces via mixture of experts models

12/04/2020
by Hien Duy Nguyen et al.

Mixture of experts (MoE) models are widely applied to conditional probability density estimation problems. We demonstrate the richness of the class of MoE models by proving denseness results in Lebesgue spaces, when the input and output variables are both compactly supported. We further prove an almost uniform convergence result when the input is univariate. Auxiliary lemmas are proved regarding the richness of the soft-max gating function class and its relationship to the class of Gaussian gating functions.
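To make the model class concrete, the following is a minimal sketch of a soft-max-gated MoE conditional density with Gaussian experts, the kind of object whose approximation properties the paper studies. The parameter names and the randomly drawn values are hypothetical, chosen only for illustration; this is not the paper's construction.

```python
# Sketch of a soft-max-gated mixture-of-experts conditional density:
#   p(y | x) = sum_k softmax_k(a_k^T x + b_k) * N(y; c_k^T x + e_k, s_k^2)
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
K, d = 3, 2                        # number of experts, input dimension

# Hypothetical parameters, drawn at random purely for illustration.
a = rng.normal(size=(K, d))        # gating weights
b = rng.normal(size=K)             # gating biases
c = rng.normal(size=(K, d))        # expert mean weights
e = rng.normal(size=K)             # expert mean biases
s = rng.uniform(0.5, 1.5, size=K)  # expert standard deviations

def moe_conditional_density(y, x):
    """Evaluate p(y | x) for a soft-max-gated Gaussian MoE."""
    logits = a @ x + b
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()           # soft-max gating probabilities
    means = c @ x + e              # expert-specific conditional means
    return float(np.sum(gates * norm.pdf(y, loc=means, scale=s)))

# Example: evaluate the conditional density at a single (x, y) pair.
x0 = np.array([0.3, -0.7])
print(moe_conditional_density(0.1, x0))
```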
