Mixture Density Network Estimation of Continuous Variable Maximum Likelihood Using Discrete Training Samples

03/24/2021
by Charles Burton, et al.

Mixture Density Networks (MDNs) can be used to generate probability density functions of model parameters θ given a set of observables 𝐱. In some applications, training data are available only for discrete values of a continuous parameter θ. In such situations, several performance-limiting issues arise that can bias the resulting estimates. We demonstrate the use of MDNs for parameter estimation, discuss the origins of the biases, and propose a corrective method for each issue.
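To make the setup concrete, the sketch below evaluates the negative log-likelihood that an MDN output head assigns to a parameter value θ. The network's raw outputs are assumed to be split into mixture logits, component means, and log standard deviations; the function name and parameter layout are illustrative, not taken from the paper.

```python
import numpy as np

def mdn_nll(raw_outputs, theta):
    """Negative log-likelihood of theta under a Gaussian mixture
    parameterized by raw MDN head outputs.

    raw_outputs: array of shape (3*K,), laid out (by assumption) as
                 [K mixture logits, K means, K log standard deviations].
    theta:       scalar parameter value whose likelihood is evaluated.
    """
    K = raw_outputs.size // 3
    logits = raw_outputs[:K]
    mu = raw_outputs[K:2 * K]
    log_sigma = raw_outputs[2 * K:]

    # Mixture weights via a numerically stable softmax over the logits.
    w = np.exp(logits - logits.max())
    w /= w.sum()

    # Gaussian component densities evaluated at theta.
    sigma = np.exp(log_sigma)
    comp = np.exp(-0.5 * ((theta - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

    # Mixture density is the weighted sum of component densities.
    return -np.log(np.dot(w, comp))
```

Minimizing this quantity over the training set is the standard MDN objective; when the training θ values take only discrete levels, the fitted mixture can concentrate on those levels, which is one source of the biases the paper addresses.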

Related research

05/07/2020 · Phase Transitions of the Maximum Likelihood Estimates in the p-Spin Curie-Weiss Model
In this paper we consider the problem of parameter estimation in the p-s...

06/09/2021 · Gaussian Mixture Estimation from Weighted Samples
We consider estimating the parameters of a Gaussian mixture density with...

04/23/2018 · Randomized Mixture Models for Probability Density Approximation and Estimation
Randomized neural networks (NNs) are an interesting alternative to conve...

05/07/2020 · Phase Transitions of the Maximum Likelihood Estimates in the Tensor Curie-Weiss Model
The p-tensor Curie-Weiss model is a two-parameter discrete exponential f...

12/06/2020 · Guitar Effects Recognition and Parameter Estimation with Convolutional Neural Networks
Despite the popularity of guitar effects, there is very little existing ...

02/27/2023 · Linear pretraining in recurrent mixture density networks
We present a method for pretraining a recurrent mixture density network ...

02/06/2019 · Modelling the effect of training on performance in road cycling: estimation of the Banister model parameters using field data
We suppose that performance is a random variable whose expectation is re...
