Beyond Black Box Densities: Parameter Learning for the Deviated Components

02/05/2022
by Dat Do, et al.

As we collect additional samples from a data population for which a density function estimate may have previously been obtained by a black box method, the increased complexity of the data set may cause the true density to deviate from the known estimate by a mixture distribution. To model this phenomenon, we consider the deviating mixture model (1-λ^*)h_0 + λ^* (∑_{i=1}^{k} p_i^* f(x|θ_i^*)), where h_0 is a known density function, while the deviated proportion λ^* and the latent mixing measure G^* = ∑_{i=1}^{k} p_i^* δ_{θ_i^*} associated with the mixture distribution are unknown. Via a novel notion of distinguishability between the known density h_0 and the deviated mixture distribution, we establish rates of convergence for the maximum likelihood estimates of λ^* and G^* under the Wasserstein metric. Simulation studies are carried out to illustrate the theory.
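To make the model concrete, here is a minimal sketch of the deviating mixture density (1-λ^*)h_0 + λ^* ∑_i p_i^* f(x|θ_i^*). The choices below are assumptions for illustration only: h_0 is taken as the standard normal density, the kernel f(x|θ) as a unit-variance Gaussian location family, and the parameter values (λ^* = 0.3, two components) are hypothetical.

```python
import numpy as np

def gaussian_pdf(x, mean, sd=1.0):
    """Density of N(mean, sd^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def deviating_mixture_pdf(x, lam, weights, thetas):
    """(1 - lam) * h0(x) + lam * sum_i p_i * f(x | theta_i).
    Assumes h0 = N(0, 1) and f(.|theta) = N(theta, 1); both are
    illustrative stand-ins, not choices made in the paper."""
    h0 = gaussian_pdf(x, 0.0)
    mixture = sum(p * gaussian_pdf(x, th) for p, th in zip(weights, thetas))
    return (1.0 - lam) * h0 + lam * mixture

# Hypothetical parameters: deviated proportion 0.3, two components.
x = np.linspace(-5.0, 5.0, 1001)
density = deviating_mixture_pdf(x, lam=0.3, weights=[0.4, 0.6], thetas=[-2.0, 2.0])
```

Since the deviated proportion λ^* and the mixing weights each sum into a convex combination of densities, the result is itself a density; a Riemann sum over a wide grid recovers total mass close to 1.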


