
Finite mixture models are typically inconsistent for the number of components
Scientists and engineers are often interested in learning the number of subpopulations (or components) present in a data set. Practitioners commonly use a Dirichlet process mixture model (DPMM) for this purpose; in particular, they count the number of clusters—i.e., components containing at least one data point—in the DPMM posterior. But Miller and Harrison (2013) warn that the DPMM cluster-count posterior is severely inconsistent for the number of latent components when the data are truly generated from a finite mixture; that is, the cluster-count posterior probability on the true generating number of components goes to zero in the limit of infinite data. A potential alternative is to use a finite mixture model (FMM) with a prior on the number of components. Past work has shown the resulting FMM component-count posterior is consistent. But existing results crucially depend on the assumption that the component likelihoods are perfectly specified. In practice, this assumption is unrealistic, and empirical evidence (Miller and Dunson, 2019) suggests that the FMM posterior on the number of components is sensitive to the likelihood choice. In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM posterior on the number of components is ultra-severely inconsistent: for any finite k ∈ ℕ, the posterior probability that the number of components is k converges to 0 in the limit of infinite data. We illustrate practical consequences of our theory on simulated and real data sets.
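The cluster-growth behavior underlying the DPMM inconsistency result can be seen in a small simulation. The DPMM induces a partition of the data via the Chinese restaurant process (CRP), under which the expected number of occupied clusters grows like α log n rather than stabilizing at a finite value. The sketch below (the function name and parameterization are ours, not from the paper) draws CRP partitions of increasing size and prints the cluster count:

```python
import random

def crp_partition(n, alpha, rng):
    """Draw a partition of n points from the Chinese restaurant process
    with concentration alpha; returns the list of cluster sizes."""
    counts = []  # counts[k] = number of points in cluster k
    for i in range(n):
        # Point i starts a new cluster with probability alpha / (alpha + i),
        # otherwise joins an existing cluster with probability proportional to its size.
        r = rng.uniform(0, alpha + i)
        if r < alpha:
            counts.append(1)
        else:
            r -= alpha
            for k in range(len(counts)):
                if r < counts[k]:
                    counts[k] += 1
                    break
                r -= counts[k]
    return counts

rng = random.Random(0)
for n in (100, 1000, 10000):
    sizes = crp_partition(n, alpha=1.0, rng=rng)
    print(f"n = {n:6d}: {len(sizes)} clusters")
```

Since the number of occupied clusters keeps growing (roughly logarithmically) with n, the posterior cluster count cannot concentrate on a fixed finite truth, which is the intuition behind the severe inconsistency discussed above.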