MLE convergence speed to information projection of exponential family: Criterion for model dimension and sample size – complete proof version –
For a parametric model of distributions, the distribution in the model closest to the true distribution, which lies outside the model, is considered. Measuring the closeness between two distributions with the Kullback-Leibler (KL) divergence, the closest distribution is called the "information projection." The estimation risk of the maximum likelihood estimator (MLE) is defined as the expected KL divergence between the information projection and the predictive distribution with the MLE plugged in. Here, the asymptotic expansion of the risk is derived up to the n^{-2} order, and a sufficient condition on the risk is investigated under which the Bayes error rate between the true distribution and the information projection falls below a specified value. Combining these results, the "p-n criterion" is proposed, which determines whether the MLE is sufficiently close to the information projection for the given model and sample. In particular, the criterion for an exponential family model is relatively simple and can be used for a complex model whose normalizing constant has no explicit form. This criterion can serve as a solution to the sample-size or model-acceptance problem. Use of the p-n criterion is demonstrated on two practical datasets. The relationship between these results and information criteria is also studied.
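The setup in the abstract can be illustrated with a minimal numerical sketch (not the paper's method): here the true distribution is a Gaussian mixture, the model is the single-Gaussian exponential family, and the information projection is obtained by moment matching. The names and the Monte Carlo setup below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# True distribution: a 50/50 Gaussian mixture, which lies OUTSIDE the model.
# Model: a single Gaussian N(mu, sigma^2), a two-parameter exponential family.
def sample_true(n):
    comp = rng.integers(0, 2, size=n)
    return np.where(comp == 0,
                    rng.normal(-2.0, 1.0, n),
                    rng.normal(2.0, 1.0, n))

# For a Gaussian family, the information projection of the mixture matches
# its population mean and variance: mean 0, variance 1 + 2^2 = 5.
proj_mu, proj_var = 0.0, 5.0

# MLE from a finite sample: sample mean and (biased, ddof=0) sample variance.
x = sample_true(100_000)
mle_mu, mle_var = x.mean(), x.var()

# Closed-form KL divergence KL( N(m1, v1) || N(m0, v0) ).
def kl_gauss(m1, v1, m0, v0):
    return 0.5 * (np.log(v0 / v1) + (v1 + (m1 - m0) ** 2) / v0 - 1.0)

# KL divergence from the information projection to the plugged-in MLE
# distribution -- the quantity whose expectation defines the estimation risk.
# It shrinks toward 0 as the sample size n grows.
risk_sample = kl_gauss(proj_mu, proj_var, mle_mu, mle_var)
```

With n = 100,000 the plugged-in Gaussian is very close to the projection, so `risk_sample` is near zero; rerunning with a small n shows the gap the paper's asymptotic expansion quantifies.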