Geometry of asymptotic bias reduction of plug-in estimators with adjusted likelihood

11/30/2020
by   Masayo Y. Hirose, et al.

A geometric framework for improving a plug-in estimator in terms of asymptotic bias is developed. It is based on an adjustment of the likelihood, that is, multiplying the likelihood by a non-random function of the parameter, called the adjustment factor. The condition for second-order asymptotic unbiasedness (no bias up to O(n^-1) for a sample of size n) is derived. The bias of a plug-in estimator emerges as a departure from a kind of harmonicity of the function defining the plug-in estimator, and adjusting the likelihood is equivalent to modifying the model manifold so that this departure from harmonicity is canceled out. The adjustment is achieved by solving a partial differential equation; in some cases the adjustment factor is given as an explicit integral. In particular, if the plug-in estimator is a function of the geodesic distance, an explicit representation in terms of the geodesic distance is available, thanks to differential-geometric techniques for solving partial differential equations. As an example of an adjustment factor, the Jeffreys prior is discussed in detail. Some illustrative examples are provided.
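As a hedged illustration of the idea (this example is not taken from the paper; it is a standard textbook case), consider estimating the rate λ of an exponential distribution. The plain maximum-likelihood estimator n/Σx is biased upward at order O(n^-1), while maximizing the adjusted likelihood a(λ)L(λ) with the Jeffreys-type factor a(λ) = 1/λ gives (n-1)/Σx, which removes that bias. A short Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true = 2.0    # true rate of the exponential distribution
n = 20            # sample size
reps = 200_000    # Monte Carlo replications

# Draw `reps` samples of size n; numpy parameterizes by scale = 1/rate.
samples = rng.exponential(scale=1.0 / lam_true, size=(reps, n))
sums = samples.sum(axis=1)

# Maximizer of the plain likelihood L(lam) = lam^n * exp(-lam * sum(x)).
mle = n / sums

# Maximizer of the adjusted likelihood a(lam) * L(lam) with the
# Jeffreys prior a(lam) = 1/lam, i.e. lam^(n-1) * exp(-lam * sum(x)).
adjusted = (n - 1) / sums

bias_mle = mle.mean() - lam_true
bias_adj = adjusted.mean() - lam_true
print(f"bias of MLE:      {bias_mle:+.4f}")
print(f"bias of adjusted: {bias_adj:+.4f}")
```

Here the adjusted estimator is in fact exactly unbiased (since Σx follows a Gamma(n, λ) distribution, E[(n-1)/Σx] = λ), while the plain MLE carries a bias of λ/(n-1); the simulation recovers both facts numerically. This is the simplest instance of the bias cancellation the paper develops geometrically.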


