
Optimal Bias Correction of the Log-periodogram Estimator of the Fractional Parameter: A Jackknife Approach

by Kanchana Nadarajah, et al.

We use the jackknife to bias correct the log-periodogram regression (LPR) estimator of the fractional parameter, d, in a stationary fractionally integrated model. The weights used to construct the jackknife estimator are chosen such that bias reduction occurs to an order of n^(-α) (where n is the sample size) for some 0<α<1, while the increase in variance is minimized, with the weights viewed as 'optimal' in this sense. We show that, under regularity conditions, the bias-corrected estimator is consistent and asymptotically normal, with the same asymptotic variance and n^(α/2) rate of convergence as the original LPR estimator. In other words, the use of optimal weights enables bias reduction to be achieved without incurring the usual increase in asymptotic variance. These theoretical results are valid under both the non-overlapping and moving-block sub-sampling schemes that can be used in the jackknife technique, and do not require the assumption of Gaussianity for the data generating process. A Monte Carlo study explores the finite sample performance of different versions of the optimal jackknife estimator under a variety of data generating processes, including alternative specifications for the short memory dynamics. The comparators in the simulation exercise are the raw (unadjusted) LPR estimator and two alternative bias-adjusted estimators, namely the weighted-average estimator of Guggenberger and Sun (2006) and the pre-filtered sieve bootstrap-based estimator of Poskitt, Martin and Grose (2016). The paper concludes with some discussion of open issues and possible extensions to the work.
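To make the two ingredients concrete, the sketch below implements the standard LPR (GPH) estimator of d and a simple non-overlapping jackknife correction with equal sub-sample weights. This is only an illustration of the general jackknife idea: the equal-weight formula used here (in the style of the classic delete-block jackknife) targets a bias of order 1/n, whereas the paper derives optimal weights that remove bias to order n^(-α) while minimizing the variance inflation. The bandwidth choice m = sqrt(n) and the function names are assumptions for the example, not the paper's specification.

```python
import numpy as np

def lpr_estimate(x, m=None):
    """Log-periodogram regression (GPH) estimate of the fractional
    parameter d, using the first m Fourier frequencies."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))  # a common, but not optimal, bandwidth choice
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n
    # periodogram ordinates at the first m Fourier frequencies
    dft = np.fft.fft(x)[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    # regress log I(lam_j) on -2 log(2 sin(lam_j / 2)); the slope estimates d
    X = -2.0 * np.log(2.0 * np.sin(lam / 2.0))
    Xc = X - X.mean()
    y = np.log(I)
    return float(Xc @ (y - y.mean()) / (Xc @ Xc))

def jackknife_lpr(x, S=2, m=None):
    """Equal-weight, non-overlapping jackknife bias correction of the LPR
    estimator, assuming (for illustration only) a leading bias term of
    order 1/n; the paper's optimal weights instead target order n^(-alpha)."""
    x = np.asarray(x, dtype=float)
    d_full = lpr_estimate(x, m)
    # split the sample into S non-overlapping consecutive blocks
    d_sub = np.mean([lpr_estimate(block) for block in np.array_split(x, S)])
    # weights S/(S-1) and -1/(S-1) cancel a bias term proportional to 1/n
    return S / (S - 1) * d_full - 1.0 / (S - 1) * d_sub
```

For example, applied to a white-noise series (true d = 0), both estimators return values near zero, with the jackknife version trading a little extra variance for reduced bias when the series has short memory dynamics.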

