Two-step estimation in linear regressions with adaptive learning

04/11/2022
by Alexander Mayer, et al.

Weak consistency and asymptotic normality of the ordinary least-squares estimator in a linear regression with adaptive learning are derived when the crucial so-called `gain' parameter is estimated in a first step by nonlinear least squares from an auxiliary model. The singular limiting distribution of the two-step estimator is normal and, in general, affected by the sampling uncertainty from the first step. However, this `generated-regressor' issue disappears for certain parameter combinations.
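The two-step procedure can be illustrated with a minimal sketch. Assume a stylized adaptive-learning model, y_t = beta * e_t + u_t, where expectations follow the decreasing-gain recursion e_t = e_{t-1} + (gamma / t)(y_{t-1} - e_{t-1}). This model, the helper names (learning_path, ssr), and the use of a profiled nonlinear least-squares criterion as the first step are illustrative assumptions, not the paper's exact specification.

```python
# Minimal sketch of two-step estimation under adaptive learning.
# Assumed stylized model (illustrative, not the paper's exact setup):
#   y_t = beta * e_t + u_t,
#   e_t = e_{t-1} + (gamma / t) * (y_{t-1} - e_{t-1})   (decreasing gain).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)


def learning_path(y, gamma, e0=0.0):
    """Build the adaptive-learning expectation series e_t for a given gain."""
    e = np.empty(len(y))
    e[0] = e0
    for t in range(1, len(y)):
        e[t] = e[t - 1] + gamma / t * (y[t - 1] - e[t - 1])
    return e


# --- Simulate data from the stylized model ---
T, beta_true, gamma_true = 500, 0.5, 1.2
u = rng.standard_normal(T)
y = np.empty(T)
e = np.empty(T)
e[0] = 0.0
y[0] = beta_true * e[0] + u[0]
for t in range(1, T):
    e[t] = e[t - 1] + gamma_true / t * (y[t - 1] - e[t - 1])
    y[t] = beta_true * e[t] + u[t]


# --- Step 1: nonlinear least squares for the gain parameter ---
# The slope is profiled out, so the criterion depends on gamma alone;
# this stands in for the paper's auxiliary-model first step.
def ssr(gamma):
    e_g = learning_path(y, gamma)
    b = (e_g @ y) / (e_g @ e_g)  # OLS slope implied by this gamma
    r = y - b * e_g
    return r @ r


gamma_hat = minimize_scalar(ssr, bounds=(0.01, 5.0), method="bounded").x

# --- Step 2: OLS with the generated regressor e_t(gamma_hat) ---
e_hat = learning_path(y, gamma_hat)
beta_hat = (e_hat @ y) / (e_hat @ e_hat)

print(f"gamma_hat = {gamma_hat:.3f} (true {gamma_true})")
print(f"beta_hat  = {beta_hat:.3f} (true {beta_true})")
```

Because e_hat is constructed from the first-step estimate gamma_hat, the second-step estimator beta_hat inherits first-step sampling uncertainty, which is the `generated-regressor' issue described in the abstract.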


research · 09/30/2022
A note on centering in subsample selection for linear regression
Centering is a commonly used technique in linear regression analysis. Wi...

research · 02/09/2019
Asymptotic normality of the time-domain generalized least squares estimator for linear regression models
In linear models, the generalized least squares (GLS) estimator is appli...

research · 06/20/2022
Fast calibration of weak FARIMA models
In this paper, we investigate the asymptotic properties of Le Cam's one-...

research · 06/30/2021
Adaptive Capped Least Squares
This paper proposes the capped least squares regression with an adaptive...

research · 09/16/2023
Least squares estimation in nonlinear cohort panels with learning from experience
We discuss techniques of estimation and inference for nonlinear cohort p...

research · 07/14/2023
Adaptive Linear Estimating Equations
Sequential data collection has emerged as a widely adopted technique for...

research · 11/03/2014
A Nonparametric Adaptive Nonlinear Statistical Filter
We use statistical learning methods to construct an adaptive state estim...
