Inference for local parameters in convexity constrained models
We consider the problem of inference for local parameters of a convex regression function f_0: [0,1] → ℝ based on observations from a standard nonparametric regression model, using the convex least squares estimator (LSE) f_n. For x_0 ∈ (0,1), the local parameters include the pointwise function value f_0(x_0), the pointwise derivative f_0'(x_0), and the anti-mode (i.e., the smallest minimizer) of f_0. The existing limiting distribution of the estimation error (f_n(x_0) - f_0(x_0), f_n'(x_0) - f_0'(x_0)) depends on the unknown second derivative f_0''(x_0), and is therefore not directly applicable for inference. To circumvent this impasse, we show that the following locally normalized errors (LNEs) enjoy pivotal limiting behavior: Let [u(x_0), v(x_0)] be the maximal interval containing x_0 on which f_n is linear. Then, under standard conditions, (√(n(v(x_0)-u(x_0))) (f_n(x_0)-f_0(x_0)), √(n(v(x_0)-u(x_0))^3) (f_n'(x_0)-f_0'(x_0))) ⇝ σ·(𝕃^(0)_2, 𝕃^(1)_2), where n is the sample size, σ is the standard deviation of the errors, and 𝕃^(0)_2, 𝕃^(1)_2 are universal random variables. This asymptotically pivotal LNE theory immediately yields a simple tuning-free procedure for constructing confidence intervals (CIs) with asymptotically exact coverage and optimal length for f_0(x_0) and f_0'(x_0). We also construct an asymptotically pivotal LNE for the anti-mode of f_0, whose limiting distribution does not even depend on σ. These asymptotically pivotal LNE theories are further extended to other convexity/concavity constrained models (e.g., log-concave density estimation) for which a limit distribution theory is available for problem-specific estimators.
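To make the procedure concrete, here is a minimal sketch (in Python; not the authors' code) of how a tuning-free CI for f_0(x_0) follows from the LNE limit: fit the convex LSE, read off the maximal linear stretch [u(x_0), v(x_0)] containing x_0, and invert the pivotal limit. The use of cvxpy for the LSE quadratic program, the Rice difference estimator for σ, and the constant L2_QUANTILE standing in for the relevant quantile of 𝕃^(0)_2 (which would have to be obtained by simulating the universal limit law) are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import cvxpy as cp

# Hypothetical placeholder for the (1 - alpha/2) quantile of |L^(0)_2|;
# in practice this universal constant would be tabulated by simulation.
L2_QUANTILE = 1.0


def convex_lse(x, y):
    """Least squares fit subject to convexity: successive slopes nondecreasing."""
    theta = cp.Variable(len(x))
    slopes = cp.multiply(1.0 / np.diff(x), cp.diff(theta))
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)),
                      [cp.diff(slopes) >= 0])
    prob.solve()
    return theta.value


def linear_stretch(x, theta, x0, tol=1e-5):
    """Endpoints [u(x0), v(x0)] of the maximal interval on which the fit is linear.

    tol should roughly match the solver accuracy used in convex_lse.
    """
    slopes = np.diff(theta) / np.diff(x)
    j = int(np.clip(np.searchsorted(x, x0) - 1, 0, len(slopes) - 1))
    lo, hi = j, j
    while lo > 0 and abs(slopes[lo - 1] - slopes[j]) < tol:
        lo -= 1
    while hi < len(slopes) - 1 and abs(slopes[hi + 1] - slopes[j]) < tol:
        hi += 1
    return x[lo], x[hi + 1]


def lne_ci(x, y, x0):
    """Sketch of a tuning-free CI for f_0(x0) via the locally normalized error."""
    n = len(x)
    theta = convex_lse(x, y)
    u, v = linear_stretch(x, theta, x0)
    fhat_x0 = np.interp(x0, x, theta)          # piecewise-linear LSE at x0
    sigma_hat = np.sqrt(np.mean(np.diff(y) ** 2) / 2.0)  # Rice difference estimator (one common choice)
    half_width = sigma_hat * L2_QUANTILE / np.sqrt(n * (v - u))
    return fhat_x0 - half_width, fhat_x0 + half_width


# Example usage on simulated data from a convex truth:
# x = np.sort(np.random.rand(200)); y = (x - 0.5) ** 2 + 0.1 * np.random.randn(200)
# lo, hi = lne_ci(x, y, 0.5)
```

Because the normalization √(n(v(x_0)-u(x_0))) is computed directly from the fitted LSE, no bandwidth choice and no estimate of f_0''(x_0) enters the interval, which is what makes the construction tuning-free.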