Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices

12/23/2019
by Jaouad Mourtada, et al.

The first part of this paper is devoted to the decision-theoretic analysis of random-design linear prediction with square loss. It is known that, under boundedness constraints on the response (and thus on the regression coefficients), the minimax excess risk scales as σ^2 d/n up to constants, where d is the model dimension, n the sample size, and σ^2 the noise parameter. Here, we study the expected excess risk with respect to the full linear class. We show that the ordinary least squares (OLS) estimator is minimax optimal in the well-specified case, for every distribution of covariates and noise level. Further, we express the minimax risk in terms of the distribution of statistical leverage scores of individual samples. We deduce a precise minimax lower bound of σ^2 d/(n-d+1), valid for any distribution of covariates, which nearly matches the risk of OLS for Gaussian covariates. We then obtain nonasymptotic upper bounds on the minimax risk for covariates that satisfy a "small ball"-type regularity condition, which scale as (1+o(1)) σ^2 d/n when d = o(n), both in the well-specified and misspecified cases. Our main technical contribution is the study of the lower tail of the smallest singular value of empirical covariance matrices around 0. We establish a general lower bound on this lower tail, together with a matching upper bound under a necessary regularity condition. Our proof relies on the PAC-Bayesian technique for controlling empirical processes, and extends an analysis of Oliveira (2016) devoted to a different part of the lower tail. Equivalently, our upper bound shows that the operator norm of the inverse sample covariance matrix has bounded L^q norm up to q ≍ n, and this exponent is unimprovable. Finally, we show that the regularity condition on the design naturally holds for independent coordinates.
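
As a quick numerical illustration of the σ^2 d/(n-d+1) lower bound, the Monte Carlo sketch below (my own illustration, not code from the paper) estimates the expected excess risk of OLS under an isotropic Gaussian design. In the well-specified case with Sigma = I_d, the conditional excess risk given the design X is σ^2 tr((X^T X)^{-1}), and its expectation over Gaussian designs has the closed form σ^2 d/(n-d-1); both should land just above the lower bound. The helper name excess_risk_ols is hypothetical.

import numpy as np

rng = np.random.default_rng(0)
d, n, sigma, trials = 10, 100, 1.0, 2000

def excess_risk_ols(X, sigma):
    # Conditional excess risk of OLS given the design X, for Sigma = I_d and
    # well-specified Gaussian noise: sigma^2 * tr((X^T X)^{-1}).
    return sigma**2 * np.trace(np.linalg.inv(X.T @ X))

# Rows of X are i.i.d. N(0, I_d) covariates; average the conditional risk over designs.
risks = [excess_risk_ols(rng.standard_normal((n, d)), sigma) for _ in range(trials)]

print("Monte Carlo OLS risk  :", np.mean(risks))
print("Closed form d/(n-d-1) :", sigma**2 * d / (n - d - 1))
print("Lower bound d/(n-d+1) :", sigma**2 * d / (n - d + 1))

With d = 10 and n = 100, the closed form gives about 0.112 against a lower bound of about 0.110, consistent with the abstract's claim that OLS nearly matches the minimax lower bound for Gaussian covariates.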

research
09/10/2020

Non-asymptotic Optimal Prediction Error for RKHS-based Partially Functional Linear Models

Under the framework of reproducing kernel Hilbert space (RKHS), we consi...
research
09/07/2016

Chaining Bounds for Empirical Risk Minimization

This paper extends the standard chaining technique to prove excess risk ...
research
04/27/2020

Minimax testing and quadratic functional estimation for circular convolution

In a circular convolution model, we aim to infer on the density of a cir...
research
05/28/2021

On the condition number of the shifted real Ginibre ensemble

We derive an accurate lower tail estimate on the lowest singular value σ...
research
11/02/2018

Minimax Estimation of Neural Net Distance

An important class of distance metrics proposed for training generative ...
research
01/17/2022

Risk bounds for PU learning under Selected At Random assumption

Positive-unlabeled learning (PU learning) is known as a special case of ...
research
02/13/2016

A Minimax Theory for Adaptive Data Analysis

In adaptive data analysis, the user makes a sequence of queries on the d...
