
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices

12/23/2019
by   Jaouad Mourtada, et al.

The first part of this paper is devoted to the decision-theoretic analysis of random-design linear prediction with square loss. It is known that, under boundedness constraints on the response (and thus on the regression coefficients), the minimax excess risk scales as Cσ^2 d/n up to constants, where d is the model dimension, n the sample size, and σ^2 the noise parameter. Here, we study the expected excess risk with respect to the full linear class. We show that the ordinary least squares (OLS) estimator is minimax optimal in the well-specified case, for every distribution of covariates and every noise level. Further, we express the minimax risk in terms of the distribution of statistical leverage scores of individual samples, and deduce a precise minimax lower bound of σ^2 d/(n-d+1), valid for any distribution of covariates, which nearly matches the risk of OLS for Gaussian covariates. We then obtain nonasymptotic upper bounds on the minimax risk for covariates satisfying a "small ball"-type regularity condition, which scale as (1+o(1))σ^2 d/n when d = o(n), in both the well-specified and misspecified cases. Our main technical contribution is the study of the lower tail of the smallest singular value of empirical covariance matrices near 0. We establish a general lower bound on this lower tail, together with a matching upper bound under a necessary regularity condition. Our proof relies on the PAC-Bayesian technique for controlling empirical processes, and extends an analysis of Oliveira (2016) devoted to a different part of the lower tail. Equivalently, our upper bound shows that the operator norm of the inverse sample covariance matrix has bounded L^q norm up to q of order n, and that this exponent is unimprovable. Finally, we show that the regularity condition on the design holds naturally for independent coordinates.
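The near-match between the minimax lower bound σ^2 d/(n-d+1) and the risk of OLS under Gaussian covariates can be checked by simulation. The sketch below is illustrative only (the parameter values n, d, σ, and trial count are arbitrary choices, not from the paper): it Monte-Carlo-estimates the expected excess risk of OLS with i.i.d. standard Gaussian design, which for Σ = I is known in closed form to be σ^2 d/(n-d-1), just above the paper's lower bound.

```python
import numpy as np

# Monte Carlo check: OLS excess risk under an isotropic Gaussian design.
# With Sigma = I_d, the excess risk of an estimate beta_hat is simply
# ||beta_hat - beta||^2, and its expectation for OLS is known to equal
# sigma^2 * d / (n - d - 1), close to the minimax lower bound
# sigma^2 * d / (n - d + 1) stated in the abstract.
# Parameter values below are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
n, d, sigma, trials = 50, 5, 1.0, 2000
beta = rng.standard_normal(d)

excess = []
for _ in range(trials):
    X = rng.standard_normal((n, d))           # Gaussian design matrix
    y = X @ beta + sigma * rng.standard_normal(n)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    excess.append(np.sum((beta_hat - beta) ** 2))  # excess risk, Sigma = I

empirical = np.mean(excess)
theory = sigma**2 * d / (n - d - 1)       # exact Gaussian-design OLS risk
lower_bound = sigma**2 * d / (n - d + 1)  # paper's minimax lower bound
print(empirical, theory, lower_bound)
```

With these settings the empirical average lands close to σ^2 d/(n-d-1) ≈ 0.114, sitting just above the distribution-free lower bound σ^2 d/(n-d+1) ≈ 0.109, which is the "nearly matches" claim in the abstract.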

