Fundamental Limits of Ridge-Regularized Empirical Risk Minimization in High Dimensions

06/16/2020
by Hossein Taheri, et al.

Empirical Risk Minimization (ERM) algorithms are widely used for estimation and prediction tasks in signal-processing and machine-learning applications. Despite their popularity, a theory explaining their statistical properties in modern regimes, where both the number of measurements and the number of unknown parameters are large, has only recently begun to emerge. In this paper, we characterize for the first time the fundamental limits on the statistical accuracy of convex ERM for inference in high-dimensional generalized linear models. For a stylized setting with Gaussian features and problem dimensions that grow large at a proportional rate, we start with sharp performance characterizations and then derive tight lower bounds on the estimation and prediction error that hold over a wide class of loss functions and for any value of the regularization parameter. Our precise analysis has several attributes. First, it leads to a recipe for optimally tuning the loss function and the regularization parameter. Second, it allows us to precisely quantify the sub-optimality of popular heuristic choices: for instance, we show that optimally tuned least-squares is (perhaps surprisingly) approximately optimal for standard logistic data, but that the sub-optimality gap grows drastically as the signal strength increases. Third, we use the bounds to precisely assess the merits of ridge regularization as a function of the over-parameterization ratio. Notably, our bounds are expressed in terms of the Fisher information of random variables that are simple functions of the data distribution, thus drawing connections to corresponding bounds in classical statistics.
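As a concrete illustration of the setup the abstract describes, below is a minimal sketch of ridge-regularized convex ERM on synthetic logistic data with Gaussian features. It compares the logistic loss against the least-squares heuristic discussed above, across a small grid of regularization values. The dimensions, signal strength, and lambda grid are illustrative assumptions, not the paper's actual experiments, and the cosine similarity to the true signal is only a quick empirical proxy for the sharp asymptotic characterizations the paper derives.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Proportional regime: n measurements and p unknowns of comparable size.
n, p = 400, 200

# Ground-truth signal with a fixed strength (illustrative choice).
beta_star = rng.standard_normal(p)
beta_star *= 2.0 / np.linalg.norm(beta_star)

# Gaussian features; binary labels in {-1, +1} from a logistic model.
X = rng.standard_normal((n, p))
y = 2.0 * rng.binomial(1, expit(X @ beta_star)) - 1.0

def ridge_erm(loss_and_grad, lam):
    """Minimize (1/n) * sum_i loss(y_i * x_i' w) + (lam/2) * ||w||^2."""
    def objective(w):
        margins = y * (X @ w)
        vals, derivs = loss_and_grad(margins)
        f = vals.mean() + 0.5 * lam * (w @ w)
        g = X.T @ (y * derivs) / n + lam * w
        return f, g
    return minimize(objective, np.zeros(p), jac=True, method="L-BFGS-B").x

def logistic_loss(m):
    # loss(m) = log(1 + exp(-m)); derivative in the margin m is -sigmoid(-m)
    return np.logaddexp(0.0, -m), -expit(-m)

def squares_loss(m):
    # least-squares on the margin: loss(m) = (1 - m)^2 / 2
    return 0.5 * (1.0 - m) ** 2, m - 1.0

# Estimation accuracy measured by the angle to beta_star (scale-invariant).
for lam in (0.01, 0.1, 1.0):
    for name, loss in (("logistic", logistic_loss), ("least-squares", squares_loss)):
        w = ridge_erm(loss, lam)
        cos = abs(w @ beta_star) / (np.linalg.norm(w) * np.linalg.norm(beta_star))
        print(f"lambda={lam:5.2f}  loss={name:13s}  |cos(w, beta*)| = {cos:.3f}")
```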


Related research

02/17/2020
Sharp Asymptotics and Optimal Performance for Inference in Binary Models
We study convex empirical risk minimization for high-dimensional inferen...

07/05/2023
The distribution of Ridgeless least squares interpolators
The Ridgeless minimum ℓ_2-norm interpolator in overparametrized linear r...

09/07/2016
Chaining Bounds for Empirical Risk Minimization
This paper extends the standard chaining technique to prove excess risk ...

07/10/2014
On the Optimality of Averaging in Distributed Statistical Learning
A common approach to statistical learning with big-data is to randomly s...

05/31/2019
High Dimensional Classification via Empirical Risk Minimization: Improvements and Optimality
In this article, we investigate a family of classification algorithms de...

06/07/2020
On Suboptimality of Least Squares with Application to Estimation of Convex Bodies
We develop a technique for establishing lower bounds on the sample compl...

11/16/2020
Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View
Contemporary machine learning applications often involve classification ...
