Revisiting complexity and the bias-variance tradeoff

06/17/2020
by Raaz Dwivedi, et al.

The recent success of high-dimensional models, such as deep neural networks (DNNs), has led many to question the validity of the bias-variance tradeoff principle in high dimensions. We reexamine it with respect to two key choices: the model class and the complexity measure. We argue that failing to suitably specify either one can falsely suggest that the tradeoff does not hold. This observation motivates us to seek a valid complexity measure, defined with respect to a reasonably good class of models. Building on Rissanen's principle of minimum description length (MDL), we propose a novel MDL-based complexity measure (MDL-COMP). We focus on linear models, which have recently been used as a stylized, tractable approximation to DNNs in high dimensions. MDL-COMP is defined via an optimality criterion over the encodings induced by a good class of ridge estimators. We derive closed-form expressions for MDL-COMP and show that, for a dataset with n observations and d parameters, it is not always equal to d/n; instead, it is a function of the singular values of the design matrix and the signal-to-noise ratio. For a random Gaussian design, we find that while MDL-COMP scales linearly with d in low dimensions (d < n), in high dimensions (d > n) the scaling is exponentially smaller, of order log d. We hope that such slow growth of complexity in high dimensions helps shed light on the good generalization performance of several well-tuned high-dimensional models. Moreover, via an array of simulations and real-data experiments, we show that a data-driven practical variant, Prac-MDL-COMP, can inform hyperparameter tuning for ridge regression in limited-data settings, sometimes improving upon cross-validation.
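To make the idea of MDL-informed ridge tuning concrete, the sketch below selects the ridge penalty by minimizing an illustrative two-part codelength (a data-fit term plus a complexity term built from the singular values of the design matrix) and compares the chosen penalty against cross-validation. This is a minimal, hypothetical illustration under assumed noise variance 1, not the paper's exact Prac-MDL-COMP objective; the function name mdl_style_codelength and the specific form of the criterion are assumptions made here for exposition.

```python
# Illustrative MDL-style ridge tuning (NOT the paper's exact Prac-MDL-COMP):
# pick lambda by minimizing a two-part codelength made of a data-fit term and
# a complexity term based on the design's singular values, then compare with
# cross-validated ridge regression.
import numpy as np
from sklearn.linear_model import Ridge, RidgeCV

rng = np.random.default_rng(0)
n, d = 50, 200                        # limited-data, high-dimensional setting (d > n)
X = rng.normal(size=(n, d))
beta = np.zeros(d)
beta[:10] = 1.0                       # simple synthetic signal
y = X @ beta + rng.normal(size=n)     # noise variance assumed to be 1

def mdl_style_codelength(X, y, lam):
    """Hypothetical two-part codelength for ridge regression with penalty lam."""
    ridge = Ridge(alpha=lam, fit_intercept=False).fit(X, y)
    resid = y - ridge.predict(X)
    s = np.linalg.svd(X, compute_uv=False)          # singular values of the design
    fit_term = 0.5 * (resid @ resid + lam * ridge.coef_ @ ridge.coef_)
    complexity_term = 0.5 * np.sum(np.log1p(s**2 / lam))
    return fit_term + complexity_term

lambdas = np.logspace(-3, 3, 30)
codelengths = [mdl_style_codelength(X, y, lam) for lam in lambdas]
lam_mdl = lambdas[int(np.argmin(codelengths))]

cv_model = RidgeCV(alphas=lambdas, fit_intercept=False).fit(X, y)

print(f"MDL-style choice of lambda:      {lam_mdl:.3g}")
print(f"Cross-validated choice of lambda: {cv_model.alpha_:.3g}")
```

In limited-data regimes such as this one, the abstract reports that a data-driven MDL-based criterion can be competitive with, and sometimes better than, cross-validation for choosing the ridge penalty.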


