Information bottleneck theory of high-dimensional regression: relevancy, efficiency and optimality

08/08/2022 · by Vudtiwat Ngampruetikorn, et al.

Avoiding overfitting is a central challenge in machine learning, yet many large neural networks readily achieve zero training loss. This puzzling contradiction necessitates new approaches to the study of overfitting. Here we quantify overfitting via residual information, defined as the bits in fitted models that encode noise in training data. Information efficient learning algorithms minimize residual information while maximizing the relevant bits, which are predictive of the unknown generative models. We solve this optimization to obtain the information content of optimal algorithms for a linear regression problem and compare it to that of randomized ridge regression. Our results demonstrate the fundamental tradeoff between residual and relevant information and characterize the relative information efficiency of randomized regression with respect to optimal algorithms. Finally, using results from random matrix theory, we reveal the information complexity of learning a linear map in high dimensions and unveil information-theoretic analogs of double and multiple descent phenomena.
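The randomized ridge regression the abstract benchmarks against can be pictured as an ordinary ridge fit with Gaussian noise injected into the learned weights, so that a "temperature" knob controls how many bits of the training data (signal and noise alike) the fitted model retains. The sketch below illustrates this idea under stated assumptions: the function name, the `temperature` parameterization, and the isotropic noise model are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def randomized_ridge(X, y, lam, temperature, rng):
    """Ridge fit with Gaussian noise injected into the weights.

    Computes the deterministic ridge solution
        w = (X^T X + lam * I)^{-1} X^T y
    and adds isotropic Gaussian noise whose scale (`temperature`)
    controls the stochasticity of the learning algorithm. At
    temperature 0 this reduces to plain ridge regression.
    """
    p = X.shape[1]
    A = X.T @ X + lam * np.eye(p)
    w = np.linalg.solve(A, X.T @ y)
    return w + np.sqrt(temperature) * rng.standard_normal(p)

# Example: a noisy linear-Gaussian regression problem. A larger
# temperature yields a noisier fit, encoding fewer bits of the
# training set in the weights.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = rng.standard_normal(20)
y = X @ w_true + 0.1 * rng.standard_normal(100)
w_hat = randomized_ridge(X, y, lam=1.0, temperature=0.01, rng=rng)
```

In this picture, raising `temperature` lowers the residual information (bits encoding training noise) at the cost of relevant information (bits predictive of the generative model), which is the tradeoff the abstract quantifies.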

