Six Lectures on Linearized Neural Networks

08/25/2023
by Theodor Misiakiewicz, et al.

In these six lectures, we examine what can be learned about the behavior of multi-layer neural networks through the analysis of linear models. We first recall the correspondence between neural networks and linear models via the so-called lazy regime. We then review four models for linearized neural networks: linear regression with concentrated features, kernel ridge regression, the random features model, and the neural tangent model. Finally, we highlight the limitations of the linear theory and discuss how other approaches can overcome them.
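As a concrete illustration of one of the four linearized models above, the following is a minimal sketch of the random features model: the first-layer weights of a two-layer network are drawn at random and frozen, and only the linear readout is fit by ridge regression. The synthetic target, dimensions, and regularization strength are hypothetical choices for illustration, not taken from the lectures.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, N = 20, 500, 1000   # input dimension, sample size, number of random features
lam = 1e-3                # ridge regularization strength (illustrative choice)

# Synthetic data: a simple nonlinear target with additive noise.
X = rng.standard_normal((n, d)) / np.sqrt(d)
w_star = rng.standard_normal(d)
y = np.sin(X @ w_star) + 0.1 * rng.standard_normal(n)

# First-layer weights are sampled once and never trained.
W = rng.standard_normal((N, d))
Phi = np.maximum(X @ W.T, 0.0)   # ReLU random features, shape (n, N)

# Only the linear readout a is learned, via ridge regression in feature space.
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)

print(f"train MSE: {np.mean((Phi @ a - y) ** 2):.4f}")
```

Because the trained parameters enter only linearly through the readout, the model's statistical behavior reduces to that of linear regression on the (random) feature map, which is the starting point of the analysis in the lectures.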
