Exchangeability and Kernel Invariance in Trained MLPs

10/19/2018
by Russell Tsuchida et al.

In the analysis of machine learning models, it is often convenient to assume that the parameters are independently and identically distributed (IID). This assumption is not satisfied once the parameters are updated through a training procedure such as SGD. A relaxation of the IID condition is a probabilistic symmetry known as exchangeability. We show the sense in which the weights of multi-layer perceptrons (MLPs) are exchangeable. This yields the result that, in certain instances, the layer-wise kernel of fully-connected layers remains approximately constant during training. We identify a sharp change in the macroscopic behavior of networks as the covariance between weights becomes non-zero.
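
A minimal sketch (not from the paper) of how the approximate kernel-invariance claim might be checked empirically: compute the empirical kernel of a wide hidden layer before and after SGD training and compare. The architecture, data, and hyperparameters below are illustrative assumptions, written in PyTorch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative sizes: small input, wide hidden layer, synthetic data.
n_in, n_hidden, n_data = 20, 4096, 256
X = torch.randn(n_data, n_in)
y = torch.randn(n_data, 1)

# Two-layer fully-connected network (MLP) with a single wide hidden layer.
model = nn.Sequential(
    nn.Linear(n_in, n_hidden),
    nn.ReLU(),
    nn.Linear(n_hidden, 1),
)

def hidden_layer_kernel(model, X):
    # Empirical layer-wise kernel K_ij = phi(x_i) . phi(x_j) / n_hidden,
    # where phi is the post-activation of the first fully-connected layer.
    with torch.no_grad():
        phi = model[1](model[0](X))
    return phi @ phi.T / phi.shape[1]

K_before = hidden_layer_kernel(model, X)

# Train the whole network with plain SGD on a squared-error objective.
optimiser = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(500):
    optimiser.zero_grad()
    loss_fn(model(X), y).backward()
    optimiser.step()

K_after = hidden_layer_kernel(model, X)

# For a sufficiently wide hidden layer the relative change should be small,
# consistent with the kernel remaining approximately constant during training.
rel_change = torch.norm(K_after - K_before) / torch.norm(K_before)
print(f"relative change in hidden-layer kernel: {rel_change.item():.4f}")
```

Shrinking the hidden width or increasing the learning rate in this sketch should make the kernel drift more noticeably, which is one way to probe where the approximation breaks down.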

