Learning from Randomly Initialized Neural Network Features

02/13/2022
by Ehsan Amid et al.

We present the surprising result that randomly initialized neural networks are good feature extractors in expectation. These random features correspond to finite-sample realizations of what we call Neural Network Prior Kernel (NNPK), which is inherently infinite-dimensional. We conduct ablations across multiple architectures of varying sizes as well as initializations and activation functions. Our analysis suggests that certain structures that manifest in a trained model are already present at initialization. Therefore, NNPK may provide further insight into why neural networks are so effective in learning such structures.
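As a rough, hypothetical sketch of the idea (assuming PyTorch; the MLP architecture, width, and number of initializations below are arbitrary illustrative choices, not the authors' setup), random features can be obtained by freezing a randomly initialized network and embedding the data with it; averaging feature inner products over independent initializations then gives a finite-sample estimate of the NNPK:

import torch
import torch.nn as nn

def random_features(x, width=2048, seed=0):
    # Embed inputs with a frozen, randomly initialized MLP.
    # Architecture and width are arbitrary choices for illustration.
    torch.manual_seed(seed)
    net = nn.Sequential(
        nn.Linear(x.shape[1], width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
    )
    with torch.no_grad():  # the network is never trained
        return net(x)

def nnpk_estimate(x, n_init=8, width=2048):
    # Finite-sample kernel estimate: average feature inner products
    # over several independent random initializations.
    k = torch.zeros(x.shape[0], x.shape[0])
    for s in range(n_init):
        phi = random_features(x, width=width, seed=s)
        k += phi @ phi.T / phi.shape[1]
    return k / n_init

if __name__ == "__main__":
    x = torch.randn(100, 32)       # toy inputs, for illustration only
    print(nnpk_estimate(x).shape)  # torch.Size([100, 100])

A linear probe or kernel method fit on top of such frozen random features (or on the estimated kernel) would be one way to quantify the claim that randomly initialized networks are good feature extractors in expectation over initializations.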

Related research

04/02/2023: Infinite-dimensional reservoir computing
Reservoir computing approximation and generalization bounds are proved f...

03/28/2020: Memorizing Gaussians with no over-parameterization via gradient descent on neural networks
We prove that a single step of gradient descent over a depth-two network, w...

11/02/2020: Reducing Neural Network Parameter Initialization Into an SMT Problem
Training a neural network (NN) depends on multiple factors, including bu...

08/27/2023: The inverse problem for neural networks
We study the problem of computing the preimage of a set under a neural n...

04/19/2023: Parallel Neural Networks in Golang
This paper describes the design and implementation of parallel neural ne...

08/19/2022: Demystifying Randomly Initialized Networks for Evaluating Generative Models
Evaluation of generative models is mostly based on the comparison betwee...

05/19/2022: Minimal Explanations for Neural Network Predictions
Explaining neural network predictions is known to be a challenging probl...
