Information in Infinite Ensembles of Infinitely-Wide Neural Networks

11/20/2019 · by Ravid Schwartz-Ziv, et al.

In this preliminary work, we study the generalization properties of infinite ensembles of infinitely-wide neural networks. Amazingly, this model family admits tractable calculations for many information-theoretic quantities. We report analytical and empirical investigations in the search for signals that correlate with generalization.
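
A minimal sketch of why these quantities become tractable: in the infinite-width, infinite-ensemble limit, the network's outputs over any finite dataset are exactly jointly Gaussian (the NNGP/NTK correspondence), so information-theoretic quantities such as differential entropy reduce to closed-form Gaussian expressions. The sketch below uses the JAX neural_tangents library; the architecture and toy data are illustrative assumptions, not the paper's setup.

```python
import jax.numpy as jnp
from jax import random
from neural_tangents import stax

# An infinite ensemble of infinitely-wide networks: outputs over a finite
# dataset are exactly Gaussian with the NNGP (or NTK) covariance.
# The architecture and data here are illustrative assumptions.
_, _, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(), stax.Dense(1)
)

x = random.normal(random.PRNGKey(0), (64, 10))  # 64 toy inputs, 10 features
K = kernel_fn(x, x, 'nngp')                     # exact infinite-width covariance, (64, 64)

# Differential entropy of a multivariate Gaussian: H = 0.5 * logdet(2*pi*e*K)
sign, logdet = jnp.linalg.slogdet(2.0 * jnp.pi * jnp.e * K)
entropy = 0.5 * logdet
print('output entropy (nats):', entropy)
```

Because the covariance K is available in closed form, no sampling of weights or training runs is needed; this is what makes the information-theoretic analysis of this model family analytically tractable.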

Related research

Information Flow in Deep Neural Networks (02/10/2022)
Although deep neural networks have been immensely successful, there is n...

Bayesian Deep Ensembles via the Neural Tangent Kernel (07/11/2020)
We explore the link between deep ensembles and Gaussian processes (GPs) ...

On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications (10/07/2021)
This paper follows up on a recent work of Neu (2021) and presents new a...

Towards Further Understanding of Sparse Filtering via Information Bottleneck (10/20/2019)
In this paper we examine a formalization of feature distribution learnin...

Information Plane Analysis for Dropout Neural Networks (03/01/2023)
The information-theoretic framework promises to explain the predictive p...

Limitations of the NTK for Understanding Generalization in Deep Learning (06/20/2022)
The “Neural Tangent Kernel” (NTK) (Jacot et al., 2018), and its empirical ...

A Compositional Atlas of Tractable Circuit Operations: From Simple Transformations to Complex Information-Theoretic Queries (02/11/2021)
Circuit representations are becoming the lingua franca to express and re...
