Non-Gaussian processes and neural networks at finite widths

09/30/2019
by Sho Yaida, et al.

Gaussian processes are ubiquitous in nature and engineering. A case in point is a class of neural networks in the infinite-width limit, whose priors correspond to Gaussian processes. Here we perturbatively extend this correspondence to finite-width neural networks, yielding non-Gaussian processes as priors. The methodology developed herein allows us to track the flow of preactivation distributions by progressively integrating out random variables from lower to higher layers, reminiscent of renormalization-group flow. We further develop a perturbative procedure to perform Bayesian inference with weakly non-Gaussian priors.
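
As a concrete illustration of the correspondence described above, the following sketch empirically measures how fast the prior over a network's output becomes Gaussian as the width grows. It draws many random two-layer ReLU networks with i.i.d. N(0, 1/fan_in) weights, evaluates the output at one fixed input, and estimates the excess kurtosis of that output across networks; excess kurtosis vanishes for a Gaussian, so its decay with width tracks the leading non-Gaussian correction. This is a minimal sketch, not code from the paper: the architecture, width grid, sample sizes, and function names are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): measure the leading
# non-Gaussianity of the prior over a two-layer network's output as a
# function of hidden width. Excess kurtosis is zero for a Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis_of_output(x, width, n_networks=200_000, batch=5_000):
    """Estimate excess kurtosis of z = W2 . relu(W1 x) over random networks.

    Weights are i.i.d. N(0, 1/fan_in); under this scaling the prior over z
    tends to a Gaussian process as width -> infinity.
    """
    d = x.shape[0]
    zs = []
    for _ in range(n_networks // batch):
        w1 = rng.normal(0.0, 1.0 / np.sqrt(d), size=(batch, width, d))
        w2 = rng.normal(0.0, 1.0 / np.sqrt(width), size=(batch, width))
        h = np.maximum(w1 @ x, 0.0)              # hidden ReLU activations
        zs.append(np.einsum("sj,sj->s", w2, h))  # output of each network
    z = np.concatenate(zs)
    return np.mean(z**4) / np.mean(z**2) ** 2 - 3.0

x = rng.normal(size=8)  # one fixed input; the prior is over the weights
for n in (4, 16, 64, 256):
    print(f"width {n:4d}: excess kurtosis ~ {excess_kurtosis_of_output(x, n):+.3f}")
```

For this toy model the printed values should fall off roughly like 1/width, which is the sense in which the prior at large but finite width is only weakly non-Gaussian and hence amenable to the perturbative treatment described in the abstract.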


Related research

- Scale Mixtures of Neural Network Gaussian Processes (07/03/2021)
  Recent works have revealed that infinitely-wide feed-forward or recurren...
- An Overview of Uncertainty Quantification Methods for Infinite Neural Networks (01/13/2022)
  To better understand the theoretical behavior of large neural networks, ...
- Uncertainty-based out-of-distribution detection requires suitable function space priors (10/12/2021)
  The need to avoid confident predictions on unfamiliar data has sparked i...
- Deep quantum neural networks form Gaussian processes (05/17/2023)
  It is well known that artificial neural networks initialized from indepe...
- Gaussian Processes indexed on the symmetric group: prediction and learning (03/16/2018)
  In the framework of the supervised learning of a real function defined o...
- Unified Field Theory for Deep and Recurrent Neural Networks (12/10/2021)
  Understanding capabilities and limitations of different network architec...
- Beyond Gaussian processes: Flexible Bayesian modeling and inference for geostatistical processes (03/12/2022)
  This work proposes a novel family of geostatistical models to account fo...
