Exact priors of finite neural networks

04/23/2021
by Jacob A. Zavatone-Veth et al.

Bayesian neural networks are theoretically well-understood only in the infinite-width limit, where Gaussian priors over network weights yield Gaussian priors over network outputs. Recent work has suggested that finite Bayesian networks may outperform their infinite counterparts, but their non-Gaussian output priors have been characterized only through perturbative approaches. Here, we derive exact solutions for the output priors evaluated on individual input examples for a class of finite fully-connected feedforward Bayesian neural networks. For deep linear networks, the prior has a simple expression in terms of the Meijer G-function. The prior of a finite ReLU network is a mixture of the priors of linear networks of smaller widths, corresponding to different numbers of active units in each layer. Our results unify previous descriptions of finite network priors in terms of their tail decay and large-width behavior.
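As a concrete illustration of the non-Gaussianity described above, the minimal sketch below uses an illustrative setup assumed here (not the paper's own construction): a depth-two linear network with a single input fixed at x = 1, a single hidden unit, and unit-variance Gaussian weight priors. In that case the output is a product of two independent standard normals, whose exact density K_0(|z|)/π is a Bessel-function special case within the Meijer G family, so a Monte Carlo histogram can be checked against it directly.

# Minimal Monte Carlo sketch (illustrative assumptions, not the paper's derivation):
# a depth-2 linear Bayesian network with one input fixed at x = 1, a single
# hidden unit, and unit-variance Gaussian priors on both weights.  The output
# z = w2 * w1 is a product of two independent standard normals, whose exact
# density K_0(|z|) / pi is a Bessel-function special case of the Meijer G family.
import numpy as np
from scipy.special import k0  # modified Bessel function of the second kind, order 0

rng = np.random.default_rng(0)
n_samples = 1_000_000

w1 = rng.standard_normal(n_samples)  # input-to-hidden weight, input x = 1
w2 = rng.standard_normal(n_samples)  # hidden-to-output weight
z = w1 * w2                          # samples from the finite-width output prior

# Compare an empirical histogram with the exact density K_0(|z|) / pi,
# skipping the bins nearest z = 0 where the density has a log singularity.
edges = np.linspace(-4.0, 4.0, 81)
centers = 0.5 * (edges[:-1] + edges[1:])
hist, _ = np.histogram(z, bins=edges, density=True)
exact = k0(np.abs(centers)) / np.pi
mask = np.abs(centers) > 0.5
print("max |histogram - K_0(|z|)/pi| for |z| > 0.5:", np.max(np.abs(hist - exact)[mask]))

# Positive excess kurtosis signals the heavy tails of the finite-width prior;
# the Gaussian infinite-width limit would give 0 (this product has excess kurtosis 6).
z_std = z / z.std()
print("excess kurtosis:", np.mean(z_std**4) - 3.0)

Widening the hidden layer in this sketch, i.e. summing many such independent products with 1/sqrt(n) scaling, would drive the excess kurtosis toward zero, consistent with the Gaussian infinite-width limit noted in the abstract.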


Related research

Bayesian neural network unit priors and generalized Weibull-tail property (10/06/2021)
The connection between Bayesian neural networks and Gaussian processes g...

Asymptotics of representation learning in finite Bayesian neural networks (06/01/2021)
Recent works have suggested that finite Bayesian neural networks may out...

Bayesian neural networks increasingly sparsify their units with depth (10/11/2018)
We investigate deep Bayesian neural networks with Gaussian priors on the...

Bayesian neural network priors for edge-preserving inversion (12/20/2021)
We consider Bayesian inverse problems wherein the unknown state is assum...

Precise characterization of the prior predictive distribution of deep ReLU networks (06/11/2021)
Recent works on Bayesian neural networks (BNNs) have highlighted the nee...

Towards Expressive Priors for Bayesian Neural Networks: Poisson Process Radial Basis Function Networks (12/12/2019)
While Bayesian neural networks have many appealing characteristics, curr...

Depth induces scale-averaging in overparameterized linear Bayesian neural networks (11/23/2021)
Inference in deep Bayesian neural networks is only fully understood in t...