Bayesian Learning of Neural Network Architectures

01/14/2019
by Georgi Dikov, et al.

In this paper we propose a Bayesian method for estimating architectural parameters of neural networks, namely layer size and network depth. We do this by learning concrete distributions over these parameters. Our results show that regular networks with a learnt structure can generalise better on small datasets, while fully stochastic networks can be more robust to parameter initialisation. The proposed method relies on standard neural variational learning and, unlike randomised architecture search, does not require retraining the model, thus keeping the computational overhead to a minimum.
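The concrete (Gumbel-Softmax) relaxation is what makes the architectural parameters trainable by gradient descent. Below is a minimal sketch of the idea, not the authors' implementation: a concrete distribution over hypothetical candidate layer sizes is sampled on each forward pass and used to soft-mask hidden units, so the size logits receive gradients alongside the network weights. All names here (`candidate_sizes`, `size_logits`) are illustrative, and the full variational objective would also include a KL regularisation term.

```python
# Sketch: learning a concrete distribution over layer size with the weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

candidate_sizes = [16, 32, 64, 128]          # hypothetical search space
max_size = max(candidate_sizes)

# One binary mask per candidate width: the first s units are active.
masks = torch.stack([
    torch.cat([torch.ones(s), torch.zeros(max_size - s)])
    for s in candidate_sizes
])

size_logits = nn.Parameter(torch.zeros(len(candidate_sizes)))  # variational parameters
hidden = nn.Linear(10, max_size)             # widest possible layer
head = nn.Linear(max_size, 1)

def forward(x, tau=1.0):
    # Reparameterised relaxed one-hot sample over the candidate sizes;
    # gradients flow back into size_logits through the sample.
    probs = F.gumbel_softmax(size_logits, tau=tau)
    mask = probs @ masks                     # soft mask under the sampled relaxation
    return head(torch.relu(hidden(x)) * mask)

opt = torch.optim.Adam([size_logits, *hidden.parameters(), *head.parameters()], lr=1e-3)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = F.mse_loss(forward(x), y)             # the variational objective adds a KL term
opt.zero_grad()
loss.backward()
opt.step()
```

At test time one would presumably commit to the most probable width (e.g. `hard=True` in the Gumbel-Softmax), recovering a regular network with a learnt structure.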

Related research

- DBSN: Measuring Uncertainty through Bayesian Learning of Deep Neural Network Structures (11/22/2019). Bayesian neural networks (BNNs) introduce uncertainty estimation to deep...
- Do Bayesian Neural Networks Need To Be Fully Stochastic? (11/11/2022). We investigate the efficacy of treating all the parameters in a Bayesian...
- Bayesian Neural Networks (01/23/2018). This paper describes and discusses Bayesian Neural Network (BNN). The pa...
- Towards Robust Neural Networks with Lipschitz Continuity (11/22/2018). Deep neural networks have shown remarkable performance across a wide ran...
- Variational Depth Search in ResNets (02/06/2020). One-shot neural architecture search allows joint learning of weights and...
- Look beyond labels: Incorporating functional summary information in Bayesian neural networks (07/04/2022). Bayesian deep learning offers a principled approach to train neural netw...
- Fully differentiable model discovery (06/09/2021). Model discovery aims at autonomously discovering differential equations ...
