The Deep Weight Prior: Modeling a prior distribution for CNNs using generative models

10/16/2018
by Andrei Atanov, et al.

Bayesian inference provides a general framework for incorporating prior knowledge or specific properties into machine learning models through a careful choice of prior distribution. In this work, we propose a new type of prior distribution for convolutional neural networks, the deep weight prior, which, in contrast to previously published techniques, favors the empirically estimated structure of convolutional filters, e.g., spatial correlations of weights. We define the deep weight prior as an implicit distribution and propose a method for variational inference with this type of implicit prior. In experiments, we show that deep weight priors can improve the performance of Bayesian neural networks on several problems when training data is limited. We also find that initializing the weights of conventional networks with samples from a deep weight prior leads to faster training.
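The initialization experiment mentioned above lends itself to a short illustration. The sketch below is not the authors' code; it is a minimal, hypothetical example in which a small decoder network stands in for a generative model over 3x3 convolutional kernels, and a standard PyTorch Conv2d layer is initialized with kernels sampled from it. All names and sizes (FilterDecoder, LATENT_DIM, the layer shapes) are assumptions made for the sketch; in practice the decoder would be trained on filters extracted from source-task CNNs rather than left randomly initialized.

```python
# Minimal sketch: initialize a conv layer from samples of a learned
# generative model over kernels (decoder is a stand-in, not pretrained).
import torch
import torch.nn as nn

LATENT_DIM = 2      # dimensionality of the latent code z (assumed)
KERNEL_SIZE = 3     # spatial size of the convolutional filters


class FilterDecoder(nn.Module):
    """Decoder of a VAE-like generative model p(w | z) over single kernels."""

    def __init__(self, latent_dim=LATENT_DIM, kernel_size=KERNEL_SIZE):
        super().__init__()
        self.kernel_size = kernel_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64),
            nn.ReLU(),
            nn.Linear(64, kernel_size * kernel_size),
        )

    def forward(self, z):
        # Map latent codes to flat kernels, then reshape to (batch, k, k).
        return self.net(z).view(-1, self.kernel_size, self.kernel_size)


@torch.no_grad()
def init_conv_from_prior(conv: nn.Conv2d, decoder: FilterDecoder):
    """Overwrite every (out, in) kernel of `conv` with a sample w ~ p(w)."""
    out_ch, in_ch, k, _ = conv.weight.shape
    z = torch.randn(out_ch * in_ch, LATENT_DIM)       # z ~ N(0, I)
    kernels = decoder(z).view(out_ch, in_ch, k, k)    # w = decoder(z)
    conv.weight.copy_(kernels)


# Usage: replace the default initialization of a conv layer.
decoder = FilterDecoder()   # in practice: load a decoder trained on filters
conv = nn.Conv2d(16, 32, kernel_size=KERNEL_SIZE, padding=1)
init_conv_from_prior(conv, decoder)
```

The design point this illustrates is that the prior is defined only through a sampler (the decoder), i.e., as an implicit distribution, so it can be used both for sampling initial weights, as here, and inside variational inference where its density is never evaluated directly.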

Related research

- MOPED: Efficient priors for scalable variational inference in Bayesian deep neural networks (06/12/2019). Variational inference for Bayesian deep neural networks (DNNs) requires ...
- Incorporating Prior Knowledge into Neural Networks through an Implicit Composite Kernel (05/15/2022). It is challenging to guide neural network (NN) learning with prior knowl...
- A theory of learning with constrained weight-distribution (06/14/2022). A central question in computational neuroscience is how structure determ...
- Pathologies in priors and inference for Bayesian transformers (10/08/2021). In recent years, the transformer has established itself as a workhorse i...
- Bayesian Neural Network Priors Revisited (02/12/2021). Isotropic Gaussian priors are the de facto standard for modern Bayesian ...
- Current Source Localization Using Deep Prior with Depth Weighting (03/26/2022). This paper proposes a novel neuronal current source localization method ...
- Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization (02/08/2017). A fundamental advantage of neural models for NLP is their ability to lea...
