Last Layer Marginal Likelihood for Invariance Learning

06/14/2021
by Pola Elisabeth Schwöbel, et al.

Data augmentation is often used to incorporate inductive biases into models. Traditionally, these are hand-crafted and tuned with cross-validation. The Bayesian paradigm for model selection provides a path towards end-to-end learning of invariances using only the training data, by optimising the marginal likelihood. We work towards bringing this approach to neural networks by using an architecture with a Gaussian process in the last layer, a model for which the marginal likelihood can be computed. Experimentally, we improve performance by learning appropriate invariances on standard benchmarks, in the low-data regime, and on a medical imaging task. We identify optimisation challenges for invariant Deep Kernel Gaussian processes and present a systematic analysis that leads to a robust training scheme. We also introduce a new lower bound to the marginal likelihood, which allows us to perform inference for a larger class of likelihood functions than before, thereby overcoming some of the training challenges that existed with previous approaches.

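To make the underlying mechanism concrete, below is a minimal sketch (not the authors' code, and a plain GP rather than their last-layer/deep-kernel architecture) of learning an invariance from data by maximising the marginal likelihood: the kernel averages a base RBF kernel over sampled input rotations, and the rotation range is optimised jointly with the kernel hyperparameters. The toy rotation-invariant regression task, the use of PyTorch, and names such as `invariant_kernel` and `max_angle` are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): learn a
# rotation-invariance parameter by maximising the exact GP marginal
# likelihood on a 2D toy regression task.
import math
import torch

torch.manual_seed(0)

def rotate(x, angles):
    # x: (n, 2), angles: (s,) -> (s, n, 2) rotated copies of the inputs.
    c, s = torch.cos(angles), torch.sin(angles)
    R = torch.stack([torch.stack([c, -s], -1),
                     torch.stack([s,  c], -1)], -2)   # (s, 2, 2)
    return torch.einsum('sij,nj->sni', R, x)

def rbf(a, b, lengthscale):
    # Squared-exponential base kernel between two sets of 2D points.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return torch.exp(-0.5 * d2 / lengthscale ** 2)

def invariant_kernel(x1, x2, lengthscale, max_angle, n_samples=8):
    # Average the base kernel over rotations sampled from
    # Uniform(-max_angle, max_angle); the reparameterisation
    # angles = u * max_angle keeps this differentiable in max_angle.
    u = torch.rand(n_samples) * 2 - 1
    a1 = rotate(x1, u * max_angle)                    # (s, n1, 2)
    a2 = rotate(x2, u * max_angle)                    # (s, n2, 2)
    K = 0.0
    for i in range(n_samples):
        for j in range(n_samples):
            K = K + rbf(a1[i], a2[j], lengthscale)
    return K / n_samples ** 2

# Toy data whose targets depend only on the radius, i.e. they are
# rotation-invariant; the model should learn a large max_angle.
X = torch.randn(60, 2)
y = torch.sin(3.0 * X.norm(dim=1)) + 0.05 * torch.randn(60)

log_lengthscale = torch.zeros((), requires_grad=True)
log_noise = torch.tensor(-2.0, requires_grad=True)
raw_angle = torch.tensor(0.1, requires_grad=True)     # softplus -> max_angle

opt = torch.optim.Adam([log_lengthscale, log_noise, raw_angle], lr=0.05)
for step in range(200):
    opt.zero_grad()
    max_angle = torch.nn.functional.softplus(raw_angle)
    K = invariant_kernel(X, X, log_lengthscale.exp(), max_angle)
    K = K + log_noise.exp() ** 2 * torch.eye(len(X))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y[:, None], L)
    # Negative log marginal likelihood of a zero-mean GP:
    # 0.5 y^T K^{-1} y + 0.5 log|K| + (n/2) log(2 pi)
    nlml = 0.5 * (y[:, None] * alpha).sum() \
         + torch.log(torch.diagonal(L)).sum() \
         + 0.5 * len(X) * math.log(2 * math.pi)
    nlml.backward()
    opt.step()

print('learned max rotation angle:', torch.nn.functional.softplus(raw_angle).item())
```

Because the toy targets are rotation-invariant, the learned `max_angle` should grow during training, i.e. the model discovers the invariance from the training data alone rather than from a hand-tuned augmentation policy.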
Related research

08/16/2018: Learning Invariances using the Marginal Likelihood
Generalising well in supervised learning tasks relies on correctly extra...

02/22/2022: Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations
Data augmentation is commonly applied to improve performance of deep lea...

04/11/2021: Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning
Marginal-likelihood based model-selection, even though promising, is rar...

05/24/2021: Informative Bayesian model selection for RR Lyrae star classifiers
Machine learning has achieved an important role in the automatic classif...

04/28/2023: Hyperparameter Optimization through Neural Network Partitioning
Well-tuned hyperparameters are crucial for obtaining good generalization...

02/23/2022: Bayesian Model Selection, the Marginal Likelihood, and Generalization
How do we compare between hypotheses that are entirely consistent with o...

02/25/2022: Learning Invariant Weights in Neural Networks
Assumptions about invariances or symmetries in data can significantly in...
