Improved uncertainty quantification for neural networks with Bayesian last layer
Uncertainty quantification is an essential task in machine learning - a task in which neural networks (NNs) have traditionally not excelled. Bayesian neural networks (BNNs), in which parameters and predictions are probability distributions, can be a remedy for some applications, but often require expensive sampling for training and inference. NNs with a Bayesian last layer (BLL) are simplified BNNs in which only the weights of the last layer and the predictions follow a normal distribution. They are conceptually related to Bayesian linear regression (BLR), which has recently gained popularity in learning-based control under uncertainty. Both consider a non-linear feature space that is linearly mapped to the output, and hyperparameters, for example the noise variance, which are typically determined by maximizing the marginal likelihood. For NNs with BLL, these hyperparameters should include the deterministic weights of all other layers, as these impact the feature space and thus the predictive performance. Unfortunately, the marginal likelihood is expensive to evaluate in this setting and prohibits direct training through back-propagation. In this work, we present a reformulation of the BLL log-marginal likelihood that treats the weights of previous layers as hyperparameters and allows for efficient training through back-propagation. Furthermore, we derive a simple method to improve the extrapolation uncertainty of NNs with BLL. In a multivariate toy example and in a dynamic system identification task, we show that NNs with BLL trained with our proposed algorithm outperform standard BLR with NN features.
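To make the setting concrete, the following is a minimal sketch (not the paper's proposed reformulation) of the standard BLL/BLR setup the abstract refers to: a deterministic feature network followed by a Bayesian last layer, with the noise and prior precisions and the feature-network weights jointly optimized by maximizing the Bayesian-linear-regression log-marginal likelihood through back-propagation. All names (FeatureNet, log_marginal_likelihood, the toy data) are illustrative assumptions, not from the paper.

```python
# Sketch of an NN with Bayesian last layer, trained by maximizing the
# standard BLR log-marginal likelihood (evidence) via back-propagation.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Deterministic layers producing the feature space phi(x)."""
    def __init__(self, n_in=1, n_feat=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 40), nn.Tanh(),
            nn.Linear(40, n_feat), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

def log_marginal_likelihood(Phi, y, log_alpha, log_beta):
    """Standard BLR evidence log p(y | X, alpha, beta, theta).

    alpha: prior precision of the last-layer weights, beta: noise precision.
    Differentiable w.r.t. the feature network and both hyperparameters.
    """
    N, m = Phi.shape
    alpha, beta = log_alpha.exp(), log_beta.exp()
    A = alpha * torch.eye(m) + beta * Phi.T @ Phi        # posterior precision S_N^{-1}
    m_N = beta * torch.linalg.solve(A, Phi.T @ y)        # posterior mean of last-layer weights
    resid = y - Phi @ m_N
    E = 0.5 * beta * (resid ** 2).sum() + 0.5 * alpha * (m_N ** 2).sum()
    return (0.5 * m * log_alpha + 0.5 * N * log_beta - E
            - 0.5 * torch.logdet(A)
            - 0.5 * N * torch.log(torch.tensor(2 * torch.pi)))

# Toy 1-D regression data (assumed for illustration only).
torch.manual_seed(0)
x = torch.linspace(-3, 3, 100).unsqueeze(-1)
y = torch.sin(x).squeeze(-1) + 0.1 * torch.randn(100)

feat = FeatureNet()
log_alpha = torch.zeros((), requires_grad=True)
log_beta = torch.zeros((), requires_grad=True)
opt = torch.optim.Adam(list(feat.parameters()) + [log_alpha, log_beta], lr=1e-2)

# Jointly optimize feature weights and hyperparameters by back-propagation.
for step in range(2000):
    opt.zero_grad()
    loss = -log_marginal_likelihood(feat(x), y, log_alpha, log_beta)
    loss.backward()
    opt.step()

# Gaussian predictive distribution at test inputs: mean and variance.
with torch.no_grad():
    Phi, Phi_star = feat(x), feat(torch.linspace(-5, 5, 200).unsqueeze(-1))
    alpha, beta = log_alpha.exp(), log_beta.exp()
    A = alpha * torch.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    m_N = beta * torch.linalg.solve(A, Phi.T @ y)
    mean = Phi_star @ m_N
    var = 1.0 / beta + (Phi_star * torch.linalg.solve(A, Phi_star.T).T).sum(-1)
```

Note that this naive formulation recomputes the posterior and the log-determinant at every gradient step, which is exactly the cost the paper's reformulation of the BLL log-marginal likelihood is designed to reduce; the sketch is only meant to illustrate the quantities involved.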