
An Overview of Uncertainty Quantification Methods for Infinite Neural Networks

01/13/2022
by Florian Juengermann, et al.

To better understand the theoretical behavior of large neural networks, several works have analyzed the case where a network's width tends to infinity. In this regime, the effect of random initialization and the process of training a neural network can be formally expressed with analytical tools like Gaussian processes and neural tangent kernels. In this paper, we review methods for quantifying uncertainty in such infinite-width neural networks and compare their relationship to Gaussian processes in the Bayesian inference framework. We make use of several equivalence results along the way to obtain exact closed-form solutions for predictive uncertainty.



1 Introduction

Deep learning has brought breakthroughs across a variety of disciplines including image classification, speech recognition, and natural language processing [LeCun et al., 2015]. But despite their tremendous success, artificial neural networks often tend to be overconfident in their predictions, especially for out-of-distribution data.

Therefore, proper quantification of uncertainty is of fundamental importance to the deep learning community. It turns out that the infinite-width limit, i.e. the limit as the number of hidden units in a neural network approaches infinity, simplifies the analysis considerably and makes it possible to draw conclusions about very large but finite neural networks.

In this paper, we provide an overview of the most common techniques for quantifying uncertainty in deep learning and explore how these techniques are related in the infinite-width regime. In section 2, we introduce the fundamentals of these techniques, while in section 3, we outline their relation as well as several equivalence results.

2 Background

Throughout this section, we will consider a neural network with input $x \in \mathbb{R}^d$ and scalar output $y \in \mathbb{R}$. Let $f(x; \theta)$ denote the neural network output function, where the weights and biases are summarized in the parameter vector $\theta$. The traditional training objective is to minimize the mean squared error (MSE) loss over a dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$ of $N$ training points, i.e. to find an optimal $\theta^*$ such that:

$$\theta^* = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \left( f(x_i; \theta) - y_i \right)^2 \tag{1}$$

2.1 Gaussian Processes

One fundamental tool used in machine learning to quantify uncertainty is the Gaussian process (GP). GPs are primarily encountered in regression problems, but they can be extended to classification or clustering tasks [Kapoor et al., 2010, Kim and Lee, 2007].

Simply put, a GP works as follows: (i) for any input $x$, it yields a probability distribution over the possible outcomes $y$, and (ii) any finite collection of input points $X = (x_1, \dots, x_N)$ follows a multivariate Gaussian distribution $\mathcal{N}(\mu, \Sigma)$ with mean vector $\mu$ and covariance matrix $\Sigma$, where one usually assumes the data to be centered ($\mu = 0$). The matrix $\Sigma$ is determined by a kernel function $k(x, x')$ that allows one to incorporate prior knowledge about the shape and characteristics of the data distribution. We will denote a GP with kernel function $k$ applied on centered data as $\mathcal{GP}(0, k)$. The most commonly used kernel function is the squared exponential or radial basis function (RBF):

$$k(x, x') = \sigma^2 \exp\left( -\frac{\lVert x - x' \rVert^2}{2 \ell^2} \right) \tag{2}$$

where the variance $\sigma^2$ and length scale $\ell$ serve as hyperparameters to encode domain knowledge. For a new test point $x^*$, the prior distribution over $f^* = f(x^*)$ is given by:

$$\begin{pmatrix} f(X) \\ f^* \end{pmatrix} \sim \mathcal{N}\left( 0, \begin{pmatrix} k(X, X) & k(X, x^*) \\ k(x^*, X) & k(x^*, x^*) \end{pmatrix} \right) \tag{3}$$

Then, the dataset $\mathcal{D} = (X, Y)$ can be used as part of the Bayesian framework to update prior beliefs and obtain a posterior distribution by conditioning on the training points, as follows:

$$f^* \mid X, Y, x^* \sim \mathcal{N}\left( k(x^*, X)\, k(X, X)^{-1} Y,\; k(x^*, x^*) - k(x^*, X)\, k(X, X)^{-1} k(X, x^*) \right) \tag{4}$$
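The conditioning step of Equations 3 and 4 can be sketched in a few lines of NumPy. The RBF kernel of Equation 2, the toy sine data, and the jitter term are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0, length=1.0):
    """Squared-exponential (RBF) kernel, Equation 2."""
    sq_dists = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return sigma**2 * np.exp(-0.5 * sq_dists / length**2)

# Toy 1-D training data (assumed for illustration).
X_train = np.array([[-2.0], [-1.0], [0.0], [1.0], [2.0]])
y_train = np.sin(X_train).ravel()
X_test = np.array([[0.5], [3.0]])  # one in-distribution point, one far away

jitter = 1e-6  # small diagonal term for numerical stability
K = rbf_kernel(X_train, X_train) + jitter * np.eye(len(X_train))
K_s = rbf_kernel(X_test, X_train)
K_ss = rbf_kernel(X_test, X_test)

# Posterior mean and covariance by conditioning on the training data (Equation 4).
K_inv = np.linalg.inv(K)
post_mean = K_s @ K_inv @ y_train
post_cov = K_ss - K_s @ K_inv @ K_s.T
```

The posterior variance at the test point inside the data range comes out much smaller than at the far-away point, which is exactly the uncertainty behavior discussed here.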

2.2 Frequentist versus Bayesian Approach

The most common tools used to quantify uncertainty specifically in neural networks stem from the distinction between two fundamental approaches. On the one hand, we can use the variance of the predictions of multiple neural networks trained from different random initializations. We refer to this frequentist type of ensemble method as a deep ensemble (DE). On the other hand, a Bayesian neural network (BNN) is the natural result of applying Bayesian inference to neural networks. In a BNN, we place priors – typically independent Gaussians – on the network weights and biases. As a result, the posterior predictive distribution gives a prediction interval instead of a single-point estimate.
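As a minimal frequentist sketch of a deep ensemble, the toy members below are random-feature regressors (a frozen random ReLU layer plus a least-squares last layer) standing in for fully trained networks, an assumption for illustration only; the spread of their predictions across random initializations is the DE uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

X_train = rng.uniform(-2, 2, size=(40, 1))
y_train = np.sin(2 * X_train).ravel()
x_test = np.array([[3.0]])  # out-of-distribution test point

preds = []
for _ in range(20):  # 20 ensemble members, each from a fresh random initialization
    W = rng.normal(size=(1, 100))
    b = rng.normal(size=100)

    def feat(X):
        return np.maximum(X @ W + b, 0.0)  # frozen random ReLU features

    # "Train" the member: least-squares fit of the last layer only.
    w_out, *_ = np.linalg.lstsq(feat(X_train), y_train, rcond=None)
    preds.append((feat(x_test) @ w_out).item())

# Ensemble mean is the prediction; ensemble spread is the uncertainty estimate.
de_mean, de_std = np.mean(preds), np.std(preds)
```

Far from the training data the members extrapolate differently, so the ensemble spread grows – the frequentist analogue of a widening posterior predictive interval.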

BNNs used in practice have posterior distributions that are highly complex and multi-modal, challenging even the most sophisticated Markov chain Monte Carlo (MCMC) sampling methods. For that reason, practitioners oftentimes use mean-field variational inference (MFVI) to approximate the posterior with a factorized Gaussian, i.e. assume the posterior distributions over the network’s parameters to be independent.

2.3 Neural Linear Models

An alternative to MFVI that scales better to large datasets than BNNs and GPs is the neural linear model (NLM).

First introduced by Lázaro-Gredilla and Figueiras-Vidal [2010] as Marginalized Neural Networks and later re-introduced by Snoek et al. [2015], NLMs consist of a Bayesian linear regression performed on the last layer of a deep neural network. They resemble BNNs in structure, except that only the final linear layer has priors on its coefficients. An example is shown in Figure 1.

Figure 1: Example of a neural linear model. The hidden layers have deterministic weights, while the output layer performs Bayesian linear regression on its weights.

The first layers, whose weights are simple scalars, can be viewed as a feature map that transforms an input $x$ into features $\phi(x)$, which are then fed into the Bayesian linear regression. Because of this particular structure, training and inference with NLMs differ from the usual procedures and typically consist of two steps: first, training the deterministic layers using a point estimate of the output-layer weights, and second, computing the posterior of that Bayesian layer. In [Thakur et al., 2021], the authors propose an instance of this procedure that has the benefit of not underestimating uncertainty in data-scarce regions.

When all priors on the output weights are independent and identical Gaussian distributions, the posterior is a multivariate Gaussian whose mean $\mu$ and covariance matrix $\Sigma$ can be expressed in closed form:

$$\Sigma = \left( \beta\, \Phi^\top \Phi + \alpha I \right)^{-1} \tag{5}$$

$$\mu = \beta\, \Sigma\, \Phi^\top Y \tag{6}$$

where $I$ is the identity matrix, $\Phi$ is the feature matrix of the training inputs, and $\beta$ and $\alpha$ are prior hyperparameters equal to the inverse of the variance of the regression noise and weight variables, respectively. These expressions reveal that the main computational effort of inference with NLMs lies in the inversion of the matrix $\beta\, \Phi^\top \Phi + \alpha I$, whose dimension is the number of neurons in the last hidden layer. Compared with GPs, which scale with the size of the training dataset, as explained later in subsubsection 3.2.1, NLMs scale with the width of the neural network's last layer, which is often of lower dimensionality. This tractability and computational efficiency is why NLMs have gained popularity.
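The closed-form posterior of Equations 5 and 6 is ordinary Bayesian linear regression on the last-layer features. In the sketch below, a frozen random ReLU layer stands in for the trained deterministic layers, and the precisions alpha (weights) and beta (noise) are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature map: a frozen random ReLU layer standing in for the
# trained deterministic layers of the NLM.
d_in, d_hidden = 1, 50
W = rng.normal(size=(d_in, d_hidden))
b = rng.normal(size=d_hidden)

def features(X):
    return np.maximum(X @ W + b, 0.0)

X_train = rng.uniform(-2, 2, size=(30, d_in))
y_train = np.sin(X_train).ravel()

alpha, beta = 1.0, 100.0  # weight-prior precision and noise precision
Phi = features(X_train)   # (N, d_hidden) feature matrix

# Equations 5-6: Sigma = (beta Phi^T Phi + alpha I)^{-1},  mu = beta Sigma Phi^T y.
Sigma = np.linalg.inv(beta * Phi.T @ Phi + alpha * np.eye(d_hidden))
mu = beta * Sigma @ Phi.T @ y_train

# Predictive mean and variance at a test point (noise variance 1/beta added back).
phi_test = features(np.array([[0.5]]))
pred_mean = (phi_test @ mu).item()
pred_var = (phi_test @ Sigma @ phi_test.T).item() + 1.0 / beta
```

Note that only a d_hidden x d_hidden matrix is inverted, never an N x N one – the scaling advantage described above.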

2.4 Neural Tangent Kernel

The last piece of background to understand is the theory describing the training process of a neural network with gradient descent. A helpful tool for that is the neural tangent kernel (NTK) introduced by Jacot et al. [2018]:

$$\Theta_t(x, x') = \nabla_\theta f(x; \theta_t)^\top \, \nabla_\theta f(x'; \theta_t) \tag{7}$$

In general, this kernel function depends on the random initialization and changes over time. However, Jacot et al. [2018] show that in the infinite-width limit, the NTK at initialization $\Theta_0$ becomes deterministic and only depends on the activation function and network architecture. In this case, the NTK also stays constant during training for small enough learning rates: $\Theta_t = \Theta_0$. We call this deterministic and constant kernel $\Theta$. Yang [2020] extends these convergence results from multilayer perceptrons (MLPs) to other architectures including convolutional and batch-normalization layers.

Now, we investigate how the NTK describes the change in the output function of a NN during training. Jacot et al. [2018] and Lee et al. [2019] show that for small enough learning rates, the NN prediction $f_t(x)$ at training time $t$ follows an ordinary differential equation (ODE) that is characterized by the NTK:

$$\frac{\partial f_t(x)}{\partial t} = -\eta \, \Theta(x, X) \, \nabla_{f_t(X)} \mathcal{L} \tag{8}$$

Here, $\eta$ denotes the learning rate and $\nabla_{f_t(X)} \mathcal{L}$ describes the gradient of the loss with respect to the model's output on the training set $X$. Using an MSE loss for $\mathcal{L}$ and the fact that the NTK stays constant over time, Equation 8 simplifies to a linear ordinary differential equation with a known, exponentially decaying solution. So, the output for the training dataset at time $t$ is

$$f_t(X) = \left( I - e^{-\eta \Theta(X, X) t} \right) Y + e^{-\eta \Theta(X, X) t} f_0(X) \tag{9}$$

This shows that the NN achieves zero training loss: because the NTK is a positive definite matrix, the exponential term converges to zero.

Equivalently, based on the results of Lee et al. [2019], He et al. [2020] show that $f_t(x)$ can be expressed based on the random initialization $f_0$. For $t \to \infty$ this gives:

$$f_\infty(x) = f_0(x) + \Theta(x, X) \, \Theta(X, X)^{-1} \left( Y - f_0(X) \right) \tag{10}$$
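As a sanity check of Equation 10, consider a model that is exactly linear in its parameters: its tangent kernel is constant by construction, so the NTK prediction should coincide with the result of gradient descent. The quadratic feature map and toy targets below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# f(x; theta) = phi(x)^T theta is linear in theta, so its tangent kernel
# Theta(x, x') = phi(x)^T phi(x') is exactly constant during training.
def phi(X):
    return np.column_stack([np.ones(len(X)), X, X**2])

X_train = np.array([-1.0, 0.0, 1.0])
Y = np.array([0.5, -0.3, 1.2])
X_test = np.array([0.3])

P_train, P_test = phi(X_train), phi(X_test)
theta0 = rng.normal(size=3)  # random initialization
f0_train, f0_test = P_train @ theta0, P_test @ theta0

# Full-batch gradient descent on the MSE loss until convergence.
theta = theta0.copy()
lr = 0.1
for _ in range(10000):
    grad = P_train.T @ (P_train @ theta - Y) * (2 / len(X_train))
    theta -= lr * grad
f_gd = P_test @ theta

# Equation 10: f_inf(x) = f_0(x) + Theta(x, X) Theta(X, X)^{-1} (Y - f_0(X)).
Theta_xX = P_test @ P_train.T
Theta_XX = P_train @ P_train.T
f_ntk = f0_test + Theta_xX @ np.linalg.inv(Theta_XX) @ (Y - f0_train)
```

Gradient descent reaches zero training loss here, and the two predictions at the test point agree to numerical precision.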

3 Behavior at Infinite Width

In this section, we summarize the behavior of NNs in the infinite-width limit, i.e. we are interested in the limit as the number of hidden units goes to infinity.

3.1 Priors in Infinite Neural Networks

Under the condition that the network parameters follow independent and identically distributed (iid) priors with zero mean and finite variance, and the activation function of the last layer is bounded, the prior predictive distribution converges to a GP with zero mean and finite variance [Neal, 1996]. The intuition is as follows: for a fixed input $x$, every hidden-layer output $h_j(x)$ is a random variable with finite mean and variance, as the activation function is bounded. When multiplied by the final-layer weights $v_j$, the products $v_j h_j(x)$ are iid random variables with zero mean and finite variance. According to the central limit theorem, their sum converges to a Gaussian distribution. If we rescale the variances according to the network width, the network output follows a standard normal distribution $\mathcal{N}(0, 1)$. As this holds true for every $x$, the neural network is equivalent to a GP, called the neural network Gaussian process (NNGP). A sampled function from this NNGP is equivalent to the output function of a randomly initialized NN and to a prior predictive sample of a BNN.
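The central-limit argument can be checked numerically. The sketch below (assuming a tanh activation and unit-variance Gaussian priors, with a 1/sqrt(H) rescaling of the output layer) samples many randomly initialized one-hidden-layer networks and inspects the output distribution at a fixed input:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_network_outputs(x, width, n_samples):
    """Scalar outputs of n_samples randomly initialized width-H networks at x."""
    W1 = rng.normal(size=(n_samples, width, x.size))
    b1 = rng.normal(size=(n_samples, width))
    h = np.tanh(np.einsum('shd,d->sh', W1, x) + b1)  # bounded hidden activations
    W2 = rng.normal(size=(n_samples, width))
    return (h * W2).sum(axis=1) / np.sqrt(width)     # rescale by network width

x = np.array([0.7])
outs = random_network_outputs(x, width=1024, n_samples=5000)
mean, var = outs.mean(), outs.var()
```

Each product term has zero mean and finite variance, so the rescaled sum is approximately Gaussian; histograms at increasing widths converge to the same bell curve – the NNGP prior at $x$.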

While Neal [1996] gives a proof of existence, i.e. that a NN converges to a GP in the infinite-width limit, Neal does not provide any analytic form of the covariance function $K$. Williams [1997] builds on that work and provides closed-form kernel functions for different sets of activation functions and prior distributions on the network parameters.

3.2 Bayesian Inference

So far we have not considered any data but made predictions based on randomly drawn initial parameters. In the following sections, we discuss three approaches to incorporate the training data. First, we update the GP prior kernel by conditioning on the training data to obtain a GP posterior kernel. Second, we use gradient-based methods to train the NN. Third, we make the neural network Bayesian and approximate the BNN posterior.

3.2.1 Gaussian Process Regression

To go from the prior to the posterior distribution in a GP, we condition the NNGP kernel on the dataset $\mathcal{D} = (X, Y)$. Following the Bayesian framework, the result is the posterior GP as defined in Equation 4.

A major problem of GPs is their computational cost, which is cubic in the number of training points $N$. Common methods have enabled exact inference for at most a few thousand training points. This makes GPs unsuitable for many real-world applications that require very large datasets to be processed. Even though exact inference on GPs has recently been scaled to a million data points by GPU parallelization [Wang et al., 2019], NNs provide a more accessible framework in the big-data regime.

3.2.2 Neural Network Training

NNs are trained with gradient-based optimizers: full-batch gradient descent or extensions such as stochastic gradient descent and the Adam optimizer. Here, we consider two training methods that lead to different theoretical conclusions.

Weakly Trained NN

First, we train only the last layer of our NN. This means all other layers act as random feature extractors that remain unchanged during training. Lee et al. [2019] show that for the MSE loss function and a sufficiently small learning rate, the network output during training is an interpolation between the GP prior and GP posterior, and asymptotically converges to the posterior of the NNGP. This means that training only the last layer of a randomly initialized NN is equivalent to sampling a function from the GP with the NNGP prior kernel conditioned on the dataset.

Fully Trained NN

If we use the gradient descent algorithm on the entire network, the NTK describes the training behavior. After convergence to zero loss, Equation 10 describes the learned function $f_\infty$. As $f_0$ can be regarded as a sample from the NNGP $\mathcal{GP}(0, K)$, we can write $f_\infty \sim \mathcal{GP}(\mu_{\mathrm{DE}}, \Sigma_{\mathrm{DE}})$ with [1]

$$\mu_{\mathrm{DE}}(x) = \Theta_{xX} \Theta_{XX}^{-1} Y \tag{11}$$

$$\Sigma_{\mathrm{DE}}(x, x') = K_{xx'} + \Theta_{xX} \Theta_{XX}^{-1} K_{XX} \Theta_{XX}^{-1} \Theta_{Xx'} - \left( \Theta_{xX} \Theta_{XX}^{-1} K_{Xx'} + \text{h.c.} \right) \tag{12}$$

where we use subscripts for the kernel arguments in the interest of space. Lee et al. [2019] observed that this deep ensemble GP does not correspond to a proper Bayesian posterior. This means that, in contrast to the weakly trained NNs, random initialization followed by gradient descent training does not give valid posterior predictive function samples. Hence, an ensemble of NNs does not accurately approximate the posterior.

[1] The notation "+ h.c." means "plus Hermitian conjugate", as in [Lee et al., 2019].
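Equations 11 and 12 can be evaluated directly once the two kernels are available. In the NumPy sketch below, two simple RBF kernels serve as stand-ins for the NNGP kernel K and the NTK (an illustrative assumption; in the paper both kernels are determined by the network architecture):

```python
import numpy as np

def rbf(X1, X2, length):
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * sq / length**2)

X = np.array([[-1.0], [0.0], [1.0]])  # training inputs
Y = np.array([0.3, -0.2, 0.5])        # training targets
Xs = np.array([[0.25], [2.0]])        # test inputs

K = lambda Xa, Xb: rbf(Xa, Xb, length=1.0)  # stand-in for the NNGP kernel
T = lambda Xa, Xb: rbf(Xa, Xb, length=0.7)  # stand-in for the NTK

T_XX_inv = np.linalg.inv(T(X, X) + 1e-8 * np.eye(len(X)))
A = T(Xs, X) @ T_XX_inv

# Equation 11: deep-ensemble GP mean.
mu_de = A @ Y

# Equation 12: deep-ensemble GP covariance; "+ h.c." written out as cross + cross.T.
cross = A @ K(X, Xs)
cov_de = K(Xs, Xs) + A @ K(X, X) @ A.T - (cross + cross.T)
```

The result is a valid covariance matrix, yet, as noted above, it is not the posterior of the corresponding Bayesian model.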

Bayesian Deep Ensemble

To address this, He et al. [2020] propose a slight modification to the gradient descent training. With that, they arrive at what they call the neural tangent kernel Gaussian process (NTKGP) with

$$\mu_{\mathrm{NTKGP}}(x) = \Theta_{xX} \Theta_{XX}^{-1} Y \tag{13}$$

$$\Sigma_{\mathrm{NTKGP}}(x, x') = \Theta_{xx'} - \Theta_{xX} \Theta_{XX}^{-1} \Theta_{Xx'} \tag{14}$$

Note that this corresponds to a GP posterior as defined in Equation 4. In comparison to the NNGP posterior we obtain when conditioning the NNGP prior on the data, or when only training the last layer of the NN, here the prior covariance function is not the NNGP kernel $K$ but the NTK $\Theta$.

3.2.3 BNN Posterior Approximation

Instead of using an ensemble of randomly initialized NNs, we can use a prior distribution on the network weights to obtain a BNN. However, approximation methods struggle to accurately represent the complex BNN posterior. Coker et al. [2021] show that in the infinite-width limit, the commonly used MFVI approximation fails to learn from the data. Specifically, the posterior predictive mean for any input converges to zero, regardless of the input data. Their proof assumes the Kullback–Leibler (KL) divergence and a particular activation function, but they give empirical evidence for further activation functions.

4 Conclusion

This work provides an overview of the different methods used for quantifying uncertainty in infinite neural networks, and shows how to obtain analytic expressions for both prior and posterior predictive distributions for that purpose.

While the prior predictive can simply be modeled as a GP, we have outlined three ways to obtain proper posterior predictives: using GP regression, weakly trained NNs, or Bayesian deep ensembles, where the latter two turn out to be equivalent to GP regression with particular covariance (kernel) functions.

References

  • B. Coker, W. Pan, and F. Doshi-Velez (2021) Wide mean-field variational Bayesian neural networks ignore the data. arXiv preprint arXiv:2106.07052. Cited by: §3.2.3.
  • B. He, B. Lakshminarayanan, and Y. W. Teh (2020) Bayesian deep ensembles via the neural tangent kernel. arXiv preprint arXiv:2007.05864. Cited by: §2.4, §3.2.2.
  • A. Jacot, F. Gabriel, and C. Hongler (2018) Neural tangent kernel: convergence and generalization in neural networks. arXiv preprint arXiv:1806.07572. Cited by: §2.4, §2.4, §2.4.
  • A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell (2010) Gaussian processes for object categorization. International Journal of Computer Vision 88 (2), pp. 169–188. Cited by: §2.1.
  • H. Kim and J. Lee (2007) Clustering based on Gaussian processes. Neural Computation 19 (11), pp. 3088–3107. Cited by: §2.1.
  • M. Lázaro-Gredilla and A. R. Figueiras-Vidal (2010) Marginalized neural network mixtures for large-scale regression. IEEE transactions on neural networks 21 (8), pp. 1345–1351. Cited by: §2.3.
  • Y. LeCun, Y. Bengio, and G. Hinton (2015) Deep learning. Nature 521 (7553), pp. 436–444. Cited by: §1.
  • J. Lee, L. Xiao, S. Schoenholz, Y. Bahri, R. Novak, J. Sohl-Dickstein, and J. Pennington (2019) Wide neural networks of any depth evolve as linear models under gradient descent. Advances in neural information processing systems 32, pp. 8572–8583. Cited by: §2.4, §2.4, §3.2.2, §3.2.2, footnote 1.
  • R. M. Neal (1996) Priors for infinite networks. In Bayesian Learning for Neural Networks, pp. 29–53. Cited by: §3.1, §3.1.
  • J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. Patwary, M. Prabhat, and R. Adams (2015) Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, pp. 2171–2180. Cited by: §2.3.
  • S. Thakur, C. Lorsung, Y. Yacoby, F. Doshi-Velez, and W. Pan (2021) Uncertainty-aware (UNA) bases for Bayesian regression using multi-headed auxiliary networks. arXiv preprint arXiv:2006.11695. Cited by: §2.3.
  • K. Wang, G. Pleiss, J. Gardner, S. Tyree, K. Q. Weinberger, and A. G. Wilson (2019) Exact Gaussian processes on a million data points. Advances in Neural Information Processing Systems 32, pp. 14648–14659. Cited by: §3.2.1.
  • C. K. Williams (1997) Computing with infinite networks. Advances in neural information processing systems, pp. 295–301. Cited by: §3.1.
  • G. Yang (2020) Tensor Programs II: neural tangent kernel for any architecture. arXiv preprint arXiv:2006.14548. Cited by: §2.4.