Sensitivity Analysis for Predictive Uncertainty in Bayesian Neural Networks

12/10/2017 · by Stefan Depeweg, et al.

We derive a novel sensitivity analysis of input variables for predictive epistemic and aleatoric uncertainty. We use Bayesian neural networks with latent variables as a model class and illustrate the usefulness of our sensitivity analysis on real-world datasets. Our method increases the interpretability of complex black-box probabilistic models.

1 Introduction

Extracting human-understandable knowledge out of black-box machine learning methods is a highly relevant topic of research. One aspect of this is to figure out how sensitive the model response is to each input variable. This can be useful both as a sanity check that the approximated function is reasonable and as a way to gain new insights about the problem at hand. For neural networks this kind of model inspection can be performed by a sensitivity analysis [1, 2], a simple method that works by considering the gradient of the network output with respect to the input variables.

Our key contribution is to transfer this idea to predictive uncertainty: which features impact the uncertainty in the predictions of our model? To that end we use Bayesian neural networks (BNNs) with latent variables [3, 4], a recently introduced probabilistic model class that can describe complex stochastic patterns while at the same time accounting for model uncertainty. From their predictive distributions we can extract epistemic and aleatoric uncertainties [5, 4]. The former originates from our lack of knowledge of the model parameter values and is determined by the amount of available data, while aleatoric uncertainty consists of irreducible stochasticity originating from unobserved (latent) variables. By combining the sensitivity analysis with a decomposition of predictive uncertainty into its epistemic and aleatoric components, we can analyze which features influence each type of uncertainty. The resulting sensitivities can provide useful insights into the model at hand. On the one hand, a feature with high epistemic sensitivity suggests that careful monitoring or safety mechanisms are required to keep the values of this feature in regions where the model is confident. On the other hand, a feature with high aleatoric sensitivity indicates a dependence of that feature on other unobserved (latent) variables.

2 Bayesian Neural Networks with Latent Variables

Bayesian neural networks (BNNs) are scalable and flexible probabilistic models. Given a training set $\mathcal{D} = \{(\mathbf{x}_n, \mathbf{y}_n)\}_{n=1}^{N}$, formed by feature vectors $\mathbf{x}_n \in \mathbb{R}^D$ and targets $\mathbf{y}_n \in \mathbb{R}^K$, we assume that $\mathbf{y}_n = f(\mathbf{x}_n, z_n; \mathcal{W}) + \boldsymbol{\epsilon}_n$, where $f(\cdot,\cdot;\mathcal{W})$ is the output of a neural network with weights $\mathcal{W}$ and $K$ output units. The network receives as input the feature vector $\mathbf{x}_n$ and the latent variable $z_n \sim \mathcal{N}(0, \gamma)$. We choose rectifiers, $\varphi(x) = \max(x, 0)$, as activation functions for the hidden layers and the identity function, $\varphi(x) = x$, for the output layer. The network output is corrupted by the additive noise variable $\boldsymbol{\epsilon}_n \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$ with diagonal covariance matrix $\boldsymbol{\Sigma}$. The role of the latent variable $z_n$ is to capture unobserved stochastic features that can affect the network's output in complex ways. The network has $L$ layers, with $V_l$ hidden units in layer $l$, and $\mathcal{W} = \{\mathbf{W}_l\}_{l=1}^{L}$ is the collection of $L$ weight matrices, $\mathbf{W}_l \in \mathbb{R}^{V_l \times (V_{l-1}+1)}$. The $+1$ is introduced here to account for the additional per-layer biases.
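The following is a minimal sketch in PyTorch of the stochastic forward pass $\mathbf{y} = f(\mathbf{x}, z; \mathcal{W}) + \boldsymbol{\epsilon}$ described above; the parameter layout (separate weight matrices and bias vectors) and all sizes are illustrative assumptions, not the authors' implementation.

```python
import torch

def forward(x, z, weights, noise_std):
    """One stochastic forward pass y = f(x, z; W) + eps.

    x         : (batch, D) feature vectors
    z         : (batch, 1) latent variable samples, z ~ N(0, gamma)
    weights   : list of (W, b) tuples, W of shape (in_dim, out_dim)
    noise_std : per-output-dimension noise scale (diagonal Sigma)
    """
    h = torch.cat([x, z], dim=1)            # the network sees features and latent variable
    for W, b in weights[:-1]:
        h = torch.relu(h @ W + b)           # rectifier activations in the hidden layers
    W_out, b_out = weights[-1]
    f = h @ W_out + b_out                   # identity (linear) output layer
    return f + noise_std * torch.randn_like(f)  # additive Gaussian noise
```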

We approximate the exact posterior distribution $p(\mathcal{W}, \mathbf{z} \mid \mathcal{D})$ with the factorized Gaussian

$$q(\mathcal{W}, \mathbf{z}) = \left[\prod_{l=1}^{L}\prod_{i=1}^{V_l}\prod_{j=1}^{V_{l-1}+1} \mathcal{N}\big(w_{ijl} \mid m^{w}_{ijl}, v^{w}_{ijl}\big)\right]\left[\prod_{n=1}^{N} \mathcal{N}\big(z_n \mid m^{z}_n, v^{z}_n\big)\right]. \qquad (1)$$

The parameters $m^{w}_{ijl}, v^{w}_{ijl}$ and $m^{z}_n, v^{z}_n$ are determined by minimizing a divergence between $p(\mathcal{W}, \mathbf{z} \mid \mathcal{D})$ and the approximation $q$. For more detail the reader is referred to the work of [3, 6]. In our experiments we use black-box $\alpha$-divergence minimization.
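Drawing a joint weight sample from the factorized Gaussian in Eq. (1) reduces to the reparameterization $w = m + \sqrt{v}\,\varepsilon$ with $\varepsilon \sim \mathcal{N}(0, 1)$. A minimal sketch, assuming the variational parameters are stored as per-layer mean and log-variance tensors (a hypothetical layout):

```python
import torch

def sample_weights(q_mean, q_log_var):
    """Draw one sample of all network parameters from the factorized Gaussian q(W).

    q_mean, q_log_var : lists of (W_mean, b_mean) and (W_log_var, b_log_var)
                        tuples, one per layer, holding the variational parameters.
    """
    sample = []
    for (mW, mb), (lvW, lvb) in zip(q_mean, q_log_var):
        W = mW + torch.exp(0.5 * lvW) * torch.randn_like(mW)  # w = m + sqrt(v) * eps
        b = mb + torch.exp(0.5 * lvb) * torch.randn_like(mb)
        sample.append((W, b))
    return sample
```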

2.1 Uncertainty Decomposition

BNNs with latent variables can describe complex stochastic patterns while at the same time accounting for model uncertainty. They achieve this by jointly learning $q(\mathbf{z})$, which captures the specific values of the latent variables in the training data, and $q(\mathcal{W})$, which represents the uncertainty about the model parameters. The result is a principled Bayesian approach for learning flexible stochastic functions.

For these models, we can identify two types of uncertainty: aleatoric and epistemic [7, 5]. Aleatoric uncertainty originates from the random latent variables, whose randomness cannot be reduced by collecting more data. In the BNN this is given by $z_n \sim \mathcal{N}(0, \gamma)$ (and the constant additive Gaussian noise $\boldsymbol{\epsilon}_n$, which we omit). Epistemic uncertainty, on the other hand, originates from a lack of statistical evidence and can be reduced by gathering more data. In the BNN this is given by $q(\mathcal{W})$, which captures the uncertainty over the model parameters. These two forms of uncertainty are entangled in the approximate predictive distribution for a test input $\mathbf{x}_*$:

$$q(\mathbf{y}_* \mid \mathbf{x}_*) = \int p(\mathbf{y}_* \mid \mathcal{W}, \mathbf{x}_*, z_*)\, p(z_*)\, q(\mathcal{W})\, dz_*\, d\mathcal{W} \qquad (2)$$

where $p(\mathbf{y}_* \mid \mathcal{W}, \mathbf{x}_*, z_*) = \mathcal{N}\big(\mathbf{y}_* \mid f(\mathbf{x}_*, z_*; \mathcal{W}), \boldsymbol{\Sigma}\big)$ is the likelihood function of the BNN and $p(z_*) = \mathcal{N}(z_* \mid 0, \gamma)$ is the prior on the latent variables.

We can use the variance $\sigma^2(y_{*,k} \mid \mathbf{x}_*)$ as a measure of predictive uncertainty for the $k$-th component of $\mathbf{y}_*$. The variance can be decomposed into an epistemic and an aleatoric term using the law of total variance:

$$\sigma^2(y_{*,k} \mid \mathbf{x}_*) = \operatorname{Var}_{q(\mathcal{W})}\!\big[\mathbb{E}_{p(z_*)}[y_{*,k} \mid \mathcal{W}, \mathbf{x}_*]\big] + \mathbb{E}_{q(\mathcal{W})}\!\big[\operatorname{Var}_{p(z_*)}[y_{*,k} \mid \mathcal{W}, \mathbf{x}_*]\big] \qquad (3)$$

The first term, $\operatorname{Var}_{q(\mathcal{W})}\big[\mathbb{E}_{p(z_*)}[y_{*,k} \mid \mathcal{W}, \mathbf{x}_*]\big]$, is the variability of $y_{*,k}$ when we integrate out $z_*$ but not $\mathcal{W}$. Because $q(\mathcal{W})$ represents our belief over the model parameters, this is a measure of epistemic uncertainty. The second term, $\mathbb{E}_{q(\mathcal{W})}\big[\operatorname{Var}_{p(z_*)}[y_{*,k} \mid \mathcal{W}, \mathbf{x}_*]\big]$, represents the average variability of $y_{*,k}$ not originating from the distribution over model parameters. This measures aleatoric uncertainty, as that variability can only come from the latent variable $z_*$.
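This decomposition can be approximated by nested sampling: draw weights from $q(\mathcal{W})$, and for each weight sample average and spread over draws of the latent variable. A minimal sketch, reusing the hypothetical `forward` and `sample_weights` helpers from above (the constant additive noise is omitted, and the prior variance $\gamma$ is taken as 1 for illustration):

```python
import torch

def decompose_uncertainty(x, q_mean, q_log_var, n_w=50, n_z=50):
    """Monte Carlo estimate of the epistemic / aleatoric split in Eq. (3)."""
    means_per_w, vars_per_w = [], []
    for _ in range(n_w):                                   # outer loop: W ~ q(W)
        weights = sample_weights(q_mean, q_log_var)
        ys = torch.stack([forward(x, torch.randn(x.shape[0], 1), weights, noise_std=0.0)
                          for _ in range(n_z)])            # inner loop: z ~ N(0, 1)
        means_per_w.append(ys.mean(dim=0))                 # E_z[y | W, x]
        vars_per_w.append(ys.var(dim=0))                   # Var_z[y | W, x]
    epistemic = torch.stack(means_per_w).var(dim=0)        # Var_W[ E_z[y | W, x] ]
    aleatoric = torch.stack(vars_per_w).mean(dim=0)        # E_W[ Var_z[y | W, x] ]
    return epistemic, aleatoric
```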

3 Sensitivity Analysis of Predictive Uncertainty

In this section we extend the method of sensitivity analysis toward predictive uncertainty. The goal is to provide insight into the question of which features affect the stochasticity of our model, which results in aleatoric uncertainty, and which features impact its epistemic uncertainty. For instance, if we have limited data about the different settings of a particular feature $x_i$, even a small change of its value can have a large effect on the confidence of the model.

Answers to these two questions can provide useful insights about the model at hand. For instance, a feature with high aleatoric sensitivity indicates a strong interaction with other unobserved/latent features. If a practitioner can expand the set of features by taking more refined measurements, it may be advisable to look into variables which may exhibit a dependence with that feature and which may explain the stochasticity in the data. Furthermore, a feature with high epistemic sensitivity suggests that careful monitoring or extended safety mechanisms are required to keep the values of this feature in regions where the model is confident.

We start by briefly reviewing the technique of sensitivity analysis [1, 2], a simple method that provides insight into how changes in the input affect the network's prediction. Let $f(\mathbf{x})$ be a neural network fitted on a training set $\mathcal{D} = \{(\mathbf{x}_n, \mathbf{y}_n)\}_{n=1}^{N}$, formed by feature vectors $\mathbf{x}_n$ and targets $\mathbf{y}_n$. We want to understand how each feature $i$ influences the output dimension $k$. Given some test data, we use the partial derivative of the output dimension $k$ w.r.t. feature $i$:

$$\frac{\partial f_k(\mathbf{x})}{\partial x_i} \qquad (4)$$
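As an illustration (a sketch, not the authors' code), the derivative in Eq. (4) can be obtained with automatic differentiation; here `net` is any differentiable model and `k` the output dimension of interest:

```python
import torch

def sensitivity(net, x, k):
    """Gradient of output dimension k w.r.t. every input feature, per test point."""
    x = x.clone().requires_grad_(True)   # track gradients w.r.t. the inputs
    net(x)[:, k].sum().backward()        # summing over the batch keeps per-point gradients
    return x.grad                        # shape (n_points, n_features): d f_k / d x_i
```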

In Section 2.1 we saw that we can decompose the variance of the predictive distribution of a BNN with latent variables into its epistemic and aleatoric components. Our goal is to obtain sensitivities of these components with respect to the input variables. For this we use a sampling-based approach to approximate the two uncertainty components [4] and then calculate the partial derivative of these w.r.t. the input variables. For each test data point $\mathbf{x}_*$, we perform $N_w \cdot N_z$ forward passes through the BNN. We first sample $\mathcal{W} \sim q(\mathcal{W})$ a total of $N_w$ times and then, for each of these samples of $\mathcal{W}$, perform $N_z$ forward passes in which $\mathcal{W}$ is fixed and we only sample the latent variable $z_*$. Then we can form empirical estimates of the expected predictive value and of the two components on the right-hand side of Eq. (3):

$$\mathbb{E}[y_{*,k} \mid \mathbf{x}_*] \approx \frac{1}{N_w N_z} \sum_{w=1}^{N_w} \sum_{z=1}^{N_z} y^{(w,z)}_{*,k} \qquad (5)$$

$$\sqrt{\operatorname{Var}_{q(\mathcal{W})}\!\big[\mathbb{E}_{p(z_*)}[y_{*,k} \mid \mathcal{W}, \mathbf{x}_*]\big]} \approx \hat{\sigma}_w\!\left(\frac{1}{N_z}\sum_{z=1}^{N_z} y^{(w,z)}_{*,k}\right) \qquad (6)$$

$$\sqrt{\mathbb{E}_{q(\mathcal{W})}\!\big[\operatorname{Var}_{p(z_*)}[y_{*,k} \mid \mathcal{W}, \mathbf{x}_*]\big]} \approx \sqrt{\frac{1}{N_w}\sum_{w=1}^{N_w} \hat{\sigma}^2_z\big(y^{(w,z)}_{*,k}\big)} \qquad (7)$$

where $y^{(w,z)}_{*,k}$ denotes the network output for weight sample $w$ and latent sample $z$, and $\hat{\sigma}^2_w$ ($\hat{\sigma}^2_z$) is an empirical estimate of the variance over the $N_w$ ($N_z$) samples of $\mathcal{W}$ ($z_*$). We have used the square root of each component so that all terms share the same unit as $y_{*,k}$. Now we can calculate the sensitivities:

$$s_{k,i} = \frac{\partial}{\partial x_i}\, \mathbb{E}[y_{*,k} \mid \mathbf{x}_*] \qquad (8)$$

$$s^{e}_{k,i} = \frac{\partial}{\partial x_i} \sqrt{\operatorname{Var}_{q(\mathcal{W})}\!\big[\mathbb{E}_{p(z_*)}[y_{*,k} \mid \mathcal{W}, \mathbf{x}_*]\big]} \qquad (9)$$

$$s^{a}_{k,i} = \frac{\partial}{\partial x_i} \sqrt{\mathbb{E}_{q(\mathcal{W})}\!\big[\operatorname{Var}_{p(z_*)}[y_{*,k} \mid \mathcal{W}, \mathbf{x}_*]\big]} \qquad (10)$$

where Eq. (8) is the standard sensitivity term. We also note that the general drawbacks of sensitivity analysis [2], such as considering every variable in isolation, arise from its simplicity; these drawbacks also apply when focusing on the uncertainty components.
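The sketch below puts Eqs. (5)-(10) together for scalar targets, reusing the hypothetical `forward` and `sample_weights` helpers from Section 2: the Monte Carlo estimates are built as differentiable functions of the test inputs, and the three sensitivities are then read off with automatic differentiation (all sample sizes are illustrative).

```python
import torch

def uncertainty_sensitivities(x, q_mean, q_log_var, n_w=50, n_z=50):
    """Sensitivities of the predictive expectation (Eq. 8) and of the epistemic
    (Eq. 9) and aleatoric (Eq. 10) standard deviations w.r.t. each input feature."""
    x = x.clone().requires_grad_(True)
    means_per_w, vars_per_w = [], []
    for _ in range(n_w):
        weights = sample_weights(q_mean, q_log_var)
        ys = torch.stack([forward(x, torch.randn(x.shape[0], 1), weights, noise_std=0.0)
                          for _ in range(n_z)])
        means_per_w.append(ys.mean(dim=0))
        vars_per_w.append(ys.var(dim=0))
    means_per_w, vars_per_w = torch.stack(means_per_w), torch.stack(vars_per_w)

    expectation = means_per_w.mean(dim=0)          # Eq. (5)
    epistemic_sd = means_per_w.var(dim=0).sqrt()   # Eq. (6)
    aleatoric_sd = vars_per_w.mean(dim=0).sqrt()   # Eq. (7)

    sensitivities = []
    for term in (expectation, epistemic_sd, aleatoric_sd):
        grad, = torch.autograd.grad(term.sum(), x, retain_graph=True)
        sensitivities.append(grad)                 # Eqs. (8)-(10): d term / d x_i
    return sensitivities
```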

4 Experiments

Figure 1: Sensitivity analysis for the predictive expectation and uncertainty on toy data (a) and UCI datasets (b)-(i). The top row shows sensitivities w.r.t. the expectation (Eq. (8)); the middle and bottom rows show sensitivities for the epistemic and aleatoric uncertainty (Eq. (9) and Eq. (10)). Error bars indicate standard errors over experiment repetitions.

In this section we conduct an exploratory study. We first use an artificial toy dataset and then eight datasets from the UCI repository [8] of varying domains and sizes. For all experiments, we use a BNN with two hidden layers. We first perform model selection on the number of hidden units per layer on the available data; details can be found in Appendix A. We train with Adam as the optimizer. For the sensitivity analysis we draw $N_w$ samples of $\mathcal{W}$ and, for each, $N_z$ samples of $z_*$. All experiments were repeated multiple times and we report average results.

4.1 Toy Data

We consider a regression task for a stochastic function with heteroskedastic noise, in which the first input variable $x_1$ is responsible for the shape of the function whereas the second variable $x_2$ determines the noise level. We sample a training set from this generative process. Fig. 1(a) shows the sensitivities: the first variable $x_1$ is responsible for the epistemic uncertainty whereas $x_2$ is responsible for the aleatoric uncertainty, which corresponds with the generative model of the data.
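For concreteness, a hypothetical generator with this structure, where $x_1$ drives the mean and $x_2$ drives the noise scale (functional form, input ranges, and sample size are illustrative assumptions, not the paper's exact setup):

```python
import torch

torch.manual_seed(0)
n = 1000                                    # illustrative sample size
x1 = torch.empty(n).uniform_(-4.0, 4.0)     # drives the shape of the function
x2 = torch.empty(n).uniform_(-4.0, 4.0)     # drives the noise level
y = torch.sin(x1) + torch.abs(torch.cos(x2)) * torch.randn(n)  # heteroskedastic noise
X = torch.stack([x1, x2], dim=1)            # training features (n, 2)
```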

4.2 UCI Datasets

We consider several real-world regression datasets from the UCI data repository [8]. Detailed descriptions, including feature and target explanations, can be found on the respective website. For evaluation we use the same training and test data splits as in [6]. In Fig. 1 we show the results of all experiments. For some problems the aleatoric sensitivity is most prominent (Fig. 1(f), 1(g)), while others show predominantly epistemic sensitivity (Fig. 1(e), 1(h)), and the remaining ones show a mixture. This makes sense, because the dataset sizes vary (e.g. Boston Housing with 506 data points and 13 features, compared to Protein Structure with 45730 points and 9 features) and the datasets likely also differ in their heterogeneity.

In the power-plant example, the temperature and ambient-pressure features are the main sources of aleatoric uncertainty of the target, the net hourly electrical energy output. The data in this problem originates from a combined-cycle power plant consisting of gas and steam turbines. The provided features likely give only limited information about the energy output, which is subject to complex combustion processes. We can expect that a change in temperature and pressure will influence this process in a complex way, which can explain the high sensitivities we see. The task in the naval-propulsion-plant example, shown in Fig. 1(i), is to predict the compressor decay state coefficient of a gas turbine operated on a naval vessel. Here we see that two features, the compressor inlet air temperature and air pressure, have high epistemic sensitivity but do not influence the overall sensitivity much. This makes sense, because we only have a single value of both features in the complete dataset. The model has learned no influence of these features on the output (because they are constant), but any change away from that constant value makes the model highly uncertain.

5 Conclusion

In this paper we presented a new form of sensitivity analysis for predictive epistemic and aleatoric uncertainty. Experiments indicate that the method provides useful insights on real-world datasets.

References

Appendix A Model Selection

We perform model selection on the number of hidden units for a BNN with two hidden layers. Table 1 shows test log-likelihoods with standard errors on all UCI datasets. In this way we aim to reduce the effect of over- and underfitting: underfitting would show itself by high aleatoric and low epistemic uncertainty, whereas overfitting would result in high epistemic and low aleatoric uncertainty.

Dataset | Hidden units per layer (four candidate settings)
Power Plant | -2.76 ± 0.03 | -2.71 ± 0.01 | -2.69 ± 0.02 | -2.71 ± 0.03
Kin8nm | 1.23 ± 0.01 | 1.30 ± 0.01 | 1.30 ± 0.01 | 1.31 ± 0.01
Energy Efficiency | -0.79 ± 0.10 | -0.66 ± 0.07 | -0.79 ± 0.07 | -0.74 ± 0.06
Concrete Strength | -3.02 ± 0.05 | -2.99 ± 0.03 | -3.00 ± 0.01 | -3.01 ± 0.04
Protein Structure | -2.40 ± 0.01 | -2.32 ± 0.00 | -2.26 ± 0.01 | -2.24 ± 0.01
Wine Quality Red | 1.94 ± 0.11 | 1.95 ± 0.09 | 1.78 ± 0.13 | 1.62 ± 0.13
Boston Housing | -2.49 ± 0.12 | -2.39 ± 0.05 | -2.52 ± 0.03 | -2.66 ± 0.03
Naval Propulsion | 1.80 ± 0.04 | 1.74 ± 0.06 | 1.81 ± 0.05 | 1.86 ± 0.05
Table 1: Test log-likelihood (± standard error) on the UCI datasets for the four candidate numbers of hidden units per layer.