Extracting human-understandable knowledge from black-box machine learning methods is a highly relevant topic of research. One aspect of this is determining how sensitive the model response is to each input variable. This is useful both as a sanity check that the approximated function is reasonable and as a way to gain new insights about the problem at hand. For neural networks this kind of model inspection can be performed by a sensitivity analysis [1, 2], a simple method that considers the gradient of the network output with respect to the input variables.
Our key contribution is to transfer this idea to predictive uncertainty: which features impact the uncertainty in the predictions of our model? To that end we use Bayesian neural networks (BNNs) with latent variables [3, 4], a recently introduced probabilistic model that can describe complex stochastic patterns while accounting for model uncertainty. From their predictive distributions we can extract epistemic and aleatoric uncertainties [5, 4]. The former originates from our lack of knowledge of the model parameter values and is determined by the amount of available data, while aleatoric uncertainty consists of irreducible stochasticity originating from unobserved (latent) variables. By combining sensitivity analysis with a decomposition of predictive uncertainty into its epistemic and aleatoric components, we can analyze which features influence each type of uncertainty. The resulting sensitivities can provide useful insights into the model at hand. On the one hand, a feature with high epistemic sensitivity suggests that careful monitoring or safety mechanisms are required to keep the values of this feature in regions where the model is confident. On the other hand, a feature with high aleatoric sensitivity indicates a dependence of that feature on other unobserved (latent) variables.
2 Bayesian Neural Networks with Latent Variables
Bayesian neural networks (BNNs) are scalable and flexible probabilistic models. Given a training set $\mathcal{D} = \{\mathbf{x}_n, \mathbf{y}_n\}_{n=1}^N$, formed by feature vectors $\mathbf{x}_n \in \mathbb{R}^D$ and targets $\mathbf{y}_n \in \mathbb{R}^K$, we assume that $\mathbf{y}_n = f(\mathbf{x}_n, z_n; \mathcal{W}) + \boldsymbol{\epsilon}_n$, where $f(\cdot, \cdot; \mathcal{W})$ is the output of a neural network with weights $\mathcal{W}$ and $K$ output units. The network receives as input the feature vector $\mathbf{x}_n$ and the latent variable $z_n \sim \mathcal{N}(0, \gamma)$. We choose rectifiers, $\varphi(x) = \max(x, 0)$, as activation functions for the hidden layers and the identity function, $\varphi(x) = x$, for the output layer. The network output is corrupted by the additive noise variable $\boldsymbol{\epsilon}_n \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$ with diagonal covariance matrix $\boldsymbol{\Sigma}$. The role of the latent variable $z_n$ is to capture unobserved stochastic features that can affect the network's output in complex ways. The network has $L$ layers, with $V_l$ hidden units in layer $l$, and $\mathcal{W} = \{\mathbf{W}_l\}_{l=1}^L$ is the collection of $V_l \times (V_{l-1} + 1)$ weight matrices. The $+1$ is introduced here to account for the additional per-layer biases.
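To make the model concrete, the following is a minimal sketch (not the authors' code) of a single stochastic forward pass: the feature vector is concatenated with a latent sample, hidden layers apply rectifiers, the output layer is the identity, and each weight matrix carries the extra bias column described above. All sizes and weight values are illustrative assumptions.

```python
import numpy as np

def forward(x, z, weights):
    """One forward pass; weights is a list of arrays of shape (V_l, V_{l-1} + 1)."""
    h = np.concatenate([x, z])           # network input: features plus latent variable
    for i, W in enumerate(weights):
        h = W @ np.append(h, 1.0)        # appended 1 makes the last column act as a bias
        if i < len(weights) - 1:         # rectifier on hidden layers only
            h = np.maximum(h, 0.0)
    return h                             # identity output layer

rng = np.random.default_rng(0)
D, K, V = 3, 1, 10                       # illustrative sizes (assumptions)
weights = [rng.normal(size=(V, D + 1 + 1)),   # input: D features + 1 latent + bias
           rng.normal(size=(V, V + 1)),
           rng.normal(size=(K, V + 1))]
y = forward(np.ones(D), rng.normal(size=1), weights)
```

Repeating the pass with fresh draws of the latent variable yields different outputs for the same features, which is exactly how the model expresses stochasticity.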
We approximate the exact posterior distribution $p(\mathcal{W}, \mathbf{z} \mid \mathcal{D})$ with the factorized Gaussian approximation
$$q(\mathcal{W}, \mathbf{z}) = \Bigg[\prod_{l=1}^{L} \prod_{i=1}^{V_l} \prod_{j=1}^{V_{l-1}+1} \mathcal{N}(w_{ij,l} \mid m^{w}_{ij,l}, v^{w}_{ij,l})\Bigg] \Bigg[\prod_{n=1}^{N} \mathcal{N}(z_n \mid m^{z}_n, v^{z}_n)\Bigg]\,. \quad (1)$$
The parameters $m^{w}_{ij,l}, v^{w}_{ij,l}$ and $m^{z}_n, v^{z}_n$ are determined by minimizing a divergence between $p(\mathcal{W}, \mathbf{z} \mid \mathcal{D})$ and the approximation $q$. For more detail the reader is referred to the work of [3, 6]. In our experiments we use black-box $\alpha$-divergence minimization with $\alpha = 1.0$.
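Because the approximation factorizes into independent Gaussians, sampling from it is straightforward. The sketch below (with illustrative shapes and variational parameters, not fitted ones) draws one weight sample per layer via the usual location-scale transform.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_weights(means, variances):
    """Draw one weight sample per layer from a factorized Gaussian q(W).

    means/variances: lists of arrays, one pair (m, v) per layer.
    """
    return [m + np.sqrt(v) * rng.normal(size=m.shape)
            for m, v in zip(means, variances)]

# illustrative variational parameters for a tiny two-layer network
means = [np.zeros((10, 5)), np.zeros((1, 11))]
variances = [np.full((10, 5), 0.1), np.full((1, 11), 0.1)]
W_sample = sample_weights(means, variances)
```

Each call produces a fresh plausible network; running many such networks on the same input is the basis of the Monte Carlo estimates used later.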
2.1 Uncertainty Decomposition
BNNs with latent variables can describe complex stochastic patterns while at the same time accounting for model uncertainty. They achieve this by jointly learning $q(\mathbf{z})$, which captures the specific values of the latent variables in the training data, and $q(\mathcal{W})$, which represents the uncertainty about the model parameters. The result is a principled Bayesian approach for learning flexible stochastic functions.
For these models, we can identify two types of uncertainty: aleatoric and epistemic [7, 5]. Aleatoric uncertainty originates from the random latent variables, whose randomness cannot be reduced by collecting more data. In the BNN this is given by $p(z)$ (and the constant additive Gaussian noise $\boldsymbol{\epsilon}$, which we omit). Epistemic uncertainty, on the other hand, originates from a lack of statistical evidence and can be reduced by gathering more data. In the BNN this is given by $q(\mathcal{W})$, which captures uncertainty over the model parameters. These two forms of uncertainty are entangled in the approximate predictive distribution for a test input $\mathbf{x}^\star$:
$$p(\mathbf{y}^\star \mid \mathbf{x}^\star) = \int p(\mathbf{y}^\star \mid \mathcal{W}, \mathbf{x}^\star, z^\star)\, p(z^\star)\, q(\mathcal{W})\, \mathrm{d}z^\star\, \mathrm{d}\mathcal{W}\,, \quad (2)$$
where $p(\mathbf{y}^\star \mid \mathcal{W}, \mathbf{x}^\star, z^\star)$ is the likelihood function of the BNN and $p(z^\star) = \mathcal{N}(0, \gamma)$ is the prior on the latent variables.
We can use the variance $\sigma^2(y^\star_k \mid \mathbf{x}^\star)$ as a measure of predictive uncertainty for the $k$-th component of $\mathbf{y}^\star$. The variance can be decomposed into an epistemic and an aleatoric term using the law of total variance:
$$\sigma^2(y^\star_k \mid \mathbf{x}^\star) = \mathrm{Var}_{q(\mathcal{W})}\big(\mathbb{E}_{p(z^\star)}[y^\star_k \mid \mathcal{W}]\big) + \mathbb{E}_{q(\mathcal{W})}\big(\mathrm{Var}_{p(z^\star)}[y^\star_k \mid \mathcal{W}]\big)\,. \quad (3)$$
The first term, $\mathrm{Var}_{q(\mathcal{W})}(\mathbb{E}_{p(z^\star)}[y^\star_k \mid \mathcal{W}])$, is the variability of $y^\star_k$ when we integrate out $z^\star$ but not $\mathcal{W}$. Because $q(\mathcal{W})$ represents our belief over the model parameters, this is a measure of the epistemic uncertainty. The second term, $\mathbb{E}_{q(\mathcal{W})}(\mathrm{Var}_{p(z^\star)}[y^\star_k \mid \mathcal{W}])$, represents the average variability of $y^\star_k$ not originating from the distribution over model parameters. This measures the aleatoric uncertainty, as this variability can only come from the latent variable $z^\star$.
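The law-of-total-variance decomposition above can be checked numerically. The snippet below is a toy illustration (not the paper's model): an ensemble of slopes stands in for samples from $q(\mathcal{W})$ and Gaussian draws stand in for $p(z^\star)$; with an equal number of $z$-samples per weight sample, the empirical between-group and within-group variances sum exactly to the total variance.

```python
import numpy as np

rng = np.random.default_rng(1)
slopes = rng.normal(1.0, 0.3, size=50)        # crude stand-in for samples W ~ q(W)
x_star = 2.0

def predict(w, z):
    return w * x_star + 0.5 * z               # hypothetical likelihood mean

z = rng.normal(size=(50, 20000))              # z ~ N(0, 1), N_z draws per weight sample
y = predict(slopes[:, None], z)               # all predictions, shape (N_w, N_z)

per_w_mean = y.mean(axis=1)                   # E_{p(z)}[y | W] for each W sample
per_w_var = y.var(axis=1)                     # Var_{p(z)}[y | W] for each W sample
epistemic = per_w_mean.var()                  # Var_{q(W)} E_{p(z)}[y | W]
aleatoric = per_w_var.mean()                  # E_{q(W)} Var_{p(z)}[y | W]
total = y.var()                               # total predictive variance
```

With population variances (NumPy's default `ddof=0`) and equal group sizes, `epistemic + aleatoric` equals `total` up to floating-point error.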
3 Sensitivity Analysis of Predictive Uncertainty
In this section we extend the method of sensitivity analysis toward predictive uncertainty. The goal is to provide insight into the question of which features affect the stochasticity of our model, which results in aleatoric uncertainty, and which features impact its epistemic uncertainty. For instance, if we have limited data for different settings of a particular feature, even a small change in its value can have a large effect on the confidence of the model.
Answers to these two questions can provide useful insights about the model at hand. For instance, a feature with high aleatoric sensitivity indicates a strong interaction with other unobserved/latent features. If a practitioner can expand the set of features by taking more refined measurements, it may be advisable to look into variables that may exhibit dependence with that feature and that may explain the stochasticity in the data. Furthermore, a feature with high epistemic sensitivity suggests that careful monitoring or extended safety mechanisms are required to keep the values of this feature in regions where the model is confident.
We start by briefly reviewing the technique of sensitivity analysis [1, 2], a simple method that provides insight into how changes in the input affect the network's prediction. Let $\mathbf{y} = f(\mathbf{x}; \mathcal{W})$ be a neural network fitted on a training set $\mathcal{D} = \{\mathbf{x}_n, \mathbf{y}_n\}_{n=1}^N$, formed by feature vectors $\mathbf{x}_n$ and targets $\mathbf{y}_n$. We want to understand how each feature $i$ influences the output dimension $d$. Given some test data point $\mathbf{x}^\star$, we use the partial derivative of the output dimension $d$ w.r.t. feature $i$:
$$s_{d,i} = \frac{\partial y_d}{\partial x_i}\,. \quad (4)$$
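This partial derivative is usually obtained by automatic differentiation in any deep-learning framework; the sketch below instead uses central finite differences on a small analytic stand-in for a trained network, which is enough to illustrate the quantity being computed.

```python
import numpy as np

def net(x):
    # hypothetical stand-in for a trained network f(x; W) with one output
    return np.array([np.tanh(x[0]) + 0.1 * x[1] ** 2])

def sensitivity(f, x, d=0, eps=1e-5):
    """Approximate s_{d,i} = dy_d / dx_i by central finite differences."""
    s = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        s[i] = (f(x + e)[d] - f(x - e)[d]) / (2 * eps)
    return s

s = sensitivity(net, np.array([0.0, 1.0]))
# at x = (0, 1): dy/dx0 = 1 - tanh(0)^2 = 1.0 and dy/dx1 = 0.2 * x1 = 0.2
```

In practice one evaluates these sensitivities at many test points and averages their magnitudes per feature.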
In Section 2.1 we saw that we can decompose the variance of the predictive distribution of a BNN with latent variables into its epistemic and aleatoric components. Our goal is to obtain sensitivities of these components with respect to the input variables. For this we use a sampling-based approach to approximate the two uncertainty components and then calculate the partial derivatives of these w.r.t. the input variables. For each test data point $\mathbf{x}^\star$, we perform $N_w \cdot N_z$ forward passes through the BNN. We first sample $\mathcal{W} \sim q(\mathcal{W})$ a total of $N_w$ times and then, for each of these samples of $\mathcal{W}$, perform $N_z$ forward passes in which $\mathcal{W}$ is fixed and we only sample the latent variable $z^\star \sim \mathcal{N}(0, \gamma)$. Then we can compute empirical estimates of the expected predictive value and of the two components on the right-hand side of Eq. (3):
$$\widehat{\mathbb{E}}[y^\star_k \mid \mathbf{x}^\star] = \frac{1}{N_w N_z} \sum_{i=1}^{N_w} \sum_{j=1}^{N_z} f_k(\mathbf{x}^\star, z_j; \mathcal{W}_i)\,, \quad (5)$$
$$\hat{\sigma}_e(y^\star_k \mid \mathbf{x}^\star) = \sqrt{\widehat{\mathrm{Var}}_{q(\mathcal{W})}\big(\widehat{\mathbb{E}}_{p(z^\star)}[y^\star_k \mid \mathcal{W}]\big)}\,, \quad (6)$$
$$\hat{\sigma}_a(y^\star_k \mid \mathbf{x}^\star) = \sqrt{\widehat{\mathbb{E}}_{q(\mathcal{W})}\big(\widehat{\mathrm{Var}}_{p(z^\star)}[y^\star_k \mid \mathcal{W}]\big)}\,, \quad (7)$$
where $\widehat{\mathrm{Var}}_{q(\mathcal{W})}$ ($\widehat{\mathrm{Var}}_{p(z^\star)}$) is an empirical estimate of the variance over $N_w$ ($N_z$) samples of $\mathcal{W}$ ($z^\star$). We have used the square root of each component so that all terms share the same unit as $y^\star_k$. Now we can calculate the sensitivities:
$$\frac{\partial \widehat{\mathbb{E}}[y^\star_k \mid \mathbf{x}^\star]}{\partial x_i}\,, \quad (8) \qquad \frac{\partial \hat{\sigma}_e(y^\star_k \mid \mathbf{x}^\star)}{\partial x_i}\,, \quad (9) \qquad \frac{\partial \hat{\sigma}_a(y^\star_k \mid \mathbf{x}^\star)}{\partial x_i}\,, \quad (10)$$
where Eq. (8) is the standard sensitivity term. We also note that the general drawbacks of sensitivity analysis, such as considering every variable in isolation, arise from its simplicity. These drawbacks also apply when focusing on the uncertainty components.
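Putting the pieces together, the following sketch mimics the full procedure on a synthetic stochastic model (all model details are hypothetical placeholders for a trained BNN): draw $N_w$ weight samples and $N_z$ latent samples, form the square-root epistemic and aleatoric estimates described above, and differentiate them w.r.t. the inputs by finite differences with common random numbers.

```python
import numpy as np

rng = np.random.default_rng(2)
N_w, N_z = 30, 200
W = rng.normal(1.0, 0.2, size=N_w)           # stand-in for N_w samples W ~ q(W)
Z = rng.normal(size=(N_w, N_z))              # latent draws, fixed (common random numbers)

def predictive_samples(x):
    # toy model: mean depends on x[0] and W, noise level grows with x[1]
    return W[:, None] * np.sin(x[0]) + (0.1 + x[1] ** 2) * Z

def uncertainties(x):
    y = predictive_samples(x)                          # shape (N_w, N_z)
    epistemic = np.sqrt(y.mean(axis=1).var())          # sqrt Var_W E_z[y | W]
    aleatoric = np.sqrt(y.var(axis=1).mean())          # sqrt E_W Var_z[y | W]
    return epistemic, aleatoric

def uncertainty_sensitivities(x, eps=1e-4):
    grads = []
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        ep_hi, al_hi = uncertainties(x + e)
        ep_lo, al_lo = uncertainties(x - e)
        grads.append(((ep_hi - ep_lo) / (2 * eps),     # epistemic sensitivity
                      (al_hi - al_lo) / (2 * eps)))    # aleatoric sensitivity
    return grads

g = uncertainty_sensitivities(np.array([0.5, 1.0]))
```

Because only `x[1]` controls the noise scale in this toy model, the aleatoric sensitivity concentrates on the second feature, while the epistemic sensitivity responds to the first.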
[Figure 1: sensitivity analysis results. Error bars indicate standard errors over experiment repetitions.]
4 Experiments
In this section we conduct an exploratory study. We first use an artificial toy dataset and then 8 datasets from the UCI repository [8] of varying domains and dataset sizes. For all experiments, we use a BNN with 2 hidden layers. We first perform model selection on the number of hidden units per layer on the available data; details can be found in Appendix A. We train with Adam as optimizer. For the sensitivity analysis we draw $N_w$ samples of $\mathcal{W}$ from $q(\mathcal{W})$ and $N_z$ samples of $z^\star$ from its prior. All experiments were repeated multiple times and we report average results.
4.1 Toy Data
We consider a regression task for a stochastic function with heteroskedastic noise: $y = 7\sin(x_1) + 3\,|\cos(x_2/2)|\,\epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$. The first input variable $x_1$ is responsible for the shape of the function, whereas the second variable $x_2$ determines the noise level. We sample a set of data points from this generative process. Fig. 1(a) shows the sensitivities. The first variable $x_1$ is responsible for the epistemic uncertainty, whereas $x_2$ is responsible for the aleatoric uncertainty, which corresponds with the generative model for the data.
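As a quick empirical check of such a heteroskedastic generator, in which $x_1$ shapes the mean and $x_2$ the noise level, the snippet below verifies that the conditional noise level is controlled by $x_2$ alone. The specific mean and noise functions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_y(x1, x2, n):
    # assumed toy generator: x1 shapes the mean, x2 scales the noise
    return 7 * np.sin(x1) + 3 * np.abs(np.cos(x2 / 2)) * rng.normal(size=n)

# the conditional noise level changes with x2 ...
low = sample_y(0.0, np.pi, 20000).std()    # |cos(pi/2)| ~ 0 -> near-zero noise
high = sample_y(0.0, 0.0, 20000).std()     # |cos(0)| = 1  -> std close to 3
# ... but not with x1, which only shifts the mean
shifted = sample_y(2.0, 0.0, 20000).std()
```

A sensitivity analysis of the aleatoric component should therefore flag $x_2$ and ignore $x_1$, which is what the experiment confirms.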
4.2 UCI Datasets
We consider several real-world regression datasets from the UCI data repository [8]. Detailed descriptions, including feature and target explanations, can be found on the respective website. For evaluation we use the same training and test data splits as in previous work. In Fig. 1 we show the results of all experiments. For some problems the aleatoric sensitivity is most prominent (Fig. 1(f), 1(g)), while others show predominantly epistemic sensitivity (Fig. 1(e), 1(h)), and the rest exhibit a mixture. This makes sense, because the dataset sizes vary (e.g. Boston Housing with 506 data points and 13 features, compared to Protein Structure with 45730 points and 9 features) and the datasets likely also differ in their heterogeneity.
In the power-plant example, the features temperature and ambient pressure are the main sources of aleatoric uncertainty of the target, the net hourly electrical energy output. The data in this problem originates from a combined-cycle power plant consisting of gas and steam turbines. The provided features likely give only limited information about the energy output, which is subject to complex combustion processes. We can expect that a change in temperature and pressure will influence this process in a complex way, which can explain the high sensitivities we see. The task in the naval-propulsion-plant example, shown in Fig. 1(i), is to predict the compressor decay state coefficient of a gas turbine operated on a naval vessel. Here we see that two features, the compressor inlet air temperature and the air pressure, have high epistemic sensitivity but do not influence the overall sensitivity much. This makes sense, because we only observe a single value of both features in the complete dataset. The model has learned no influence of these features on the output (because they are constant), but any change away from these constant values will make the system highly uncertain.
5 Conclusion
In this paper we presented a new method of sensitivity analysis for predictive epistemic and aleatoric uncertainty. Experiments indicate that this method yields useful insights on real-world datasets.
-  Li Fu and Tinghuai Chen. Sensitivity analysis for input vector in multilayer feedforward neural networks. In IEEE International Conference on Neural Networks, pages 215–218. IEEE, 1993.
-  Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 2017.
-  Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, and Steffen Udluft. Learning and policy search in stochastic dynamical systems with bayesian neural networks. arXiv preprint arXiv:1605.07127, 2016.
-  Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, and Steffen Udluft. Decomposition of uncertainty for active learning and reliable reinforcement learning in stochastic systems. arXiv preprint arXiv:1710.07283, 2017.
-  Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? arXiv preprint arXiv:1703.04977, 2017.
-  José Miguel Hernández-Lobato, Yingzhen Li, Mark Rowland, Daniel Hernández-Lobato, Thang Bui, and Richard E Turner. Black-box α-divergence minimization. In Proceedings of The 33rd International Conference on Machine Learning (ICML), 2016.
-  Armen Der Kiureghian and Ove Ditlevsen. Aleatory or epistemic? Does it matter? Structural Safety, 31(2):105–112, 2009.
-  Moshe Lichman. UCI machine learning repository. http://archive.ics.uci.edu/ml, 2013.
Appendix A Model Selection
We perform model selection on the number of hidden units for a BNN with 2 hidden layers. Table 1 shows test log-likelihoods with standard errors on all UCI datasets. In this way we aim to reduce the effects of over- and underfitting: underfitting would show itself as high aleatoric and low epistemic uncertainty, whereas overfitting would result in high epistemic and low aleatoric uncertainty.
|Dataset|Hidden units per layer||||
|Wine Quality Red|1.94 ± 0.11|1.95 ± 0.09|1.78 ± 0.13|1.62 ± 0.13|