PHiSeg: Capturing Uncertainty in Medical Image Segmentation

06/07/2019, by Christian F. Baumgartner, et al.

Segmentation of anatomical structures and pathologies is inherently ambiguous. For instance, structure borders may not be clearly visible or different experts may have different styles of annotating. The majority of current state-of-the-art methods do not account for such ambiguities but rather learn a single mapping from image to segmentation. In this work, we propose a novel method to model the conditional probability distribution of the segmentations given an input image. We derive a hierarchical probabilistic model, in which separate latent spaces are responsible for modelling the segmentation at different resolutions. Inference in this model can be efficiently performed using the variational autoencoder framework. We show that our proposed method can be used to generate significantly more realistic and diverse segmentation samples compared to recent related work, both when trained with annotations from a single annotator and when trained with annotations from multiple annotators.




Code Repositories


Tensorflow Code for "PHiSeg: Capturing Uncertainty in Medical Image Segmentation", Proc. MICCAI 2019


1 Introduction

Semantic segmentation of anatomical structures and pathologies is a crucial step in clinical diagnosis and many downstream tasks. The majority of recent automated segmentation methods treat the problem as a one-to-one mapping from image to output mask (e.g.  [6]). However, medical segmentation problems are often characterised by ambiguities and multiple hypotheses may be plausible [10]. This is in part due to inherent uncertainties such as poor contrast or other restrictions imposed by the image acquisition, but also due to variations in annotation “styles” between different experts. To account for such ambiguities it is crucial that prediction systems provide access to the full distribution of plausible outcomes without sacrificing accuracy. Predicting only the most likely hypothesis may lead to misdiagnosis and may negatively affect downstream tasks.

Recent work proposed to account for the uncertainty in the learned model parameters using an approximate Bayesian inference over the network weights [2]. However, it was shown that this method may produce samples that vary pixel by pixel and thus may not capture complex correlation structures in the distribution of segmentations [4]. A different line of work accounts for the possibility of different outcomes by training an ensemble of networks [5] or by training a single network with M heads [7]. Both approaches, however, can only produce a fixed number of hypotheses. This problem is overcome by the conditional variational autoencoder (cVAE), an extension of [3] for modelling conditional segmentation masks given an input image [8]. Finally, the recently proposed probabilistic U-NET combines the cVAE framework with a U-NET architecture [4]. The authors showed that, given ground-truth annotations from multiple experts, the method can produce an unlimited number of realistic segmentation samples. Moreover, the method was shown to outperform various related methods including network ensembles, M-heads [7] and the Bayesian SegNet [2].

Figure 1: (Left) Example of hierarchical segmentation generation, with the segmentation output at each latent level shown in the top row (right to left) and the corresponding residual refinements of the prostate peripheral zone class in the bottom row. (Right) Corresponding graphical model (for independent samples).

However, as we will show, the probabilistic U-NET produces samples with limited diversity. We believe this may be because stochasticity is only introduced at the highest resolution level of the U-NET, and because the network can choose to ignore the random draws from the latent space, since they are only concatenated to the feature channels. In this work, we propose a novel hierarchical probabilistic model which can produce segmentation samples closely matching the ground-truth distribution of a number of annotators. Inspired by Laplacian pyramids, the model generates image-conditional segmentation samples by generating the output at a low resolution and then continuously refining the distribution of segmentations at increasingly higher resolutions. In contrast to prior work, the variations at each resolution level are governed by a separate latent variable, thereby avoiding the problems mentioned above. This process is illustrated in Fig. 1. We show that, compared to recent work, our proposed Probabilistic Hierarchical Segmentation (PHiSeg) produces samples of significantly better quality for two challenging segmentation tasks, both when trained with multiple annotations and when trained with a single annotation per image. Furthermore, the mean prediction of our model performs on par with the standard U-NET in terms of segmentation accuracy.

2 Methods

We start by assuming that the segmentations s given an input image x are generated from L levels of latent variables z_1, …, z_L according to the graphical model shown in Fig. 1. Thus, the conditional distribution p(s|x) is given by the following expression for the general case of L latent levels:

    p(s|x) = ∫ p(s|z_1, …, z_L) p(z_1|z_2, x) ⋯ p(z_{L−1}|z_L, x) p(z_L|x) dz_1 ⋯ dz_L.    (1)

We further assume that each latent variable z_ℓ is responsible for modelling the conditional target segmentations at 2^{−(ℓ−1)} of the original image resolution (e.g. z_1 and z_3 model the segmentation at the original resolution and at 1/4 of the original resolution, respectively). This does not result from the graphical model itself but is rather enforced by our implementation thereof, as will become clear shortly.
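The factorisation in Eq. 1 amounts to ancestral sampling: draw z_L from p(z_L|x), then each z_ℓ from p(z_ℓ|z_{ℓ+1}, x), and finally decode through the likelihood. The following is a minimal NumPy sketch of that process; the function `prior_mu_sigma`, the per-level dimensions, and the sigmoid "decoder" are toy stand-ins for the paper's convolutional networks, not the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 5  # number of latent levels

def prior_mu_sigma(z_next, x, level):
    """Toy stand-in for the prior network at one level: maps (z_{l+1}, x)
    to the mean and std of p(z_l | z_{l+1}, x). A real model uses conv layers."""
    if z_next is None:                  # top level: p(z_L | x)
        h = x.mean()
    else:
        h = z_next.mean() + x.mean()
    dim = 2 ** (L - level)              # coarser levels get fewer dims (illustrative)
    return np.full(dim, h), np.full(dim, 0.5)

def sample_segmentation(x):
    """Ancestral sampling z_L -> z_{L-1} -> ... -> z_1, then decode,
    mirroring p(s|x) = ∫ p(s|z_1..z_L) p(z_1|z_2,x)...p(z_L|x) dz."""
    z, zs = None, []
    for level in range(L, 0, -1):       # from coarsest to finest
        mu, sigma = prior_mu_sigma(z, x, level)
        z = mu + sigma * rng.standard_normal(mu.shape)
        zs.append(z)
    logits = zs[-1]                     # toy likelihood: finest latent as logits
    return 1.0 / (1.0 + np.exp(-logits))

x = rng.standard_normal((8, 8))
s = sample_segmentation(x)
print(s.shape)  # (16,)
```

Calling `sample_segmentation(x)` repeatedly yields different samples from the same image, which is the behaviour the model is after.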

We aim to approximate the posterior distribution p(z|s, x) using a variational approximation q(z|s, x), where we use z to denote {z_1, …, z_L}. It can be shown that

    log p(s|x) = L(s|x) + KL(q(z|s, x) ‖ p(z|s, x)),

where L denotes the evidence lower bound and KL the Kullback-Leibler divergence [3, 8, 4]. Since KL(q ‖ p) ≥ 0, L(s|x) is a lower bound on the conditional log probability, with equality when the approximation matches the posterior exactly. Using the decomposition in Eq. 1 we find that for our model

    L(s|x) = E_{q(z_1,…,z_L|s,x)}[log p(s|z_1, …, z_L)]
             − α_L KL(q(z_L|s, x) ‖ p(z_L|x))
             − Σ_{ℓ=1}^{L−1} α_ℓ E_{q(z_{ℓ+1}|s,x)}[KL(q(z_ℓ|z_{ℓ+1}, s, x) ‖ p(z_ℓ|z_{ℓ+1}, x))],    (2)

where setting all α_ℓ = 1 recovers the exact lower bound. A complete derivation can be found in Appendix 0.A. The α_ℓ are additional weighting variables which we introduced to help account for dimensionality differences between the z_ℓ (explained below). Following standard practice we parametrise the prior and posterior distributions as axis-aligned normal distributions. Specifically, we define

    p(z_ℓ|z_{ℓ+1}, x) = N(z_ℓ | μ_ℓ^{(p)}(z_{ℓ+1}, x), σ_ℓ^{(p)}(z_{ℓ+1}, x)),
    q(z_ℓ|z_{ℓ+1}, s, x) = N(z_ℓ | μ_ℓ^{(q)}(z_{ℓ+1}, s, x), σ_ℓ^{(q)}(z_{ℓ+1}, s, x)),

where the μ_ℓ^{(·)}, σ_ℓ^{(·)} are functions parametrised by neural networks. Note that, in contrast to the variational autoencoder [3], the priors p(z_ℓ|z_{ℓ+1}, x) are also parametrised by neural networks, similar to [4, 8]. Lastly, we model p(s|z_1, …, z_L) as the usual categorical distribution with parameters (i.e. softmax probabilities) predicted by another neural network. By parametrising all distributions using neural networks, this can be seen as a hierarchical conditional variational autoencoder with the posteriors and priors encoding s and x into latent representations z_ℓ, and the likelihood acting as the decoder. Our implementation of this model using neural networks is shown in Fig. 2. In that figure it can be seen that the total number of resolution levels of the network (i.e. the number of downsampling steps plus one) can be larger than the number of latent levels L. We use 7 resolution levels and L = 5 latent levels for most experiments. The prior and posterior nets have an identical structure but do not share any weights. Similar to previous work, all three subnetworks are used for training, but testing is performed using only the prior and the likelihood networks [3, 8, 4].
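Because both the prior and the posterior at every level are axis-aligned (diagonal) Gaussians, each KL term in Eq. 2 has a closed form. A small sketch of that computation — the function name and the test values below are our own:

```python
import numpy as np

def kl_diag_gauss(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ),
    summed over dimensions. This is the closed form available for
    every per-level KL term when both distributions are diagonal Gaussians."""
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

mu = np.array([0.3, -1.2])
sig = np.array([0.8, 1.5])
print(kl_diag_gauss(mu, sig, mu, sig))            # identical distributions -> 0.0
print(kl_diag_gauss(mu + 1.0, sig, mu, sig) > 0)  # shifting the mean -> positive KL
```

In training, this analytic form replaces a Monte-Carlo estimate of the KL terms, which reduces gradient variance.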

From Fig. 2 it can be seen that the latent variables z_ℓ form the skip connections in a U-NET-like architecture. However, unlike [6] and [4], each skip connection corresponds to a latent variable z_ℓ, such that no information can flow from the image to the segmentation output without passing through a sampling step. We do not map the latent variables to a 1-D vector but rather choose to keep the structured spatial relationship between the variables. We found that this substantially improves segmentation accuracy. As a result, latent variable z_ℓ has a dimensionality of w_ℓ × h_ℓ × D, where D is a hyper-parameter held fixed for all experiments, and w_ℓ, h_ℓ are the dimensions of the input images downsampled to the resolution of level ℓ. The bottom latent variable z_L is limited to modelling the data at 2^{−(L−1)} of the original resolution due to the downsampling operations before it. It then passes the learned representation up to the latent space embedding above (z_{L−1}) to perform a refinement at double the resolution. This continues until the top level is reached. To further enforce this behaviour, the likelihood network is designed to generate only residual changes of the segmentation masks for all levels except the bottom one. This is achieved through the addition layers before the outputs (see Fig. 2). Our model bears some resemblance to the Ladder Network [9], which is also a hierarchical latent variable model where inference results in an autoencoder with skip connections. Our work differs substantially from that work in how inference is performed. Furthermore, to our knowledge, the Ladder Network was never applied to structured prediction problems.
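The coarse-to-fine residual decoding described above can be sketched as follows: the coarsest level predicts logits directly, and every finer level adds a residual on top of a nearest-neighbour upsampling of the output below, mirroring the addition layers in Fig. 2. Shapes and values in this toy NumPy version are illustrative.

```python
import numpy as np

def nn_upsample2x(a):
    """Nearest-neighbour 2x upsampling of a 2-D array."""
    return np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)

def decode(latent_logits):
    """latent_logits: list of per-level logit maps, coarsest first, each at
    double the resolution of the previous one. The coarsest level predicts
    the segmentation directly; every level above contributes a residual
    refinement on top of the upsampled result."""
    out = latent_logits[0]
    for residual in latent_logits[1:]:
        out = nn_upsample2x(out) + residual
    return out

coarse = np.zeros((4, 4))                 # 1/4 resolution
mid = np.zeros((8, 8)); mid[2:6, 2:6] = 1.0
fine = np.zeros((16, 16)); fine[0, 0] = -2.0
logits = decode([coarse, mid, fine])
print(logits.shape)  # (16, 16)
```

This structure is what forces each z_ℓ to explain only the detail that the coarser levels could not.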

Figure 2: Schematic network architecture of the proposed method, shown for L latent levels and 4 resolution levels.

Training and Predictions: We aim to find the neural network parameters which maximise the lower bound L(s|x) in Eq. 2. The analytical form of the individual terms is prescribed by our model assumptions: since the posterior and prior are both modelled by normal distributions, the KL terms can be calculated analytically [3]. Our choice of likelihood results in a cross entropy term CE(ŝ, s), with ŝ the predicted segmentation and s the corresponding ground truth. Similar to previous work, we found that it is sufficient to evaluate all of the expectations using a single sample [3]. Two deviations from the above theory were necessary for stable training. First, the magnitude of the KL terms depends on the dimensionality of z_ℓ. However, since the dimensionality of z_ℓ in our model grows with the resolution of the level, this led to optimisation problems. To counteract this, we heuristically set the weights α_ℓ in Eq. 2 to rebalance the KL terms across levels. Secondly, to enforce the desired behaviour that each z_ℓ should only model the data at its corresponding resolution, we added deep supervision to the output of each resolution level (ŝ_ℓ in Fig. 2). The cost function used for this is again the cross entropy loss, CE(up(ŝ_ℓ), s) for ℓ = 1, …, L, where up denotes a nearest neighbour upsampling of ŝ_ℓ to match the size of s.
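The deep-supervision term can be sketched as follows, assuming binary masks and nearest-neighbour upsampling of each level's prediction to the ground-truth size; the helper names and the uniform 0.5 toy predictions are our own, not the paper's implementation.

```python
import numpy as np

def nn_upsample(a, factor):
    """Nearest-neighbour upsampling of a 2-D array by an integer factor."""
    return np.repeat(np.repeat(a, factor, axis=0), factor, axis=1)

def pixel_ce(p, y, eps=1e-7):
    """Binary cross entropy: p = predicted foreground probability, y = {0,1} mask."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def deep_supervision_loss(preds, y):
    """preds: list of per-level probability maps, finest first, each at half
    the resolution of the previous. Each map is upsampled to the size of the
    ground truth y before the cross entropy is evaluated."""
    total = 0.0
    for l, p in enumerate(preds):
        total += pixel_ce(nn_upsample(p, 2 ** l), y)
    return total

y = np.zeros((8, 8)); y[2:6, 2:6] = 1.0
preds = [np.full((8, 8), 0.5), np.full((4, 4), 0.5), np.full((2, 2), 0.5)]
loss = deep_supervision_loss(preds, y)
print(round(loss, 4))  # uniform 0.5 predictions give ln(2) per level: 3*ln(2) ≈ 2.0794
```

In the full objective this sum is added to the (weighted) ELBO of Eq. 2.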

We trained the model using the Adam optimiser with a batch-size of 12. We used batch normalisation on all non-output layers. All models were trained for 48 hours on an NVIDIA Titan Xp GPU, and the model with the lowest total loss on a held-out validation set was selected. Our implementation of the methods proposed in this paper is publicly available (see the code repository linked above).

After the model is trained, segmentation samples for an input image can be generated by first obtaining samples using the prior network and then decoding them using the likelihood network.

3 Experiments and Results

We evaluated our method on two datasets: 1) the publicly available LIDC-IDRI dataset, which comprises 1018 thoracic CT images with lesions annotated by 4 radiologists [1]. Similar to [4], we extracted square 2D patches such that each patch was centred on a lesion. 2) We also evaluated our method on an in-house prostate MR dataset of 68 patients acquired with a transverse T2-weighted sequence. The transition and peripheral zones were manually annotated by 4 radiologists and 2 non-radiologists. We processed the data slice-by-slice (approx. 25 slices per volume), resampling each slice to a common in-plane resolution and taking a central square crop. We divided both datasets into training, testing and validation sets using a random 60-20-20 split.
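The central-crop step of the slice-wise preprocessing can be sketched in a few lines; the crop size and array contents below are illustrative, not the datasets' actual dimensions.

```python
import numpy as np

def center_crop(img, size):
    """Central crop of a 2-D slice to (size, size); assumes the slice is
    at least that large in both dimensions."""
    h, w = img.shape
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

slice_2d = np.arange(10 * 12).reshape(10, 12)
crop = center_crop(slice_2d, 8)
print(crop.shape)  # (8, 8)
```

In practice this is applied after resampling each slice to a common in-plane resolution, so all crops cover the same physical extent.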

For all experiments we compared our method (PHiSeg) with L = 5 latent levels and a total of 7 resolution levels to the probabilistic U-NET [4]. In order to exclude network capacity as an explanation for performance differences, we aimed to model our network components as closely as possible after the probabilistic U-NET. We used batch normalisation layers for both methods, which deviates from [4] but did not affect the results negatively. Furthermore, to demonstrate that modelling the segmentation problem at multiple resolution levels is beneficial, we also compared against a variation of PHiSeg with only L = 1 latent level (i.e. no skip connections or latent space hierarchy). Lastly, for some experiments we compared to a deterministic U-NET using the same architecture as the probabilistic U-NET but with no stochastic components.

We evaluated the techniques in two experiments. First, we trained the methods using the masks from all available annotators, where in each batch we randomly sampled one annotation per image. We were interested in assessing how closely the distribution of generated samples matched the distribution of ground-truth annotations. To this end, we used the generalised energy distance

    D²_GED(p_gt, p_s) = 2 E[d(s, y)] − E[d(s, s′)] − E[d(y, y′)],

where d is 1 minus the intersection over union, i.e. d(s, y) = 1 − IoU(s, y), s and s′ are samples from the learned distribution p_s, and y and y′ are samples from the ground-truth distribution p_gt [4]. The GED reduces the sample quality to a single, easy-to-understand number but, as a consequence, cannot be interpreted visually. Therefore, we additionally aimed to produce pixel-wise maps showing the variability among the segmentation samples. We found the expected cross entropy between the mean segmentation mask and the samples to be a good measure, i.e.

    γ(i) = E_{s∼p_s}[CE(s̄_i, s_i)],

with i the pixel position and s̄ the mean prediction. γ is statistically similar to the variance, with the L2-distance replaced by the cross entropy. However, we believe it is more suitable for measuring segmentation variability. Examples of our γ-maps along with sample segmentations are shown in Fig. 3. We quantify how well the γ-maps for each method predict regions with large uncertainty using the average normalised cross correlation (NCC) between the γ-maps and the cross entropy error maps obtained with respect to each annotator:

    S_NCC = E_{y∼p_gt}[NCC(γ, CE(s̄, y))].

Results for both D²_GED and S_NCC are shown in the top part of Tab. 1. All measures were evaluated with 100 samples drawn from the learned models.
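Both evaluation measures are straightforward to implement. The NumPy sketch below handles the binary-mask case (the multi-class version averages the IoU-based distance over classes); the toy masks and the convention d = 0 for two empty masks are our own assumptions.

```python
import numpy as np

def iou_dist(a, b):
    """d(a, b) = 1 - IoU between two binary masks (0 for two empty masks)."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 0.0 if union == 0 else 1.0 - inter / union

def ged_squared(samples, ground_truths):
    """Generalised energy distance (squared) between model samples and
    ground-truth annotations: D^2 = 2 E[d(s,y)] - E[d(s,s')] - E[d(y,y')]."""
    d_sy = np.mean([iou_dist(s, y) for s in samples for y in ground_truths])
    d_ss = np.mean([iou_dist(s, s2) for s in samples for s2 in samples])
    d_yy = np.mean([iou_dist(y, y2) for y in ground_truths for y2 in ground_truths])
    return 2 * d_sy - d_ss - d_yy

def gamma_map(samples, eps=1e-7):
    """Pixel-wise expected cross entropy between the mean prediction and the
    samples: large where the samples disagree, zero where they all agree."""
    sbar = np.clip(np.mean(samples, axis=0), eps, 1 - eps)
    s = np.stack(samples).astype(float)
    return -np.mean(s * np.log(sbar) + (1 - s) * np.log(1 - sbar), axis=0)

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), bool); b[0:2, 0:2] = True
ged = ged_squared([a, b], [a])
gamma = gamma_map([a.astype(float), b.astype(float)])
print(round(ged, 4))        # 0.4286
print(gamma.max() > 0)      # True: disagreement shows up in the map
```

The S_NCC score then correlates such a γ-map with the cross entropy error map of each annotator's mask against the mean prediction.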

Figure 3: Ground-truth annotations and samples for two of the evaluated methods trained with masks from 6 or 1 expert(s). Average error maps and γ-maps for each model are shown in the left-most and right-most columns, respectively.

Secondly, we set out to investigate the models’ ability to infer the inherent uncertainties in the annotations from just one annotation per training image. To this end, we trained the above models using only the annotations of a single expert. For the evaluation we then computed D²_GED and S_NCC using all available annotators. Additionally, we evaluated the models in terms of the conventional Dice score, evaluated with masks from the single annotator as ground truth. To get a single prediction from the probabilistic models we used the mean prediction s̄. This allowed us to obtain an indication of conventional segmentation accuracy. The results are shown in the bottom part of Tab. 1.

                          LIDC-IDRI                        Prostate dataset
Method         # experts  D²_GED    S_NCC     Dice         D²_GED    S_NCC     Dice
Prob. U-NET    All        0.2393    0.7749    -            0.1322    0.7763    -
PHiSeg (L=1)   All        0.2934    0.7944    -            0.1608    0.7452    -
PHiSeg (L=5)   All        0.2248    0.8453    -            0.0864    0.8185    -
Det. U-NET     1          -         -         0.5297       -         -         0.8364
Prob. U-NET    1          0.4452    0.5999    0.5238       0.2198    0.6022    0.8290
PHiSeg (L=1)   1          0.4695    0.6013    0.5275       0.2462    0.6683    0.7942
PHiSeg (L=5)   1          0.3225    0.7337    0.5408       0.2044    0.6917    0.8540

Table 1: Quantitative results for all metrics (lower is better for D²_GED; higher is better for S_NCC and Dice). Statistically significant improvements (paired Student’s t-test) over all other methods are indicated in bold.

We observed that when using all annotators for training, PHiSeg (L=5) produced significantly better D²_GED and S_NCC scores compared to all other methods. This can be observed qualitatively in Fig. 3 for a prostate slice with large inter-expert disagreements. Both the prob. U-NET and PHiSeg (L=5) produced realistic samples, but PHiSeg (L=5) was able to capture a wider variability. Furthermore, as indicated by the high S_NCC values, PHiSeg’s (L=5) γ-maps were found to be very predictive of where in the image the method’s average prediction errors will occur. Similar results were obtained when training with only one annotator. We noticed that in this scenario the prob. U-NET may in some cases fail to learn variation in the data and revert to an almost entirely deterministic behaviour (see fourth row in Fig. 3). We believe this can be explained by the prob. U-NET’s architecture which, in contrast to our method, allows the encoder-decoder structure to bypass the stochasticity. While our method also predicted smaller variations in the samples in this setting, they were still markedly more diverse. The lower performance of PHiSeg (L=1) indicates that using multiple resolution levels is crucial for our method. More samples for the prostate and LIDC-IDRI datasets can be found in Appendix 0.B. From Tab. 1 it can be seen that no significant differences between the Dice scores were found for any of the methods (except PHiSeg (L=1)), including the det. U-NET. From this we conclude that neither PHiSeg (L=5) nor the prob. U-NET suffer in segmentation performance due to their stochastic elements.

4 Discussion and Conclusion

We introduced a novel hierarchical probabilistic method for modelling the conditional distribution of segmentation masks given an input image. We have shown that our method substantially outperforms the state-of-the-art on a number of metrics. Furthermore, we demonstrated that PHiSeg was able to predict its own errors significantly better compared to previous work. We believe that proper modelling of uncertainty is indispensable for clinical acceptance of deep neural networks and that having access to the segmentation’s probability distribution will have applications in numerous downstream tasks.


This work was partially supported by the Swiss Data Science Center. One of the Titan X Pascal GPUs used for this research was donated by the NVIDIA Corporation.


  • [1] Armato, S.G., McLennan, G., Bidaut, L., McNitt-Gray, M.F., Meyer, C.R., Reeves, A.P., Zhao, B., Aberle, D.R., Henschke, C.I., Hoffman, E.A., et al.: The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans. Med. Phys. 38(2), 915–931 (2011)
  • [2] Kendall, A., Badrinarayanan, V., Cipolla, R.: Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv:1511.02680 (2015)
  • [3] Kingma, D., Welling, M.: Auto-encoding variational bayes. arXiv:1312.6114 (2013)
  • [4] Kohl, S., Romera-Paredes, B., Meyer, C., De Fauw, J., Ledsam, J.R., Maier-Hein, K., Eslami, S.A., Rezende, D.J., Ronneberger, O.: A probabilistic U-Net for segmentation of ambiguous images. In: Proc. NIPS. pp. 6965–6975 (2018)
  • [5] Lakshminarayanan, B., Pritzel, A., Blundell, C.: Simple and scalable predictive uncertainty estimation using deep ensembles. In: Proc. NIPS. pp. 6402–6413 (2017)

  • [6] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Proc. MICCAI. pp. 234–241. Springer (2015)
  • [7] Rupprecht, C., Laina, I., DiPietro, R., Baust, M., Tombari, F., Navab, N., Hager, G.D.: Learning in an uncertain world: Representing ambiguity through multiple hypotheses. In: Proc. CVPR. pp. 3591–3600 (2017)
  • [8] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. In: Proc. NIPS. pp. 3483–3491 (2015)
  • [9] Valpola, H.: From neural PCA to deep unsupervised learning. In: Persp Neural Comp, pp. 143–171. Elsevier (2015)

  • [10] Warfield, S.K., Zou, K.H., Wells, W.M.: Validation of image segmentation and expert quality with an expectation-maximization algorithm. In: Proc. MICCAI. pp. 298–306. Springer (2002)

Appendix 0.A Derivation of the Evidence Lower Bound (ELBO)

In this section, we derive the ELBO given in Eq. 2 based on our model assumptions given in Fig. 1 (right) and Eq. 1. We start from the well-known decomposition of the conditional log probability into the variational lower bound and a KL-divergence term¹ (see also Eq. 4 in reference [8]). Defining z := {z_1, …, z_L} we can write:

    log p(s|x) = L(s|x) + KL(q(z|s, x) ‖ p(z|s, x)),
    L(s|x) = E_{q(z|s,x)}[log p(s|z)] − KL(q(z|s, x) ‖ p(z|x)).    (7)

Since KL(q(z|s, x) ‖ p(z|s, x)) is always positive, L(s|x) is a lower bound on log p(s|x), with equality when q(z|s, x) = p(z|s, x).

¹ This can easily be derived by starting with KL(q(z|s, x) ‖ p(z|s, x)), replacing p(z|s, x) = p(s, z|x)/p(s|x), and splitting the log term.

The right-most KL-divergence term in Eq. 7 can be further rewritten as

    KL(q(z|s, x) ‖ p(z|x))
      = E_{q(z|s,x)}[ log ( q(z_L|s, x) ∏_{ℓ=1}^{L−1} q(z_ℓ|z_{ℓ+1}, s, x) ) − log ( p(z_L|x) ∏_{ℓ=1}^{L−1} p(z_ℓ|z_{ℓ+1}, x) ) ]
      = KL(q(z_L|s, x) ‖ p(z_L|x)) + E_{q(z|s,x)}[ Σ_{ℓ=1}^{L−1} log ( q(z_ℓ|z_{ℓ+1}, s, x) / p(z_ℓ|z_{ℓ+1}, x) ) ],    (8)

where in the first equality we used the definition of our graphical model and assumed that our variational distribution factorises in the same way. In the second equality we separated out the z_L terms, which can be done because z_L does not depend on any of the other z_ℓ and thus the remaining integrals in the KL-divergence integrate to one.

We now define the support of the random variable z_ℓ as Ω_ℓ and proceed to further simplify the right-most term of Eq. 8:

    E_{q(z|s,x)}[ Σ_{ℓ=1}^{L−1} log ( q(z_ℓ|z_{ℓ+1}, s, x) / p(z_ℓ|z_{ℓ+1}, x) ) ]
      = Σ_{ℓ=1}^{L−1} ∫_{Ω_L} ⋯ ∫_{Ω_1} q(z_1, …, z_L|s, x) log ( q(z_ℓ|z_{ℓ+1}, s, x) / p(z_ℓ|z_{ℓ+1}, x) ) dz_1 ⋯ dz_L
      = Σ_{ℓ=1}^{L−1} ∫_{Ω_{ℓ+1}} q(z_{ℓ+1}|s, x) ∫_{Ω_ℓ} q(z_ℓ|z_{ℓ+1}, s, x) log ( q(z_ℓ|z_{ℓ+1}, s, x) / p(z_ℓ|z_{ℓ+1}, x) ) dz_ℓ dz_{ℓ+1}
      = Σ_{ℓ=1}^{L−1} E_{q(z_{ℓ+1}|s,x)}[ KL(q(z_ℓ|z_{ℓ+1}, s, x) ‖ p(z_ℓ|z_{ℓ+1}, x)) ].    (9)

For the first equality above, we used the definition of the KL-divergence and the log property. To obtain the second equality, we regrouped the integrals and rewrote the products of conditional probabilities as joint probabilities. Note that the integrals corresponding to the terms z_1 to z_{ℓ−1} can be moved inside because the log term does not depend on them; the integral over z_1, …, z_{ℓ−1} then evaluates to one because q must be a normalised probability distribution. We also marginalised out the variables z_{ℓ+2} to z_L in the integral over q. Lastly, we used the definitions of the expectation and the KL-divergence to arrive at the final term.

By plugging the simplifications obtained in Eqs. 8 and 9 back into Eq. 7, we arrive at the expression for the evidence lower bound given in Eq. 2 in the main article.
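The key step of the derivation, splitting the joint KL into a top-level KL plus expected per-level KLs, can also be checked numerically on a toy two-level Gaussian chain. All parameters below are arbitrary choices for this sanity check, not model values.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_gauss(mq, sq, mp, sp):
    """Closed-form KL( N(mq, sq^2) || N(mp, sp^2) ) for scalars/arrays."""
    return np.log(sp / sq) + (sq**2 + (mq - mp)**2) / (2 * sp**2) - 0.5

def log_gauss(x, m, s):
    return -0.5 * np.log(2 * np.pi * s**2) - (x - m)**2 / (2 * s**2)

# toy 2-level chain: z2 at the top, z1 conditioned on z2
m2q, s2q, m2p, s2p = 0.5, 1.0, 0.0, 1.2   # q(z2), p(z2)
aq, bq, s1q = 0.8, 0.1, 0.7               # q(z1|z2) = N(aq*z2 + bq, s1q^2)
ap, bp, s1p = 1.0, 0.0, 0.9               # p(z1|z2)

n = 200_000
z2 = m2q + s2q * rng.standard_normal(n)
z1 = aq * z2 + bq + s1q * rng.standard_normal(n)

# left side: Monte Carlo estimate of KL( q(z1, z2) || p(z1, z2) )
lhs = np.mean(log_gauss(z2, m2q, s2q) + log_gauss(z1, aq*z2 + bq, s1q)
              - log_gauss(z2, m2p, s2p) - log_gauss(z1, ap*z2 + bp, s1p))

# right side: KL(q(z2)||p(z2)) + E_{q(z2)}[ KL(q(z1|z2) || p(z1|z2)) ]
rhs = kl_gauss(m2q, s2q, m2p, s2p) + np.mean(kl_gauss(aq*z2 + bq, s1q, ap*z2 + bp, s1p))

print(abs(lhs - rhs) < 0.01)  # True: the chain decomposition of Eq. 8/9 holds
```

The agreement of the two estimates illustrates, for L = 2, exactly the separation performed symbolically in Eqs. 8 and 9.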

Appendix 0.B Additional Samples

In this section, we show additional samples generated using all of the investigated methods for the prostate dataset (see Fig. 0.B.1), and the LIDC dataset (see Figs. 0.B.2 and 0.B.3).

Fig. 3 in the main article shows examples for a prostate slice with large inter-expert disagreements. In contrast, Fig. 0.B.1 shows an example where the annotation disagreements were comparatively small.

Fig. 0.B.2 shows a thoracic lesion with disagreements regarding the shape of the lesion, while Fig. 0.B.3 shows an example where the disagreement is over the presence or absence of the lesion.

Figure 0.B.1: Prostate MR slice with comparatively small inter-expert disagreement. Ground-truth annotations and samples for all evaluated methods trained with masks from 6 or 1 expert(s) are shown. Average error maps and γ-maps for each model are shown in the left-most and right-most columns, respectively. The first and last column should match in a perfect model.
Figure 0.B.2: Thoracic lesion in a CT image (zoomed in for better visualisation). Ground-truth annotations and samples for all evaluated methods trained with masks from 4 or 1 expert(s) are shown. Average error maps and γ-maps for each model are shown in the left-most and right-most columns, respectively. The first and last column should match in a perfect model.
Figure 0.B.3: Thoracic lesion in a CT image with inter-expert disagreement over the presence or absence of the lesion (zoomed in for better visualisation). Ground-truth annotations and samples for all evaluated methods trained with masks from 4 or 1 expert(s) are shown. Average error maps and γ-maps for each model are shown in the left-most and right-most columns, respectively. The first and last column should match in a perfect model.