1 Introduction
Convolutional neural networks (CNNs) have been used for semantic segmentation of medical images with great success [2]. For the most part, these methods rely on fully annotated images to train the network. Although CNN-based segmentation algorithms keep evolving and improving, the amount of available training data still has a substantial effect on performance [9]. However, it is difficult to obtain large-scale, fully annotated data for medical images, since annotation requires an expert to spend considerable time and effort.
To address this limitation, a number of works have proposed interactive image segmentation methods relying on weak annotations such as bounding boxes [12] or scribbles [4, 6]. However, in these works the annotations need to be provided for each new test image. Recently, a number of works have demonstrated that it is feasible to train fully automatic, learning-based algorithms using exclusively weak labels [9, 5, 10, 11]. Despite being trained on weak labels, these methods can produce full segmentation masks on test images. Of the above works, only [11] was demonstrated on medical images; its authors proposed to train a segmentation network for fetal structures from bounding box annotations only.
In this paper we present a scribble-based weakly-supervised learning framework for medical images. Scribbles have been recognized as a particularly user-friendly form of supervision [9] and may be better suited for nested structures than bounding boxes. Furthermore, they require only a fraction of the annotation time of full pixel-wise annotations. Following previous works, the proposed framework is an iterative two-step procedure in which a segmentation network is trained on the scribble annotations, and this network is then used in conjunction with a conditional random field (CRF) to relabel the training set. The relabeled training set is in turn used for an additional training recursion. (We refer to this as a recursion rather than an iteration to avoid confusion with single mini-batch gradient descent steps, which are also often referred to as iterations.) We show that this procedure, under some assumptions, can be interpreted as expectation maximization (EM). We investigate multiple strategies for relabeling the training dataset, estimating the CRF parameters, and quantifying uncertainty in the relabeling step. An overview of the method is shown in Fig. 1.
We evaluate the framework and its individual components on the public ACDC cardiac dataset [2] and the NCI-ISBI 2013 prostate segmentation challenge data [3]. We show that, despite the inherently very sparse nature of the annotations, the proposed methods achieve a segmentation accuracy within 95% of a baseline network trained with full supervision. To our knowledge, this is the first demonstration of training a pixel-wise segmentation network with scribble supervision on medical image data.
2 Methods
The aim of our proposed method is to learn the parameters $\Theta$ of a CNN-based segmentation network such that it predicts a generally unknown segmentation mask $\mathcal{Y}$ for an input image $X = (x_1, \dots, x_N)$, where $N$ is the number of pixels. During training, rather than full pixel-wise annotations, we are only provided with a ground-truth annotation $\mathcal{Y}_s$ for a small number of pixels (i.e. the scribbles). Note that this also includes a background scribble (see examples in Fig. 2). The proposed framework consists of a repeated estimation of the network parameters and subsequent relabeling of the training dataset by combining the network prediction with a CRF. We investigate two different CRF inference strategies: the dense CRF approach proposed in [8], and a recent extension thereof in which the CRF is formulated as a recurrent neural network (RNN) and the CRF parameters can be learned end-to-end [13]. Moreover, we investigate a novel strategy for incorporating prediction uncertainty in the relabeling step based on [7]. For all investigated strategies we perform an initial region growing step, described in the following.
2.1 Generation of Seed Areas by Region Growing
For this step we use the random walk-based segmentation method proposed by [6], which (similar to neural networks) produces a pixel-wise probability map for each label. We assign each pixel its predicted label only if the probability exceeds a threshold $\tau$; otherwise the pixel label is treated as unknown. An example of this step can be seen in Fig. 1. Note that the threshold is intentionally chosen very high so as to underestimate the true extent of the structures and only include pixels with a very high probability of being correctly labeled. These assignments will serve as new “ground truth” labels for the remainder of the steps and will be referred to as seed areas. The uncertain pixels are treated as unlabeled, i.e. they are the latent variables $\bar{\mathcal{Y}}$ of our model.
2.2 Separate CRF and Network Training
We propose a hard expectation maximization (EM) approximation to learn the network parameters in an iterative fashion. The algorithm alternates between estimating the best network parameters $\Theta$ given a labeling obtained using the current parameters (M step), and estimating the optimal labeling $\bar{\mathcal{Y}}$ of the latent variables given an updated $\Theta$ (E step). We assume the following graphical model

    p(X, \mathcal{Y}_s, \bar{\mathcal{Y}}; \Theta) = p(\mathcal{Y}_s, \bar{\mathcal{Y}} \mid X; \Theta) \, p(X),    (1)

where $p(\mathcal{Y}_s, \bar{\mathcal{Y}} \mid X; \Theta)$ is modeled using a neural network. Following the standard EM approach, we write the expectation of the complete-data log likelihood as

    Q(\Theta; \Theta^{(k)}) = \mathbb{E}_{\bar{\mathcal{Y}} \sim p(\bar{\mathcal{Y}} \mid X, \mathcal{Y}_s; \Theta^{(k)})} \left[ \log p(X, \mathcal{Y}_s, \bar{\mathcal{Y}}; \Theta) \right].    (2)

In the E step of the algorithm we estimate the mode of $p(\bar{\mathcal{Y}} \mid X, \mathcal{Y}_s; \Theta^{(k)})$ as

    \bar{\mathcal{Y}}^{(k)} = \arg\max_{\bar{\mathcal{Y}}} p(\bar{\mathcal{Y}} \mid X, \mathcal{Y}_s; \Theta^{(k)}) = \arg\max_{\bar{\mathcal{Y}}} p(\bar{\mathcal{Y}}, \mathcal{Y}_s \mid X; \Theta^{(k)}),    (3)

using the fact that $p(\mathcal{Y}_s \mid X)$ does not depend on $\bar{\mathcal{Y}}$.
By assuming a complete dependency graph between all pixel labels $y_i$, the conditional joint distribution can be factorized and the E step can be written as the following CRF optimization problem:

    \bar{\mathcal{Y}}^{(k)} = \arg\min_{\bar{\mathcal{Y}}} \left[ \sum_{c \in \mathcal{C}_u(\bar{\mathcal{Y}})} \psi_u(y_c) + \sum_{c \in \mathcal{C}_u(\mathcal{Y}_s)} \psi_s(y_c) + \sum_{c \in \mathcal{C}_p(\mathcal{Y})} \psi_p(y_c) \right],    (4)

where $\mathcal{C}_u(\cdot)$ denotes the set of all unary cliques of a set of variables and $\mathcal{C}_p(\cdot)$ denotes the set of all pairwise cliques. The unary potential function $\psi_u$ acting on the latent variables is defined using the current network output as

    \psi_u(y_i) = -\log p(y_i \mid X; \Theta^{(k)}).    (5)

The unary potential function $\psi_s$ acting on the seed regions is defined as 0 for labelings matching the ground truth and infinity otherwise, effectively preventing the initially grown regions from changing. Furthermore, we use the pairwise potential function proposed in [8]:
    \psi_p(y_i, y_j) = \mu(y_i, y_j) \left[ w^{(1)} \exp\left( -\frac{\lVert p_i - p_j \rVert^2}{2\sigma_\alpha^2} - \frac{\lVert I_i - I_j \rVert^2}{2\sigma_\beta^2} \right) + w^{(2)} \exp\left( -\frac{\lVert p_i - p_j \rVert^2}{2\sigma_\gamma^2} \right) \right],    (6)

where the label compatibility function $\mu$ is given by the Potts model $\mu(y_i, y_j) = [y_i \neq y_j]$, $I_i$ denotes the intensity of pixel $i$, and $\lVert p_i - p_j \rVert$ denotes the Euclidean distance between the pixel locations. We estimate the hyper-parameters $w^{(1)}, w^{(2)}, \sigma_\alpha, \sigma_\beta, \sigma_\gamma$ in a grid search on the validation set. In order to optimize Eq. 4 we use the approach in [8]. We also consider a simple modification of this procedure as a baseline, in which we set the pairwise terms to zero and only use the unary terms.

In the M step, after we have found the optimal labeling $\bar{\mathcal{Y}}^{(k)}$ of the latent variables using the network parameters $\Theta^{(k)}$, we can rewrite Eq. 2 as
    Q(\Theta; \Theta^{(k)}) \approx \log p(X, \mathcal{Y}_s, \bar{\mathcal{Y}}^{(k)}; \Theta) = \log p(\bar{\mathcal{Y}}^{(k)}, \mathcal{Y}_s \mid X; \Theta) + \log p(X),    (7)

where the approximate equality is due to the hard EM approximation, i.e. replacing the posterior $p(\bar{\mathcal{Y}} \mid X, \mathcal{Y}_s; \Theta^{(k)})$ with the Dirac delta function $\delta(\bar{\mathcal{Y}} - \bar{\mathcal{Y}}^{(k)})$, and we substituted Eq. 1 to obtain the equality. Since $p(X)$ does not depend on $\Theta$, the optimization can be written as

    \Theta^{(k+1)} = \arg\max_{\Theta} \log p(\bar{\mathcal{Y}}^{(k)}, \mathcal{Y}_s \mid X; \Theta).    (8)
We find the parameters that maximize the likelihood of predicting the labels by minimizing the pixel-wise cross-entropy between the labels and the network output using the ADAM optimizer, with an initial learning rate of 0.001 which is multiplied by 0.9 every 3000 iterations. We use the modified U-Net segmentation network of [1] in all experiments. The network parameters for each recursion are initialized with the parameters $\Theta^{(k)}$ from the previous recursion. The E and M steps are repeated until convergence, which typically occurs within 3 recursions or fewer.
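As an illustration, the step-decay learning rate schedule described above (initial rate 0.001, multiplied by 0.9 every 3000 mini-batch iterations) can be sketched as:

```python
def step_decay_lr(step, base_lr=1e-3, decay=0.9, interval=3000):
    """Step-decay schedule: base_lr is multiplied by `decay`
    once every `interval` mini-batch iterations."""
    return base_lr * decay ** (step // interval)
```

This would be passed to the optimizer at every iteration, e.g. `step_decay_lr(0)` gives 0.001 and `step_decay_lr(3000)` gives 0.0009.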
In the first recursion, we set the cross-entropy loss to zero in all locations where the random walk is “uncertain” (probabilities below $\tau$), allowing the network to predict any label in those regions. We also explore a strategy to identify uncertain regions in subsequent recursions, which will be discussed in Sec. 2.4.
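A minimal sketch of this masked loss, in pure Python for clarity: pixels carrying a sentinel "uncertain" label simply do not contribute to the averaged pixel-wise cross-entropy (the sentinel value and function names are illustrative, not from the original implementation).

```python
import math

UNCERTAIN = -1  # sentinel label for pixels whose ground truth is unknown

def masked_cross_entropy(probs, labels):
    """Mean pixel-wise cross-entropy, skipping UNCERTAIN pixels.

    probs:  per-pixel lists of class probabilities (softmax outputs)
    labels: per-pixel integer labels, or UNCERTAIN
    """
    total, count = 0.0, 0
    for p, y in zip(probs, labels):
        if y == UNCERTAIN:
            continue  # loss set to zero at uncertain locations
        total += -math.log(p[y])
        count += 1
    return total / count if count else 0.0
```

In a real framework the same effect is usually achieved with a per-pixel weight mask on the loss tensor.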
2.3 Integrated Network and CRF Training (CRF-RNN)
Here, we investigate estimating the CRF parameters as part of the network training. To that end we use the CRF-RNN layer proposed in [13], which learns individual kernel weights for each class as well as a more flexible compatibility matrix.
To obtain a new labeling $\bar{\mathcal{Y}}^{(k)}$ we simply run a forward pass through the network. Next, in order to prevent the original seed regions from changing, we reset those pixels to their original labels. In future work, we aim to include this constraint directly in the CRF-RNN formulation.
In the subsequent network optimization step, we directly learn to predict those labels. Here we use the following training scheme: the network parameters are trained as above for 10 mini-batch iterations while keeping the RNN parameters constant; every 10 iterations, the RNN parameters are updated with a separate learning rate while freezing the remainder of the network parameters. As before, the label estimation and training steps are repeated until convergence.
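One plausible reading of this alternating scheme can be sketched as a schedule function that decides which parameter group to update at a given iteration (the exact period and switch point are assumptions, not taken from the original implementation):

```python
def params_to_update(iteration, net_steps=10):
    """Alternating schedule: train the segmentation network for
    `net_steps` mini-batch iterations with the CRF-RNN frozen,
    then update the CRF-RNN parameters for one iteration, repeat."""
    return "crf_rnn" if iteration % (net_steps + 1) == net_steps else "network"
```

In practice this corresponds to toggling which parameters have gradients enabled (e.g. two optimizer parameter groups, one frozen at a time).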
2.4 Quantifying Segmentation Uncertainty
In order to prevent segmentation errors from early recursions from propagating, we investigate the following strategy to reset labels predicted with insufficient certainty after each E step. We add dropout with probability 0.5 to the 5 innermost blocks of our U-Net architecture during training. In order to estimate the new optimal labeling, we perform 50 forward passes with dropout, similar to [7]. Rather than a single output, this yields a distribution of logits and softmax outputs for each pixel and label. We then compare the logit distributions of the labels with the highest and second highest softmax means for each pixel using Welch’s t-test. If the test cannot reject the hypothesis that the logits come from distributions with the same mean, we conclude that the label was not predicted with sufficient certainty and reset the pixel to “uncertain”; in the subsequent M step the network will thus be free to predict any label in that location. Otherwise, we set the pixel to the label with the highest probability.
3 Experiments and Results
We trained and evaluated the methods on two publicly available datasets: the ACDC cardiac segmentation challenge data [2], for which the myocardium (Myo) and the left and right ventricles (LV and RV) have been annotated, and the NCI-ISBI 2013 prostate segmentation challenge data [3], for which reference annotations of the central gland (CG) and the peripheral zone (PZ) were available. For the cardiac data we split the data into 160 training and 40 validation volumes, and evaluated the algorithms on 100 images using the challenge server. For the prostate data we split the 29 available training volumes into 12 training, 7 validation and 10 testing volumes. Training was performed on 2D slices.
The region growing threshold $\tau$ was set separately for the cardiac and prostate experiments, as were the hyper-parameters of the separate CRF and the CRF-RNN, using the validation-set grid search described in Sec. 2.2.
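The validation grid search used to set these values can be sketched as follows; `evaluate` is a hypothetical callback that would relabel the validation set with a given hyper-parameter setting and return the resulting Dice score:

```python
from itertools import product

def grid_search(evaluate, grids):
    """Exhaustive grid search over hyper-parameters.

    evaluate: maps a parameter dict to a validation score (higher is better)
    grids:    maps parameter names to lists of candidate values
    """
    names = sorted(grids)
    best_score, best_params = float("-inf"), None
    for values in product(*(grids[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

The parameter names below are only placeholders for the CRF hyper-parameters of Eq. 6.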
In the following experiments, the simple recursive training strategy which makes use of neither the pairwise terms in Eq. 4 nor uncertainty estimation is called base. We evaluated the performance with and without the components discussed above. Additionally, we investigated the same segmentation architecture trained on the fully labeled data to obtain an upper bound on the performance, as well as a version of base in which we did not perform any recursions but used the network parameters $\Theta^{(0)}$ learned directly on the seed regions.
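Segmentation quality is reported below as the Dice coefficient, $2|A \cap B| / (|A| + |B|)$; for reference, a minimal per-structure implementation over flat label maps:

```python
def dice_score(pred, ref, label):
    """Dice coefficient 2|A∩B| / (|A|+|B|) for one structure,
    computed over flattened integer label maps."""
    a = [p == label for p in pred]
    b = [r == label for r in ref]
    inter = sum(x and y for x, y in zip(a, b))
    denom = sum(a) + sum(b)
    # convention: if the structure is absent in both maps, score 1.0
    return 2.0 * inter / denom if denom else 1.0
```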
The Dice scores with respect to the reference annotations for all the examined methods and structures are shown in Table 1. Note that the ACDC challenge server did not allow higher-precision Dice reporting in the post-challenge phase. Example segmentations for the two best-performing methods are shown in Fig. 3 for the cardiac and prostate data, respectively.
Table 1: Dice scores for the examined methods on the cardiac and prostate data.

                            Cardiac dataset                Prostate dataset
                            LV     RV     Myo    Avg       PZ     CG     Avg
Base (no recursion)         0.895  0.875  0.825  0.865     0.631  0.827  0.729
Base                        0.905  0.880  0.835  0.873     0.670  0.829  0.750
Base + separate CRF         0.890  0.880  0.840  0.870     0.698  0.837  0.767
Base + CRF-RNN              0.915  0.885  0.840  0.880     0.698  0.863  0.781
Base + uncertainty          0.910  0.890  0.840  0.880     0.720  0.837  0.778
Base + sep. CRF & unc.      0.910  0.890  0.840  0.880     0.722  0.839  0.780
Base + CRF-RNN & unc.       0.915  0.885  0.840  0.880     0.710  0.834  0.772
Fully supervised            0.935  0.905  0.895  0.912     0.746  0.889  0.818
We observe that: a) the recursive training regime led to substantial improvements over non-recursive training; b) the dropout-based uncertainty estimation was responsible for the largest improvements; c) the additional CRF led to further, albeit smaller, improvements; d) using the CRF-RNN without uncertainty led to similar results as the separate CRF with uncertainty; and e) applying dropout uncertainty in conjunction with the CRF-RNN did not lead to additional improvements and performed slightly worse on the prostate data. We believe this is due to the CRF-RNN module leading to unusual logit distributions at its input. On average, the training frameworks with 1) the CRF-RNN, and 2) the separate CRF with uncertainty performed the best, and similarly to each other. Future work on integrating uncertainty with the CRF-RNN may lead to further improvements.
Most importantly, the results show that our proposed training strategy allows learning a pixel-level segmentation network from scribble supervision alone with a remarkably small degradation compared to the fully supervised upper bound. For instance, the performance of the CRF-RNN method is only 4.5% worse on the prostate data and 2.9% worse on the cardiac data compared to fully supervised training. These results are also confirmed by the qualitative analysis. We believe this is likely an acceptable error margin for certain quantification studies where precise border delineation is of secondary importance, such as automatic estimation of cardiac ejection fraction [2].
4 Conclusion
In this paper, we investigated strategies for training a fully automatic segmentation network with scribble supervision alone. We demonstrated the feasibility of the techniques on two publicly available medical image datasets and showed that only a remarkably small performance degradation is incurred with respect to fully supervised upper-bound networks.
Acknowledgements
This work was partially supported by the Swiss Data Science Center. One of the Titan X Pascal GPUs used for this research was donated by the NVIDIA Corporation.
References

[1] Baumgartner, C.F., Koch, L.M., Pollefeys, M., Konukoglu, E.: An exploration of 2D and 3D deep learning techniques for cardiac MR image segmentation. In: Proc. STACOM, pp. 111–119 (2017)
[2] Bernard, O., et al.: Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE T Med Imaging (2018)
[3] Bloch, N., Madabhushi, A., Huisman, H., et al.: NCI-ISBI 2013 challenge: automated segmentation of prostate structures (2015)
[4] Criminisi, A., Sharp, T., Blake, A.: GeoS: Geodesic image segmentation. In: Proc. ECCV, pp. 99–112. Springer (2008)
[5] Dai, J., He, K., Sun, J.: BoxSup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In: Proc. ICCV, pp. 1635–1643 (2015)
[6] Grady, L.: Random walks for image segmentation. IEEE T Pattern Anal 28(11), 1768–1783 (2006)
[7] Kendall, A., Badrinarayanan, V., Cipolla, R.: Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680 (2015)
[8] Krähenbühl, P., Koltun, V.: Efficient inference in fully connected CRFs with Gaussian edge potentials. In: Adv Neur In, pp. 109–117 (2011)
[9] Lin, D., Dai, J., Jia, J., He, K., Sun, J.: ScribbleSup: Scribble-supervised convolutional networks for semantic segmentation. In: Proc. CVPR, pp. 3159–3167 (2016)
[10] Papandreou, G., Chen, L., Murphy, K., Yuille, A.L.: Weakly- and semi-supervised learning of a DCNN for semantic image segmentation. In: Proc. ICCV (2015)
[11] Rajchl, M., et al.: DeepCut: Object segmentation from bounding box annotations using convolutional neural networks. IEEE T Med Imaging 36(2), 674–683 (2017)
[12] Rother, C., Kolmogorov, V., Blake, A.: “GrabCut”: Interactive foreground extraction using iterated graph cuts. ACM Trans Graph 23(3), 309–314 (2004)
[13] Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.H.S.: Conditional random fields as recurrent neural networks. In: Proc. ICCV, pp. 1529–1537 (2015)