Combining Heterogeneously Labeled Datasets For Training Segmentation Networks

07/24/2018 ∙ by Jana Kemnitz, et al.

Accurate segmentation of medical images is an important step towards analyzing and tracking disease-related morphological alterations in the anatomy. Convolutional neural networks (CNNs) have recently emerged as a powerful tool for many segmentation tasks in medical imaging. The performance of CNNs strongly depends on the size of the training data, and combining data from different sources is an effective strategy for obtaining larger training datasets. However, this is often challenged by heterogeneous labeling of the datasets. For instance, one of the datasets may be missing labels, or a number of labels may have been combined into a super label. In this work we propose a cost function which allows integration of multiple datasets with heterogeneous label subsets into a joint training. We evaluated the performance of this strategy on a thigh MR and a cardiac MR dataset in which we artificially merged labels for half of the data. We found the proposed cost function substantially outperforms a naive masking approach, obtaining results very close to using the full annotations.







1 Introduction

Accurate segmentation of complex anatomical structures in medical images is one of the most critical steps in the image analysis pipeline. Segmentation results affect all subsequent processes of image analysis such as object representation, feature measurement, the development of imaging biomarkers and ultimately the resulting diagnosis and treatment of diseases [1, 2].

The recent reemergence of convolutional neural networks (CNNs) allows automatic segmentation of anatomical structures with unprecedented accuracy [3, 4]. However, the performance of CNNs depends strongly on the size of the training data [3]. Since fully annotated datasets are still often relatively small, a possible strategy is to combine multiple datasets from different sources for training.

Apart from possible domain shifts, a problem that may arise in practice is that different datasets may follow different labeling protocols and may thus contain different subsets of labels. For instance, detailed labels in one dataset may be combined into a “super label” in another dataset, or a label may be completely missing from one of the datasets. Note that the latter case can be thought of as the missing label forming a super label with the background.

Combining heterogeneously labeled datasets has previously been investigated in the context of atlas-based segmentation employing majority voting, semilocally weighted voting, performance level estimation and multi-protocol label fusion [5]. However, to our knowledge, incorporating such data for training segmentation networks still remains an open challenge [3].

A naive approach to address this problem would be to simply set the training cost function (e.g. the crossentropy loss) to zero at pixel locations where the desired label is not available. This means that in those locations the network would be free to predict any label. However, this does not take full advantage of the available information. For instance, for training images in which one label is missing, we know that in those locations the network should only predict the background or the missing label, but not any other label. Similarly, if a training image combines two anatomical labels into one, in those regions only those two structures should be predicted, but not, for example, the background.

In this paper, we propose a simple and effective cost function which allows integrating such information into the training process and thus takes advantage of the full extent of available training information. We evaluate the proposed cost function on two datasets: thigh MR images from the Osteoarthritis Initiative (OAI) [6] and publicly available cardiac MR data from the ACDC challenge [7]. For both datasets we simulate incomplete labels by merging a number of labels for parts of the datasets.

2 Methods

The goal of the proposed method is to learn the parameters $\theta$ of a segmentation network which can assign a label $\ell \in \{1, \dots, L\}$ to each pixel $i$ of an image $x$. Generally, for training we may have multiple datasets which have been annotated with different subsets of those labels. To describe the proposed method, we focus on the simpler problem where we assume that we have only two training datasets, of which $\mathcal{D}_A$ was annotated with all target labels, while $\mathcal{D}_B$ contains one super label $\ell_s$ that corresponds to a subset $S$ of the labels in $\mathcal{D}_A$. That is, $\mathcal{D}_B$ contains the labels $(\{1, \dots, L\} \setminus S) \cup \{\ell_s\}$. For notational simplicity we define a binary mask $m$ which is 0 at all pixels that have label $\ell_s$ and 1 otherwise. In other words, $m = 1$ only where full information is available. This simplified problem, with two datasets and one super label, can be easily extended to more complex scenarios.
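As a concrete illustration, the binary mask could be derived from an integer label map as in the following minimal numpy sketch (the super label id SUPER_LABEL and the toy label map are hypothetical placeholders, not from the paper):

```python
import numpy as np

# Hypothetical integer id of the super label in dataset B's annotations;
# the actual id depends on the labeling protocol.
SUPER_LABEL = 9

def make_mask(label_map: np.ndarray) -> np.ndarray:
    """Binary mask m: 0 where a pixel carries the super label, 1 elsewhere.

    m == 1 exactly at the pixels where full label information is available.
    """
    return (label_map != SUPER_LABEL).astype(np.float32)

# Toy 2x3 label map with two super-label pixels.
labels = np.array([[0, 1, SUPER_LABEL],
                   [2, SUPER_LABEL, 0]])
m = make_mask(labels)  # → [[1, 1, 0], [1, 0, 1]]
```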

The commonly used cross entropy function for a single fully annotated image $x$ is given by

$$\mathcal{L}_{CE} = -\sum_{i} \sum_{\ell=1}^{L} y_i^\ell \log \hat{y}_i^\ell,$$

where $y_i^\ell$ denotes the ground-truth probability distribution (i.e. the one-hot encoded label) at pixel $i$ for label $\ell$, and $\hat{y}_i^\ell$ denotes the network's softmax output. In the following we consider a naive extension of this cost function which disregards pixels with incomplete information, and our proposed cost function which takes into account the possible predictions of super labels. An overview of the strategies is shown in Fig. 1.
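The standard cross entropy for a single image can be sketched as follows, assuming one-hot ground truth and softmax outputs stored as (H, W, L) arrays (the small epsilon is our own numerical safeguard, not part of the paper's formulation):

```python
import numpy as np

def cross_entropy(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Cross entropy for one image.

    y     : one-hot ground truth,  shape (H, W, L)
    y_hat : softmax probabilities, shape (H, W, L)
    """
    eps = 1e-12  # guards against log(0)
    return float(-np.sum(y * np.log(y_hat + eps)))
```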

2.1 Naive Masking

Apart from completely disregarding datasets with incomplete labeling, the simplest strategy is to mask out regions with incomplete information in the crossentropy loss function:

$$\mathcal{L}_{naive} = -\sum_{i} m_i \sum_{\ell=1}^{L} y_i^\ell \log \hat{y}_i^\ell,$$

using the mask $m$ defined earlier. While still using images from both datasets for training, this formulation disregards the information contained in the super-label regions, namely that those pixels correspond to one of the labels in $S$ and not to any other label. In practice, we found that this often leads to undesired structure labels or background leaking into those regions.
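The masked loss can be sketched in the same numpy setting (the array shapes and epsilon guard are our own assumptions):

```python
import numpy as np

def masked_cross_entropy(y: np.ndarray, y_hat: np.ndarray, m: np.ndarray) -> float:
    """Naive masking: per-pixel cross entropy, zeroed wherever m == 0.

    y, y_hat : shape (H, W, L); m : shape (H, W)
    """
    eps = 1e-12
    per_pixel = -np.sum(y * np.log(y_hat + eps), axis=-1)  # shape (H, W)
    return float(np.sum(m * per_pixel))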

2.2 Super Label Aware Crossentropy Loss

In order to overcome this limitation, we propose adding an additional term to the crossentropy loss which also takes into account the super labels:

$$\mathcal{L}_{SL} = -\sum_i m_i\, y_i^\ell \log \hat{y}_i^\ell \;-\; \sum_i (1 - m_i)\, y_i^{\ell_s} \log \sum_{\ell \in S} \hat{y}_i^\ell = -\sum_i m_i\, y_i^\ell \log \hat{y}_i^\ell \;-\; \sum_i (1 - m_i) \log \sum_{\ell \in S} \hat{y}_i^\ell,$$

where we omitted the sum over $\ell$ and the conditioning on $x$ for brevity. Here, the second term encourages the network to predict one of the labels $\ell \in S$ in regions where the training image is labeled with the super label $\ell_s$. The simplification in the second equality is due to the fact that by definition $y_i^{\ell_s} = 1$ where $m_i = 0$.
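Under the same assumptions, the proposed loss can be sketched as below; subset stands for the set S of label indices that were merged into the super label (shapes and the epsilon guard are again our own choices):

```python
import numpy as np

def super_label_loss(y: np.ndarray, y_hat: np.ndarray,
                     m: np.ndarray, subset) -> float:
    """Proposed loss: standard cross entropy where m == 1, plus the negative
    log of the softmax mass summed over the merged labels where m == 0.

    y, y_hat : shape (H, W, L); m : shape (H, W); subset : label indices in S
    """
    eps = 1e-12
    per_pixel = -np.sum(y * np.log(y_hat + eps), axis=-1)   # (H, W)
    super_prob = np.sum(y_hat[..., list(subset)], axis=-1)  # (H, W)
    return float(np.sum(m * per_pixel)
                 - np.sum((1.0 - m) * np.log(super_prob + eps)))
```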

Figure 1: Thigh MRI training paths of the U-Net for segmentation with a) training dataset $\mathcal{D}_A$ annotated with all target labels, and b) training dataset $\mathcal{D}_B$ containing the labels $(\{1, \dots, L\} \setminus S) \cup \{\ell_s\}$; both with the binary mask $m$ which is 0 at all pixels that have label $\ell_s$ and 1 otherwise.

3 Experiments and Results

3.1 Data

We evaluated segmentation accuracy of the cost functions introduced above on two datasets. Thigh MRI: The thigh MRI data consist of 139 patient scans from the Osteoarthritis Initiative (OAI) [6], a publicly available database created for imaging biomarker validation in knee osteoarthritis. MRIs were acquired using a 3T system (slice thickness 5mm; in-plane resolution 0.98mm; no inter-slice gap), and segmentations were available for each patient from previous studies [8, 9]. The dataset was divided into training, test and validation sets comprising 99, 20 and 20 subjects, respectively. All muscle MRI slices were cropped and centered on the femoral bone of the right knee to simplify the segmentation problem, with a resulting image size of 256x256 pixels.

Cardiac MRI: The cardiac MRI data from the ACDC challenge [7] consist of 100 patient scans, each including a short-axis cine-MRI acquired on 1.5T and 3T systems with resolutions ranging from 0.70mm to 1.92mm in-plane and 5mm to 10mm through-plane. Segmentations for the background, the myocardium (Myo), the left ventricle (LV) and the right ventricle (RV) were available for the end-diastolic (ED) and end-systolic (ES) phases of each patient. The dataset was divided into training, test and validation sets comprising 60, 25 and 15 subjects, respectively. All images were resampled to a common in-plane resolution of 1.37x1.37mm and centrally placed into images of constant size, padding with zeros where necessary.
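The central placement with zero padding could look like the following sketch (the target size used here is a hypothetical placeholder; the paper does not state the final cardiac image size):

```python
import numpy as np

TARGET = (212, 212)  # hypothetical fixed in-plane size after resampling

def pad_or_crop(img: np.ndarray, target=TARGET) -> np.ndarray:
    """Centrally place a 2D slice into a constant-size canvas,
    zero-padding (or cropping) symmetrically as needed."""
    out = np.zeros(target, dtype=img.dtype)
    h, w = img.shape
    th, tw = target
    # source / destination offsets for the central placement
    sy, dy = max((h - th) // 2, 0), max((th - h) // 2, 0)
    sx, dx = max((w - tw) // 2, 0), max((tw - w) // 2, 0)
    ch, cw = min(h, th), min(w, tw)
    out[dy:dy + ch, dx:dx + cw] = img[sy:sy + ch, sx:sx + cw]
    return out
```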

3.2 Network Architecture and Training

All experiments were performed using the modified 2D U-Net architecture proposed in [11]. We used mini-batch gradient descent and the ADAM optimizer with a learning rate of 0.01 to minimize the respective cost functions. The final model was selected based on the respective loss functions evaluated on the validation set.

3.3 Evaluation

In order to evaluate the ability of the loss functions discussed in Section 2 to address the problem of differently labeled datasets, we artificially generated a fully annotated dataset $\mathcal{D}_A$ and a dataset $\mathcal{D}_B$ for which a number of labels have been merged into super labels. For both the cardiac and thigh datasets we relabeled half of the training and validation sets as summarized in Table 1. To generate $\mathcal{D}_B$, for the thigh data we merged the AD and IMF labels, and for the cardiac data we created a “heart” super label containing all of the structures apart from the background. The final performance was evaluated on the fully labeled test sets using the Dice score (DSC), average symmetric surface distance (ASSD) and Hausdorff distance (HD).
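The relabeling used to simulate the super-label dataset amounts to mapping a set of label ids onto a single super label id, e.g. (the ids AD, IMF and SUPER_LABEL are hypothetical placeholders; the actual protocol ids are not given in the text):

```python
import numpy as np

# Hypothetical label ids for the thigh protocol.
AD, IMF = 6, 7
SUPER_LABEL = 6  # reuse one id as the merged "AD + IMF" super label

def merge_labels(label_map: np.ndarray, merge_ids, super_id: int) -> np.ndarray:
    """Relabel every pixel whose label is in merge_ids with the super label id."""
    out = label_map.copy()
    out[np.isin(out, merge_ids)] = super_id
    return out

merged = merge_labels(np.array([0, 6, 7, 3]), [AD, IMF], SUPER_LABEL)  # → [0, 6, 6, 3]
```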

thigh                          D_A  D_B      cardiac                   D_A  D_B
background                      x    x       background                 x    x
femoral bone (FB)               x    x       left ventricle (LV)        x
quadriceps (QC)                 x    x       right ventricle (RV)       x
flexors (FX)                    x    x       myocardium (Myo)           x
sartorius (ST)                  x    x       heart (super label)             x
subcutaneous fat (SCF)          x    x
adductors (AD)                  x
intermuscular fat (IMF)         x
AD + IMF (super label)               x

Table 1: Simulated datasets $\mathcal{D}_A$ (completely labeled) and $\mathcal{D}_B$ (containing a super label $\ell_s$) for thigh and cardiac MR segmentation.

In addition to the network training with the two cost functions $\mathcal{L}_{naive}$ and $\mathcal{L}_{SL}$, we also evaluated two baseline methods: 1) we trained only on the complete dataset $\mathcal{D}_A$ with the normal crossentropy cost function $\mathcal{L}_{CE}$ to obtain a lower bound on the performance, and 2) we trained with $\mathcal{L}_{CE}$ on the entire unaltered training sets to obtain an upper bound.

Columns per structure: DSC, ASSD (mm), HD (mm), each as mean (std).

Thigh:
        femoral bone (FB)                           quadriceps (QC)
LB      0.971 (0.014)  0.60 (0.77)   7.00 (12.17)   0.952 (0.056)  1.32 (0.79)  14.40 (6.32)
naive   0.978 (0.008)  0.45 (0.58)   5.40 (12.50)   0.977 (0.006)  0.81 (0.35)  10.60 (8.42)
SL      0.974 (0.008)  0.38 (0.09)   2.05 (1.70)    0.980 (0.010)  0.61 (0.23)   7.78 (3.79)
UB      0.978 (0.006)  0.32 (0.09)   1.86 (1.15)    0.979 (0.008)  0.67 (0.31)   7.37 (5.16)

        flexors (FX)                                sartorius (ST)
LB      0.905 (0.065)  2.30 (1.28)  16.56 (4.88)    0.809 (0.117)  4.22 (2.53)  36.44 (25.82)
naive   0.957 (0.019)  1.05 (0.58)  11.20 (6.51)    0.903 (0.052)  1.76 (1.32)  15.90 (11.07)
SL      0.957 (0.021)  0.90 (0.35)   9.36 (4.16)    0.967 (0.010)  0.33 (0.09)   2.10 (1.52)
UB      0.968 (0.013)  0.75 (0.33)   6.70 (3.71)    0.945 (0.055)  0.92 (1.08)  14.07 (24.17)

        subcutaneous fat (SCF)                      adductors (AD)
LB      0.936 (0.132)  0.92 (1.37)  11.29 (15.28)   0.809 (0.117)  4.22 (2.53)  36.44 (25.82)
naive   0.965 (0.035)  0.48 (0.19)   6.33 (12.38)   0.908 (0.039)  1.13 (0.51)  10.80 (8.66)
SL      0.974 (0.008)  0.41 (0.09)   6.11 (11.18)   0.967 (0.010)  1.09 (0.45)   9.05 (4.48)
UB      0.975 (0.014)  0.38 (0.12)   5.19 (11.93)   0.945 (0.055)  0.92 (1.08)  14.07 (24.17)

        intermuscular fat (IMF)                     average (thigh)
LB      0.608 (0.093)  2.62 (1.09)  32.67 (10.24)   0.847 (0.100)  2.05 (1.42)  18.96 (11.81)
naive   0.744 (0.076)  1.55 (0.34)  27.00 (7.31)    0.919 (0.077)  1.04 (0.46)  12.52 (6.70)
SL      0.823 (0.031)  0.92 (0.16)  18.00 (3.29)    0.940 (0.054)  0.66 (0.28)   7.78 (5.20)
UB      0.821 (0.046)  1.03 (0.36)  21.96 (7.06)    0.943 (0.020)  0.71 (0.31)   9.29 (7.20)

Cardiac:
        left ventricle (ED)                         left ventricle (ES)
LB      0.960 (0.018)  0.37 (0.38)   5.85 (3.77)    0.914 (0.040)  0.81 (0.69)   8.30 (3.59)
naive   0.951 (0.018)  0.64 (0.56)   8.91 (5.78)    0.919 (0.040)  1.00 (1.16)  10.11 (5.69)
SL      0.962 (0.018)  0.42 (0.54)   5.88 (3.64)    0.923 (0.052)  0.77 (0.84)   7.20 (3.16)
UB      0.962 (0.017)  0.39 (0.48)   5.49 (2.95)    0.934 (0.034)  0.53 (0.40)   7.76 (3.34)

        right ventricle (ED)                        right ventricle (ES)
LB      0.876 (0.171)  1.69 (3.37)  16.28 (14.57)   0.828 (0.140)  1.81 (2.26)  15.96 (7.84)
naive   0.909 (0.039)  0.91 (0.54)  14.52 (6.78)    0.809 (0.089)  2.06 (0.91)  15.87 (5.51)
SL      0.922 (0.048)  0.83 (0.98)  13.57 (6.13)    0.827 (0.116)  1.76 (1.33)  15.15 (5.96)
UB      0.927 (0.043)  0.82 (0.90)  13.74 (6.33)    0.834 (0.108)  1.74 (1.43)  15.93 (5.73)

        myocardium (ED)                             myocardium (ES)
LB      0.873 (0.031)  0.47 (0.18)   8.17 (5.08)    0.882 (0.042)  0.75 (0.47)  11.80 (5.85)
naive   0.852 (0.044)  0.66 (0.32)  11.27 (6.58)    0.863 (0.055)  0.86 (0.51)  11.78 (5.85)
SL      0.878 (0.030)  0.54 (0.29)   9.99 (8.46)    0.891 (0.035)  0.67 (0.42)  10.06 (5.68)
UB      0.881 (0.026)  0.51 (0.21)   8.90 (6.36)    0.896 (0.039)  0.61 (0.32)  10.74 (6.39)

        average (cardiac)
LB      0.889 (0.074)  0.89 (1.32)  11.06 (6.78)
naive   0.884 (0.048)  1.02 (1.67)  12.08 (6.03)
SL      0.901 (0.050)  0.83 (0.73)  10.31 (5.51)
UB      0.906 (0.045)  0.77 (0.62)  10.43 (5.18)

Table 2: Thigh and cardiac MR segmentation accuracy as mean (std) for all structures. LB denotes $\mathcal{L}_{CE}$ trained only on $\mathcal{D}_A$ (lower bound), naive denotes $\mathcal{L}_{naive}$, SL denotes the proposed $\mathcal{L}_{SL}$, and UB denotes $\mathcal{L}_{CE}$ trained on the full annotations (upper bound).

The results obtained with the investigated cost functions are summarized in Table 2. Example segmentations for both datasets are shown in Fig. 2. With the proposed cost function $\mathcal{L}_{SL}$ we achieved segmentation results very close to those obtained using the full annotations (upper bound) on both the thigh and cardiac datasets.

Figure 2: Examples of thigh and cardiac ground truth and predicted segmentations using the evaluated cost functions $\mathcal{L}_{naive}$ and $\mathcal{L}_{SL}$, the lower bound (LB) and the upper bound (UB).

3.4 Discussion and Conclusion

In this work we proposed a cost function that enables the integration of multiple datasets with heterogeneous label subsets into a joint training. We evaluated this strategy on a thigh MR and a cardiac MR dataset in which we artificially merged labels for half of the data. We found that the proposed cost function substantially outperforms a naive masking approach and achieves results very close to those obtained with the full annotations. As expected, the proposed cost function led to the biggest improvement over naive masking precisely in those regions where individual labels were merged into a super label, by avoiding undesired label or background leaking (see Fig. 2, Table 2).

One specific motivation of this work was to investigate the potential of this novel loss term within the scope of the OAI database, where several datasets with heterogeneous label subsets are available from previous studies [8, 9, 12]. The new loss term will allow us to merge all these heterogeneous label subsets into a joint training.


  • [1] D. Shen, G. Wu, H. Suk, “Deep Learning in Medical Image Analysis,” Annu. Rev. Biomed. Eng. 19(1) 221-48 (2017)
  • [2] J. W. Prescott, “Quantitative imaging biomarkers: The application of advanced image processing and analysis to clinical and preclinical decision making,” J. Digit. Imaging 26(1) 97-108 (2013)
  • [3] G. Litjens, T. Kooi, BE. Bejnordi, AAA. Setio, F. Ciompi et al., “A survey on deep learning in medical image analysis,” Med. Image Anal. 42 60-88 (2017)
  • [4] O. Ronneberger, P. Fischer, T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” In: MICCAI. 234-41 (2015)
  • [5] JE. Iglesias, MR. Sabuncu, I. Aganj, P. Bhatt, C. Casillas et al., “An algorithm for optimal fusion of atlases with different labeling protocols,” NeuroImage 106 451-63 (2014)
  • [6] CG. Peterfy, E. Schneider, M. Nevitt, “The osteoarthritis initiative: report on the design rationale for the magnetic resonance imaging protocol for the knee,” Osteo. Cartil. 16(12) 1433-41 (2008)
  • [7] O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, X. Yang et al., “Deep Learning Techniques for Automatic MRI Cardiac Multi-structures Segmentation and Diagnosis: Is the Problem Solved?,” IEEE Trans. Med. Imaging (2018)
  • [8] A. Ruhdorfer, T. Dannhauer, W. Wirth, W. Hitzl, CK. Kwoh et al., “Cross-Sectional and Longitudinal Side Differences in Thigh Muscles,” Arthr. Care Res. 65(7) 1034-42 (2013)
  • [9] A. Ruhdorfer, W. Wirth, T. Dannhauer, F. Eckstein, “Longitudinal (4 year) change of thigh muscle and adipose tissue distribution in chronically painful vs painless knees - data from the Osteoarthritis Initiative,” Osteo. Cartil. 23(8) 1348-56 (2015)
  • [10] Ö. Cicek, A. Abdulkadir, SS. Lienkamp, T. Brox, O. Ronneberger, “3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation,” In: MICCAI. 424-32 (2016)
  • [11] CF. Baumgartner, LM. Koch, M. Pollefeys, E. Konukoglu, “An Exploration of 2D and 3D Deep Learning Techniques for Cardiac MR Image Segmentation,” In: Proc. Statistical Atlases and Computational Models of the Heart (STACOM), ACDC challenge, MICCAI’17 Workshop (2017)
  • [12] J. Kemnitz, W. Wirth, F. Eckstein, AG. Culvenor, “The Role of Thigh Muscle and Adipose Tissue in Knee Osteoarthritis Progression in Women: Data from the Osteoarthritis Initiative,” Osteo. Cartil. (2018), epub ahead of print