Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalization of a Musculoskeletal Model

07/21/2019
by Yuta Hiasa, et al.

We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891 +/- 0.016 (mean +/- std) and an average symmetric surface distance (ASD) of 0.994 +/- 0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method which resulted in 0.845 +/- 0.031 DC and 1.556 +/- 0.444 mm ASD. We evaluated validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active-learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.



I Introduction

The patient-specific geometry of skeletal muscles plays an important role in biomechanical modeling. The computational simulation of human motion using musculoskeletal modeling has been performed in a number of studies to investigate musculo-tendon forces and joint contact forces, which cannot easily be obtained by physical measurement [25, 27, 29]. Recent studies have demonstrated that personalization of model parameters, such as the size of the bones, the geometry of the muscles and tendons, and the physical properties of the muscle-tendon complex, improves the accuracy of the simulation [20, 4, 32]. While the majority of previous studies modeled the musculo-tendon unit as one or multiple lines joining origin and insertion, including so-called via points in some cases, several recent studies have shown that volumetric models representing subject-specific muscle geometry provide higher accuracy in the simulation [37]. However, segmentation of the volumetric geometry of individual muscles from subject-specific medical images remains a time-consuming task that requires expert knowledge, thus precluding application in clinical practice. Therefore, our focus in this study is to develop an automated method for segmentation of individual muscles for personalization of the musculoskeletal model.

I-A Related work

Segmentation of muscle tissue and fat tissue has been studied extensively for the analysis of muscle/fat composition. (Note that we refer to muscle tissue here as an object including all muscles, not an individual muscle.) Ulbrich et al. [34] and Karlsson et al. [13] implemented an algorithm for automated segmentation of the muscle and fat tissues from MRI using a multi-atlas method [12]. Lee et al. [17] used deep learning for segmentation of the muscle and fat tissues in a 2D abdominal CT slice.

Segmentation of individual muscles is a much more difficult problem due to the low tissue contrast at the border between neighboring muscles, especially in areas where many muscles are contiguously packed, such as the hip and thigh regions. Handsfield et al. [8] manually segmented 35 individual muscles from MRIs of the lower leg in order to investigate the relationship between muscle volume and height or weight. To facilitate automation of individual muscle segmentation, prior knowledge about the shape of each muscle has been introduced [3]. Andrews et al. [2] proposed an automated segmentation method for 11 thigh muscles from MRI using a probabilistic shape representation and adjacency information. They evaluated the method using images of the middle part of the left femur (20 cm in length, ending just above the knee) and reported an average Dice coefficient of 0.808. Since the muscles of interest run along a long bone, i.e., the femur, the muscles have similar appearances across axial slices, resulting in less segmentation complexity compared to the hip region.

In CT images, due to the lower soft tissue contrast compared to MRI, segmentation of individual muscles is even more difficult. Yokota et al. [40] addressed the automated segmentation of individual muscles from CTs of the hip and thigh regions. The target region was broader than in [2], covering the origin-to-insertion extent of 19 muscles. They introduced a hierarchization of the multi-atlas segmentation method such that the target region becomes gradually more complex at each level of the hierarchy, namely starting with the skin surface, then all muscle tissue as one object, and finally the individual muscles. They reported an average Dice coefficient of 0.838. Although their algorithm produced reasonable accuracy for this highly challenging problem, the large number of non-rigid registrations required by the multi-atlas method made the computational load prohibitive for routine clinical applications (41 minutes to segment one CT volume using a high-performance server with 60 cores).

In order to enhance the accuracy and speed of muscle segmentation in CT, we propose an application of convolutional neural networks (CNNs). We investigate the segmentation accuracy as well as a metric indicating uncertainty of the segmentation using the framework of Bayesian deep learning. Gal et al. [7] showed that dropout [30] is equivalent to approximate Bayesian inference, which allows estimation of the model uncertainty. It measures the degree of difference of each test sample from the training data set, originating from the deficiency of training data, namely epistemic uncertainty [15]. This method has been applied to brain lesion segmentation [22, 6] and surgical tool segmentation [10]. Two example applications of the uncertainty metric explored in this study are: 1) prediction of segmentation accuracy without using the ground truth, similar to the goal of Valindria et al. [35], and 2) the active-learning framework [19, 39] for the reduction of manual annotation costs.

I-B Contributions

In this study, we demonstrate significantly improved accuracy in the segmentation of 19 individual muscles from CTs of the hip and thigh regions through the application of CNNs. The contribution of this paper is two-fold: 1) investigation of the performance of Bayesian U-Net using 20 fully annotated clinical CTs and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database, and 2) analysis of the uncertainty metric in a multi-class organ segmentation problem and its potential applications in predicting segmentation accuracy, without using the ground truth, and in efficient selection of manual annotation samples in an active-learning framework.

I-C Paper organization

The paper is organized as follows. In Section II, the proposed method is described, including data sets, uncertainty estimates, and active learning. In Section III, we quantitatively evaluate the proposed methods through experiments using two data sets. Then, we discuss the methods and results, and conclude the paper in Section IV.

II Methodology

II-A Overview

Figure 1 shows the workflow of the proposed methods. We first segment the skin surface using a 2D U-Net to isolate the body from surrounding objects such as the scan table and the calibration phantom. Next, the individual muscles are segmented and the model uncertainty is predicted using Bayesian U-Net, which is described in Section II-C. The Dice coefficient of each muscle segmentation is predicted from the model uncertainty without using the ground truth, using a linear regression between the average model uncertainty and the Dice coefficient computed in a cross-validation within the training data set (Fig. 8a). We evaluated the proposed active-learning framework, shown in Fig. 1(b), in a simulated environment using a fully annotated data set, assuming a situation where partial manual annotation is provided initially. The manual annotation of a small number of slices selected by the proposed procedure is given in steps as described in Section II-D.

II-B Data sets

Two data sets were used to evaluate the proposed method: 1) a fully annotated non-public clinical CT data set and 2) a partially annotated publicly available CT data set.

II-B1 Osaka University Hospital THA data set (THA data set)

This data set consists of 20 CT volumes scanned at Osaka University Hospital, Suita, Japan, for CT-based planning and navigation of total hip arthroplasty (THA) [40, 23]. The field of view was 360×360 mm and the matrix size was 512×512. The original slice intervals were 2.0 mm for the region including the pelvis and proximal femur, 6.0 mm for the femoral shaft region, and 1.0 mm for the distal femur region. Each CT volume had about 500 slices (see supplementary materials for details of the number of axial slices of each muscle). In this study, the CT volumes were resampled so that the slice interval becomes 1.0 mm throughout the entire volume. Nineteen muscles around the hip and thigh regions and 3 bones (pelvis, femur, sacrum) were manually annotated by an expert surgeon (Figure 2). The manual annotation took about 40 hours per volume. This data set was used for training and cross-validation for the accuracy evaluation and prediction of the Dice coefficient. Note that 132 CT volumes acquired at Osaka University independently from the above-mentioned data set were used for training of the skin segmentation network. The region inside the skin was semi-automatically annotated.

Fig. 2: Training data set used in this study, consisting of 20 labeled CT volumes. The muscles of interest are separately visualized according to the functional group and their region. Upper and lower rows show the anterior and posterior views, respectively.

II-B2 TCIA soft tissue sarcoma data set (TCIA data set)

The data set obtained from the TCIA collections (http://www.cancerimagingarchive.net) contains CT and MR volumes from 51 patients with soft tissue sarcomas (STSs) [36]. In this study, we selected 18 CT volumes that include the hip region. The CT volumes were resampled so that the in-plane field of view becomes 360×360 mm without changing the slice center, and the slice interval becomes 1.0 mm throughout the volume, similar to the THA data set. The gluteus medius muscle was manually traced by a computer scientist and verified by an expert surgeon. This data set was not used in the training nor in the parameter tuning; it was only used for evaluation of the generalizability of the model trained with the THA data set (see Section III-B2 for details).

II-C Estimation of uncertainty metric

The underlying algorithm of the proposed uncertainty estimates follows that of Gal et al. [7], which uses dropout at the inference phase. This allows approximation of the posterior distribution based on the probabilistic softmax output obtained from stochastic dropout sampling. We use the mean and variance of the output from multiple samplings as the segmentation result and uncertainty estimate, respectively. Below, we briefly summarize the theoretical background described in [7], formulate the specific metric that we employed in this paper, and propose a new structure-wise uncertainty metric for the multi-class segmentation problem.

Suppose we have a training data set of images X and its labels Y. We consider the predictive label y of an unseen image x. Let a "deterministic" neural network with weights W represent p(y = c | x, W). A "probabilistic" Bayesian neural network is given by marginalization over the weights W as

p(y = c | x, X, Y) = \int p(y = c | x, W) \, p(W | X, Y) \, dW    (1)

where y is the output label of a pixel, c is the label class, and p(W | X, Y) is the posterior distribution. Gal et al. [7] proved that approximation of the posterior distribution is equivalent to the dropout-masked distribution q(W), where W = M \cdot \mathrm{diag}(z), z_j \sim \mathrm{Bernoulli}(p), and p is the dropout ratio. Then, Eq. (1) can be approximated by minimizing the Kullback-Leibler (KL) divergence between q(W) and p(W | X, Y) as follows.

p(y = c | x, X, Y) \approx \int p(y = c | x, W) \, q(W) \, dW    (2)
\approx \frac{1}{T} \sum_{t=1}^{T} p(y = c | x, \hat{W}_t), \quad \hat{W}_t \sim q(W)    (3)

where T is the number of dropout samplings. This Monte Carlo estimation is called "MC dropout" [7]. We employed the predictive variance as the metric indicating uncertainty, which is defined per pixel as

\sigma^2 = \frac{1}{T} \sum_{t=1}^{T} p(y = c | x, \hat{W}_t)^2 - \left( \frac{1}{T} \sum_{t=1}^{T} p(y = c | x, \hat{W}_t) \right)^2    (4)
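As an illustration, MC dropout inference can be sketched with a toy dropout-masked softmax classifier (the toy model, weights, and function names below are our own, not from the paper); the mean over T stochastic passes approximates Eq. (3), and the variance across passes gives the predictive variance of Eq. (4):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, W, p=0.5):
    # One dropout-masked pass: each input unit is dropped with probability p
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return softmax((x * mask) @ W)

def mc_dropout_predict(x, W, T=100):
    # Eq. (3): mean of T stochastic passes; Eq. (4): variance across passes
    probs = np.stack([stochastic_forward(x, W) for _ in range(T)])
    return probs.mean(axis=0), probs.var(axis=0)

W = rng.normal(size=(4, 3))   # toy weights: 4 features -> 3 classes
x = rng.normal(size=4)        # toy per-pixel feature vector
mean, var = mc_dropout_predict(x, W)
```

In the actual method, `stochastic_forward` is a full forward pass of the Bayesian U-Net with its dropout layers kept active at test time.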

In this paper, we propose two new structure-wise uncertainty metrics: 1) predictive structure-wise variance (PSV) and 2) predictive Dice coefficient (PDC). PSV represents the predictive variance per unit area of the pixels that are classified as the target structure. Let S_c = \{ i \mid \arg\max_{c'} p(y_i = c' | x) = c \} be all pixels that are classified as class c (the argmax represents the selection of the class with the highest probability for pixel i). The metric is defined as

\mathrm{PSV}_c = \frac{1}{|S_c|} \sum_{i \in S_c} \sigma_i^2    (5)

PDC is computed by a linear regression of PSV against the actual Dice coefficient of the target structure.

\mathrm{PDC}_c = a \cdot \mathrm{PSV}_c + b    (6)

where a is the linear coefficient and b is the bias. To find these parameters, we conduct K-fold cross-validation: K - 1 groups are used to train a model, while the remaining group is used for the evaluation (i.e., to observe the Dice coefficient and PSV). Then, a and b are determined from all pairs of observed Dice coefficients and PSVs.
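A minimal sketch of how PSV (Eq. 5) and the PDC regression (Eq. 6) could be computed; the function names and the toy arrays are our own illustration, not from the paper:

```python
import numpy as np

def psv(var_map, pred_labels, cls):
    # Eq. (5): mean predictive variance over pixels argmax-classified as cls
    mask = pred_labels == cls
    return float(var_map[mask].mean()) if mask.any() else 0.0

def fit_pdc(psv_values, dice_values):
    # Eq. (6): fit Dice ~ a * PSV + b over cross-validation observations
    a, b = np.polyfit(psv_values, dice_values, 1)
    return float(a), float(b)

# Toy example: a 2x3 predicted label map with a per-pixel variance map
labels = np.array([[1, 1, 0], [0, 1, 0]])
var_map = np.array([[0.02, 0.04, 0.1], [0.2, 0.06, 0.3]])
u = psv(var_map, labels, cls=1)            # mean of 0.02, 0.04, 0.06

# Toy cross-validation pairs: higher PSV -> lower Dice
a, b = fit_pdc([0.01, 0.02, 0.03], [0.92, 0.89, 0.86])
pdc = a * 0.025 + b                        # predicted Dice for PSV = 0.025
```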

As for the network architecture, we extend the U-Net model by inserting a dropout layer before each max pooling layer and after each up-convolution layer, as shown in the dotted squares in Fig. 1(a), which is the same approach as Bayesian SegNet proposed by Kendall et al. [14]. We call the U-Net extended by MC dropout "Bayesian U-Net."

II-D Bayesian active learning

A common practical situation in segmentation problems entails a scenario where the labeled data set is small-scale while a large-scale unlabeled data set is available. The active-learning method is known to be effective in that scenario by interactively expanding the training data set using the experts’ input.

In order to determine the pixels to query to the experts, the proposed method first selects slices with high segmentation uncertainty from the unlabeled data set (the slice selection step) and then selects pixels with high uncertainty from the selected slices (the pixel selection step). The slice selection step follows Yang et al. [39], which utilized uncertainty and similarity metrics to determine the query slices. This is summarized as follows: given an unlabeled data set, a subset of uncertain slices is selected, followed by the selection of representative slices using a similarity-based clustering approach. Details of the algorithm are given in the Appendix.

In this paper, we propose a new method for the pixel selection step that reduces the number of pixels to query to the expert using the proposed uncertainty metric. We use manual labels for the pixels with uncertainty larger than a threshold \theta (i.e., "uncertain" pixels) and predicted labels for the other pixels (i.e., "certain" pixels), that is

y_{ij} = \begin{cases} m_{ij} & \text{if } \sigma_{ij}^2 > \theta \quad \text{(uncertain pixels)} \\ \hat{y}_{ij} & \text{otherwise} \quad \text{(certain pixels)} \end{cases}    (7)

where y_{ij} denotes the label for the j-th pixel in the i-th slice, \hat{y}_{ij} denotes the label predicted by the network, and m_{ij} denotes the label manually provided by the expert. Note that the threshold \theta determines the trade-off between manual annotation cost and the achieved accuracy. We experimentally investigate the choice of the threshold in the following sections.
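Assuming the predicted labels and a per-pixel uncertainty map are available, the pixel selection rule of Eq. (7) could be sketched as follows (the names and toy arrays are ours):

```python
import numpy as np

def merge_labels(pred, manual, var_map, theta):
    # Eq. (7): query the expert only where the predictive variance
    # exceeds theta; keep the network's prediction for "certain" pixels
    query = var_map > theta        # boolean mask of pixels to annotate
    merged = pred.copy()
    merged[query] = manual[query]
    return merged, query

pred   = np.array([0, 1, 1, 2])      # network predictions
manual = np.array([0, 2, 1, 1])      # expert labels (used only where queried)
var    = np.array([0.01, 0.30, 0.02, 0.40])
merged, query = merge_labels(pred, manual, var, theta=0.1)
```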

II-E Implementation details

During pre-processing, the intensity of the CT volumes is normalized so that [-150, 350] HU is mapped to [0, 255] (intensities smaller than -150 HU and larger than 350 HU are clamped to 0 and 255, respectively). At the training phase, data augmentation is performed by random translation (as a percentage of the matrix size), rotation, scaling, shear transform, and flipping in the right-left direction. The data augmentation allows the model to be invariant to the FOV of the scan, the patient's size, rotation, and translation. At post-processing, the largest connected component is extracted to obtain the final output for each muscle.
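The intensity windowing described above can be written compactly (a sketch; the augmentation parameters are omitted here):

```python
import numpy as np

def normalize_ct(vol_hu, lo=-150.0, hi=350.0):
    # Clamp to [-150, 350] HU, then map linearly to [0, 255]
    clipped = np.clip(vol_hu, lo, hi)
    return (clipped - lo) / (hi - lo) * 255.0

hu = np.array([-200.0, -150.0, 100.0, 350.0, 400.0])
out = normalize_ct(hu)   # -> [0, 0, 127.5, 255, 255]
```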

II-F Comparison with conventional methods

The current state-of-the-art method for automated segmentation of individual muscles from CT based on the hierarchical multi-atlas method [40] was implemented and the results were compared with the proposed method. In addition to U-Net, we evaluated another network architecture, FCN-8s [18], which is also a common fully convolutional neural network based on VGG16.

We used the Dice coefficient (DC) [5] and the average symmetric surface distance (ASD) [31] as the error metrics. Note that each metric was calculated per volume, not slice-by-slice. The statistical significance was tested by the paired t-test with Bonferroni correction.
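For reference, the DC used here could be computed per volume as follows (a sketch with toy binary volumes; ASD additionally requires surface extraction and distance transforms, which are omitted here):

```python
import numpy as np

def dice_coefficient(a, b):
    # DC = 2 |A ∩ B| / (|A| + |B|), computed over the whole volume
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

gt   = np.zeros((4, 4), bool); gt[1:3, 1:3] = True    # 4 foreground pixels
pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True  # 6 pixels, 4 overlap
dc = dice_coefficient(gt, pred)    # 2*4 / (4 + 6) = 0.8
```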

III Results

III-A Network architecture selection and comparison with conventional methods

First, the segmentation accuracy is quantitatively evaluated using the 20 labeled clinical CTs (the THA data set). Leave-one-out cross-validation (LOOCV) was performed, where a model was trained with 19 CTs and tested with the remaining CT. A 23-class classification (3 bones, 19 muscles, and background) was performed. We initialized the weights in the same way as in [9], and then trained the networks using adaptive moment estimation (Adam) [16] for a fixed number of iterations at a learning rate of 0.0001. The batch size was 3.

Figure 3 summarizes the segmentation accuracy of the muscles. The DC and ASD over the 19 muscles of each patient were averaged and plotted as box plots (i.e., 20 data points in each plot) for the multi-atlas method, FCN-8s, and U-Net. The average and standard deviation of DC for the three methods were 0.845 ± 0.031 (mean ± std), 0.822 ± 0.021, and 0.891 ± 0.016, respectively, while for ASD the values were 1.556 ± 0.444 mm, 1.752 ± 0.279 mm, and 0.994 ± 0.230 mm, respectively. Compared with the conventional multi-atlas method [40] and FCN-8s, U-Net resulted in statistically significant improvements in both DC and ASD.

Fig. 3: Accuracy of muscle segmentation for 20 patients with the hierarchical multi-atlas method [40], FCN-8s, and U-Net. Box and whisker plots for two error metrics: (left) Dice coefficient (DC) and (right) average symmetric surface distance (ASD). Boxes denote the 1st/3rd quartiles, the median is marked with the horizontal line in each box, and outliers are marked with diamonds. The accuracy over the 19 muscles of each patient was averaged in advance (i.e., 20 data points for each box plot).

Figure 4 shows the heatmap visualization of ASD for the individual muscles of each patient using the multi-atlas method and U-Net. The blue color indicates a lower ASD. The accuracy improvement is clearly observed for almost all of the muscles, except for 5 cells (the psoas major in Patients #09 and #17, the gracilis in Patient #14, and the semimembranosus in Patients #04 and #06).

Fig. 4: Heatmap visualization of ASD with hierarchical multi-atlas method [40] and U-Net for each individual muscle in each patient. The blue color shows higher segmentation accuracy. The numbers in parentheses indicate the mean of each row/column.

Figure 5 shows example visualizations of the predicted label for a representative patient (Patient #01). The result with U-Net demonstrates more accurate segmentation near the boundaries of the muscles compared to the other two methods. In FCN-8s, where the output layer is obtained by upsampling and fusing latent vectors that have a lower resolution than the input (one eighth of the input size in our case), the accuracy was consistently lower than U-Net due to the lack of detail. In U-Net, on the other hand, where the output layer is directly fused with latent vectors that have the same resolution as the input, delineation of details was improved due to the pixel-wise correspondence between the input image and the output label.

As for the segmentation accuracy of the bones with U-Net, the DCs of the pelvis, femur, and sacrum were 0.981 ± 0.0043, 0.985 ± 0.0065, and 0.962 ± 0.0166, respectively, and the ASDs were 0.145 ± 0.040 mm, 0.175 ± 0.084 mm, and 0.402 ± 0.243 mm, respectively.

The skin surface segmentation step did not yield a statistically significant accuracy difference in the THA data set, since that data set did not contain objects that add undesirable variation; however, it was effective in reducing undesirable variation for the muscle segmentation step at a low manual annotation cost, especially for CT volumes scanned with a solid intensity calibration phantom placed near the skin surface. The calibration phantom is essential in quantitative CT (QCT) [1], which is one of our main application targets for the analysis of the relationship between muscle quality and bone mineral density (see supplementary materials for evaluation of the skin surface segmentation step in QCT volumes). Note that, in our experience, simple image processing methods, such as thresholding or extraction of the largest connected component, often failed to isolate the calibration phantom from the skin surface.

The average training time was approximately 11 hours with FCN-8s and U-Net, on an Intel Xeon processor (2.8 GHz, 4 cores) with NVIDIA GeForce GTX 1080Ti. The average computation time for the inference on one CT volume with about 500 2D slices was approximately 2 minutes excluding file loading, and the post-processing took about 3 minutes.

We conducted the following experiments about the predictive accuracy and active learning only with the U-Net architecture, since its accuracy is significantly higher than the other two methods as shown above.

Fig. 5: Visualization of the predicted label for a representative patient (Patient #01). The result with U-Net shows distinctly more accurate segmentation near the boundary of the muscles. The region of interest in the slice visualization at the bottom corresponds to the black dotted line in the left-most column.

III-B Estimation of uncertainty metric

III-B1 Relationship between uncertainty and segmentation accuracy

To demonstrate validity of the uncertainty metric, we investigated the relationship between the estimated uncertainty and the error metric using the 20 labeled CTs. We performed a 4-fold cross-validation where Bayesian U-Net was trained with 15 randomly selected CTs, and tested with the remaining 5 CTs using the same conditions as the experiment above.

Figure 6(a) shows box and whisker plots of DC as a function of PSV, where PSV was divided into 10 bins of equal width. Statistical significance was tested between adjacent bins with the Mann-Whitney U test, and the overall correlation ratio was high. Figure 6(b-h) shows scatter plots of DC for the individual muscle structures as a function of their PSV. The 95% confidence ellipses clearly illustrate the trend of increased error (i.e., decreased DC) with increased uncertainty (i.e., increased PSV). The only muscle with a relatively low correlation was the obturator internus, which we address in the discussion section. Figure 7 shows an example uncertainty visualization. These high correlations between accuracy and uncertainty suggest the validity of using the uncertainty metric estimated by Bayesian U-Net as an indicator of the unobservable error metric, without using the ground truth, in a real clinical situation.

Fig. 6: Relationship between the proposed uncertainty metric and segmentation accuracy. (a) Box and whisker plots of DC as a function of predictive structure-wise variance (PSV). PSV was divided into 10 bins of equal width. The Mann-Whitney U test was performed on adjacent bins. (b-h) Scatter plots, with 95% confidence ellipses, of DC for each structure as a function of PSV. (e) Bones, and muscles of the (b-d) hip and (f-h) thigh regions. The symbol "r" denotes Pearson's correlation coefficient.
Fig. 7: Visualization of the predictive variance computed by Bayesian U-Net. The average Dice coefficient and predictive structure-wise variance of muscles are denoted as DC and PSV. A good agreement between the regions with high uncertainty (denser regions in the middle sub-figure of each patient) and the regions with error (blue regions in the right sub-figure) suggests validity of the uncertainty metric to predict unobservable error in a real clinical situation.

III-B2 Generalization capability to an unseen data set

The generalization capability of Bayesian U-Net to an unseen data set was tested with the TCIA data set. Note that Bayesian U-Net was retrained using all 20 annotated CTs in the THA data set. Figure 8(a) shows a scatter plot of DC as a function of PDC. The coefficients a and b in Eq. (6) were determined by a linear regression of 20 data points obtained from a 4-fold cross-validation within the THA data set, and the mean absolute error between DC and PDC was small. Figures 8(b) and (c) show 2 representative patients with higher and lower accuracy, respectively. Higher uncertainty was observed in the regions with partial segmentation failure. The quantitative evaluation of the gluteus medius muscle showed that the average DC and ASD over the 18 patients were 0.914 ± 0.026 and 2.927 ± 4.997 mm, respectively. When excluding four outlier patients with extremely large sarcomas, the average DC and ASD were 0.925 ± 0.014 and 1.135 ± 0.777 mm, respectively, which is comparable to the results on the THA data set. The uncertainty is included in the plot in Fig. 6(b) (see red crosses), showing a similar distribution to the THA data set. These results suggest the generalization capability of the proposed uncertainty metric between different data sets.

Fig. 8: Evaluation of generalization capability of Bayesian U-Net on the TCIA soft tissue sarcoma data set. (a) Scatter plot of DC as a function of predictive Dice coefficient (PDC). (b) Representative results for one patient (#05). (c) One patient with partial segmentation failures (#07), from left to right: the input CT volume, the predicted label and uncertainty, and the surface distance error of the gluteus medius muscle. The predictive structure-wise variance (PSV) of the gluteus medius muscle and PDC are reported, respectively. Higher uncertainty in tumor regions was observed in Patient #07 where the segmentation failed (shown in dark red in the surface distance error).

III-C Bayesian active learning

To investigate one application scenario of the uncertainty estimates, we tested an active-learning method in a simulated environment using the 20 fully labeled clinical CTs. The experiment assumed that 15 CTs consisting of 95% unlabeled slices and 5% labeled slices were available. Then, from each CT, 5% of the total number of slices was taken from the unlabeled slices, manually or automatically labeled, and added to the labeled data in one step, which we call one "acquisition step." We iterated the acquisition step 20 times. The remaining 5 CTs were used as the test data set. In each acquisition step, Bayesian U-Net was initialized and trained using Adam [16] at a learning rate of 0.0001 with an early stopping scheme. Note that each axial CT slice was downsampled in this experiment due to the limitation of training time. Data augmentation was purposely not performed in order to investigate the behavior of the model as a pure function of the amount of training data.

For a quantitative evaluation of the manual labor, we defined a metric that we call the manual annotation cost (MAC) as

\mathrm{MAC} = \frac{|Q|}{|A|}    (8)

where A is the added label image and |Q| denotes the number of pixels to be queried in A.
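Given the boolean query mask produced by the pixel selection step, MAC for one added label image reduces to a simple fraction (a sketch with our own names):

```python
import numpy as np

def manual_annotation_cost(query_mask):
    # Eq. (8): queried pixels / all pixels of the added label image;
    # fully-manual selection queries every pixel, giving MAC = 1
    return float(query_mask.sum()) / query_mask.size

mask = np.zeros((10, 10), dtype=bool)
mask[:2, :5] = True                      # 10 of 100 pixels queried
cost = manual_annotation_cost(mask)      # -> 0.1
```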

We compared the segmentation accuracy at each acquisition step with the following three pixel selection methods: (1) fully-manual selection [39], in which the user annotates all pixels in the uncertain slices; (2) random selection, in which the user annotates randomly selected pixels; and (3) semi-automatic selection (the proposed method), in which the user annotates only the uncertain pixels. In order to perform a fair comparison, we set the experimental condition so that the numbers of pixels annotated in (2) and (3) were equal. Note that fully-manual selection results in MAC = 1.

Figure 9(a) shows the mean DC over all muscles and patients as a function of the acquisition step (note that each acquisition step adds 5% of the total training data set, resulting in 100% after 20 steps). The proposed semi-automatic selection was tested with three different uncertainty thresholds \theta in Eq. (7). For a larger \theta, we trust a larger number of pixels in the automatically estimated labels, and only the highly uncertain pixels are queried to the experts. For a smaller \theta, we trust fewer pixels in the automatically estimated labels and more pixels are queried to the experts, resulting in a higher MAC. Figure 9(b) shows the MAC metric at each acquisition step. First, we observed a trend that the accuracy increases as the training data set grows with any selection method. The random selection method plateaued at around a DC of 0.843, while the other two methods kept improving. The DC of the proposed method with an appropriately chosen threshold exceeded that of random selection by about 0.03, which was close to the fully-manual selection method. When comparing the three thresholds in the proposed method, a larger number of pixels was queried (i.e., a larger MAC) when the threshold was low; however, the method did not reach the DC value achieved by fully-manual selection when the threshold was too low. MAC gradually decreases over the acquisition steps, because the overall certainty increases as the training data set grows. In this experiment, we concluded that the intermediate threshold offered a good trade-off between achievable accuracy and annotation cost, resulting in an approximately 90-fold cost reduction compared to fully-manual selection (i.e., a median MAC of 0.0108 over all 19 acquisition steps). The median MACs for the other two thresholds were 0.0484 and 0.0013, respectively.

Fig. 9: Results of the active-learning experiment using the proposed pixel selection method. (a) The plot of mean DC over individual structures and patients as a function of the acquisition step for different pixel selection methods. (b) The box and whisker plots of manual annotation cost at each acquisition step.
Fig. 10: Examples of query pixels to be manually annotated (colored by yellow) and their manual annotation cost (MAC).

IV Discussion and Conclusion

We presented the performance of CNNs for segmentation of 19 muscles of the lower extremity in clinical CT. The findings in this paper are three-fold. The proposed Bayesian U-Net 1) significantly improved segmentation accuracy over the state-of-the-art hierarchical multi-atlas method and demonstrated high generalization capability to unseen test data sets, 2) provided prediction of the quantitative accuracy measure, namely the Dice coefficient, without using the ground truth, and 3) enabled considerable reduction of the manual annotation cost within an active-learning framework.

The LOOCV using 20 fully annotated CTs showed an average DC of 0.891 ± 0.016 and ASD of 0.994 ± 0.230 mm, which were significant improvements when compared with the state-of-the-art method. The muscles that exhibited an ASD larger than 3 mm with Bayesian U-Net (see Fig. 4) were the piriformis (hip #08) of Patient #19, the psoas major (hip #09) of Patients #09, #17, and #20, the semitendinosus (thigh #07) of Patient #04, and the tensor fasciae latae (thigh #08) of Patient #06. After careful verification of these 6 muscles, we found one error in the expert's ground-truth trace (the thigh #07 muscle of Patient #04). The accuracy and inter-/intra-operator variability of the manual trace is a frequently raised question. In our case, several rounds of inspections and reviews among the expert group were performed on the manual traces, especially on the muscles whose boundaries are difficult to define even for experts, until consensus among the expert group was established. We consider that the proposed Bayesian U-Net learned the traces generated by experts specialized in musculoskeletal anatomy and reproduced, with high fidelity, the trace that would have been created by an expert in the same group. The muscles with higher average ASD (hips #03, #08, #09, thighs #07, #08) had particularly obscure boundaries in the axial plane, and an especially large error among them was observed in the muscles elongated in the z- (superior-inferior) direction (hip #09 and thigh #07). On the other hand, the thigh muscles showed a notably higher error with the multi-atlas method than with Bayesian U-Net, because the thigh muscles, especially the gracilis (thigh #03), a thin muscle located near the skin surface in the lower thigh region, exhibited a larger shape variation than the hip muscles due to the variation in the hip joint position. These muscles are susceptible to errors in a registration that relies on spatial smoothness in 3D, whereas our 2D slice-by-slice segmentation approach was not affected.
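The two evaluation metrics used above can be made concrete with a short sketch. The Dice coefficient follows its standard definition [5]; the ASD implementation below (surface extraction by erosion, symmetric distance via distance transforms) is one common way to compute it, offered here as an illustration rather than the paper's exact evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(a, b):
    """Dice coefficient between two binary masks [5]."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def _surface(mask):
    # Surface voxels: mask voxels removed by one erosion step.
    return mask & ~binary_erosion(mask)

def average_symmetric_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """ASD in mm: mean, over the surfaces of both masks, of the distance
    from each surface voxel to the closest voxel on the other surface."""
    sa, sb = _surface(a), _surface(b)
    d_to_b = distance_transform_edt(~sb, sampling=spacing)[sa]
    d_to_a = distance_transform_edt(~sa, sampling=spacing)[sb]
    return (d_to_b.sum() + d_to_a.sum()) / (d_to_b.size + d_to_a.size)

# Toy example: two 10-voxel cubes offset by one voxel along z.
a = np.zeros((20, 20, 20), dtype=bool); a[5:15, 5:15, 5:15] = True
b = np.zeros_like(a); b[6:16, 5:15, 5:15] = True
print(round(dice_coefficient(a, b), 3))  # 0.9
```

Passing the CT voxel spacing through `sampling` is what makes the distance come out in millimeters rather than voxels.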

As for the uncertainty metric for the prediction of accuracy, the high correlation between uncertainty and the Dice coefficient in both the THA and TCIA data sets suggested its potential use as a performance indicator. The only muscle with low correlation was the obturator internus (hip #06). A possible reason for the low correlation is that non-epistemic variability became dominant. The obturator internus is a small muscle connecting the internal surface of the obturator membrane of the pelvis and the medial surface of the greater trochanter of the femur, running almost parallel to the axial plane (see Fig. 2). We believe these properties made manual tracing challenging, so that the variability in the ground truth (so-called aleatoric variability) became dominant. The psoas major (hip #09) and the tensor fasciae latae (thigh #08) had major failures in a few cases, but the uncertainty metric correctly indicated these failures. Valindria et al. [35] also attempted to predict segmentation performance without the ground truth, by using the predicted segmentation of a new image as a surrogate ground truth for training what they call a reverse classification accuracy (RCA) classifier. They tried three different classifiers as the segmentation and RCA classifiers, and investigated the best combination exhaustively. Extensive comparative studies with our approach would be intriguing, but they are beyond the scope of this paper. Nevertheless, our approach, in which MC dropout sampling represents the epistemic uncertainty in the model, is a more straightforward strategy for performance prediction that does not require an exhaustive search. Uncertainty metrics were recently investigated by Eaton-Rosen et al. [6] in a binary segmentation problem of brain tumors, specifically for quantifying the uncertainty in volume measurement. Nair et al. [22] also explored uncertainty in binary segmentation for lesion detection in multiple sclerosis. Our present work is distinct from these previous works in that we demonstrated a correlation between the structure-wise uncertainty metric, namely the MC sample variance, and the Dice coefficient of each structure.
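The structure-wise uncertainty metric discussed here, the MC sample variance averaged over a predicted structure, can be sketched as follows. This is a minimal illustration that works on precomputed probability maps; the function name, the 0.5 threshold, and the toy inputs are our own assumptions, and in practice the T samples would come from forward passes of the Bayesian U-Net with dropout kept active at test time [7].

```python
import numpy as np

def structure_uncertainty(prob_samples, threshold=0.5):
    """Structure-wise uncertainty from MC dropout: the MC sample variance
    of the foreground probability, averaged over the pixels predicted as
    the structure.

    prob_samples: array (T, H, W) of foreground probabilities for one
    structure, one map per dropout-enabled forward pass.
    """
    mean_prob = prob_samples.mean(axis=0)   # predictive mean
    pixel_var = prob_samples.var(axis=0)    # per-pixel MC sample variance
    predicted = mean_prob > threshold       # predicted structure mask
    return float(pixel_var[predicted].mean()) if predicted.any() else 0.0

# Toy check: consistent samples yield low uncertainty, noisy ones high.
rng = np.random.default_rng(1)
consistent = np.clip(0.9 + 0.01 * rng.standard_normal((20, 32, 32)), 0, 1)
noisy = rng.uniform(0.3, 1.0, size=(20, 32, 32))
print(structure_uncertainty(consistent) < structure_uncertainty(noisy))  # True
```

Collecting this scalar per structure and per case, the correlation with the Dice coefficient reported above corresponds to `np.corrcoef(uncertainties, dices)[0, 1]` over the test cases.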

Active learning, in which the algorithm interactively queries the user to obtain the desired ground truth for new data points, is an extensively studied topic, including the efficient use of non-expert knowledge from the crowd [19] and the efficient reduction of the manual annotation cost borne by the expert [39]. We enhanced the approach developed by Yang et al. [39], which selects the new images for which the expert's ground truth is most effective in improving accuracy. Our proposal further reduces the annotation cost by focusing on which pixels to annotate, resulting in an approximately 90-fold cost reduction. The idea of pixel selection is similar to that proposed in [19], in which only super-pixels with high uncertainty are manually annotated. In summary, the proposed method combines slice- [39] and pixel- [19] selection methods based on Bayesian neural networks [7]. Our algorithm introduces one additional hyperparameter, the uncertainty threshold that determines whether a pixel is queried. We experimentally demonstrated that this threshold determines the trade-off between the manual annotation cost, the learning speed, and the final achievable accuracy. The optimum choice of the threshold for a new data set requires further theoretical and experimental consideration, although the rate of initial improvement in accuracy during the first few acquisition steps would provide indications about the behavior in further steps, as shown in Fig. 9(a). Stopping criteria in active learning have been discussed in [28]. The ideal criterion is when the "cost" caused by the error (e.g., an incorrect diagnosis) becomes less than the annotation cost. However, in practice, the "cost" caused by the error is difficult to estimate, so active learning is usually stopped when the learning curve stalls.

Our target application mainly focuses on personalization of biomechanical simulation. Volumetric muscle modeling, using a finite element model [37, 38] or a simpler shape-deformation approximation for real-time applications such as [21], has shown advantages in the accurate prediction of muscle behavior. In addition, Otake et al. [24] demonstrated the potential for estimating the patient-specific muscle fiber structure from a clinical CT, provided that the segmentation of each muscle is available. The proposed accurate automated segmentation method enhances this volumetric modeling in clinical routine as well as in studies using large-scale CT databases, for applications such as statistical analysis of human biomechanics for ergonomic design. The patient-specific geometry of skeletal muscles has also been studied in clinical diagnosis and monitoring of muscle atrophy or fatty degeneration caused by or associated with conditions such as trauma, aging, disuse, malnutrition, and diabetes [26, 33], where the muscles were delineated from the images manually by a single operator. Automated segmentation is also advantageous in reducing the manual labor and inter-operator variability in these analyses.

In general, CT is superior to MRI in terms of acquisition speed. The CT scanning protocol that we used for the lower extremity took less than 30 seconds, whereas a typical MRI scan of the same range with the same spatial resolution would require more than 10 minutes. The fast scan is especially advantageous in orthopedic surgery, where biomechanical simulation is most helpful, for obtaining the entire muscle shapes from origin to insertion in the thigh region. Nonetheless, applying the proposed method to MR images would also be achievable, for example, by using an algorithm such as CycleGAN [11, 41] to synthesize a CT-like image from the MR image.

One limitation of this study is the limited variation in the training and test data sets. The THA data set contains only female patients who underwent THA surgery, which limits the variation in muscle size and fat content. Although the TCIA data set contains male patients and a larger variation in terms of pathology, the ground truth label is available only for the gluteus medius muscle. Another limitation of the active-learning method is that the experiment was only a simulation. Although it illustrated the potential usefulness of the proposed uncertainty metric, and its dependence on the uncertainty threshold, in one type of active-learning framework, further investigation with larger labeled and unlabeled CT databases would be preferable to evaluate the effectiveness of the proposed method in a more realistic clinical scenario. An investigation of an effective learning algorithm that exploits information from a large-scale unlabeled data set without requiring iterative, time-consuming manual annotation is also part of our future work.

Algorithm: Slice selection by similarity-based clustering
Input: unlabeled data set; uncertain slices. Output: representative slices.
1: while the desired number of representative slices has not been reached do ▷ select representative slices
2:     for each candidate among the uncertain slices not yet selected do ▷ find the next best representative image, i.e., the one maximizing the similarity between the (tentative) representative set and the unlabeled data set
3:         for each image in the unlabeled data set do ▷ calculate the similarity between the tentative representative set and that image
4:             record the maximum similarity between the image and any slice in the tentative representative set
5:         sum these maxima to obtain the coverage achieved by the candidate
6:     add the candidate with the largest coverage to the representative slices
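The greedy loop of the slice-selection algorithm above can be sketched in a few lines. This is an illustrative sketch, assuming a precomputed slice-similarity matrix (e.g., cosine similarity of deep features, as in Yang et al. [39]); the function and variable names are our own and not from the paper.

```python
import numpy as np

def select_representatives(similarity, uncertain_ids, n_rep):
    """Greedy similarity-based clustering: repeatedly pick the uncertain
    slice that most increases the coverage of the unlabeled set, where
    coverage is the sum, over unlabeled slices, of each slice's maximum
    similarity to any already-selected representative.

    similarity: (n_uncertain, n_unlabeled) precomputed similarity matrix.
    """
    selected = []
    best_cover = np.zeros(similarity.shape[1])  # running per-slice max similarity
    for _ in range(n_rep):
        remaining = [i for i in uncertain_ids if i not in selected]
        # Coverage achieved if candidate i were added to the tentative set.
        gains = [np.maximum(best_cover, similarity[i]).sum() for i in remaining]
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        best_cover = np.maximum(best_cover, similarity[best])
    return selected

# Tiny example: slice 1 alone covers two unlabeled slices, so it is picked
# first; slice 0 then adds the remaining coverage.
sim = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 1.0],
                [0.5, 0.5, 0.5]])
print(select_representatives(sim, [0, 1, 2], n_rep=2))  # [1, 0]
```

Because coverage is a monotone submodular set function, this greedy choice is the standard approximation used for such representative-selection problems.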

References

  • [1] J. E. Adams. Quantitative computed tomography. European journal of radiology, 71(3):415–424, 2009.
  • [2] S. Andrews and G. Hamarneh. The generalized log-ratio transformation: learning shape and adjacency priors for simultaneous thigh muscle segmentation. IEEE Trans. Med. Img., 34(9):1773–1787, 2015.
  • [3] P.-Y. Baudin, N. Azzabou, P. G. Carlier, and N. Paragios. Prior knowledge, random walks and human skeletal muscle segmentation. In Proc. Int. Conf. MICCAI, pages 569–576. Springer, 2012.
  • [4] Chèze, Laurence, Moissenet, Florent, and Dumas, Raphaël. State of the art and current limits of musculo-skeletal models for clinical applications. Mov Sport Sci/Sci Mot, (90):7–17, 2015.
  • [5] L. R. Dice. Measures of the amount of ecologic association between species. Ecology, 26(3):297–302, 1945.
  • [6] Z. Eaton-Rosen, F. Bragman, S. Bisdas, S. Ourselin, and M. J. Cardoso. Towards safe deep learning: accurately quantifying biomarker uncertainty in neural network predictions. In Proc. Int. Conf. MICCAI, pages 691–699. Springer, 2018.
  • [7] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proc. ICML, pages 1050–1059, 2016.
  • [8] G. G. Handsfield, C. H. Meyer, J. M. Hart, M. F. Abel, and S. S. Blemker. Relationships of 35 lower limb muscles to height and body mass quantified using MRI. J. Biomech., 47(3):631–638, 2014.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proc. ICCV, pages 1026–1034, 2015.
  • [10] Y. Hiasa, Y. Otake, S. Nakatani, H. Harada, S. Kanaji, Y. Kakeji, et al. Surgical tools segmentation in laparoscopic images using convolutional neural networks with uncertainty estimation and semi-supervised learning. In Proc. Int. Conf. CARS, pages 14–15, 2018.
  • [11] Y. Hiasa, Y. Otake, M. Takao, T. Matsuoka, K. Takashima, A. Carass, et al. Cross-modality image synthesis from unpaired data using CycleGAN. In Proc. Int. Workshop SASHIMI, pages 31–41. Springer, 2018.
  • [12] J. E. Iglesias and M. R. Sabuncu. Multi-atlas segmentation of biomedical images: A survey. Medical Image Analysis, 24(1):205–219, 2015.
  • [13] A. Karlsson, J. Rosander, T. Romu, J. Tallberg, A. Grönqvist, M. Borga, et al. Automatic and quantitative assessment of regional muscle volume by multi-atlas segmentation using whole-body water–fat MRI. J. Mag. Res. Imaging, 41(6):1558–1569, 2015.
  • [14] A. Kendall, V. Badrinarayanan, and R. Cipolla. Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680, 2015.
  • [15] A. Kendall and Y. Gal. What uncertainties do we need in Bayesian deep learning for computer vision? In Proc. NIPS, pages 5574–5584, 2017.
  • [16] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [17] H. Lee, F. M. Troschel, S. Tajmir, G. Fuchs, J. Mario, F. J. Fintelmann, et al. Pixel-level deep segmentation: artificial intelligence quantifies muscle on computed tomography for body morphometric analysis. J. Digit. Imaging, 30(4):487–498, 2017.
  • [18] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proc. CVPR, pages 3431–3440, 2015.
  • [19] L. Maier-Hein, T. Ross, J. Gröhl, B. Glocker, S. Bodenstedt, C. Stock, et al. Crowd-algorithm collaboration for large-scale endoscopic image annotation with confidence. In Proc. Int. Conf. MICCAI, pages 616–623. Springer, 2016.
  • [20] F. Moissenet, L. Modenese, and R. Dumas. Alterations of musculoskeletal models for a more accurate estimation of lower limb joint contact forces during normal gait: A systematic review. J. Biomech, 63:8–20, 2017.
  • [21] A. Murai, Y. Endo, and M. Tada. Anatomographic volumetric skin-musculoskeletal model and its kinematic deformation with surface-based SSD. IEEE Robotics and Automation Letters, 1(2):1103–1109, 2016.
  • [22] T. Nair, D. Precup, D. L. Arnold, and T. Arbel. Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation. In Proc. Int. Conf. MICCAI, pages 655–663. Springer, 2018.
  • [23] T. Ogawa, M. Takao, Y. Otake, F. Yokota, H. Hamada, T. Sakai, et al. Validation study of the CT-based cross-sectional evaluation of muscular atrophy and fatty degeneration around the pelvis and the femur. J. Orthop. Sci., 2019 In press.
  • [24] Y. Otake, M. Takao, N. Fukuda, S. Takagi, N. Yamamura, N. Sugano, et al. Registration-based patient-specific musculoskeletal modeling using high fidelity cadaveric template model. In Proc. Int. Conf. MICCAI, pages 703–710. Springer, 2018.
  • [25] A. Rajagopal, C. L. Dembia, M. S. DeMers, D. D. Delp, J. L. Hicks, and S. L. Delp. Full-body musculoskeletal model for muscle-driven simulation of human gait. IEEE Trans. Biomed. Engineering, 63(10):2068–2079, 2016.
  • [26] A. Rasch, A. Byström, N. Dalen, N. Martinez-Carranza, and H. Berg. Persisting muscle atrophy two years after replacement of the hip. J. Bone Jt. Surg., Br. vol., 91(5):583–588, 2009.
  • [27] A. Seth, J. L. Hicks, T. K. Uchida, A. Habib, C. L. Dembia, J. J. Dunne, et al. OpenSim: Simulating musculoskeletal dynamics and neuromuscular control to study human and animal movement. PLoS Comp. Biol., 14(7):e1006223, 2018.
  • [28] B. Settles. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
  • [29] L. Shu, K. Yamamoto, J. Yao, P. Saraswat, Y. Liu, M. Mitsuishi, et al. A subject-specific finite element musculoskeletal framework for mechanics analysis of a total knee replacement. J. Biomech., 77:146–154, 2018.
  • [30] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014.
  • [31] M. Styner, J. Lee, B. Chin, M. Chin, O. Commowick, H. Tran, et al. 3D segmentation in the clinic: a grand challenge II: MS lesion segmentation. Midas Journal, 2008:1–6, 2008.
  • [32] F. Taddei, S. Martelli, G. Valente, A. Leardini, M. G. Benedetti, M. Manfrini, et al. Femoral loads during gait in a patient with massive skeletal reconstruction. Clin. Biomech., 27(3):273–280, 2012.
  • [33] K. Uemura, M. Takao, T. Sakai, T. Nishii, and N. Sugano. Volume increases of the gluteus maximus, gluteus medius, and thigh muscles after hip arthroplasty. J. Arthroplasty, 31(4):906–912, 2016.
  • [34] E. J. Ulbrich, D. Nanz, O. D. Leinhard, M. Marcon, and M. A. Fischer. Whole-body adipose tissue and lean muscle volumes and their distribution across gender and age: Mr-derived normative values in a normal-weight swiss population. Magn. Reson. Med., 79(1):449–458, 2018.
  • [35] V. V. Valindria, I. Lavdas, W. Bai, K. Kamnitsas, E. O. Aboagye, A. G. Rockall, et al. Reverse classification accuracy: Predicting segmentation performance in the absence of ground truth. IEEE Trans. Med. Imag., 36(8):1597–1606, 2017.
  • [36] M. Vallières, C. R. Freeman, S. R. Skamene, and I. El Naqa. A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities. Phys. Med. Biol., 60(14):5471, 2015.
  • [37] J. D. Webb, S. S. Blemker, and S. L. Delp. 3D finite element models of shoulder muscles for computing lines of actions and moment arms. Comp. Meth. Biomech. Biomed. Eng., 17(8):829–837, 2014.
  • [38] N. Yamamura, J. L. Alves, T. Oda, R. Kinugasa, and S. Takagi. Effect of tendon stiffness on the generated force at the achilles tendon-3d finite element simulation of a human triceps surae muscle during isometric contraction. J. Biomech. Sci. Eng., 9(3):13–00294, 2014.
  • [39] L. Yang, Y. Zhang, J. Chen, S. Zhang, and D. Z. Chen. Suggestive annotation: A deep active learning framework for biomedical image segmentation. In Proc. Int. Conf. MICCAI, pages 399–407. Springer, 2017.
  • [40] F. Yokota, Y. Otake, M. Takao, T. Ogawa, T. Okada, N. Sugano, et al. Automated muscle segmentation from CT images of the hip and thigh using a hierarchical multi-atlas method. Int J Comput Assist Radiol Surg, 13(7):977–986, 2018.
  • [41] Y. Zhang, L. Yang, J. Chen, M. Fredericksen, D. P. Hughes, and D. Z. Chen. Deep adversarial networks for biomedical image segmentation utilizing unannotated images. In Proc. Int. Conf. MICCAI, pages 408–416. Springer, 2017.