Cascaded Framework for Automatic Evaluation of Myocardial Infarction from Delayed-Enhancement Cardiac MRI

12/29/2020, by Jun Ma et al., Nanjing University

Automatic evaluation of the myocardium and pathology plays an important role in the quantitative analysis of patients suffering from myocardial infarction. In this paper, we present a cascaded convolutional neural network framework for myocardial infarction segmentation and classification in delayed-enhancement cardiac MRI. Specifically, we first use a 2D U-Net to segment the whole heart, including the left ventricle and the myocardium. Then, we crop the whole heart as a region of interest (ROI). Finally, a new 2D U-Net is used to segment the infarction and no-reflow areas within the whole-heart ROI. The segmentation method can also be applied to the classification task, where segmentation results containing infarction or no-reflow areas are classified as pathological cases. Our method took second place in the MICCAI 2020 EMIDEC segmentation task, with Dice scores of 86.28%, 62.24%, and 77.76% for the myocardium, infarction, and no-reflow areas, respectively, and first place in the classification task with an accuracy of 92%.


1 Introduction

Quantitative assessment of myocardial viability is essential in the diagnosis and treatment management of patients suffering from myocardial infarction (MI). Cardiac magnetic resonance (CMR) imaging is particularly well suited to providing anatomical and functional information about the heart; in particular, the delayed-enhancement (also known as late gadolinium enhancement, LGE) CMR sequence visualizes MI.

One of the important tasks is to segment the myocardium into different regions, including normal myocardium, infarction, and no-reflow, from delayed-enhancement CMR images. Manual annotation is generally time-consuming and tedious, and is subject to inter- and intra-observer variation. Thus, a fully automatic segmentation method is highly desirable in clinical practice. Figure 1 presents images from different myocardial infarction cases together with the corresponding annotations of the left ventricle, healthy myocardium, infarction, and no-reflow. It can be observed that the intensity appearance varies significantly among cases, and that both the infarction and no-reflow areas have ambiguous boundaries and low contrast. This makes automatic segmentation very challenging.

Figure 1: Visual examples of different myocardial infarction delayed-enhancement cardiac MR images. The first row shows the original images and the second row shows the corresponding ground truth. In the second row, red, green, blue, and yellow denote the left ventricle, healthy myocardium, infarction, and no-reflow, respectively.

To the best of our knowledge, most CMR segmentation studies focus on left ventricle/atrium, right ventricle, and myocardium segmentation [1, 12, 2, 7], while little work has been done on fully automatic cardiac pathology segmentation [13, 6, 5, 11, 8]. To advance the development of myocardial infarction image analysis, a joint segmentation and classification challenge, automatic Evaluation of Myocardial Infarction from Delayed-Enhancement Cardiac MRI (EMIDEC, http://emidec.com/), was organized at MICCAI 2020.

The two main objectives of the EMIDEC challenge are, first, to classify normal and pathological cases from the clinical information, with or without DE-MRI, and, second, to automatically detect the different relevant areas (the myocardial contours, the infarcted area, and the permanent microvascular obstruction area, i.e., the no-reflow area) from a series of short-axis DE-MRI slices covering the left ventricle. The segmentation allows a quantification of the MI, either in absolute value (mm³) or as a percentage of the myocardium; a small illustrative sketch of such a quantification is given below. This paper presents the details of our method for the EMIDEC challenge.
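
To make the quantification step concrete, the following is a minimal sketch of how an MI volume (in mm³) and its percentage of the myocardium can be computed from a label map. The label values and the helper name quantify_infarction are hypothetical illustrations, not the challenge's official evaluation code.

```python
import numpy as np

def quantify_infarction(segmentation, spacing_mm, labels=(2, 3, 4)):
    """Quantify MI from a label map as an absolute volume (mm^3) and as a
    percentage of the myocardium.

    Assumed (hypothetical) label convention: 2 = healthy myocardium,
    3 = infarction, 4 = no-reflow; spacing_mm is the voxel spacing (z, y, x).
    """
    myo_label, infarct_label, no_reflow_label = labels
    voxel_volume = float(np.prod(spacing_mm))  # mm^3 per voxel

    infarct_volume = np.sum(segmentation == infarct_label) * voxel_volume
    no_reflow_volume = np.sum(segmentation == no_reflow_label) * voxel_volume
    # Total myocardium is taken here as healthy myocardium plus both lesion areas.
    myocardium_volume = np.sum(np.isin(segmentation, labels)) * voxel_volume

    mi_percentage = 100.0 * (infarct_volume + no_reflow_volume) / max(myocardium_volume, 1e-8)
    return infarct_volume, no_reflow_volume, mi_percentage
```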

2 Method

This paper focuses on segmenting both healthy and pathological myocardium from delayed-enhancement cardiac MRI. One of the main challenges is how to exploit rich and reliable pathological as well as morphological information about the myocardium. To this end, we design a cascaded framework that comprises two 2D U-Nets: the first segments the left ventricle and myocardium, and the second segments the pathological regions. Figure 2 presents the whole pipeline of the proposed method. Specifically, the proposed method contains three steps (in steps 1 and 3 the networks are trained end-to-end, while the whole framework is not end-to-end); a schematic code sketch of the cascade is given after the list:

  • Step 1 (whole LV segmentation). Train a 2D U-Net [10] on the original CMR images to segment the whole LV (including left ventricular blood pool and myocardium);

  • Step 2 (creating ROI). Crop LV region of interest (ROI) from the original CMR images based on the segmentation results in step 1. In this way, the unrelated background can be excluded;

  • Step 3 (infarction and no-reflow segmentation). Train a new 2D U-Net to segment the infarction and no-reflow areas from the ROI images.
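
The following is a minimal sketch of the three-step inference cascade, assuming hypothetical helper functions segment_whole_lv and segment_pathology stand in for the two trained 2D U-Nets; it is illustrative only and not the exact implementation.

```python
import numpy as np

def crop_to_roi(volume, mask, margin=10):
    """Crop the volume around the foreground of a binary mask with a safety margin."""
    coords = np.argwhere(mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    slicer = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[slicer], slicer

def cascaded_inference(image, segment_whole_lv, segment_pathology):
    """Whole-LV segmentation -> ROI cropping -> pathology segmentation."""
    # Step 1: coarse segmentation of the whole LV (blood pool + myocardium).
    whole_lv_mask = segment_whole_lv(image)        # first 2D U-Net (assumed interface)
    # Step 2: crop a region of interest around the whole LV to discard background.
    roi_image, slicer = crop_to_roi(image, whole_lv_mask)
    # Step 3: segment infarction and no-reflow inside the ROI.
    roi_pathology = segment_pathology(roi_image)   # second 2D U-Net (assumed interface)
    # Paste the ROI prediction back into the full-size volume.
    pathology_mask = np.zeros_like(whole_lv_mask)
    pathology_mask[slicer] = roi_pathology
    return whole_lv_mask, pathology_mask
```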

Figure 2: Pipeline of the proposed method. We first use a 2D U-Net to segment the whole LV (left ventricle), including the LV blood pool and myocardium. Then, we crop the LV region of interest (ROI). Finally, a new 2D U-Net is used to segment the infarction and no-reflow areas.

3 Dataset and Training Protocols

The EMIDEC challenge dataset provides 100 cases for training and 50 cases for testing [4]. For the classification task, an additional 50 testing cases are provided, and participants are required to submit their classification results within one hour. Every training and testing case represents a DE-MRI exam of the left ventricle. An exam (i.e., a case) consists of a series of 5 to 10 short-axis slices covering the left ventricle from the base to the apex. The ground truth (contours of the relevant areas) is provided with the training dataset. The training set with full ground truth comprises 100 cases (67 pathological cases, 33 normal cases) randomly selected among the 150 subjects. The testing set consists of data from 50 subjects (33 pathological cases, 17 normal cases), all different from those in the training set.

During preprocessing, we apply z-score normalization to the image intensities and resample all images to the same target spacing. For the images, we use third-order spline interpolation for in-plane voxels and nearest-neighbor interpolation for out-of-plane voxels. For the ground truth, we convert the labels to one-hot encoding and apply linear interpolation for in-plane voxels and nearest-neighbor interpolation for out-of-plane voxels. We employ nnU-Net [3] as the main network without any modification of the architecture. Since the CMR data have a large slice thickness, a 2D U-Net is more suitable for this task. The first U-Net is trained with a batch size of 16 and the second U-Net with a batch size of 32. The loss function is the sum of the Dice loss [9] and the cross-entropy loss; a minimal sketch of this combined loss is given below. We use stochastic gradient descent with momentum to optimize the networks. Each model is trained on a TITAN V100 GPU. During testing, we use a five-fold ensemble to predict each testing case.
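
As a concrete illustration, here is a minimal PyTorch-style sketch of the combined loss (soft Dice plus cross entropy). The Dice term follows the V-Net formulation [9]; the exact implementation inside nnU-Net may differ (e.g., deep supervision), so this is an assumption-laden sketch rather than the framework's code.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-5):
    """Sum of soft Dice loss and cross-entropy loss.

    logits: (N, C, H, W) raw network outputs; target: (N, H, W) integer labels.
    """
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()

    # Soft Dice averaged over classes.
    dims = (0, 2, 3)
    intersection = torch.sum(probs * one_hot, dims)
    cardinality = torch.sum(probs + one_hot, dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    dice_loss = 1.0 - dice.mean()

    ce_loss = F.cross_entropy(logits, target)
    return dice_loss + ce_loss
```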

4 Results and Discussion

4.1 Cross-validation segmentation results

Table 1 presents the quantitative cross-validation results of the first U-Net. The U-Net achieves very high accuracy for the whole LV, which ensures that the cropped ROI covers most of the LV as well as the lesions.

Metrics     Left Ventricle (LV)   Myocardium (Myo)   Whole LV (LV+Myo)
Dice (%)    93.47 ± 2.06          85.38 ± 3.94       95.51 ± 1.83
Table 1: Quantitative segmentation results of the left ventricle, myocardium, and whole LV on the training set (mean ± standard deviation).

Table 2 shows the quantitative segmentation results of the infarction and no-reflow. The sensitivity for all lesions is significantly worse than the corresponding specificity, indicating that the predicted lesion regions are largely correct (few false positives) but that many lesions are missed by the proposed method. Figure 3 presents visualized segmentation results. We find that most of the missed infarction and no-reflow areas have low contrast and weak boundaries, which makes them very challenging to segment.

Metrics           Infarction       No-reflow        Infarction + No-reflow
Dice (%)          57.96 ± 16.64    77.19 ± 31.33    59.23 ± 17.85
Sensitivity (%)   53.61 ± 19.23    74.02 ± 34.88    54.16 ± 20.20
Specificity (%)   99.21 ± 0.62     99.94 ± 0.10     99.26 ± 0.61
Table 2: Quantitative segmentation results of the infarction and no-reflow on the training set (mean ± standard deviation).
Figure 3: Visualized segmentation results of the myocardium (green), infarction (blue), and no-reflow (yellow) areas.
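
For completeness, the following is a minimal sketch of how the Dice score, sensitivity, and specificity reported above can be computed from a pair of binary masks. It illustrates the standard definitions only and is not the official EMIDEC evaluation code.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, sensitivity, and specificity for a pair of binary masks (numpy arrays)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)

    tp = np.sum(pred & gt)    # true positives
    fp = np.sum(pred & ~gt)   # false positives
    fn = np.sum(~pred & gt)   # false negatives
    tn = np.sum(~pred & ~gt)  # true negatives

    dice = 2.0 * tp / max(2 * tp + fp + fn, 1)
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    return dice, sensitivity, specificity
```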

4.2 Testing set segmentation results

We evaluate the proposed method on the official testing set with 50 cases. Table 3 presents the quantitative results. Our method ranked second on the overall segmentation leaderboard (http://emidec.com/leaderboard).

Targets       Metrics                       Results   Subrank
Myocardium    DSC                           0.8628    2
              Volume Difference (mm³)       10153     2
              Hausdorff Distance (mm)       14.31     3
Infarction    DSC                           0.6224    3
              Volume Difference (mm³)       4874      4
              Volume Difference Ratio (%)   3.50      3
No-reflow     DSC                           0.7776    3
              Volume Difference (mm³)       829.7     2
              Volume Difference Ratio (%)   0.49      2
Table 3: Quantitative segmentation results on the testing set.

4.3 Testing set classification results

The final segmentation results can be used to classify cases as normal or pathological. In particular, if the segmentation of a case contains no lesions (infarction or no-reflow), or if the number of lesion voxels is less than 10, the case is classified as normal; otherwise, it is classified as pathological. A small sketch of this rule is given below.
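
The rule above translates directly into a few lines of code. In this sketch the label values for infarction and no-reflow are hypothetical placeholders; only the 10-voxel threshold comes from the rule described in the text.

```python
import numpy as np

# Hypothetical label convention for the segmentation output; the actual
# values depend on the EMIDEC annotation encoding.
INFARCTION, NO_REFLOW = 3, 4
LESION_VOXEL_THRESHOLD = 10  # fewer than 10 lesion voxels -> normal

def classify_case(segmentation):
    """Classify a case as 'normal' or 'pathological' from its segmentation mask."""
    lesion_voxels = np.sum(np.isin(segmentation, [INFARCTION, NO_REFLOW]))
    return "normal" if lesion_voxels < LESION_VOXEL_THRESHOLD else "pathological"
```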

The classification was organized as an on-site challenge in which the participants had one hour to run their methods on their own laptops to classify 50 additional testing cases as normal or pathological. Table 4 presents the quantitative classification results. Our method ranked first on the classification leaderboard with an accuracy of 92%.

Teams Accuracy Rank
Ma 92% 1
Lourenco et al. 82% 2
Ivantsits et al. 78% 3
Sharma et al. 62% 4
Table 4: Classification Leaderboard.

5 Conclusion

This paper presents a simple, fully automatic method for myocardium and pathology segmentation and classification from delayed-enhancement cardiac MR images. Experiments on the MICCAI 2020 EMIDEC challenge dataset show that the proposed method achieves promising results, ranking 2nd in the segmentation task and 1st in the classification task. In the future, we plan to improve the learning ability of the network so that it is more sensitive to the lesions.

Acknowledgment

The authors of this paper declare that the segmentation method implemented for participation in the EMIDEC challenge did not use any pre-trained models or any MRI datasets other than those provided by the organizers. The authors also highly appreciate the organizers of the automatic Evaluation of Myocardial Infarction from Delayed-Enhancement Cardiac MRI challenge (EMIDEC 2020) for the public dataset and for organizing this great challenge.

References

  • [1] O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, X. Yang, P. Heng, I. Cetin, K. Lekadir, O. Camara, M. A. G. Ballester, et al. (2018) Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Transactions on Medical Imaging 37 (11), pp. 2514–2525.
  • [2] C. Chen, C. Qin, H. Qiu, G. Tarroni, J. Duan, W. Bai, and D. Rueckert (2020) Deep learning for cardiac image segmentation: a review. Frontiers in Cardiovascular Medicine 7, pp. 25.
  • [3] F. Isensee, P. F. Jaeger, S. A. Kohl, J. Petersen, and K. H. Maier-Hein (2020) nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods.
  • [4] A. Lalande, Z. Chen, T. Decourselle, A. Qayyum, T. Pommier, L. Lorgis, E. de la Rosa, A. Cochet, Y. Cottin, D. Ginhac, et al. (2020) EMIDEC: a database usable for the automatic evaluation of myocardial infarction from delayed-enhancement cardiac MRI. Data 5 (4), pp. 89.
  • [5] L. Li, X. Weng, J. A. Schnabel, and X. Zhuang (2020) Joint left atrial segmentation and scar quantification based on a DNN with spatial encoding and shape attention. arXiv preprint arXiv:2006.13011.
  • [6] L. Li, F. Wu, G. Yang, L. Xu, T. Wong, R. Mohiaddin, D. Firmin, J. Keegan, and X. Zhuang (2020) Atrial scar quantification via multi-scale CNN in the graph-cuts framework. Medical Image Analysis 60, pp. 101595.
  • [7] J. Ma, J. He, and X. Yang (2020) Learning geodesic active contours for embedding object global information in segmentation CNNs. IEEE Transactions on Medical Imaging.
  • [8] J. Ma (2020) Cascaded framework with complementary CMR information for myocardial pathology segmentation. In Myocardial Pathology Segmentation Combining Multi-Sequence CMR Challenge, pp. 159–166.
  • [9] F. Milletari, N. Navab, and S. Ahmadi (2016) V-Net: fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571.
  • [10] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi (Eds.), pp. 234–241.
  • [11] S. Zhai, R. Gu, W. Lei, and G. Wang (2020) Myocardial edema and scar segmentation using a coarse-to-fine framework with weighted ensemble. In Myocardial Pathology Segmentation Combining Multi-Sequence Cardiac Magnetic Resonance Images, X. Zhuang and L. Li (Eds.), pp. 49–59.
  • [12] X. Zhuang, L. Li, C. Payer, D. Štern, M. Urschler, M. P. Heinrich, J. Oster, C. Wang, Ö. Smedby, C. Bian, et al. (2019) Evaluation of algorithms for multi-modality whole heart segmentation: an open-access grand challenge. Medical Image Analysis 58, pp. 101537.
  • [13] X. Zhuang (2018) Multivariate mixture model for myocardial segmentation combining multi-source images. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (12), pp. 2933–2946.