Myocardial Segmentation of Cardiac MRI Sequences with Temporal Consistency for Coronary Artery Disease Diagnosis

12/29/2020
by   Yutian Chen, et al.
University of Notre Dame

Coronary artery disease (CAD) is the most common cause of death globally, and its diagnosis is usually based on manual myocardial segmentation of Magnetic Resonance Imaging (MRI) sequences. As manual segmentation is tedious, time-consuming, and has low replicability, automatic myocardial segmentation using machine learning techniques has been widely explored recently. However, almost all existing methods treat the input MRI sequences independently, which fails to capture the temporal information between sequences, e.g., the shape and location information of the myocardium over time. In this paper, we propose a myocardial segmentation framework for sequences of cardiac MRI (CMR) scans that segments the left ventricular cavity, right ventricular cavity, and myocardium. Specifically, we propose to combine convolutional networks and recurrent networks to incorporate temporal information between sequences and ensure temporal consistency. We evaluated our framework on the Automated Cardiac Diagnosis Challenge (ACDC) dataset. Experiment results demonstrate that our framework can improve the segmentation accuracy by up to 2%.

1 Introduction

Coronary artery disease (CAD) is the most common cause of death globally. It affects more than 100 million people and results in about 10 million deaths each year [30]. In the United States, about 20% of those over 65 have CAD [8]. Magnetic resonance imaging (MRI) is a common tool for CAD diagnosis. With cardiac MRI (CMR), the myocardial structure and functionality can be assessed and analyzed. In particular, experienced radiologists manually perform myocardial segmentation on the CMR image sequences and measure several parameters to finally determine the diagnosis. For instance, the left and right ejection fractions (EF) and stroke volumes (SV) are widely used for cardiac function analysis [3].

Recently, automatic myocardial segmentation of CMR image sequences has attracted considerable attention in the community. On one hand, with the aging society, the number of patients with CAD has been increasing for decades [15]. On the other hand, manual myocardial segmentation is tedious, time-consuming, and has low replicability. Considering the medical cost and quality, automatic myocardial segmentation is highly desirable. However, it is a challenging task. First, there exist large shape variations in the images. Second, the labels of the noisy images have low uniformity, which degrades the training efficiency and effectiveness.

Currently, there exist two approaches for automatic myocardial segmentation. In the traditional approach [14, 9], a manually defined contour or boundary is needed for initialization. Although an automatic initialization might be achieved by some algorithms [1, 29], the segmentation performance highly relies on the initialization quality, which makes the framework lack stability. Another approach [13, 5] uses deep learning for myocardial segmentation, which does not need any initialization, and the whole process can run without manual interaction. However, these methods treat each CMR frame independently and do not exploit the temporal consistency among sequences.

In this paper, we propose to exploit temporal consistency for myocardial segmentation of CMR sequences for automatic CAD diagnosis. In particular, we propose an encoder-decoder framework combining convolutional networks and recurrent neural networks. The encoder extracts a set of features from CMR sequences, while the decoder embeds convolutional networks into recurrent networks to incorporate temporal information between CMR sequences. The contributions of our work are:

  • We proposed an encoder-decoder framework for myocardial segmentation of CMR sequences. Our framework is able to incorporate temporal information between CMR frames.

  • To further exploit the temporal consistency among frames, we adopted a bi-directional training approach which can reduce the segmentation error introduced by the first few frames in the training process.

  • We conducted comprehensive experiments on the ACDC dataset. Compared with the residual 3D U-net model of [33], our framework achieves an improvement of 1%-2% in segmentation accuracy (Dice coefficient).

2 Background

2.1 Myocardial Segmentation of CMR Image

CMR is a widely used imaging tool for the assessment of myocardial micro-circulation. It utilizes, as the imaging signal, the electromagnetic response at a characteristic frequency produced by hydrogen nuclei under a strong static magnetic field and a weak oscillating near field.

Due to its high capacity for discriminating different types of tissue, CMR is one of the most prominent standards for cardiac diagnosis through the assessment of the left and right ventricular ejection fractions (EF) and stroke volumes (SV), the left ventricular mass, and the myocardium thickness. For example, [3] obtained these parameters from CMR images: an accurate segmentation of the left ventricular cavity, right ventricular cavity, and myocardium at the end-diastolic (ED) and end-systolic (ES) frames can yield an accurate assessment of cardiac function.
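As a rough illustration of how these indices follow from the segmentation (this is not code from the cited works), the sketch below computes SV and EF from binary LV cavity masks at the ED and ES frames; the NumPy interface and the idea of deriving voxel volume from the image spacing are assumptions.

```python
import numpy as np

def cavity_volume_ml(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Volume of a binary cavity mask in millilitres (1 mL = 1000 mm^3)."""
    return float(mask.sum()) * voxel_volume_mm3 / 1000.0

def stroke_volume_and_ef(lv_ed_mask: np.ndarray, lv_es_mask: np.ndarray,
                         voxel_volume_mm3: float):
    """Stroke volume (mL) and ejection fraction from ED/ES LV cavity masks."""
    edv = cavity_volume_ml(lv_ed_mask, voxel_volume_mm3)  # end-diastolic volume
    esv = cavity_volume_ml(lv_es_mask, voxel_volume_mm3)  # end-systolic volume
    sv = edv - esv                                        # stroke volume
    ef = sv / edv                                         # ejection fraction
    return sv, ef
```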

In order to evaluate myocardial function, accurate segmentations of the left ventricular (LV) cavity, right ventricular (RV) cavity, and myocardium (MYO) need to be produced by the framework. Figure 1 shows slices of a typical CMR image of a patient at the ED frame, with and without the ground-truth mask, along each axis. The labels show the ground-truth segmentation of the different parts of the CMR image.

Figure 1: Structure illustration of a typical CMR image. The images in the first row are the slices along the z-axis, y-axis, and x-axis, respectively, from patient 001 in the ACDC dataset at the ED frame with the mask overlaid; the second row shows the corresponding raw CMR slices of patient 001.

2.2 Related Work

Myocardial segmentation of CMR sequences has the following challenges. First, the contrast between the myocardium and surrounding structures is low, as shown in Figure 1. Second, there is brightness heterogeneity in the left and right ventricular cavities due to blood flow [3]. Third, misleading structures such as the papillary muscle have the same intensity and grayscale information as the myocardium, which makes it hard to extract an accurate boundary. Existing works on myocardial segmentation follow two approaches.

The first approach is based on point distribution models (PDMs) [26]. Good examples are the active shape model (ASM) [7] and the active appearance model (AAM) [6]. The main idea of ASM is to learn patterns of variability from a training set of correctly annotated images. ASM uses principal component analysis (PCA) to build a statistical shape model from a set of training shapes, and then fits the model to an image so that the resulting shape is as similar as possible to the statistical shape learned from the training set.

[16] proposed an algorithm for myocardial and left ventricular cavity segmentation in CMR images based on an invariant-optimal-feature 3-D ASM (IOF-ASM). [28] improved the ASM so that the method can work for sparse and arbitrarily oriented CMR images. [27] proposed a new ASM that includes a measurement of reliability during the matching process to increase the robustness of the model. [23] proposed a method for applying ASM to CMR images with varying numbers of slices and for performing segmentation on arbitrary slices of CMR images with a new re-sampling strategy.

The prediction results of ASM must be constrained to certain shape variations so that the shape of the segmentation result does not deviate too far from the regular myocardium shape. Note that this is very important when artifacts and defects in the CMR image make the myocardium boundary unclear and hard to recognize. However, ASM is based on linear intensity information in the image, which is insufficient to model the appearance of CMR data with large intensity variations and large artifacts. In addition, ASM requires a manually initialized shape, and the final segmentation result is very sensitive to the shape and position of this initialization. Thus a fully automatic and non-linear model is needed.
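For concreteness, the statistical shape model at the core of ASM can be sketched as follows. This is a generic PCA construction under the assumption of already Procrustes-aligned landmarks, not code from any of the cited ASM papers.

```python
import numpy as np

def build_pca_shape_model(shapes: np.ndarray, var_kept: float = 0.98):
    """Statistical shape model from aligned training shapes.

    shapes: (n_samples, 2 * n_landmarks) array of flattened (x, y) landmark
    coordinates, assumed already aligned. Returns the mean shape, the retained
    principal modes, and their variances.
    """
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # Eigen-decomposition of the landmark covariance via SVD for stability.
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    variances = singular_values ** 2 / (shapes.shape[0] - 1)
    # Keep the smallest number of modes explaining `var_kept` of the variance.
    cum = np.cumsum(variances) / variances.sum()
    n_modes = int(np.searchsorted(cum, var_kept) + 1)
    return mean_shape, vt[:n_modes], variances[:n_modes]

def reconstruct(mean_shape, modes, b):
    """Shape generated by mode coefficients b: x = x_mean + b @ P."""
    return mean_shape + b @ modes
```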

The second approach adopts machine learning techniques to perform image segmentation. For example, [34] used a convolutional neural network for the quality assessment of CMR images. [19] used a recurrent fully convolutional network (RFCN) on stacks of 2D images for segmentation of CMR images; the recurrent network is applied along the short axis so that the continuity of spatial information along the short axis can be utilized. [24] proposed to use a dilated CNN, where each layer has the same resolution so that the localized information in the input image is not lost. [10] proposed a multi-structure segmentation for each time step of the MRI sequence and extracted domain-specific features. [25] used a simple network composed of cascaded modules of dilated convolutions with increasing dilation rate, without using concatenation or operations such as pooling that would reduce the resolution. [36] introduced a shape prior obtained from the training dataset into a 3D GridNet and employed a contour loss to improve performance on the borders of the segmentation result. [17] presented a method to guarantee the anatomical plausibility of the segmentation, such that anatomically invalid segmentation results are reduced to zero. [12] proposed a neural network with dense blocks that contain dense connections between layers, inspired by DenseNet. [2] compared the performance of 2D and 3D fully convolutional networks (FCNs) and U-nets. [4] used a multi-objective evolutionary algorithm to combine 2D and 3D FCNs and automatically search for an efficient and high-performing architecture. [31] averaged the probability maps of six different types of models and used a cyclic learning-rate schedule to improve the segmentation performance. [20] proposed a combination of rigid alignment, non-rigid diffeomorphic registration, and label fusion to increase the performance of a 3D U-net. [35] used a shape prior embedded in GridNet to reduce anatomically impossible segmentation results. [18] used a combination of 2D and 3D U-nets and proposed a new class-balanced Dice loss to make the optimization easier.

Although the above methods show great improvements in segmentation performance compared to ASM or AAM, they treat each frame independently, which makes the segmentation results of some sequences inaccurate or the overall results lack coherence.

2.3 Dataset

Figure 2: Examples of hard and easy cases of CMR image (slice taken on short-axis). The first and second columns refer to hard cases and the third and forth columns refer to easy cases.

The Automated Cardiac Diagnosis Challenge (ACDC) dataset consists of both CAD patients and healthy individuals, whose diagnosis results are extracted from clinical medical cases. There are 150 patients in total, evenly divided into five subgroups based on their diagnosis results. The five subgroups of patients have systolic heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle, and no abnormality, respectively. 50 of the patients make up the test dataset on the ACDC website, and the other 100 patients are released as the training dataset. The CMR sequences of all patients were collected by two MRI systems with different magnetic field strengths (1.5T Siemens Aera and 3.0T Siemens Trio Tim, Siemens Medical Solutions, Germany). Each frame of a patient's CMR sequence contains a series of short-axis slices covering the LV from base to apex [3]. For most patients, 28-40 consecutive frames were collected to cover the whole cardiac cycle; for some patients, 5-10 percent of the cardiac cycle is omitted.

Figure 2 shows some hard and easy cases in both the ED and ES phases of CMR images in the ACDC dataset. The hard cases usually have low contrast, blurred images, or extreme anatomical structures, while the easy cases have high contrast and fewer misleading structures with features similar to those of the LV, RV, and MYO.

3 Method

Figure 3: The proposed myocardial segmentation architecture, which contains an encoder (Res U-Net [21]) and a decoder (ConvLSTM [22]).
Figure 4: Network structure of our proposed Res U-net based encoder.
Figure 5: Illustration of our proposed bi-directional training approach.

The proposed framework for myocardial segmentation of CMR image sequences is shown in Figure 3. The input consists of a set of CMR frames from a CMR sequence. The output is the myocardial segmentation at ED and ES phases of the input.

Figure 6: Network structure of our proposed decoder. The input of the decoder is the set of features extracted by the encoder. The decoder consists of hierarchical ConvLSTMs and is able to incorporate temporal information between CMR frames.

3.1 Encoder

The encoder is based on U-net [21], which is an effective method for a broad range of medical image segmentation tasks. The network structure of the encoder is shown in Figure 4. The input of our encoder is a single-channel image corresponding to one frame in a CMR sequence. Based on the U-net, we add one residual block on each layer, which contains three convolutional layers and one shortcut path. Also, the results from each layer are combined using pointwise addition instead of concatenation. We expect that adding residual blocks to the U-net allows features to be extracted from the input CMR image without suffering from serious gradient explosion or gradient vanishing problems.

We extract four feature maps from the U-net as the output of our encoder. The four feature maps represent the probability that a voxel belongs to the background, LV, RV, or MYO, respectively.
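A minimal PyTorch sketch of the residual block described above is given below; the channel count, batch normalization, and ReLU placement are assumptions, since the description only specifies three convolutional layers and a shortcut path combined by pointwise addition.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block with three 3x3 convolutions and a shortcut path,
    merged by pointwise addition (a sketch; normalization/activation assumed)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shortcut path: pointwise addition rather than concatenation.
        return self.act(x + self.body(x))

# Example: one block applied to a frame already lifted to 64 channels.
block = ResidualBlock(64)
out = block(torch.randn(1, 64, 224, 224))
```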

3.2 Decoder

The network structure of the decoder is shown in Figure 6. It contains a hierarchical recurrent network of ConvLSTMs [32], which acts like a recurrent U-net. The output of the decoder is the myocardial segmentation of the current frame. The dashed arrows in Figure 3 depict the temporal recurrence in the decoder. We denote the features extracted by the encoder for frame t as x_t, the feature for frame t at the l-th level of the hierarchical ConvLSTM network as f_t^l, and the output of the l-th ConvLSTM layer for frame t as y_t^l. As shown in Equations (1)-(3), y_t^l depends on three variables: (1) the output of the previous ConvLSTM layer, y_t^{l-1}; (2) the extracted feature f_t^l passed from the encoder into the hierarchical ConvLSTM; and (3) the hidden state h_{t-1}^l of the same ConvLSTM layer for the previous frame.

    y_t^0 = x_t                                          (1)
    x_t^l = Concat(y_t^{l-1}, f_t^l)                     (2)
    (y_t^l, h_t^l) = ConvLSTM_l(x_t^l, h_{t-1}^l)        (3)

In Equation (3), x_t^l is the input of the ConvLSTM and h_{t-1}^l is its hidden-state input. Concat(·, ·) in Equation (2) is the concatenation of two tensors along the feature axis. For the first frame of a CMR sequence, h_0^l is a matrix of ones, which means no prior information is known.
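To make the recurrence concrete, here is a minimal PyTorch sketch of a ConvLSTM cell in the spirit of [32] and of one decoder level following Equations (2)-(3); the kernel size, gate layout, and channel bookkeeping are assumptions, not the exact published implementation.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: convolutional gates over the concatenated input and hidden state."""
    def __init__(self, in_ch: int, hidden_ch: int, kernel: int = 3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state                                       # hidden and cell state from the previous frame
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, (h, c)

def decoder_level(cell, y_prev_level, f_level, state):
    """One level of the hierarchical decoder for one frame."""
    x = torch.cat([y_prev_level, f_level], dim=1)          # Eq. (2): concat on the feature axis
    y, state = cell(x, state)                              # Eq. (3): ConvLSTM with previous-frame state
    return y, state

# Toy example for one level: 4-channel level output + 4-channel encoder feature, 8 hidden channels.
cell = ConvLSTMCell(in_ch=8, hidden_ch=8)
state = (torch.ones(1, 8, 56, 56), torch.ones(1, 8, 56, 56))   # ones for the first frame
y, state = decoder_level(cell, torch.randn(1, 4, 56, 56), torch.randn(1, 4, 56, 56), state)
```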

Figure 7: Frames of the CMR sequences from three patients. Note the brightness heterogeneity in the LV and RV in the first few frames. Using an LSTM as the decoder, the model can obtain more temporal information from the previous and future frames, resulting in more accurate segmentation.

3.3 Bi-directional Training

We notice that the prediction result of myocardium segmentation in a CMR frame is highly related to the segmentation results of the frames before and after it. The first frame of a CMR sequence will not receive enough information if we only use a forward ConvLSTM. Figure 7 shows some CMR frames from different patients in the ACDC dataset. We can see that each CMR frame is highly related to its neighboring frames. Consequently, a prediction error in the first frame due to brightness heterogeneity may propagate to the rest of the CMR frames.

Therefore, we adopted a bi-directional training approach to alleviate this problem. Specifically, we use two ConvLSTMs in our decoder. One propagates forward, from frame 1 to frame T, while the other propagates backward, from frame T to frame 1, where T is the total number of frames in one CMR sequence. Figure 5 presents the workflow of the proposed bi-directional training approach. Such an approach can better exploit the temporal information along the frames and thus improve the segmentation accuracy.
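A sketch of the bi-directional scheme is given below: the same per-frame decoding step is run forward from frame 1 to frame T and backward from frame T to frame 1, each starting from the initial state. How the two per-frame outputs are fused is not specified above, so the averaging below is an assumption for illustration.

```python
import torch

def bidirectional_decode(decode_step, features, init_state):
    """Run a per-frame decoder forward (1..T) and backward (T..1) over a sequence
    of encoder features and fuse the two outputs per frame (fusion rule assumed)."""
    T = len(features)
    forward, backward = [None] * T, [None] * T

    state = init_state
    for t in range(T):                      # forward pass: frame 1 -> T
        forward[t], state = decode_step(features[t], state)

    state = init_state
    for t in reversed(range(T)):            # backward pass: frame T -> 1
        backward[t], state = decode_step(features[t], state)

    return [(f + b) / 2 for f, b in zip(forward, backward)]

# Toy usage with an identity "decoder" just to show the call pattern.
feats = [torch.randn(1, 4, 56, 56) for _ in range(5)]
outs = bidirectional_decode(lambda x, s: (x, s), feats, init_state=None)
```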

4 Experiments

4.1 Experiment Setup

In this section, we evaluate the performance of our proposed encoder-decoder framework on the myocardial segmentation task for CMR sequences. The residual U-net (Res U-net) implementation is used as our baseline, and we compare the proposed framework (Res U-net+ConvLSTM) against it. Res U-net and ConvLSTM are implemented in PyTorch based on [11] and [32], respectively. The CMR images are resampled to a common resolution using linear interpolation. For data augmentation during training, we scaled all images by fixed factors and flipped them along two axes. During testing and validation, we did not employ any augmentation. In each iteration, a complete CMR sequence containing 28-40 frames of one patient is used for training; the batch size is set to 1, i.e., one CMR sequence of 28-40 frames is fed per iteration. We trained the encoder for 10 epochs with a learning rate of 0.0001. Then, we trained the encoder and decoder together with a learning rate of 0.0001 and a learning-rate decay of 0.7 per epoch for another 10 epochs.
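The two-stage schedule above can be written in PyTorch roughly as follows; the encoder/decoder modules are placeholders and the choice of the Adam optimizer is an assumption, while the epoch counts, the learning rate, the per-epoch decay of 0.7, and the one-sequence batches follow the description above.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the actual encoder and decoder.
encoder = nn.Conv2d(1, 4, kernel_size=3, padding=1)
decoder = nn.Conv2d(4, 4, kernel_size=3, padding=1)

def train_epoch(optimizer):
    """One pass over the training set; each iteration feeds one CMR sequence of 28-40 frames."""
    pass  # load one sequence, forward through encoder (and decoder), compute loss, backprop

# Stage 1: train the encoder alone for 10 epochs at lr = 1e-4.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
for _ in range(10):
    train_epoch(opt)

# Stage 2: train encoder and decoder jointly for 10 more epochs,
# lr = 1e-4 decayed by a factor of 0.7 per epoch.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.7)
for _ in range(10):
    train_epoch(opt)
    sched.step()
```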

We split the ACDC dataset into a training set, a validation set, and a testing set with a ratio of 7:2:1 based on patient number. Therefore, there are 70 patients in the training set, 20 patients in the validation set, and 10 patients in the testing set. The Dice coefficient and Intersection over Union (IoU) are used to evaluate the segmentation performance, defined as:

    Dice(P, G) = 2 Σ_i p_i g_i / (Σ_i p_i + Σ_i g_i)                    (4)
    IoU(P, G)  = Σ_i p_i g_i / (Σ_i p_i + Σ_i g_i - Σ_i p_i g_i)        (5)

in which P and G refer to the prediction and ground-truth masks, respectively, p_i and g_i are their voxel values, and i indexes all voxels (N voxels in total).
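As a reference implementation of Equations (4)-(5), a small PyTorch helper might look like this; the epsilon for numerical stability is an addition of ours.

```python
import torch

def dice_and_iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Dice coefficient and IoU between binary prediction and ground-truth masks."""
    p = pred.float().flatten()
    g = target.float().flatten()
    intersection = (p * g).sum()
    dice = (2 * intersection + eps) / (p.sum() + g.sum() + eps)
    iou = (intersection + eps) / (p.sum() + g.sum() - intersection + eps)
    return dice.item(), iou.item()
```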

Figure 8: Visualization of CMR image segmentation results of four different patients in both the ED phase and the ES phase. Yellow, orange, and purple areas refer to the LV, MYO, and RV, respectively. In each row, the images from left to right are the segmentation results of Res U-net, our framework, Res U-net + f-ConvLSTM, and the ground truth. The white pointers in the image mark segmentation results that are inconsistent. The figure shows that the f-ConvLSTM and bi-ConvLSTM models, which incorporate temporal information between frames, greatly reduce such inconsistent segmentation results. Also, most errors of Res U-net + f-ConvLSTM and our framework occur in hard cases such as Patient 39 in this figure, where the input CMR image has low contrast and a vague contour between the labeled tissue and the background tissue.
                                 |              ED                |              ES
                                 | LV-Dice RV-Dice MYO-Dice IoU   | LV-Dice RV-Dice MYO-Dice IoU
Res U-net [21]                   | 0.8856  0.8073  0.7178  0.5583 | 0.8050  0.6841  0.7554  0.4053
Res U-net [21] + f-ConvLSTM [22] | 0.8857  0.8082  0.7097  0.5586 | 0.8056  0.6896  0.7588  0.4186
Our framework                    | 0.8967  0.8146  0.7260  0.5587 | 0.8133  0.7080  0.7656  0.4231
Table 1: Comparison of our proposed framework against the Res U-net baseline implementation. Res U-net+f-ConvLSTM uses only a forward ConvLSTM that is trained forward from frame 1 to frame T. Res U-net+bi-ConvLSTM (our framework) trains both a forward ConvLSTM and a backward ConvLSTM, where the backward ConvLSTM processes the input CMR sequence from frame T to frame 1.
                |    Res U-net    | Res U-net+f-ConvLSTM |  Our framework
                |  IoU     Dice   |   IoU      Dice      |  IoU     Dice
Patient 16   ED | 0.5417  0.8245  |  0.5417   0.8433     | 0.5417  0.8499
             ES | 0.5729  0.8168  |  0.5729   0.8196     | 0.5729  0.8234
Patient 39   ED | 0.6771  0.8132  |  0.6771   0.8364     | 0.6771  0.8348
             ES | 0.7396  0.7556  |  0.7396   0.7688     | 0.7396  0.7587
Patient 64   ED | 0.6666  0.8537  |  0.6667   0.8653     | 0.6667  0.8616
             ES | 0.6875  0.8334  |  0.6875   0.8368     | 0.6875  0.8347
Patient 90   ED | 0.6563  0.8099  |  0.6563   0.8027     | 0.6563  0.8043
             ES | 0.6563  0.7476  |  0.6563   0.7482     | 0.6563  0.7623
Table 2: Quantitative segmentation results of different models for some frames corresponding to Figure 8.
Figure 9: Typical segmentation errors of our framework. The figure shows segmentation results for Patients 41 and 81 in both the ES and ED phases. The images in Columns 1 and 4 are the segmentation results, and the images in Columns 2 and 5 are the corresponding ground truth. The segmentation errors are usually caused by brightness heterogeneity, lack of contrast, or an improper input image due to a faulty setup of the magnetic resonance system or operator error.

4.2 Results and Discussion

Table 1 shows the results of myocardial segmentation for the LV, RV, and MYO at the end-diastolic (ED) and end-systolic (ES) phases. The Dice coefficient on each label class and the Intersection over Union are reported. Res U-net+f-ConvLSTM refers to training the proposed framework forward from frame 1 to frame T, while our framework (Res U-net+bi-ConvLSTM) refers to training in both directions. We notice that our framework outperforms the baseline implementation in all metrics for both the ED and ES frames. Specifically, our Res U-net+f-ConvLSTM implementation achieves improvements of 0.01%, 0.10%, -0.81%, 0.61%, 0.55%, and 0.30% in the Dice coefficient of the LV, RV, and MYO at the ED and ES frames, respectively. The comparison between Res U-net+f-ConvLSTM and our framework shows that adding a backward training step further increases the segmentation performance. Our framework achieves improvements of 1.11%, 0.64%, 0.82%, 0.83%, 2.39%, and 1.02% in the Dice coefficient of the LV, RV, and MYO at the ED and ES frames, respectively.

Figure 8 shows the visualization of the segmentation results of four different patients (Patient 16, Patient 39, Patient 64, and Patient 90) in both the ED phase and the ES phase by Res U-net, our framework, and Res U-net+f-ConvLSTM. In each row, the images are the results of Res U-net, our framework, Res U-net+f-ConvLSTM, the ground truth, and the raw CMR slice, respectively. From Figure 8, we can see that f-ConvLSTM and bi-ConvLSTM, which enforce temporal consistency between frames, produce fewer inconsistent segmentation results, as marked by the white arrows in the figure. However, in a few cases, the temporal consistency may not eliminate the inconsistency in the segmentation result completely. This happens when the framework recognizes a stable feature in the CMR image as an incorrect label: since the misleading structure remains in all CMR frames of the sequence, the temporal consistency provided by the LSTM cannot remove such an inconsistency. Table 2 lists the quantitative results for the ED and ES phases of Patients 16, 39, 64, and 90 corresponding to Figure 8. We can see that our proposed encoder-decoder framework tends to produce more consistent and accurate predictions than the baseline, especially in the first few frames, such as the ES phase of Patient 16, where obvious defects exist in the baseline segmentation result. Comparing Res U-net+f-ConvLSTM and our framework on segmentation boundaries, we can observe that the bi-directional training helps our framework produce more consistent results in most cases. Although in some cases the segmentation of Res U-net+f-ConvLSTM is better than that of our framework, this might be caused by the constraint from backward temporal information, which makes the segmentation less flexible. The overall performance of our framework is superior in terms of all the metrics.

It can be seen that the Dice coefficients of the ED phase are usually higher than those of the ES phase. However, our framework achieves higher performance on both phases compared with the Res U-net implementation.

Note that the work in [33] used a class-balanced loss and transfer learning to improve the performance of a residual 3D U-net on the ACDC dataset. They achieved Dice coefficients of 0.864, 0.789, 0.775, and 0.770 for the segmentation of the LV and RV in the ED and ES phases, respectively, while our framework achieves Dice coefficients of 0.897, 0.815, 0.813, and 0.708, respectively, which are higher on three of the four measures.

4.3 Discussion

Quantitative segmentation results and grading results demonstrate the superiority of our framework compared with the Res U-net baseline implementation. However, there are still some cases where our framework cannot predict reasonable boundaries. For example, Figure 9 (a) shows the segmentation results of Patient 41 in the ES and ED phases by our framework. We notice that there are deviations between the ground-truth boundary (images in Columns 2 and 4) and the predicted boundary (images in Columns 1 and 3). This is because blood flow in the RV cavity leads to brightness heterogeneity in the RV area of the CMR image, which makes the image intensity of the ground-truth RV region similar to that of the surrounding cardiac structures (e.g., heart chambers) and finally leads to segmentation failure.

There are some cases, as shown in Figure 9 (b), in which the CMR sequence has serious defects and ghosting. This may be caused by an improper setup of the magnetic resonance system or operator mistakes. In such cases, it is hard for our framework to find a plausible myocardial boundary, even though our framework is able to correct some segmentation errors based on the temporal information between frames.

5 Conclusion

In this paper, we proposed a myocardial segmentation framework for CMR sequences for CAD diagnosis. Specifically, we proposed to combine convolutional networks and recurrent networks to incorporate temporal information between sequences and ensure temporal consistency. Extensive experiments showed that, compared with Res U-net, our proposed framework achieves an improvement of 1% to 3% in Dice coefficient. In addition, we introduced a bi-directional training approach to further reduce the segmentation error introduced by the first few frames in the forward training process. Experiment results demonstrate that our bi-directional training approach can further improve the segmentation performance.

References

  • [1] Daniel Barbosa, Thomas Dietenbeck, Brecht Heyde, Helene Houle, Denis Friboulet, Jan D’hooge, and Olivier Bernard. Fast and fully automatic 3-d echocardiographic segmentation using b-spline explicit active surfaces: feasibility study and validation in a clinical setting. Ultrasound in medicine & biology, 39(1):89–101, 2013.
  • [2] Christian F. Baumgartner, Lisa M. Koch, Marc Pollefeys, and Ender Konukoglu. An exploration of 2d and 3d deep learning techniques for cardiac MR image segmentation. In Lecture Notes in Computer Science, pages 111–119. Springer International Publishing, 2018.
  • [3] O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, X. Yang, P. Heng, I. Cetin, K. Lekadir, O. Camara, M. A. Gonzalez Ballester, G. Sanroma, S. Napel, S. Petersen, G. Tziritas, E. Grinias, M. Khened, V. A. Kollerathu, G. Krishnamurthi, M. Rohé, X. Pennec, M. Sermesant, F. Isensee, P. Jäger, K. H. Maier-Hein, P. M. Full, I. Wolf, S. Engelhardt, C. F. Baumgartner, L. M. Koch, J. M. Wolterink, I. Išgum, Y. Jang, Y. Hong, J. Patravali, S. Jain, O. Humbert, and P. Jodoin. Deep learning techniques for automatic mri cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE Transactions on Medical Imaging, 37(11):2514–2525, 2018.
  • [4] Maria Baldeon Calisto and Susana K. Lai-Yuen. AdaEn-net: An ensemble of adaptive 2d–3d fully convolutional networks for medical image segmentation. Neural Networks, 126:76–94, June 2020.
  • [5] Hao Chen, Yefeng Zheng, Jin-Hyeong Park, Pheng-Ann Heng, and S Kevin Zhou. Iterative multi-domain regularized deep learning for anatomical structure detection and segmentation from ultrasound images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 487–495. Springer, 2016.
  • [6] Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEE Transactions on Pattern Analysis & Machine Intelligence, (6):681–685, 2001.
  • [7] Timothy F Cootes, Christopher J Taylor, David H Cooper, and Jim Graham. Active shape models-their training and application. Computer vision and image understanding, 61(1):38–59, 1995.
  • [8] Centers for Disease Control and Prevention (CDC). Prevalence of coronary heart disease - United States, 2006-2010. MMWR. Morbidity and Mortality Weekly Report, 60(40):1377, 2011.
  • [9] Yanhui Guo, Guo-Qing Du, Jing-Yi Xue, Rong Xia, and Yu-hang Wang. A novel myocardium segmentation approach based on neutrosophic active contour model. Computer methods and programs in biomedicine, 142:109–116, 2017.
  • [10] Fabian Isensee, Paul F. Jaeger, Peter M. Full, Ivo Wolf, Sandy Engelhardt, and Klaus H. Maier-Hein. Automatic cardiac disease assessment on cine-MRI via time-series segmentation and domain specific features. In Lecture Notes in Computer Science, pages 120–129. Springer International Publishing, 2018.
  • [11] Fabian Isensee, Jens Petersen, Andre Klein, David Zimmerer, Paul F Jaeger, Simon Kohl, Jakob Wasserthal, Gregor Koehler, Tobias Norajitra, Sebastian Wirkert, et al. nnu-net: Self-adapting framework for u-net-based medical image segmentation. arXiv preprint arXiv:1809.10486, 2018.
  • [12] Mahendra Khened, Varghese Alex, and Ganapathy Krishnamurthi. Densely connected fully convolutional network for short-axis cardiac cine MR image segmentation and heart diagnosis using random forest. In Lecture Notes in Computer Science, pages 140–151. Springer International Publishing, 2018.
  • [13] Sarah Leclerc, Thomas Grenier, Florian Espinosa, and Olivier Bernard. A fully automatic and multi-structural segmentation of the left ventricle and the myocardium on highly heterogeneous 2d echocardiographic data. In 2017 IEEE International Ultrasonics Symposium (IUS), pages 1–4. IEEE, 2017.
  • [14] Yuanwei Li, Chin Pang Ho, Navtej Chahal, Roxy Senior, and Meng-Xing Tang. Myocardial segmentation of contrast echocardiograms using random forests guided by shape model. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 158–165. Springer, 2016.
  • [15] Michelle C Odden, Pamela G Coxson, Andrew Moran, James M Lightwood, Lee Goldman, and Kirsten Bibbins-Domingo. The impact of the aging population on coronary heart disease in the united states. The American journal of medicine, 124(9):827–833, 2011.
  • [16] Sebastian Ordas, L. Boisrobert, Marina Huguet, and Alejandro Frangi. Active shape models with invariant optimal features (IOF-ASM): Application to cardiac MRI segmentation. Computers in Cardiology, 30:633–636, 2003.
  • [17] Nathan Painchaud, Youssef Skandarani, Thierry Judge, Olivier Bernard, Alain Lalande, and Pierre-Marc Jodoin. Cardiac MRI segmentation with strong anatomical guarantees. In Lecture Notes in Computer Science, pages 632–640. Springer International Publishing, 2019.
  • [18] Jay Patravali, Shubham Jain, and Sasank Chilamkurthy. 2D-3D fully convolutional neural networks for cardiac MR segmentation. In Lecture Notes in Computer Science, pages 130–139. Springer International Publishing, 2018.
  • [19] Rudra P K Poudel, Pablo Lamata, and Giovanni Montana. Recurrent fully convolutional neural networks for multi-slice mri cardiac segmentation, 2016.
  • [20] Marc-Michel Rohé, Maxime Sermesant, and Xavier Pennec. Automatic multi-atlas segmentation of myocardium with SVF-net. In Lecture Notes in Computer Science, pages 170–177. Springer International Publishing, 2018.
  • [21] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
  • [22] Amaia Salvador, Miriam Bellver, Victor Campos, Manel Baradad, Ferran Marques, Jordi Torres, and Xavier Giro-i Nieto. Recurrent neural networks for semantic instance segmentation. arXiv preprint arXiv:1712.00617, 2017.
  • [23] Carlos Santiago, Jacinto C. Nascimento, and Jorge S. Marques. A new ASM framework for left ventricle segmentation exploring slice variability in cardiac MRI volumes. Neural Computing and Applications, 28(9):2489–2500, May 2016.
  • [24] G. Simantiris and G. Tziritas. Cardiac mri segmentation with a dilated cnn incorporating domain-specific constraints. IEEE Journal of Selected Topics in Signal Processing, 14(6):1235–1243, 2020.
  • [25] Georgios Simantiris and Georgios Tziritas. Cardiac MRI segmentation with a dilated CNN incorporating domain-specific constraints. IEEE Journal of Selected Topics in Signal Processing, 14(6):1235–1243, Oct. 2020.
  • [26] Catalina Tobon-Gomez, Constantine Butakoff, Santiago Aguade, Federico Sukno, Gloria Moragas, and Alejandro F Frangi. Automatic construction of 3d-asm intensity models by simulating image acquisition: Application to myocardial gated spect studies. IEEE Transactions on Medical Imaging, 27(11):1655–1667, 2008.
  • [27] Catalina Tobon-Gomez, Federico M Sukno, Constantine Butakoff, Marina Huguet, and Alejandro F Frangi. Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation. Physics in Medicine and Biology, 57(13):4155–4174, 2012.
  • [28] Hans C. van Assen, Mikhail G. Danilouchkine, Alejandro F. Frangi, Sebastián Ordás, Jos J.M. Westenberg, Johan H.C. Reiber, and Boudewijn P.F. Lelieveldt. SPASM: A 3d-ASM for segmentation of sparse and arbitrarily oriented cardiac MRI data. Medical Image Analysis, 10(2):286–303, Apr. 2006.
  • [29] Marijn van Stralen, KYE Leung, Marco M Voormolen, Nico de Jong, Antonius FW van der Steen, Johan HC Reiber, and Johan G Bosch. Time continuous detection of the left ventricular long axis and the mitral valve plane in 3-d echocardiography. Ultrasound in medicine & biology, 34(2):196–207, 2008.
  • [30] Theo Vos, Christine Allen, Megha Arora, Ryan M Barber, Zulfiqar A Bhutta, Alexandria Brown, Austin Carter, Daniel C Casey, Fiona J Charlson, Alan Z Chen, et al. Global, regional, and national incidence, prevalence, and years lived with disability for 310 diseases and injuries, 1990–2015: a systematic analysis for the global burden of disease study 2015. The Lancet, 388(10053):1545–1602, 2016.
  • [31] Jelmer M. Wolterink, Tim Leiner, Max A. Viergever, and Ivana Išgum. Automatic segmentation and disease classification using cardiac cine MR images. In Lecture Notes in Computer Science, pages 101–110. Springer International Publishing, 2018.
  • [32] SHI Xingjian, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems, pages 802–810, 2015.
  • [33] Xin Yang, Cheng Bian, Lequan Yu, Dong Ni, and Pheng-Ann Heng. Class-balanced deep neural network for automatic ventricular structure segmentation. In Lecture Notes in Computer Science, pages 152–160. Springer International Publishing, 2018.
  • [34] Le Zhang, Ali Gooya, Bo Dong, Rui Hua, Steffen E. Petersen, Pau Medrano-Gracia, and Alejandro F. Frangi. Automated quality assessment of cardiac MR images using convolutional neural networks. In Simulation and Synthesis in Medical Imaging, pages 138–145. Springer International Publishing, 2016.
  • [35] Clément Zotti, Zhiming Luo, Olivier Humbert, Alain Lalande, and Pierre-Marc Jodoin. GridNet with automatic shape prior registration for automatic MRI cardiac segmentation. In Lecture Notes in Computer Science, pages 73–81. Springer International Publishing, 2018.
  • [36] Clement Zotti, Zhiming Luo, Alain Lalande, and Pierre-Marc Jodoin. Convolutional neural network with shape prior applied to cardiac MRI segmentation. IEEE Journal of Biomedical and Health Informatics, 23(3):1119–1128, May 2019.