LU-Net: a multi-task network to improve the robustness of segmentation of left ventricular structures by deep learning in 2D echocardiography

04/04/2020, by Sarah Leclerc, et al.

Segmentation of cardiac structures is one of the fundamental steps to estimate volumetric indices of the heart. This step is still performed semi-automatically in clinical routine, and is thus prone to inter- and intra-observer variability. Recent studies have shown that deep learning has the potential to perform fully automatic segmentation. However, the current best solutions still suffer from a lack of robustness. In this work, we introduce an end-to-end multi-task network designed to improve the overall accuracy of cardiac segmentation while enhancing the estimation of clinical indices and reducing the number of outliers. Results obtained on a large open access dataset show that our method outperforms the current best performing deep learning solution and achieves an overall segmentation accuracy lower than the intra-observer variability for the epicardial border (i.e. on average a mean absolute error of 1.5mm and a Hausdorff distance of 5.1mm) with 11% of outliers. Moreover, we demonstrate that our method can closely reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.96 and a mean absolute error of 7.6ml. Concerning the ejection fraction of the left ventricle, results are more mixed, with a mean correlation coefficient of 0.83 and a mean absolute error of 5.0%. Based on this observation, areas for improvement are suggested.


I Introduction

Analysis of 2D echocardiographic images based on the measurement of cardiac morphology and function is essential for diagnosis. Low-level image processing such as segmentation and tracking enables the extraction and interpretation of clinical indices, among which the volume of the left ventricle and the corresponding ejection fraction (LVEF) are among the most commonly used. The extraction of such measures requires accurate delineation of the left ventricular endocardium (LVEndo) at both end diastole (ED) and end systole (ES). However, these indices are subject to controversy due to a lack of reproducibility. Indeed, there is a significant variability in the measurement of the values extracted from the ultrasound images from an inter-expert, intra-expert and inter-equipment perspective. The inherent difficulties of segmenting echocardiographic images are well documented: i) poor contrast between the myocardium and the blood pool; ii) brightness inhomogeneities; iii) variation in the speckle pattern along the myocardium, due to the orientation of the cardiac probe with respect to the tissue; iv) presence of trabeculae and papillary muscles with intensities similar to the myocardium; v) significant tissue echogenicity variability within the population; vi) shape, intensity and motion variability across patients and pathologies.

Numerous studies have been conducted for more than 30 years to make automatic measurements of the LVEndo and LVEF indices robust and reliable in echocardiographic imaging. In order to achieve an objective evaluation and comparison of state-of-the-art methods, open-access reference datasets are essential. In the context of left ventricle analysis, Bernard et al. [1] published a dataset composed of 45 sequences of 3D echocardiographic images in conjunction with the Challenge on Endocardial Three-dimensional Ultrasound Segmentation (CETUS), which took place during the MICCAI 2014 conference (https://www.creatis.insa-lyon.fr/Challenge/CETUS/). This study, along with additional recent works [14, 11], revealed that the BEAS approach (deformable contour based on an explicit representation of the evolving surface through a B-spline formalism [14]) currently provides the best scores in terms of segmentation of the 3D left endocardium surface and the corresponding LVEndo and LVEF estimation, but that these results are still higher than the inter-observer variability measured on the same dataset.

Despite the fact that the training dataset of CETUS consists of only 15 patients, machine learning methods, and especially deep learning methods, produce results that are very close to the best performing ones [11]. This led to a recent work whose aim was to study the performance of convolutional neural network (CNN) methods for the segmentation of 2D echocardiographic sequences from a larger dataset [7]. In particular, the authors set up an open access dataset, named CAMUS, composed of two- and four-chamber acquisitions of 2D echocardiographic sequences from 500 patients, with reference measurements from one cardiologist on the full dataset and from three cardiologists on a fold of 50 patients. This study revealed that approaches based on encoder-decoder architectures, in particular the well-known U-Net method [18], produce accurate results that are much better than the previous state-of-the-art: on average lower than the inter-observer variability, and close to but still above the intra-observer variability. Thus, deep learning methods appear to faithfully reproduce the experts' annotations in echocardiographic image segmentation. In this context, the purpose of this paper is to provide answers to the following three questions:

  1. Is it possible to further improve the accuracy of CNNs for the segmentation in echocardiographic imaging?

  2. Can the number of outliers be significantly reduced?

  3. Can CNNs achieve results below the intra-observer variability, both in terms of segmentation and clinical index estimation?

II Previous work

The study conducted in [7] highlighted two interesting outputs: i) the scores produced by U-Net models are not very sensitive to the choice of hyper-parameters, which reinforces the quality of the results obtained by this architecture; ii) the use of more sophisticated encoder-decoder architectures (i.e. U-Net++ [24], stacked hourglasses network [10] and anatomically constrained neural network [11]) did not produce better results. Therefore, while U-Net appears to be a good choice for the segmentation of echocardiographic images, improving its performance through extensions of its architecture is not straightforward.

In parallel, there has been increasing interest in the computer vision community in attention-learning methods to improve classification [22], localization [17, 16] and segmentation tasks [4]. Attention learning corresponds to the set of approaches that integrate a contextualization procedure inside their pipeline to improve their overall performance. Contextualization is usually applied either to the image itself or to a derived feature space. One of the best performing approaches is the Mask R-CNN method recently proposed by He et al., which provides the best current results in all three tracks of the COCO suite of challenges [4]. This network is mainly composed of three stages: i) a region proposal network (RPN) which scans boxes distributed over the image area and finds the ones that contain objects; ii) a classification network that scans each of the regions of interest (ROIs) proposed by the RPN and assigns them to different classes while refining the location and size of the bounding box to encapsulate the object; iii) a convolutional network that takes the regions selected by the ROI classifier and generates masks for them.

Attention-based approaches have also been successfully applied in medical imaging [13, 3, 23, 12, 9, 21, 15, 19]. In [21], the authors proposed a dedicated CNN architecture for simultaneous localization and segmentation in cardiac MR imaging. Their model is built around three stages: i) an initial segmentation is performed on the input image; ii) the features learned at the bottom layers are then used to predict the parameters of a spatial transformer network that transforms the input image into a canonical orientation [5]; iii) a final segmentation is performed on the transformed image. In parallel, two attention learning networks were developed in [15] for the detection of chest radiographs containing pulmonary lesions. The annotated lesions were used during the training process to deliver visual attention feedback informing the networks about their lesion localization performance. The first network extracts saliency maps from high-level layers and compares the predicted position of a lesion with the true position. The second approach consists of a recurrent attention model which learns to process a short sequence of smaller image portions. Recently, a generic attention model was proposed to automatically learn to focus on target structures in medical image analysis [19]. Based on attention gate modules that can be integrated into any existing CNN architecture [12], the proposed formalism intrinsically promotes the suppression of irrelevant regions in an input image while highlighting salient features useful for a specific task. This approach has been evaluated for 2D fetal ultrasound image classification and 3D-CT abdominal image segmentation.

Despite their established interest, to our knowledge, only one study based on an attention model has been conducted so far in echocardiographic image segmentation. In particular, the authors introduced an attention mechanism in which a contextualization map derived from a first network is multiplied with the input image, so as to provide a pre-processed image, free of irrelevant information, as input to a second U-Net that segments both the left ventricle and the myocardium simultaneously [8]. Results on the CAMUS dataset show that this method allows for a reduction of outliers in terms of segmentation results (from 20% to 17%) while preserving the same level of accuracy.

III Methodology

Since CAMUS is the current largest open access 2D echocardiographic dataset with an active evaluation website, we made the choice to build our study on this dataset. Based on the literature review carried out in the previous section, we decided to investigate the capacity of attention-based networks to improve the current best segmentation scores in 2D echocardiographic imaging.

III-A Motivations

The work carried out in this study was motivated by an experiment we conducted on the CAMUS dataset, whose details are described below. In particular, we manually selected regions of interest (ROIs) around the reference segmentation masks. Each ROI corresponds to the ideal bounding box (BB) surrounding the corresponding mask with an additional margin of 5, 15 and 30% along the axes. From these ROIs, the corresponding images were cropped to create new datasets that were processed with the baseline U-Net1 architecture described in [7]. The corresponding scores are reported in Table II and referred to as BB-m5, BB-m15 and BB-m30, respectively. From this table, it is worth noting the contribution of the cropping stage, leading to a significant improvement over the baseline U-Net1 results, with average scores all below those of the intra-observer variability (except for BB-m30 with the Hausdorff distance metric) and a markedly reduced number of outliers. This experiment thus reveals that the effective insertion of a localization step during the segmentation process with the U-Net architecture can yield remarkable results in echocardiographic image segmentation.

III-B Overall strategy

Based on the motivations and the literature review on attention learning presented in the previous sections, we developed a multi-task network to improve the robustness of segmentation in 2D echocardiography. Since the U-Net model already produces high-performance segmentation results in echocardiography [7], we decided to use this architecture as the backbone of our multi-task network, referred to as Localization U-Net (LU-Net) in the sequel. LU-Net aims at locating and segmenting the endocardial and the epicardial borders of the left ventricle through an end-to-end learning procedure. The underlying assumption of this strategy is that the joint optimization of these two tasks should lead to better segmentation results. An illustration of the LU-Net overall architecture is provided in Fig. 1. In particular, LU-Net is composed of two networks: one region proposal network for localization and one U-Net for segmentation.

Fig. 1: Illustration of the LU-Net pipeline with the U-L2-mu localization network introduced in Sec. III-B1. The two U-Nets are independent.

III-B1 Localization part

The region proposal network performs a mapping between the input ultrasound image and four coordinates defining a bounding box (BB) around the structure of interest, namely the union of the left ventricle and myocardium. The reference BB is defined as the minimal bounding box in contact with the epicardium border. The target coordinates are computed with an additional margin m as:

x_c^m = x_c,  y_c^m = y_c,  w^m = (1 + m) w,  h^m = (1 + m) h

where (x_c, y_c) are the coordinates of the center of the reference BB, and w and h its width and height. The motivation for adding a margin was to provide some context around the targeted structures for the segmentation task.
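As a sketch, this target definition can be written as a small function (assuming, as in our reading of the text, that the margin scales the reference width and height around a fixed center):

```python
def target_bb(xc, yc, w, h, margin):
    """Enlarge a reference bounding box (center (xc, yc), width w, height h)
    by a relative margin while keeping the center fixed. A sketch of the
    target definition as we read it; names are illustrative."""
    return xc, yc, (1.0 + margin) * w, (1.0 + margin) * h

# Example: a 5% margin (the "m5" configuration)
print(target_bb(128.0, 96.0, 100.0, 160.0, 0.05))
```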

III-B2 Segmentation part

The output of the region proposal network is used as an attention mechanism to crop and resize the input ultrasound image. The resulting image is fed to a segmentation network corresponding to a U-Net (the U-Net1 model described in [7]), currently the most efficient model evaluated on the CAMUS dataset considering a trade-off between accuracy, speed and size.

III-B3 End-to-end approach

In order to make the full network trainable end-to-end, the crop and resize step was implemented using differentiable bilinear sampling. In addition, the segmentation loss involved in the second U-Net was modified to evolve dynamically over the training phase with respect to the varying ROI. The two U-Nets are independent networks with distinct parameters. At inference time, based on the localization outputs, the final segmentation result is mapped back to the original coordinate system of the input image.
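The forward pass of this differentiable crop-and-resize step can be sketched in NumPy as follows; in the network, the same bilinear sampling is implemented differentiably (in the spirit of spatial transformers) so that gradients from the segmentation loss reach the predicted box. Function and argument names are illustrative:

```python
import numpy as np

def crop_and_resize(img, xc, yc, w, h, out_hw=(4, 4)):
    """Bilinearly sample a (xc, yc)-centred box of size (w, h) from `img`
    onto a fixed output grid (forward pass only; the trainable version
    backpropagates through these same interpolation weights)."""
    H, W = img.shape
    oh, ow = out_hw
    # sample positions: centers of the output cells mapped into the box
    ys = yc - h / 2 + (np.arange(oh) + 0.5) * h / oh
    xs = xc - w / 2 + (np.arange(ow) + 0.5) * w / ow
    out = np.zeros((oh, ow))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            dy, dx = y - y0, x - x0
            # clamp neighbours to the image borders
            y0, y1 = np.clip([y0, y0 + 1], 0, H - 1)
            x0, x1 = np.clip([x0, x0 + 1], 0, W - 1)
            out[i, j] = ((1 - dy) * (1 - dx) * img[y0, x0]
                         + (1 - dy) * dx * img[y0, x1]
                         + dy * (1 - dx) * img[y1, x0]
                         + dy * dx * img[y1, x1])
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
patch = crop_and_resize(img, xc=4.0, yc=4.0, w=4.0, h=4.0)
print(patch.shape)  # (4, 4)
```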

IV Experiments

IV-A Dataset

The CAMUS dataset contains two- and four-chamber acquisitions from 500 patients [7]. The full dataset was divided into 10 folds equally distributed in terms of image quality (good, medium, poor) and ejection fraction category (≤45%, ≥55% or in between). This allows the analysis of the full dataset by means of a classical cross-validation strategy. One cardiologist (O1) manually annotated the endocardium and epicardium (LVEpi) borders of the left ventricle on the full dataset at end diastole (ED) and end systole (ES), and two other cardiologists (O2 and O3) did so on a fold of 50 patients. This fold was also annotated twice by O1, seven months apart. This procedure allows comparison of the results provided by the algorithms with the inter- and intra-observer variability.
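A stratified fold assignment of this kind can be sketched as follows (illustrative only; the actual CAMUS folds are fixed and distributed with the dataset):

```python
import random
from collections import defaultdict

def stratified_folds(patients, n_folds=10, seed=0):
    """Distribute patients across folds so that each (image quality,
    EF category) stratum is split evenly. `patients` maps a patient id
    to a (quality, ef_category) tuple; names are illustrative."""
    strata = defaultdict(list)
    for pid, key in patients.items():
        strata[key].append(pid)
    rng = random.Random(seed)
    folds = [[] for _ in range(n_folds)]
    for key in sorted(strata):
        ids = sorted(strata[key])
        rng.shuffle(ids)
        for i, pid in enumerate(ids):
            folds[i % n_folds].append(pid)  # round-robin within the stratum
    return folds

# Toy example: 40 patients, 2 qualities x 2 EF categories
patients = {f"p{i:03d}": ("good" if i % 2 else "poor",
                          "low" if i % 4 < 2 else "high") for i in range(40)}
folds = stratified_folds(patients, n_folds=10)
print([len(f) for f in folds])  # ten folds of 4 patients each
```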

IV-B Evaluation metrics

IV-B1 Localization metrics

We assessed the performance of the localization networks through the Intersection Over Union (IOU) metric and the Euclidean distance errors between the predicted and the reference BB coordinates (i.e. its central position (x_c, y_c), its height h and width w). The IOU is a classical localization metric which measures the overlap between the predicted BB and the reference one. It gives a value between 0 (no overlap) and 1 (full overlap). In addition, we provide the "BB out" metric, which corresponds to the number of cases where the predicted BB does not completely encompass the reference mask.
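A minimal sketch of these localization metrics, with boxes given as corner coordinates (our choice for illustration) and the reference mask approximated by its bounding box:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def bb_out(pred_box, ref_box):
    """True when the predicted box fails to fully encompass the reference
    (here the reference mask's own box is used as a proxy for the mask)."""
    px0, py0, px1, py1 = pred_box
    rx0, ry0, rx1, ry1 = ref_box
    return not (px0 <= rx0 and py0 <= ry0 and px1 >= rx1 and py1 >= ry1)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))     # 1/7, partial overlap
print(bb_out((0, 0, 2, 2), (1, 1, 3, 3)))  # True
```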

IV-B2 Segmentation metrics

To measure the accuracy of the segmentation output (LVEndo and LVEpi) of a given method, the Dice metric (closely related to the IOU and classically used in segmentation), the mean absolute distance (dm) and the 2D Hausdorff distance (dH) were used. The Dice similarity index is a measure of overlap between the surface segmented by a method and the corresponding reference surface. It gives a value between 0 (no overlap) and 1 (full overlap). dm corresponds to the average distance between the two surfaces, while dH measures the maximum local distance between them. In addition, we assessed the quality of segmentation with regard to cardiologists' annotations through the notion of outliers defined below.

  • geometric outlier: the set of segmentations attached to a patient is considered a geometric outlier if at least one of its eight corresponding distance scores (i.e. dm and dH values at ED and ES for both apical two- and four-chamber views) is out of the corresponding bounds defined from the inter-observer variability [7];

  • anatomical outlier: the set of segmentations attached to a patient is considered an anatomical outlier if the simplicity and convexity [8] of the corresponding segmented contours are lower than the lowest values computed from the expert annotations. These two metrics take values between 0 and 1 and are maximized for a circle. They also give discriminating values for convex shapes, such as the oval shapes of heart cavities, and for bridge-like shapes such as the myocardium. They can therefore be used as simple tools to detect anatomical outliers in the case of left ventricular structures.
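The segmentation metrics and anatomical-outlier criteria above can be sketched as follows (brute-force NumPy; simplicity is implemented as sqrt(4*pi*A)/P and convexity as the ratio of the contour area to the area of its convex hull, which is our reading of the definitions in [8]):

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def contour_distances(pts_a, pts_b):
    """dm (mean absolute distance) and dH (Hausdorff distance) between two
    contours given as (N, 2) point arrays; brute-force, for illustration."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b, b_to_a = d.min(axis=1), d.min(axis=0)
    d_m = 0.5 * (a_to_b.mean() + b_to_a.mean())
    d_h = max(a_to_b.max(), b_to_a.max())
    return d_m, d_h

def poly_area(pts):
    x, y = pts[:, 0], pts[:, 1]  # shoelace formula
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def perimeter(pts):
    return np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1).sum()

def convex_hull(pts):
    """Andrew's monotone chain convex hull."""
    P = sorted(map(tuple, pts))
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def build(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = build(P), build(P[::-1])
    return np.array(lower[:-1] + upper[:-1], dtype=float)

def simplicity(pts):
    # sqrt(4*pi*A)/P: equals 1 for a circle, < 1 otherwise
    return np.sqrt(4 * np.pi * poly_area(pts)) / perimeter(pts)

def convexity(pts):
    # A / A_hull: equals 1 for convex shapes
    return poly_area(pts) / poly_area(convex_hull(pts))

seg = np.zeros((8, 8)); seg[2:6, 2:6] = 1
ref = np.zeros((8, 8)); ref[2:6, 3:7] = 1
print(dice(seg, ref))  # 0.75
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(round(simplicity(square), 3), convexity(square))  # 0.886 1.0
```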

IV-B3 Clinical metrics

We evaluated the performance of the methods with clinical indices: i) the ED volume (LVEDV, in ml); ii) the ES volume (LVESV, in ml); iii) the ejection fraction (LVEF, as a percentage), for which we computed the Pearson correlation coefficient (corr), the limit of agreement (loa) and the mean absolute error (mae), all from conventional definitions. Please note that all volumes of the left ventricle were computed using Simpson's biplane rule [2] involving the segmentation results on the two- and four-chamber apical views.
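Simpson's biplane rule approximates the cavity by a stack of elliptical discs whose two diameters come from the two orthogonal apical views. A sketch (disc count and units are illustrative):

```python
import numpy as np

def simpson_biplane_volume(diam_2ch, diam_4ch, length):
    """Modified Simpson rule (biplane method of discs): the cavity is sliced
    into n discs of elliptical cross-section whose diameters come from the
    apical two- and four-chamber views. Diameters/length in cm give ml."""
    a, b = np.asarray(diam_2ch), np.asarray(diam_4ch)
    n = len(a)
    return float(np.pi / 4.0 * np.sum(a * b) * length / n)

def ejection_fraction(edv, esv):
    """LVEF as a percentage of the end-diastolic volume."""
    return 100.0 * (edv - esv) / edv

# Toy example: 20 identical discs of 4 cm diameter over an 8 cm long axis
# (i.e. a cylinder of volume pi/4 * 16 * 8 = 32*pi ml)
d = np.full(20, 4.0)
edv = simpson_biplane_volume(d, d, 8.0)
print(round(edv, 1), round(ejection_fraction(edv, 0.4 * edv), 1))
```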

IV-C Localization methods

We implemented and assessed the performance of four different convolutional networks dedicated to the prediction of bounding boxes, i.e. predicting the coordinates (x_c, y_c, h, w).

  1. an AlexNet-like network [6], composed of a succession of convolutional layers of varying filter size and max pooling. Our version ends with three fully-connected layers of size 4096, 4096 and 4. Except for the modified last layer, this architecture is therefore the same as the original, without any dropout or data augmentation strategy, and includes 71M parameters;

  2. a VGG19-like network [20], composed of 20 layers that alternate between convolutions and max pooling. The last fully-connected layers are made respectively of 4096, 4096 and 4 units, for a total of 70M parameters;

  3. U-L1 based on a U-Net model performing the segmentation of the left ventricle and the myocardium. The bottom layer of this U-Net was derived in order to carry out the localization procedure using four fully connected layers of 1024, 256, 32, and 4 units. This model was inspired by the work of Vigneault et al. [21]. The network includes 9M parameters;

  4. U-L2 also based on a U-Net model performing the segmentation of the left ventricle and the myocardium. The output of this U-Net was then connected to a downsampling branch ending with four fully-connected layers of 1024, 256, 32 and 4 units. This architecture is novel and corresponds to one of the innovations proposed in this study. We evaluated two versions of this network, one optimizing only the localization loss (referred to as U-L2-mo) and one optimizing both the localization and the segmentation losses (referred to as U-L2-mu). The network includes 11M parameters.

Iv-D Segmentation methods

The performance of the joint segmentation of the endocardial and the epicardial borders was assessed through the following four networks:

  1. U-Net1, corresponding to the current best performing network on the CAMUS dataset [7]. This network includes 2M parameters;

  2. RU-Net, recently introduced in [8] and built from two cascaded U-Net1. The epicardial mask predicted by the first network is dilated and multiplied with the input image to provide a contextualized image as input to the second network, with a total number of 4M parameters;

  3. Attention-gated U-Net (AG-U-Net), recently proposed in [12], in which attention layers are used at each skip connection to locally weigh the concatenated features with coefficients derived from the previous layer. It includes batch normalization before each activation, and deep supervision by aggregating the feature maps produced after each attention layer at the last level of U-Net1 (i.e. before the last convolution and the softmax). This network has a total of 2M parameters;

  4. LU-Net, as introduced in this paper, built using U-L2-mu as the region proposal network and U-Net1 as the segmentation network, for a total of 13M parameters. Two margins of 5% and 15% were evaluated.

IV-E Learning strategy

IV-E1 Optimizer

All the methods involved in this study were optimized using the Adam optimizer, with a learning rate (one of two tested values) and a number of epochs (controlled using early stopping with a patience of 20) that experimentally allowed a smooth convergence of the training and validation losses to be observed. The best model according to the validation loss was selected after each training phase.

IV-E2 Loss

Localization networks were optimized using a clipped L1 loss summing the errors on the four BB values (x_c, y_c, h, w). Segmentation networks were optimized using a multi-class Dice loss taking into account the LV and myocardium predictions. For multi-task prediction, a weighting of 10 was applied to the localization term so as to balance the localization and the segmentation objectives.
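A sketch of this multi-task objective (the clipping threshold and tensor shapes are illustrative, since they are not restated here):

```python
import numpy as np

def clipped_l1(pred_bb, ref_bb, clip=10.0):
    """L1 localization loss with each coordinate error clipped, so a single
    badly predicted box cannot dominate. The clip value is an assumption."""
    return float(np.sum(np.minimum(np.abs(pred_bb - ref_bb), clip)))

def multiclass_dice_loss(probs, onehot, eps=1e-6):
    """1 - mean soft Dice over classes; probs and onehot are (C, H, W)."""
    inter = (probs * onehot).sum(axis=(1, 2))
    denom = probs.sum(axis=(1, 2)) + onehot.sum(axis=(1, 2))
    return float(1.0 - np.mean((2.0 * inter + eps) / (denom + eps)))

def lu_net_loss(pred_bb, ref_bb, probs, onehot, loc_weight=10.0):
    # Multi-task objective: weighted localization term + segmentation term
    return loc_weight * clipped_l1(pred_bb, ref_bb) \
        + multiclass_dice_loss(probs, onehot)

# Toy check: a perfect prediction gives a (near-)zero loss
probs = np.zeros((2, 2, 2))
probs[0, 0, 0] = probs[0, 1, 1] = probs[1, 0, 1] = probs[1, 1, 0] = 1.0
print(lu_net_loss(np.zeros(4), np.zeros(4), probs, probs.copy()))
```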

V Results

In order to easily compare our results with those of the state-of-the-art on the CAMUS dataset, we followed the strategy developed in [7] by training, for each deep learning method, a single model on the annotated images of both apical two- and four-chamber views, regardless of the time instant.

V-A Localization results

Table I shows the localization accuracy computed on the full dataset (500 patients) for the algorithms described in Sec. IV-C. Mean and standard deviation values for each metric were obtained from cross-validation on the 10 folds of the dataset (see [7] for more details). For each row of this table, the m value indicated after the name of the method is the margin used to define the reference BB. The values in bold correspond to the best scores for each metric.

Based on a comparison of the methods using a margin of 5%, the proposed U-L2-mu obtains the overall best localization scores on all metrics, except for the error on y_c, with a difference of 0.2 mm with respect to the best method. These results validate the use of the U-Net architecture, which has already proven its effectiveness in terms of segmentation, to perform localization tasks in ultrasound imaging compared to well-established computer vision architectures (i.e. AlexNet and VGG). In addition, the scores highlight the interest of using both segmentation and localization losses to improve the performance of the U-L2 method, with a clear average gain on both the BB centre and BB dimension estimates (Table I). This significant improvement demonstrates that forcing segmentation as an intermediate step to localization is beneficial.

We also investigated the influence of the choice of the margin value m on the accuracy of the localization results produced by the U-L2 method. The obtained results are mixed. Indeed, while the use of a lower margin (i.e. 5%) produces slightly better results with regard to the estimation of the BB position, the use of a higher margin (i.e. 15%) considerably reduces the number of cases where the BB does not encompass the reference mask (from 36% to 2%).

Based on this experiment, it is clear that the U-L2-mu model produced the best localization results. We therefore decided to use this network as the region proposal part of the LU-Net architecture, as illustrated in Fig. 1.

Model          IOU     Error (mm)                            BB out
               val.    x_c    y_c    h           w           # (%)
AlexNet-m5     0.880   2.2    1.9    4.2 ± 4.1   4.1 ± 4.1   866 (43%)
VGG-m5         0.888   1.9    1.7    4.0 ± 3.9   4.0 ± 3.7   903 (45%)
U-L1-m5        0.849   3.1    2.7    5.3 ± 4.5   4.9 ± 4.3   1094 (55%)
U-L2-mo-m5     0.791   4.2    4.4    7.1 ± 6.4   6.9 ± 6.7   1393 (70%)
U-L2-mu-m5     0.898   1.6    1.9    3.2 ± 3.1   3.6 ± 3.2   712 (36%)
U-L2-mu-m15    0.907   1.6    1.7    3.7 ± 4.0   4.3 ± 4.3   31 (2%)

TABLE I: Localization accuracy of the evaluated methods on the full dataset (500 patients). Errors are given as mean (± standard deviation where available). The m value in each method name indicates the margin defined in Sec. III-B1.

V-B Segmentation results

Table II displays the segmentation accuracy computed on the patients with good and medium image quality (406 patients) for the four algorithms described in Sec. IV-D. Mean and standard deviation values for each metric were obtained from cross-validation on the 10 folds of the dataset. The values in bold correspond to the best scores for each metric. From these results, one can see that all the attention-based networks produced either the same or better results than the baseline U-Net1, with AG-U-Net and LU-Net being the best performing models. Indeed, AG-U-Net obtained the overall best results for the segmentation of the LVEndo border (dm of 1.5 mm and dH of 5.3 mm), leading to segmentation scores close to but still higher than the intra-observer variability for this structure. The LU-Net-m5 approach obtained the best results for the segmentation of the LVEpi border (dm of 1.5 mm and dH of 5.1 mm) and the lowest number of geometric outliers (11%). Interestingly, these scores are either equivalent to or lower than the intra-observer variability for this structure. It is also worth noting the robustness of the LU-Net model with respect to the choice of margin parameter, as margins of 5% and 15% produce almost the same segmentation scores for all metrics. An illustration of the segmentation performance of the LU-Net-m5 network compared to the baseline U-Net1 model on three different cases is provided in Fig. 2.

Model            LVEndo                                LVEpi                                 geo. outliers
                 D (val.)     dm (mm)    dH (mm)       D (val.)     dm (mm)    dH (mm)       # (%)

intra-observer   0.937±0.027  1.4±0.5    4.5±1.8       0.954±0.020  1.7±0.8    5.0±2.2       21 (13%)

Motivation study (Sec. III-A)
BB-m5            0.941±0.034  1.3±0.6    4.3±1.9       0.971±0.011  1.0±0.4    4.1±1.8       89 (5.5%)
BB-m15           0.940±0.034  1.3±0.6    4.4±1.9       0.969±0.011  1.1±0.4    4.3±2.0       106 (6.5%)
BB-m30           0.937±0.035  1.4±0.6    4.7±2.1       0.966±0.013  1.2±0.5    4.6±2.2       124 (7.6%)

Experimental study (Sec. V-B)
U-Net1           0.920±0.030  1.7±1.1    5.6           0.947        1.9        6.2           282 (17%)
RU-Net [8]       0.925±0.049  1.7±1.0    5.4±3.3       0.950±0.030  1.8±1.1    5.8±3.9       240 (15%)
AG-U-Net [12]    0.930±0.049  1.5±1.3    5.3±3.4       0.950±0.026  1.8±1.0    5.9±3.7       270 (17%)
LU-Net-m5        0.953±0.026  1.7±0.9    5.5±3.6       0.932±0.043  1.5±0.8    5.1±3.3       186 (11%)
LU-Net-m15       0.952±0.029  1.7±1.1    5.6±4.0       0.931±0.049  1.5±1.1    5.3±3.6       203 (12%)

LVEndo: Endocardial contour of the left ventricle; LVEpi: Epicardial contour of the left ventricle
  D: Dice index; dm: mean absolute distance; dH: Hausdorff distance
  Values are given as mean ± standard deviation (where available). The values in bold refer to the best performance for each measure.
TABLE II: Segmentation accuracy of the motivation study (Sec. III-A) and of the 4 evaluated methods described in Sec. IV-D, restricted to patients having good and medium image quality (406 in total). The m value in each method name indicates the margin defined in Sec. III-B1
Fig. 2: Comparison of the segmentation performance of the baseline U-Net1 (left column) and the proposed LU-Net architecture (right column) on cases (a) with similar results; (b) where the intermediate localization of the LU-Net helps; (c) where the artifact present in the image is too strong for any improvement. In each image, the prediction is in green and purple while the ground-truth is in yellow and cyan. The estimated BB is displayed in red.

V-C Clinical scores

Table III contains the clinical metrics computed on the patients having good and medium image quality (406 patients) for the four methods described in Sec. IV-D. These indices were computed with Simpson's biplane rule [2] from the segmentation results of each algorithm on the two- and four-chamber apical views. The values in bold represent the best scores for the corresponding index. As for segmentation, the AG-U-Net and LU-Net-m5 models obtained the best clinical scores on all the tested metrics (bias was not taken into account, since the lowest bias value does not in itself imply the best performing method). Regarding the estimation of the LVEDV, the two methods produced high correlation scores (0.956), small biases (1.4 ml in absolute value) and reasonable limits of agreement (around 22 ml) and mean absolute errors (around 8 ml). AG-U-Net produced the best LVESV results with a correlation of 0.962, while the LU-Net-m5 model produced the best LVEF scores with a correlation of 0.829. However, even if the scores of LU-Net-m5 and AG-U-Net are slightly better than the baseline U-Net1 ones, they are still worse than the intra-observer results. This reveals that there is still room for improvement, as discussed in Sec. VI.
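The agreement metrics used above can be sketched as follows (with the limit of agreement taken as bias ± 1.96 SD, the conventional Bland-Altman definition):

```python
import numpy as np

def agreement_metrics(pred, ref):
    """Pearson correlation, Bland-Altman bias and half-width of the limit
    of agreement (1.96 * SD of the differences), and mean absolute error
    between predicted and reference clinical indices."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    diff = pred - ref
    corr = float(np.corrcoef(pred, ref)[0, 1])
    bias = float(diff.mean())
    loa = float(1.96 * diff.std())
    mae = float(np.abs(diff).mean())
    return corr, bias, loa, mae

# Toy example with four hypothetical LVEDV estimates (ml)
pred = [120.0, 95.0, 60.0, 150.0]
ref = [118.0, 99.0, 58.0, 149.0]
corr, bias, loa, mae = agreement_metrics(pred, ref)
print(round(mae, 2))  # 2.25
```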

Model            LVEDV                        LVESV                        LVEF
                 corr    loa (ml)     mae     corr    loa (ml)     mae     corr    loa (%)      mae
                 val.                 ml      val.                 ml      val.                 %
intra-observer   0.978   -2.8±14.3    6.2     0.981   -0.1±11.4    4.5     0.896   -2.3±11.2    4.5
U-Net1           0.947   -8.3±24.7    10.9    0.955   -4.9±19.4    8.2     0.791   -0.5±15.1    5.6
RU-Net [8]       0.946   -1.2±23.9    8.9     0.949    0.3±19.6    7.3     0.704   -2.1±14.3    6.0
AG-U-Net [12]    0.956   -1.4±21.9    8.1     0.962    0.6±17.0    6.2     0.798   -2.2±15.1    5.5
LU-Net-m5        0.956    1.4±21.8    8.3     0.956    1.6±18.0    7.0     0.829   -1.5±13.5    5.0
LU-Net-m15       0.952    2.4±22.9    8.1     0.962    1.8±16.7    6.5     0.821   -1.2±13.7    5.0

corr: Pearson correlation coefficient; loa: limit of agreement (given as bias ± 1.96 SD); mae: mean absolute error.
  The values in bold refer to the best performance for each measure.
TABLE III: Clinical metrics of the 4 evaluated methods described in Sec. IV-D, restricted to patients having good and medium image quality (406 in total)

V-D LU-Net behavior

From the results given in Table II and Table III, it appears that the LU-Net method outperforms the baseline U-Net1 model both in terms of segmentation and clinical index estimation. Furthermore, it is one of the most effective models, even compared to the other attention-based networks. In order to complete the analysis of LU-Net, we applied this network to the full dataset (including poor image quality) and studied the generated outliers. The corresponding results obtained with a margin of 5% are provided in Table IV. The model named LU-Net-m5-o1 corresponds to the scores derived from the output of the first U-Net involved in the region proposal network, while LU-Net-m5-o2 corresponds to the scores derived from the final output of the network (i.e. the one provided by the second U-Net). From this table, one can see that LU-Net outperforms the U-Net1 architecture on all the metrics for both the LVEndo and LVEpi borders when considering all image qualities. Also, the segmentation results produced by LU-Net appear to be remarkably stable when integrating poor quality images, with mean differences with respect to Table II of 0.1 mm for dm, 0.2 mm for dH and 1% for the geometric outliers.

Concerning the localization scores, the LU-Net-m5 model obtained IOU and BB error values consistent with those of the best performing U-L2-mu method reported in Table I. Coupling this result with the last two rows of Table IV, which show that the first segmentation is less accurate than a single U-Net1, it appears that the segmentation produced in the region proposal part of LU-Net is degraded by optimizing the localization procedure, which in turn allows a significant improvement of the final segmentation results compared to the baseline U-Net1 model.

Concerning the segmentation scores, LU-Net-m5 produced 12% of geometric outliers, 2% of anatomical outliers and 1% of both, showing that half of the anatomical outliers are also geometric. Moreover, the geometric outlier rate is lower than the intra-observer rate computed from a subset of 40 patients with good and medium image quality, which further highlights the quality of the results achieved by LU-Net.

Model          | LVEndo dm (mm) | LVEndo dH (mm) | LVEpi dm (mm) | LVEpi dH (mm) | geo.      | ana.      | both
---------------|----------------|----------------|---------------|---------------|-----------|-----------|---------
U-Net1         | 2.0 ± 1.1      | 6.1 ± 4.5      | 2.0           | 6.5           | 423 (21%) | 95 (5%)   | 71 (4%)
LU-Net-m5-o1   | 2.1 ± 1.0      | 7.0 ± 3.4      | 1.9           | 6.2           | 483 (24%) | 201 (10%) | 138 (7%)
LU-Net-m5-o2   | 1.8 ± 0.9      | 5.7 ± 3.3      | 1.6           | 5.3           | 240 (12%) | 31 (2%)   | 20 (1%)

TABLE IV: Segmentation accuracy and outliers on the full dataset (500 patients), including those with poor image quality

VI Discussion

VI-A Attention-based networks

Table II and Table III underline the ability of attention-based networks to improve the segmentation and the estimation of clinical indices in 2D echocardiography. These results are all the more interesting given that the authors of the original study [7] had not succeeded in improving the scores of the baseline U-Net1 model through more sophisticated architectures. Although AG-U-Net produced the best scores on the LVEndo and on the estimation of the LVESV, LU-Net provides the best trade-off between the achieved improvements and the reduction of the number of geometric outliers.

VI-B Comparison with intra-observer variability

As for the segmentation scores, the LU-Net model manages to reach the intra-observer variability for the LVEpi border (dm and dH metrics). The number of geometric outliers, 12%, is also reduced below the intra-observer rate. To the best of our knowledge, this is the first time that such a result has been obtained in the context of 2D echocardiographic image segmentation. In addition, one can observe that the scores reached by our model are still slightly higher than the intra-observer variability for the LVEndo border.

Concerning the estimation of the clinical indices, although LU-Net improves the results compared to the baseline U-Net1 model, its scores are still slightly higher than the intra-observer variability. This reveals that while attention-based networks clearly enhance the results produced by the baseline U-Net1 model, there is still room for improvement to faithfully reproduce the manual annotations of a single expert.
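The agreement measures used for the clinical indices (Pearson correlation and mean absolute error between automatic and reference values) can be sketched as below. The data are purely hypothetical illustration values, not results from the study.

```python
import numpy as np

def index_agreement(pred, ref):
    """Pearson correlation coefficient and mean absolute error between
    automatically estimated and reference clinical indices."""
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    corr = np.corrcoef(pred, ref)[0, 1]   # Pearson correlation
    mae = float(np.abs(pred - ref).mean())  # mean absolute error
    return corr, mae

# Hypothetical LVEDV values (ml): predictions close to the references
ref_edv  = [120.0, 95.0, 150.0, 110.0, 80.0]
pred_edv = [118.0, 99.0, 146.0, 113.0, 77.0]
corr, mae = index_agreement(pred_edv, ref_edv)
```

Both quantities are computed per index (LVEDV, LVESV, LVEF), which is why a model can score well on the volumes yet worse on the derived ejection fraction.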

VI-C Areas for improvement

We identified two avenues of potential improvement towards results competitive with the intra-observer variability. First, based on Table LABEL:tab:localization_results, it appears that the localization step can be further optimized to improve the LU-Net scores, as suggested by the results on ideal cases provided in Table II. Secondly, there is a need to introduce temporal coherency into deep learning architectures. Indeed, while the current strategy (i.e. ED and ES are treated independently) provides high correlations for the estimation of the LVEDV and LVESV (a mean correlation of 0.96), the estimation of the LVEF is degraded to 0.83. This reveals the lack of temporal consistency of the LU-Net segmentation results between ED and ES.
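The degradation from volumes to ejection fraction follows directly from how the LVEF is derived. A minimal sketch with hypothetical volumes (not values from the study) shows that small, opposite-signed errors at ED and ES, which independent segmentations cannot rule out, combine into a much larger EF error:

```python
def ejection_fraction(edv_ml, esv_ml):
    """LVEF (%) from end-diastolic and end-systolic LV volumes (ml)."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Hypothetical volumes: each prediction is only a few ml off the reference,
# but the errors go in opposite directions at ED and ES.
ef_ref  = ejection_fraction(120.0, 50.0)   # reference EF
ef_pred = ejection_fraction(125.0, 46.0)   # ED over-, ES under-estimated
ef_error = abs(ef_pred - ef_ref)           # close to 5 percentage points
```

Enforcing temporal coherency between the ED and ES segmentations would correlate the two volume errors and thus make them partially cancel in the EF.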

VII Conclusions

More accurate and reproducible data analysis is a key innovation in echocardiography, both for diagnosis and for patient follow-up. In this study, we introduced a novel multi-task approach to improve the robustness of the segmentation of the endocardium and epicardium in 2D echocardiography. We showed that the joint optimization of the localization and segmentation tasks leads to better segmentation results at the end of the process. Our method i) outperforms U-Net1, the current best performing deep learning solution on the CAMUS dataset; ii) produced among the best results of the tested attention-based networks; iii) produced overall segmentation scores lower than the intra-observer variability for the epicardial border with 12% of geometric outliers; iv) closely reproduces the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.96; v) improves the estimation of the ejection fraction of the left ventricle, with scores that remain slightly higher than the intra-observer ones. Though the intra-observer variability remains to be reached for a set of metrics, this study established localization as a lead towards more robust 2D echocardiographic image analysis with a deep learning approach.

Acknowledgment

We would like to thank Dr. Ozan Oktay for his help in the implementation of AG-U-Net. This work was performed within the framework of the LABEX PRIMES (ANR-11-LABX-0063) of Université de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). The Centre for Innovative Ultrasound Solutions (CIUS) is funded by the Norwegian Research Council (project code 237887).

References

  • [1] O. Bernard, J. G. Bosch, B. Heyde, M. Alessandrini, D. Barbosa, S. Camarasu-Pop, F. Cervenansky, S. Valette, O. Mirea, et al. (2016) Standardized Evaluation System for Left Ventricular Segmentation Algorithms in 3D Echocardiography. IEEE Transactions on Medical Imaging 35 (4), pp. 967–977.
  • [2] E. D. Folland, A. F. Parisi, P. F. Moynihan, D. R. Jones, C. L. Feldman, and D. E. Tow (1979) Assessment of left ventricular ejection fraction and volumes by real-time, two-dimensional echocardiography. A comparison of cineangiographic and radionuclide techniques. Circulation 60 (4), pp. 760–766.
  • [3] Q. Guan and Y. Huang (2018) Multi-label chest X-ray image classification via category-wise residual attention learning. Pattern Recognition Letters.
  • [4] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask R-CNN. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2980–2988.
  • [5] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu (2015) Spatial transformer networks. In Advances in Neural Information Processing Systems 28, pp. 2017–2025.
  • [6] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1097–1105.
  • [7] S. Leclerc, E. Smistad, J. Pedrosa, A. Østvik, F. Cervenansky, F. Espinosa, T. Espeland, E. A. R. Berg, P. Jodoin, T. Grenier, C. Lartizien, J. D’hooge, L. Lovstakken, and O. Bernard (2019) Deep learning for segmentation using an open large-scale dataset in 2D echocardiography. IEEE Transactions on Medical Imaging 38 (9), pp. 2198–2210.
  • [8] S. Leclerc, E. Smistad, J. Pedrosa, A. Østvik, F. Cervenansky, F. Espinosa, T. Espeland, E. A. R. Berg, P. Jodoin, T. Grenier, C. Lartizien, J. D’hooge, L. Lovstakken, and O. Bernard (2019) RU-Net: a refining segmentation network for 2D echocardiography. In IEEE International Ultrasonics Symposium (IUS).
  • [9] C. Li, Q. Tong, X. Liao, W. Si, Y. Sun, Q. Wang, and P. Heng (2019) Attention based hierarchical aggregation network for 3D left atrial segmentation. In Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges, pp. 255–264.
  • [10] A. Newell, K. Yang, and J. Deng (2016) Stacked hourglass networks for human pose estimation. In Computer Vision – ECCV 2016, Cham, pp. 483–499.
  • [11] O. Oktay, E. Ferrante, K. Kamnitsas, M. Heinrich, W. Bai, J. Caballero, S. A. Cook, A. de Marvao, T. Dawes, D. P. O’Regan, B. Kainz, B. Glocker, and D. Rueckert (2018) Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation. IEEE Transactions on Medical Imaging 37 (2), pp. 384–395.
  • [12] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, B. Glocker, and D. Rueckert (2018) Attention U-Net: learning where to look for the pancreas. In Medical Imaging with Deep Learning (MIDL’18).
  • [13] C. Payer, D. Štern, H. Bischof, and M. Urschler (2018) Multi-label whole heart segmentation using CNNs and anatomical label configurations. In Statistical Atlases and Computational Models of the Heart. ACDC and MMWHS Challenges, pp. 190–198.
  • [14] J. Pedrosa, S. Queirós, O. Bernard, J. Engvall, T. Edvardsen, E. Nagel, and J. D’hooge (2017) Fast and Fully Automatic Left Ventricular Segmentation and Tracking in Echocardiography Using Shape-Based B-Spline Explicit Active Surfaces. IEEE Transactions on Medical Imaging 36 (11), pp. 2287–2296.
  • [15] E. Pesce, S. J. Withey, P. Ypsilantis, R. Bakewell, V. Goh, and G. Montana (2019) Learning to detect chest radiographs containing pulmonary lesions using visual attention networks. Medical Image Analysis 53, pp. 26–38.
  • [16] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788.
  • [17] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, pp. 91–99.
  • [18] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proc. MICCAI, pp. 234–241.
  • [19] J. Schlemper, O. Oktay, M. Schaap, M. Heinrich, B. Kainz, B. Glocker, and D. Rueckert (2019) Attention gated networks: learning to leverage salient regions in medical images. Medical Image Analysis 53, pp. 197–207.
  • [20] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556.
  • [21] D. M. Vigneault, W. Xie, C. Y. Ho, D. A. Bluemke, and J. A. Noble (2018) Omega-Net: fully automatic, multi-view cardiac MR detection, orientation, and segmentation with deep neural networks. Medical Image Analysis 48, pp. 95–106.
  • [22] F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, and X. Tang (2017) Residual attention network for image classification. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6450–6458.
  • [23] Y. Wang, Z. Deng, X. Hu, L. Zhu, X. Yang, X. Xu, P. Heng, and D. Ni (2018) Deep attentional features for prostate segmentation in ultrasound. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, pp. 523–530.
  • [24] Z. Zhou, M. R. Siddiquee, N. Tajbakhsh, and J. Liang (2018) UNet++: a nested U-Net architecture for medical image segmentation. In Proc. of Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 3–11.
  • [24] Z. Zhou, M.R. Siddiquee, N. Tajbakhsh, and J. Liang (2018) UNet++: a nested u-net architecture for medical image segmentation. In in proc. of Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 3–11. Cited by: §II.