Automatic 3D Ultrasound Segmentation of Uterus Using Deep Learning

On-line segmentation of the uterus can aid effective image-based guidance for precise delivery of dose to the target tissue (the uterocervix) during cervical cancer radiotherapy. 3D ultrasound (US) can be used to image the uterus; however, finding the position of the uterine boundary in US images is a challenging task due to large daily positional and shape changes of the uterus, large variation in bladder filling, and the limitations of 3D US images such as low resolution in the elevational direction and imaging aberrations. Previous studies on uterus segmentation mainly focused on developing semi-automatic algorithms, which require manual initialization by an expert clinician. Given the limited number of studies on automatic 3D uterus segmentation, the aim of the current study was to overcome the need for manual initialization in semi-automatic algorithms using recent deep learning-based algorithms. To this end, we developed 2D UNet-based networks that were trained under two scenarios. In the first scenario, we trained 3 different networks, one per plane (i.e., sagittal, coronal, axial). In the second scenario, our proposed network was trained using all the planes of each 3D volume. Our proposed scheme overcomes the initial manual selection required by the previous semi-automatic algorithm.

I Introduction

Cervical cancer, one of the most frequent cancer types in women, affects more than half a million females each year and results in 300,000 deaths worldwide [4]. It is, however, largely preventable, and treatment depends on the severity of the condition and the availability of local resources at the time of diagnosis [4]. Recent studies have shown that incorporating the results of advanced imaging technology and surgical staging leads to improved prognosis and treatment planning [7]. Imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US) imaging have been utilised in treatment planning. However, MRI, CT, and PET imaging facilities are costly, not uniformly available, require long scanning times, and are not real-time. Thus, US imaging has emerged as the most suitable modality for cervical cancer screening owing to its cost-effectiveness, lack of ionising radiation, non-invasiveness, ease of use at the bedside, and real-time nature.

Radiotherapy is a type of treatment that delivers a dose of radiation to the target tissues; however, its effectiveness in the treatment of cervical cancer is limited by motion of the target tissues [8]. Therefore, on-line segmentation of the uterus can aid effective image-based guidance for precise delivery of dose to the target tissue (the uterocervix) during cervical cancer radiotherapy. Furthermore, segmenting the uterus can aid in determining the extent of a tumour and the presence of metastatic disease. However, finding the position of the uterine boundary in US images is a challenging task due to large daily positional and shape changes of the uterus (shown in Fig. 1), large variation in bladder filling, and the limitations of 3D US images such as low resolution in the elevational direction. One group of studies on uterus segmentation focused on developing semi-automatic algorithms, which require manual initialization by an expert clinician. Mason et al. developed a semi-automatic algorithm in which a central sagittal plane is manually contoured; the selected plane and contour are then used as the starting point for fitting elliptical contours in semi-axial planes [11]. Another group focused on the use of conventional image processing techniques for automatic detection and segmentation of uterine fibroids [5, 13].

Fig. 1: An illustration of uterine location variation (sagittal view) in one patient across two scans taken on different days.

Recent advances in image processing approaches, such as artificial intelligence (AI) and deep learning (DL) algorithms, have paved the way for solving a variety of problems. AI and deep learning approaches hold great potential in medicine, particularly in diagnostic US imaging, where large datasets must be managed. In US image analysis, many researchers have shown promising results in the detection of breast lesions [1, 2, 3, 17], muscle [10], thyroid nodules [12], the prostate [15], the liver [18], and the brain [16]. However, given the limited number of studies on automatic segmentation of uterus US images, the main focus of the current study is to investigate automatic segmentation of 3D uterus US images and to eliminate the need for manual initialization in previous semi-automatic algorithms using recent deep learning-based techniques. The success of deep learning techniques is heavily dependent on the amount of available annotated data, and creating annotations for US images is a time- and cost-intensive operation. More specifically, 3D networks have a higher number of parameters, which causes memory issues and a greater demand for annotated 3D data. Therefore, due to the limited available 3D uterus data, we explore 2D networks that operate on the 2D planes of 3D volumes.

II Materials and Methods

II-A Dataset

The dataset used in the current study consists of 3D US images of 11 patients. On average, each patient received 4 sessions of 3D US scanning, leading to a total of 38 3D US scans, with each 3D scan comprising 100 2D images. Two patients were chosen as the test set and the remainder as the train set, resulting in 35 and 3 scans for the train and test sets, respectively. Table I presents the number of scans for each patient. An example of US images with their overlaid annotations across all planes (i.e., axial, coronal, sagittal) is presented in Fig. 2. As the 3D volumes varied in size, we rescaled all scans to an identical shape of 576×576×576 (a minimal sketch of this resampling step is given after Fig. 2).

Patient ID No. 3D scans Train/Test
1 2 Test
2 5 Train
3 3 Train
4 4 Train
5 5 Train
6 4 Train
7 5 Train
8 3 Train
9 5 Train
10 1 Test
11 1 Train
TABLE I: Number of 3D US scans per patient. Patients 1 and 10 were selected as the test set, and the rest were grouped as the train set.

Fig. 2: An example of 3D US image with the uterus annotation across (a) axial, (b) coronal, and (c) sagittal planes.
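The paper does not state how the volumes were resampled; the following is a minimal sketch of the 576×576×576 rescaling step, under the assumption that scipy's zoom is used, with trilinear interpolation for the images and nearest-neighbour interpolation for the annotation masks so that the labels stay binary.

    import numpy as np
    from scipy.ndimage import zoom

    TARGET_SHAPE = (576, 576, 576)  # identical shape used for all scans

    def resample_volume(volume: np.ndarray, is_mask: bool = False) -> np.ndarray:
        """Rescale a 3D US volume (or its annotation mask) to TARGET_SHAPE.

        Trilinear interpolation (order=1) for images; nearest-neighbour
        (order=0) for masks so that the labels stay binary.
        """
        factors = [t / s for t, s in zip(TARGET_SHAPE, volume.shape)]
        return zoom(volume, factors, order=0 if is_mask else 1)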

II-B Protocol

Most recently developed deep learning algorithms suffer from limited generalization, and the performance of such algorithms on a new dataset needs to be investigated. Furthermore, training 3D networks with only 38 3D volumes is not feasible. Therefore, we developed 2D networks for segmentation and stacked their outputs into a 3D volume as the final prediction. Each 3D volume was partitioned into 2D slices along the coronal, sagittal, and axial planes. We conducted our analysis under two main scenarios. In the first scenario, we trained 3 different 2D networks, one per 2D plane (i.e., sagittal, coronal, axial). In the second scenario, our proposed 2D network was trained using 2D images across all the planes of each 3D volume. A minimal sketch of this slice-and-stack procedure is given below.
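As a concrete illustration of the slice-and-stack protocol, the sketch below shows one way to decompose a volume into 2D planes and to reassemble per-slice predictions into a 3D mask. The axis-to-plane mapping and the predict_2d callable are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Assumed axis convention for this sketch: 0 = axial, 1 = coronal, 2 = sagittal.
    def slices_along(volume: np.ndarray, axis: int):
        """Yield the 2D slices of a 3D volume along one plane."""
        for i in range(volume.shape[axis]):
            yield np.take(volume, i, axis=axis)

    def predict_volume(volume: np.ndarray, predict_2d, axis: int) -> np.ndarray:
        """Run a 2D segmentation network slice by slice and stack the
        resulting 2D masks back into a 3D prediction volume."""
        masks = [predict_2d(s) for s in slices_along(volume, axis)]
        return np.stack(masks, axis=axis)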

II-C Experiments

The proposed network was based on the well-known segmentation architecture U-Net [14], with MobileNet-v2 [6] as its feature extractor. Segmentation masks generated by the proposed algorithm were compared to expert manual contours. We had three networks to train in the first scenario and one in the second. For brevity, we refer to net_X, net_Y, net_Z, and net_all as the networks trained on 2D images of the axial, coronal, sagittal, and all planes, respectively. All of the aforementioned networks were trained for 200 epochs using the Adam optimizer [9] with a fixed learning rate and weight decay. 2D images were reshaped to a size of 576×576, and a centre-crop augmentation with a cropping window of 512×512 was applied. Additionally, images were randomly flipped vertically and/or horizontally. 5-fold cross-validation was conducted to account for variation in network performance. The loss function was set to the combination of the binary cross-entropy (BCE) and Dice similarity (DSC) functions (Eq. 1).

\mathcal{L} = \mathcal{L}_{\mathrm{BCE}} + \mathcal{L}_{\mathrm{DSC}}   (1)

where $Y$ and $\hat{Y}$ are the ground-truth and predicted segmentation masks, respectively, and $\mathcal{L}_{\mathrm{DSC}} = 1 - \frac{2|Y \cap \hat{Y}|}{|Y| + |\hat{Y}|}$. And $\mathcal{L}_{\mathrm{BCE}} = -\sum_{i}\left[y_i \log p(\hat{y}_i) + (1 - y_i)\log\left(1 - p(\hat{y}_i)\right)\right]$, where $\hat{y}$ and $p(\cdot)$ denote the predictions and the probability function, respectively.
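A minimal PyTorch sketch of the network and the Eq. (1) loss is given below. The segmentation_models_pytorch package, the single input channel, and the default optimizer settings are assumptions: the paper does not name its implementation, and the learning-rate and weight-decay values are not legible in the text.

    import torch
    import torch.nn.functional as F
    import segmentation_models_pytorch as smp  # assumed third-party U-Net implementation

    # U-Net with a MobileNet-v2 feature extractor and one output channel (uterus mask).
    model = smp.Unet(encoder_name="mobilenet_v2", in_channels=1, classes=1)

    def bce_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                      eps: float = 1e-6) -> torch.Tensor:
        """Combined loss of Eq. (1): binary cross-entropy plus soft Dice."""
        bce = F.binary_cross_entropy_with_logits(logits, target)
        prob = torch.sigmoid(logits)  # p(y_hat) in Eq. (1)
        inter = (prob * target).sum()
        dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
        return bce + (1 - dice)

    # Adam optimizer [9]; learning rate and weight decay left at the library
    # defaults here, as the values are not recoverable from the text.
    optimizer = torch.optim.Adam(model.parameters())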

III Results

Figure 3 shows the train-validation loss for two networks. We only include the train-validation loss of net_X because the train-validation losses of the other networks of our 1st scenario (i.e., net_Y and net_Z) are similar. The train-validation loss obtained when all the planes of the 3D volume (axial, coronal, and sagittal) are combined (net_all) is shown in Fig. 3(b). Figures 4(c) and (d) show an example sagittal slice of one patient, in which the uterus is fully visible, predicted by net_X and net_all under our first and second scenarios, respectively.

Fig. 3: Train-validation loss for the 1st fold of the 5-fold cross validation ((a)-(c): 1st scenario; (d): 2nd scenario).

Fig. 4: An example of ground truth versus predicted segmentation masks from net_X (DSC=0.88) and net_all (DSC=0.8) for a middle slice.

We observed that for the middle slices, where the uterus is fully visible, the DSC is high for both test patients. However, for the slices close to the edges of the uterus, the DSC is low, which means the networks perform well mainly on the middle slices. The distribution of the DSC across slices in the axial plane for one scan in all 5 folds is illustrated in Fig. 5, with a minimal sketch of the per-slice DSC computation given after the figure. The distribution of the DSC in each fold is shown in (a)-(e), and the average DSC is shown in (f). The red line in this figure marks a DSC of 0.7. Our proposed algorithms can therefore overcome the need for manual selection of the middle slices required by the semi-automatic algorithm presented by Mason et al. [11]. Some slices, however, are in the middle and still have a low DSC (marked with red circles in Fig. 5). We will investigate these cases further in future work.

Fig. 5: Distribution of the DSC across folds for patient ID 1.
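For reference, a short sketch of how such a per-slice DSC profile can be computed from a predicted and a ground-truth volume is shown below; binary masks and axis 0 as the axial direction are assumptions of this sketch.

    import numpy as np

    def dsc(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
        """Dice similarity coefficient between two binary masks."""
        inter = np.logical_and(pred, gt).sum()
        return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

    def per_slice_dsc(pred_vol: np.ndarray, gt_vol: np.ndarray,
                      axis: int = 0) -> np.ndarray:
        """DSC of every 2D slice along one axis (axis 0 assumed axial),
        e.g. to plot the per-slice distribution shown in Fig. 5."""
        return np.array([dsc(np.take(pred_vol, i, axis=axis),
                             np.take(gt_vol, i, axis=axis))
                         for i in range(pred_vol.shape[axis])])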

The quantitative results are reported in Tables II and III. The average DSC over all slices is low for most scans due to the difficulty in segmenting slices near the edges, as addressed earlier. However, we observed that the DSC of the middle slices is higher, as expected, and both scenarios behave similarly.

Patient ID   Scan No.   net_X   net_Y   net_Z
All slices
1            1
1            2
10           1
4 mid-slices
1            1
1            2
10           1
TABLE II: Quantitative results - Average DSC - Scenario 1.
Patient ID   Scan No.   Axial   Coronal   Sagittal
All slices
1            1
1            2
10           1
4 mid-slices
1            1
1            2
10           1
TABLE III: Quantitative results - Average DSC - Scenario 2 (net_all, evaluated on each plane).

IV Discussion

As mentioned earlier, uterus segmentation in US images is very challenging due to the organ's location and its inconspicuous boundaries. In the previous semi-automatic algorithm presented by Mason et al. [11], the starting point of the algorithm is finding the slice in which the uterus is completely visible. Our proposed scheme therefore not only overcomes the initial manual selection of the previous semi-automatic algorithm, but also provides a DSC comparable to that of the semi-automatic algorithm. As we utilised MobileNet-v2, which is well known for its low memory footprint, the proposed network configuration is also sufficiently lightweight, making it suitable for use in the clinic, where results are required within a few seconds. We found that all of the proposed networks perform inadequately on slices close to the boundaries of the uterus, which is a shortcoming of the current study. As part of our ongoing research, we will delve deeper into this issue.

References

  • [1] M. Amiri, R. Brooks, B. Behboodi, and H. Rivaz (2020) Two-stage ultrasound image segmentation using u-net and test time augmentation. International journal of computer assisted radiology and surgery 15 (6), pp. 981–988. Cited by: §I.
  • [2] B. Behboodi, H. Rasaee, A. K. Tehrani, and H. Rivaz (2021) Deep classification of breast cancer in ultrasound images: more classes, better results with multi-task learning. In Medical Imaging 2021: Ultrasonic Imaging and Tomography, Vol. 11602, pp. 116020S. Cited by: §I.
  • [3] B. Behboodi and H. Rivaz (2019) Ultrasound segmentation using u-net: learning from simulated data and testing on real data. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 6628–6631. Cited by: §I.
  • [4] P. A. Cohen, A. Jhingran, A. Oaknin, and L. Denny (2019) Cervical cancer. The Lancet 393 (10167), pp. 169–182. Cited by: §I.
  • [5] K. Dilna and D. Jude Hemanth (2020) Fibroid detection in ultrasound uterus images using image processing. In International Conference on Innovative Computing and Communications, pp. 173–179. Cited by: §I.
  • [6] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen (2018) MobileNetV2: inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4510–4520. Cited by: §II-C.
  • [7] Y. Hsiao, S. Yang, Y. Chen, T. Chen, H. Tsai, M. Chou, and P. Chou (2021) Updated applications of ultrasound in uterine cervical cancer. Journal of Cancer 12 (8), pp. 2181. Cited by: §I.
  • [8] S. J. Huh, W. Park, and Y. Han (2004) Interfractional variation in position of the uterus during radical radiotherapy for cervical cancer. Radiotherapy and oncology 71 (1), pp. 73–79. Cited by: §I.
  • [9] D. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §II-C.
  • [10] I. Loram, A. Siddique, M. B. Sánchez, P. Harding, M. Silverdale, C. Kobylecki, and R. Cunningham (2020) Objective analysis of neck muscle boundaries for cervical dystonia using ultrasound imaging and deep learning. IEEE journal of biomedical and health informatics 24 (4), pp. 1016–1027. Cited by: §I.
  • [11] S. A. Mason, I. M. White, S. Lalondrelle, J. C. Bamber, and E. J. Harris (2020) The stacked-ellipse algorithm: an ultrasound-based 3-d uterine segmentation tool for enabling adaptive radiotherapy for uterine cervix cancer. Ultrasound in medicine & biology 46 (4), pp. 1040–1052. Cited by: §I, §III, §IV.
  • [12] A. Ouahabi and A. Taleb-Ahmed (2021) Deep learning for real-time semantic segmentation: application in ultrasound imaging. Pattern Recognition Letters 144, pp. 27–34. Cited by: §I.
  • [13] M. J. Padghamod and J. P. Gawande (2014) Classification of ultrasonic uterine images. Adv Res Electr Electron Eng 1 (3), pp. 89–92. Cited by: §I.
  • [14] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-assisted Intervention, pp. 234–241. Cited by: §II-C.
  • [15] J. Shi, S. Zhou, X. Liu, Q. Zhang, M. Lu, and T. Wang (2016) Stacked deep polynomial network based representation learning for tumor classification with small ultrasound image dataset. Neurocomputing 194, pp. 87–94. Cited by: §I.
  • [16] P. Sombune, P. Phienphanich, S. Phuechpanpaisal, S. Muengtaweepongsa, A. Ruamthanthong, and C. Tantibundhit (2017) Automated embolic signal detection using deep convolutional neural network. In 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 3365–3368. Cited by: §I.
  • [17] A. K. Tehrani, M. Amiri, I. M. Rosado-Mendez, T. J. Hall, and H. Rivaz (2021) Ultrasound scatterer density classification using convolutional neural networks and patch statistics. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. Cited by: §I.
  • [18] K. Wu, X. Chen, and M. Ding (2014) Deep learning based classification of focal liver lesions with contrast-enhanced ultrasound. Optik 125 (15), pp. 4057–4063. Cited by: §I.