Automatic Estimation of Fetal Abdominal Circumference from Ultrasound Images

02/09/2017 ∙ by Jaeseong Jang, et al.

Ultrasound diagnosis is routinely used in obstetrics and gynecology for fetal biometry, and because the process is time-consuming, there has been great demand for automatic estimation. However, automated analysis of ultrasound images is complicated because they are patient-specific, operator-dependent, and machine-specific. Among the various fetal biometry parameters, accurate estimation of the abdominal circumference (AC) is especially difficult to automate because, compared to the other parameters, the abdomen has low contrast against its surroundings, non-uniform contrast, and an irregular shape. We propose a method for the automatic estimation of the fetal AC from 2D ultrasound data through a specially designed convolutional neural network (CNN), which takes account of doctors' decision process, anatomical structure, and the characteristics of the ultrasound image. The proposed method uses the CNN to classify ultrasound image regions (stomach bubble, amniotic fluid, and umbilical vein) and the Hough transformation for measuring AC. We test the proposed method using clinical ultrasound data acquired from 56 pregnant women. Experimental results show that, with relatively few training samples, the proposed CNN provides classification results sufficient for AC estimation through the Hough transformation. The method is quantitatively evaluated and shows stable performance in most cases, even for ultrasound images deteriorated by shadowing artifacts. In the experiments for our acceptance check, the accuracies are 0.809 and 0.771 against expert 1 and expert 2, respectively, while the accuracy between the two experts is 0.905. However, for cases of an oversized fetus, when the amniotic fluid is not observed or the abdominal area is distorted, the method cannot correctly estimate AC.


I Introduction

Ultrasound is the most commonly used tool in obstetrics for the anatomical and functional surveillance of fetuses. Fetal biometry (estimation of the fetal biparietal diameter (BPD), head circumference (HC), and abdominal circumference (AC)) is known to be useful for predicting intrauterine growth restriction and fetal maturity, and for estimating gestational age [1]. Acquisition of the standard plane, which includes specific anatomical structures as landmarks, is a prerequisite for the subsequent biometric measurements, including BPD, HC, AC, and femur length (FL) [2]. In clinical practice, clinicians obtain the standard planes manually; this process requires knowledge of anatomy and spatial perception, so accuracy depends on the operator's experience [3, 4]. Fetal weight estimated by ultrasound is subject to intra- and inter-observer variability because it is extrapolated from a formula of fetal biometric measurements [2]. Among these measurements, AC is the most predictive of fetal weight; thus, variation in AC measurement leads to inaccurate fetal weight estimation [5]. To ensure a precise AC plane that is perpendicular to the true fetal longitudinal axis, the clinician has to move the transducer continuously to find a plane containing the correct landmarks. This process is, firstly, cumbersome, as fetal movement, breathing movement, and fetal position hinder prompt acquisition of the plane; and, secondly, prone to inaccurate measurement, as inexperienced operators often fail to identify all the landmarks of the correct AC plane [6]. Therefore, the development and implementation of automated fetal biometric measurement has recently gained attention in the hope of improving clinicians' workflow and overcoming operator dependency [1, 6].

For stable extraction of morphological information from ultrasound images, numerous methods have been suggested to handle noisy ultrasound images, which are affected by signal dropouts, artifacts, missing boundaries, attenuation, shadows, and speckle [7]. To deal with such inherent difficulties, most methods prefer image intensity-based or gradient-based approaches to extract the boundaries of target anatomies [1, 8, 9, 10, 12] and the abdominal contour [11, 13]. While the image gradient-based methods show stable performance and steady progress for HC and FL, which have high contrast against their surroundings, automatic measurement of AC is considered more challenging because the fetal abdomen has low contrast against its surroundings, non-uniform contrast, and an irregular shape in ultrasound images.

In addition to boundary extraction, it is important to evaluate how suitable a given ultrasound image is for AC measurement [14]. Although such evaluation is essential to automating the entire diagnostic process for AC measurement, the model in [14] is limited to finding a proper plane and does not extract the AC.

Instead of such image-based approaches, machine learning methods, such as the probabilistic boosting tree (PBT) [15], have been used for fetal biometry including AC. The PBT method is a multi-class discriminative model that constructs a tree whose nodes are distinct strong classifiers composed of several weak classifiers. By classifying segment structures in ultrasound images, this method estimates fetal biometry parameters [15]. Although this approach showed notable results, it requires complex, well-annotated data to train the tree.

Recently, following its great success in object recognition, the convolutional neural network (CNN) has attracted much attention and has also been applied to fetal biometry to analyze high-level features of ultrasound image data. Existing methods aim to find a standard abdominal plane from successive ultrasound frames [3] or to localize the fetal abdomen in an ultrasound image [16]. These approaches, however, implement only a part of the entire measurement process and need to be integrated for full automation. Additionally, these methods face obstacles in the clinical environment: (i) it is difficult to collect sufficient data for training, and (ii) it is difficult to cope with serious artifacts, including shadowing artifacts [3].

In this paper, we propose a method that increases classification performance with a relatively small amount of data and copes with artifacts by including the ultrasound propagation direction, as well as image patches of multiple scales, as inputs. The proposed method classifies image patches from an ultrasound image into anatomical structures, so that the classification allows verification of the acceptability of a given abdominal plane. By detecting anatomical structures in the fetal abdomen, we estimate the AC of the accepted plane using an ellipse detection method based on the Hough transform. We validated our method using ultrasound AC measurement data from fetuses at 20-34 weeks of gestation. Three trained clinicians evaluated the accepted abdominal planes and the AC estimated by the method.

The major contributions of our work are as follows:

  • We develop a specialized CNN structure that takes account of sonographers’ decision process by considering the characteristics of ultrasound imaging. The proposed CNN structure shows high training performance in spite of a relatively low number of training samples.

  • We develop a framework that combines the CNN and Hough transform to complement each other. The CNN simultaneously provides evidence for AC plane evaluation and pre-processing of an ultrasound image for AC estimation. With the combination, we can achieve a more stable AC estimation compared to the case of using a mathematical model alone.

II Methods

Fig. 1: Fetal abdominal ultrasound images and anatomical structures. In a standard plane, the stomach bubble (SB) and the umbilical vein (UV), appearing as a "hockey stick" bending against the SB, must be demonstrated.

Fig. 2: Overall process of the proposed framework. The proposed framework performs semantic segmentation using a CNN, AC measurement, and a plane acceptance check. In particular, the CNN used for semantic segmentation takes the normal view (N-view), wide view (W-view), and ultrasound propagation direction (U-Dir) as inputs.

Fetal AC measurement requires a suitable selection of transabdominal ultrasound images, as shown in Fig. 1, and the identification of the fetal region from the noisy ultrasound images. The standard AC plane must contain the stomach bubble (SB) and the portal section of the umbilical vein (UV), which has the characteristic "hockey-stick" appearance [24]. Additionally, a portion of the fetal boundary overlaps with a portion of the amniotic fluid (AF) boundary. To utilize these facts, SB, UV, and AF should be distinguished from each other and from shadowing artifacts (SA), because all of them appear as anechoic regions in B-mode ultrasound images.

Taking account of these observations, the proposed method consists of three main steps: anatomical structure detection using a CNN, fetal abdominal region detection using the Hough transform, and plane acceptance verification using another CNN (Fig. 2). Before explaining the proposed method in detail, let us review the CNN.

II-A Proposed CNN Structure

A CNN is a type of artificial neural network inspired by visual information processing in the brain. To recognize complex features in visual information, a CNN consists of several layers, which extract low-level features and repeatedly combine them to compose high-level features. The composed high-level features are used to classify an input image.

Generally, CNNs consist of combinations of convolutional, pooling, and fully connected layers. The convolutional layer (C-layer) extracts higher-level features by convolving the feature maps received from the previous layer and activating the convolved features. In this paper, the rectified linear unit (ReLU) is used as the activation function. A C-layer is usually followed by a pooling layer (P-layer), which reduces the dimensions of feature maps by "max pooling." Max pooling downsamples the input feature maps by striding a rectangular receptive field and taking the maximum within the field. After the C-layers and P-layers, a fully connected layer (F-layer) integrates the high-level features and produces compact feature vectors. As in the C-layers, ReLU is used as the activation function of the F-layers in our work. The final layer, say the $L$-th layer, produces the posterior probability $p_c$ for each class $c$. Classification is achieved by finding the label corresponding to the maximum of $p_c$.

Training a CNN amounts to finding proper parameters of the CNN, say $\Theta$, using training data. To find a proper $\Theta$, an entropy or energy loss $E(\Theta; D)$ is defined and minimized, where $D$ denotes the training data. In other words, a proper set of parameters is obtained by solving the optimization problem

$$\Theta^{*} = \arg\min_{\Theta} E(\Theta; D) \qquad (1)$$

For more details about CNN, readers may refer to [17, 18].
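As an illustration of the optimization problem (1), the following minimal sketch minimizes a cross-entropy energy $E(\Theta; D)$ by plain gradient descent; the linear classifier, toy data, and step size are hypothetical stand-ins for the CNN and the clinical training data.

```python
# Minimal sketch of Eq. (1): choose parameters Theta to minimize an
# energy E(Theta; D) over training data D. Here E is the cross-entropy
# of a toy linear classifier, a stand-in for the CNN.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(theta, X, y):
    """E(Theta; D): mean negative log posterior of the true class."""
    p = softmax(X @ theta)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def grad(theta, X, y):
    p = softmax(X @ theta)
    p[np.arange(len(y)), y] -= 1.0
    return X.T @ p / len(y)

# Theta* = argmin_Theta E(Theta; D), approximated by gradient descent.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 8)), rng.integers(0, 4, size=64)  # toy data D
theta = np.zeros((8, 4))
for _ in range(200):
    theta -= 0.5 * grad(theta, X, y)
print("final energy:", cross_entropy(theta, X, y))
```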

Fig. 3: The relation between the ultrasound propagation direction (white arrows) and the image pattern direction (yellow arrows) in image patches. In the image patch corresponding to a shadowing artifact (light blue box), the image pattern is strongly related to the ultrasound propagation direction, compared to the patch corresponding to AF (red box).
Fig. 4: Variation of the observed image features according to the size of the view. Local patterns of the dark regions appear similar in relatively small views (two green boxes). However, as the view size increases (two yellow boxes), distinct image features appear in each view.

The proposed CNN structure is based on the following observations, motivated by comparing sonographers' classification process with conventional CNN structures for object recognition:

  1. A position is classified as a shadowing artifact not only by the local image pattern at the position but also by the expected ultrasound propagation direction and the position of hard materials (spine, ribs) (Fig. 3).

  2. For accurate classification, both the local and non-local structure information need to be integrated. For example, in Fig. 4, we can observe two positions having similar local patterns with distinct non-local structures.

Based on the first observation, the proposed CNN structure admits the ultrasound propagation direction as one of its inputs. For example, the propagation direction can be simply modelled by

$$d(x) = \frac{x}{\|x\|} \qquad (2)$$

where $x$ is the position with respect to the probe position. For the computational efficiency of the CNN, the size of the input should be as small as possible. However, as per our second observation, we need both local and non-local structure information to classify each position accurately. Therefore, we used two image patches corresponding to a normal view and a wide view as inputs. Firstly, a small local image patch was selected as the normal-view patch to analyze the local structure around a given position by its image pattern. Secondly, a larger non-local image patch was selected as the wide-view patch to relate the local structure to structures far from the position. To reduce the computational cost, the wide-view patch was simplified into a low-resolution image.
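A minimal sketch of the direction model (2) follows: each pixel is assigned the unit vector from an assumed probe position above the image, as in our experiments; the probe coordinates and image size here are illustrative.

```python
# Sketch of Eq. (2): the ultrasound propagation direction at each pixel
# is the unit vector d(x) = x / ||x||, with x the pixel position relative
# to an assumed probe position located above the image.
import numpy as np

def propagation_direction(height, width, probe=(-20.0, 64.0)):
    """Return an (H, W, 2) field of unit direction vectors."""
    rows, cols = np.mgrid[0:height, 0:width].astype(float)
    dx = np.stack([rows - probe[0], cols - probe[1]], axis=-1)
    return dx / np.linalg.norm(dx, axis=-1, keepdims=True)

d = propagation_direction(128, 128)
print(d.shape, np.allclose(np.linalg.norm(d, axis=-1), 1.0))
```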

The output of the proposed CNN structure is a four-dimensional vector, corresponding to the 4 categories of SA and the main anatomical structures in the standard abdominal plane: SB, UV, and AF. We chose all the image patches centered at dark pixels (those below an intensity threshold) in a given ultrasound image. The proposed CNN then classifies the chosen image patches into the 4 categories. With this result, a semantically segmented image is produced by coloring each of the chosen pixels according to its category.
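The following sketch illustrates this pixel-selection and coloring step; the intensity threshold and the classifier are hypothetical stand-ins for the trained CNN, and the colors follow the convention used in Section III-C (red, green, blue, and gray for SB, UV, AF, and SA).

```python
# Sketch of building the semantic segmentation image: patches centered at
# dark (anechoic) pixels are classified into 4 categories, and each chosen
# pixel is colored by its category.
import numpy as np

COLORS = {0: (255, 0, 0),      # SB -> red
          1: (0, 255, 0),      # UV -> green
          2: (0, 0, 255),      # AF -> blue
          3: (128, 128, 128)}  # SA -> gray

def semantic_image(image, classify, dark_threshold=30):
    """image: 2D uint8 B-mode image; classify(r, c) -> class in {0,1,2,3}."""
    seg = np.zeros(image.shape + (3,), dtype=np.uint8)
    rows, cols = np.where(image < dark_threshold)  # anechoic candidates
    for r, c in zip(rows, cols):
        seg[r, c] = COLORS[classify(r, c)]
    return seg

img = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
seg = semantic_image(img, classify=lambda r, c: (r + c) % 4)  # dummy CNN
```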

The proposed CNN begins with three branches, designed to handle the two image patches of different views and the propagation direction. The image branches extract the desired image features, and the direction branch analyzes the propagation direction.

Fig. 5: The proposed CNN structure. The CNN uses input data as a combination of image patches of multiple sizes and ultrasound propagation direction. From the image patches and the propagation directions, feature vectors are extracted and combined to classify a given image patch.

As shown in Fig. 5, the two branches for image analysis each consist of two pairs of convolutional and max-pooling layers as well as a fully connected layer. In the branch analyzing the normal-view image patch, the two convolutional layers extract features, and each max-pooling layer downsamples them with a fixed stride. The branch analyzing the wide-view image patch consists of the same combination of convolutional and max-pooling layers. The ultrasound propagation direction is analyzed through a fully connected layer.

The results produced by the branches are concatenated into one feature vector for classification. This vector passes through two fully connected layers to classify the given data into the 4 classes. We make the semantic segmentation image from these classification results (Fig. 2).
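For illustration, a compact PyTorch sketch of this three-branch structure is given below; although the original implementation used Caffe [23], the sketch conveys the architecture, and the patch sizes, filter counts, and feature widths are assumptions.

```python
# A minimal sketch of the three-branch structure in Fig. 5: two image
# branches (two conv/max-pool pairs plus a fully connected layer), one
# direction branch (a fully connected layer), concatenation, and two
# fully connected layers producing logits over the 4 classes.
import torch
import torch.nn as nn

def image_branch(out_dim=64):
    return nn.Sequential(
        nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.LazyLinear(out_dim), nn.ReLU())

class ThreeBranchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.normal = image_branch()       # normal-view patch
        self.wide = image_branch()         # downsampled wide-view patch
        self.direction = nn.Sequential(    # ultrasound propagation direction
            nn.Linear(2, 16), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(64 + 64 + 16, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, n_view, w_view, u_dir):
        f = torch.cat([self.normal(n_view), self.wide(w_view),
                       self.direction(u_dir)], dim=1)
        return self.head(f)                # logits over SB, UV, AF, SA

net = ThreeBranchCNN()
logits = net(torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32),
             torch.randn(8, 2))
print(logits.shape)  # torch.Size([8, 4])
```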

II-B Measurement Agreement

The common AC measurement practice is the manual fitting of an ellipse (or circle) to the fetal abdominal contour. To detect this ellipse automatically, ellipse detection methods based on the Hough transform [19, 20, 21, 22] have been proposed. However, direct application of these methods to the extracted AF region can produce undesired ellipse candidates (Fig. 6(d)), since the AF region in our semantic image does not surround the entire fetal abdominal region (Fig. 6(c)). To select the proper ellipse from the candidates generated by the ellipse detection method [19], we only accept ellipses whose ratio of minor to major axis is greater than 0.6. Among the remaining candidates, the half that contain the least AF are selected as our fetal abdominal boundary (Fig. 6(e)). For a stable result, the medians of the major axis, minor axis, center, and angle from the positive horizontal axis to the major axis over the candidate ellipses are taken as the parameters of the final ellipse. We estimate AC as the perimeter of the selected ellipse, approximated by Ramanujan's formula

$$AC \approx \pi \left[ 3(a+b) - \sqrt{(3a+b)(a+3b)} \right] \qquad (3)$$

where $a$ and $b$ are the semi-major and semi-minor axes of the ellipse.
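A small sketch of this candidate filtering and AC computation follows, assuming each candidate is a (semi-major, semi-minor, center, angle) tuple; the AF-based selection of candidates and the correction factor described in Section III-C are omitted for brevity.

```python
# Sketch of the ellipse selection: reject candidates with axis ratio
# <= 0.6, take the median of the remaining parameters as the final
# ellipse, and estimate AC as its perimeter (Ramanujan approximation).
# Candidate generation by the Hough transform is assumed elsewhere.
import math
import numpy as np

def ellipse_perimeter(a, b):
    """Ramanujan's approximation; a and b are semi-axes."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

def estimate_ac(candidates, min_ratio=0.6):
    """candidates: list of (a, b, cx, cy, angle) with a >= b (semi-axes)."""
    kept = [c for c in candidates if c[1] / c[0] > min_ratio]
    a, b, cx, cy, angle = np.median(np.array(kept), axis=0)
    return ellipse_perimeter(a, b), (cx, cy, angle)

cands = [(100, 80, 64, 64, 0.1), (105, 75, 66, 63, 0.0), (98, 50, 60, 70, 0.2)]
ac, pose = estimate_ac(cands)  # third candidate rejected (ratio ~0.51)
print(round(ac, 1), pose)
```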

Transabdominal images demonstrating the proper landmarks of the true axial plane for AC measurement were obtained by an expert. The AC measurement was performed either manually by other experts or by the proposed method. The localization of the fetal abdominal region was assessed, and the AC values of the manual and CNN-based methods were compared.

Fig. 6: The fetal abdomen detection workflow. (a) is the acquired semantic segmentation image. (b) and (c) are the extracted AF region and its boundary image, respectively. (d) shows the candidate fetal abdomen ellipses generated by the Hough transform. Among these candidates, the chosen best-fitting ellipse is shown in (e), with an expert's caliper placement in (f).

II-C Plane Acceptance Check

In this step, we evaluate the suitability of the selected plane to determine whether it is appropriate for measuring AC. The semantic segmentation image is cropped to the fetal abdominal area estimated in the previous step, and the cropped image is rescaled to a fixed size, as described in Fig. 7. In particular, the gray region corresponding to the shadowing artifact is excluded when the image is rescaled. Taking the rescaled image as input, the CNN in Table I estimates the suitability as the probability that the given image is appropriate for measuring AC. The CNN consists of three pairs of convolutional and pooling layers and three fully connected layers. The first convolutional layer detects features from the different channels (RGB), which represent different anatomical structures, and the feature information is propagated through the following convolutional layers to analyze the anatomical configuration. The last three fully connected layers integrate the detected features and determine the suitability.
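A sketch of this input preparation under simplifying assumptions (an axis-aligned crop around the detected ellipse, and a hypothetical target size of 64 x 64, which is not specified here):

```python
# Sketch of preparing the acceptance-check input: crop the semantic image
# to the bounding box of the detected ellipse, zero out the gray SA
# pixels, and rescale the crop to a fixed size.
import numpy as np
from PIL import Image

GRAY = np.array([128, 128, 128])  # SA color in the semantic image

def acceptance_input(seg, cx, cy, a, b, size=64):
    """seg: (H, W, 3) uint8 semantic image; (cx, cy): center; a, b: semi-axes."""
    h, w, _ = seg.shape
    r0, r1 = max(0, int(cy - b)), min(h, int(cy + b))
    c0, c1 = max(0, int(cx - a)), min(w, int(cx + a))
    crop = seg[r0:r1, c0:c1].copy()
    crop[np.all(crop == GRAY, axis=-1)] = 0          # exclude SA region
    return np.asarray(Image.fromarray(crop).resize((size, size)))

seg = np.zeros((128, 128, 3), dtype=np.uint8)
x = acceptance_input(seg, cx=64, cy=64, a=40, b=30)
print(x.shape)  # (64, 64, 3)
```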

Transabdominal ultrasound images were obtained by an expert and reviewed by each of two ultrasound experts, including the operator. When either expert accepted a given image, it was considered an acceptable image. The proposed CNN was evaluated by comparing its acceptance results to the experts' acceptance results.

Fig. 7: The process for the acceptance check. (a) is a semantic segmentation image with its detected ellipse. Based on the detected ellipse, the semantic segmentation image is cropped to the abdominal region, as indicated by the yellow box in (b), and (c) the cropped image is rescaled and used as the input of the CNN for the acceptance check.
TABLE I: The proposed CNN structure for the plane acceptance check: three pairs of convolutional (C) and pooling (P) layers followed by three fully connected (F) layers.

III Results

III-A Data and Experimental Setting

For training and evaluation, fetal abdominal ultrasound images were provided by the Department of Obstetrics and Gynecology, Yonsei University College of Medicine, Seoul, Korea (IRB no. 4-2017-0049). We were provided with 88 cases of ultrasound images; each case consists of several true and false abdominal ultrasound images obtained from a pregnant woman by experts with an IU22 (Philips, Seoul, Korea) ultrasound machine and a 2-6 MHz transabdominal transducer. The provided cases were separated into 56 "training cases" and 32 "test cases." The training images were used to generate training data for the classification CNN and the acceptance check CNN, and to tune the heuristic parameters of the ellipse detection.

Caffe [23] was used to implement and train the two proposed CNNs in our framework. Our framework, which consists of the proposed CNNs and the Hough transform-based ellipse detection [27], was implemented in MATLAB and Python.

III-B Training Performance of the Proposed CNN

As mentioned, fetal abdominal ultrasound images from the 56 training cases were provided, and 13261 pairs of multi-view image patches, together with the ultrasound propagation directions at those patches, were extracted from the images. The extracted patches were divided into a training set and a test set to evaluate the training process by simple cross-validation; the ratio of the training set to the test set is approximately 2:1. We used ADAM [25] to minimize the loss function and applied dropout on the last layer during training to prevent overfitting [26].

Fig. 8: The initialized filters of the fully connected layer for propagation direction. The filters are initialized either so that each component follows a Gaussian distribution (a), or so that the filters are uniformly distributed toward the imaging range (b).

Fig. 9: Training loss (a) and test accuracy (b). We trained the following three cases: 1. the filters are initialized to be uniformly distributed toward the imaging domain (blue lines); 2. the filters are initialized randomly with a Gaussian distribution (green lines); 3. the ultrasound propagation direction is not used for classification (red lines). In the image branches, the weights were initialized with the same values in all three cases.

The vectors in Fig. 8 correspond to the initialized filters, called directional filters, of the fully connected layer in the branch for propagation direction analysis, called the direction branch. Fig. 8 shows the directional filters initialized as randomly distributed vectors following a normal distribution and as uniformly distributed vectors. The training processes with the two initialization strategies were compared to the training process without the direction analysis branch; in this comparison, the filters in the image branches were initialized identically for all three cases. The training loss and test accuracy are plotted in Fig. 9(a) and (b), respectively. When the directional filters are randomly initialized, not all filter vectors point toward the imaging range, and some vectors are obtuse to every ultrasound propagation direction in the imaging range, as described in Fig. 8. Because the inner products between such obtuse vectors and the ultrasound propagation directions are negative, the neurons corresponding to these filter vectors are never activated by the ReLU function, and the filters are not updated during training. In Fig. 9, the difference in training performance between the case with randomly initialized directional filters and the case without the direction analysis branch is not notable. On the other hand, when the directional filters are initialized to be uniformly distributed toward the imaging range, all directional filters contribute to the classification; as a result, convergence is faster with the uniform initialization. In the following sections, we use the filters trained with the directional filters initialized as uniformly distributed vectors toward the imaging range.
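The effect of the two initialization strategies can be sketched as follows; the angular extent of the imaging range and the filter counts are illustrative.

```python
# Sketch of the comparison above: Gaussian initialization can leave some
# filter vectors obtuse to every propagation direction in the imaging
# range (so ReLU never activates them), while spreading the filters
# uniformly over the range keeps every filter active.
import numpy as np

def gaussian_filters(n, rng):
    v = rng.normal(size=(n, 2))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def uniform_filters(n, angle_range=(np.pi / 3, 2 * np.pi / 3)):
    angles = np.linspace(*angle_range, n)  # spread over the imaging range
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

rng = np.random.default_rng(0)
dirs = uniform_filters(100)  # propagation directions over the imaging range
for name, f in [("gaussian", gaussian_filters(8, rng)),
                ("uniform", uniform_filters(8))]:
    # A filter is "dead" if its inner product with every propagation
    # direction is negative, so ReLU never activates it.
    dead = np.all(f @ dirs.T < 0, axis=1).sum()
    print(name, "dead filters:", dead)
```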

III-C AC Measurement

Fig. 10: Results of AC measurement. By applying the proposed CNN to (a) and (e), the corresponding semantic segmentation images, (b) and (f), are obtained. (c) and (g) show the AC localization by the proposed Hough transform-based method. The placements of the calipers are similar to the experts' caliper placements in (d) and (h).

For the assessment of AC measurement, the ultrasound images labelled by the experts as true axial planes with AC measurements were used to perform semantic segmentation using the proposed CNN. At every anechoic point in the images, the image patches corresponding to the normal and wide views and the ultrasound propagation direction were used as the input of the proposed CNN. As described in Fig. 10, the classification results for the anechoic points in a given ultrasound image are represented as color maps, whose red, green, blue, and gray colors correspond to SB, UV, AF, and SA, respectively.

Since it would be very inefficient to choose candidate ellipses among all possible ellipses, we first filter out implausible ellipses. For example, ellipses that overlap the amniotic fluid region or have an abnormal ratio between the major and minor axes are unlikely to be selected as candidates. To establish clear criteria, we used 26 true abdominal images with AC annotations and lengths from the training images. From these true abdominal images, 0.6 was heuristically chosen as the lower bound of the ratio between the minor and major axes of an ellipse fitted to an AC annotation. With these criteria, 25 candidate ellipsoidal contours were selected from a single AF region image extracted from a semantic segmentation image, and the medians of their parameters were chosen. Although the AC contours localized the abdominal regions well under this lower bound on the ratio, the detected contours tend to be larger than the contours annotated by the experts, as described in Fig. 10. By comparing the two AC measurement results with the experts' measurements on the training images, we decided to scale the automatic measurement by a constant correction factor.

The ellipse detection was tested on the 40 true abdominal images with AC annotations from the test images. Although false SB and UV regions are observed in AF regions and the fetal abdomen is not fully surrounded by AF, the proposed Hough transform-based approach segments the fetal abdominal regions. We compared the AC estimations from the accepted ultrasound images between the experts and our method. Some comparisons of the abdominal contours selected by the experts and the ellipsoidal contours are plotted in Fig. 10.

We evaluated the performance of AC measurement with the Dice similarity metric

$$\mathrm{Dice}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|} \qquad (4)$$

where $A$ is the abdomen region obtained from our AC estimation and $B$ is the ground-truth abdomen region delineated by the doctors. This similarity metric measures how closely the detected region and the ground truth overlap. The Dice similarity was % for the 56 cases whose AC measurement was given by a single expert; restricted to the cases in the top 80% of Dice similarity, the score was %.
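A small sketch of the Dice metric (4) on binary masks:

```python
# Dice(A, B) = 2|A n B| / (|A| + |B|) on binary region masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True   # detected region
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True   # ground truth
print(round(dice(a, b), 3))  # 2*25 / (36 + 36) = 0.694
```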

III-D Acceptance Check

To train the CNN for the plane acceptance check, we used 265 true and false transabdominal plane images from the training cases. The training and test sets for the acceptance check CNN consist of 209 and 56 of the annotated images, respectively. For each image in the training and test sets, semantic segmentation was performed and the fetal abdominal region was localized using the proposed Hough transform-based approach. Based on this localization, the semantic image was cropped and rescaled as described above. To augment the training set, each rescaled image was rotated in steps of 20 degrees from 0 to 340 degrees and mirrored. ADAM [25] and dropout [26] were applied during training as well. After training, the threshold level for accepting a true axial plane was determined so as to maximize the test accuracy of the proposed acceptance check CNN on the test set.
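A sketch of this augmentation, producing 18 rotations and their mirrored versions (36 variants per image); PIL is used here only for illustration.

```python
# Sketch of the augmentation: rotate every 20 degrees from 0 to 340 and
# mirror each rotation, yielding 18 x 2 = 36 variants per image.
import numpy as np
from PIL import Image

def augment(img: Image.Image):
    variants = []
    for deg in range(0, 360, 20):  # 0, 20, ..., 340
        rot = img.rotate(deg)
        variants.append(rot)
        variants.append(rot.transpose(Image.Transpose.FLIP_LEFT_RIGHT))
    return variants

img = Image.fromarray(np.zeros((64, 64, 3), dtype=np.uint8))
print(len(augment(img)))  # 36
```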

For the performance evaluation, 105 transabdominal images among the annotated ultrasound images from the test cases were used. We compared the acceptance check results among the two experts and the CNN using the accuracy

$$\mathrm{Accuracy} = \frac{N_{TP} + N_{TN}}{N_{TP} + N_{TN} + N_{FP} + N_{FN}} \qquad (5)$$

where $N_{TP}$, $N_{TN}$, $N_{FP}$, and $N_{FN}$ are the numbers of true positives, true negatives, false positives, and false negatives, respectively. As shown in Table II, the accuracies of our acceptance check are 0.809 and 0.771 against expert 1 and expert 2, respectively, while the accuracy between the two experts is 0.905.

Expert 2 vs. Expert 1:
                  Expert 1 False   Expert 1 True   Total
  Expert 2 False        75               2           77
  Expert 2 True          8              20           28
  Total                 83              22          105

CNN vs. Expert 1:
                  Expert 1 False   Expert 1 True   Total
  CNN False             69               6           75
  CNN True              14              16           30
  Total                 83              22          105

CNN vs. Expert 2:
                  Expert 2 False   Expert 2 True   Total
  CNN False             64              11           75
  CNN True              13              17           30
  Total                 77              28          105
TABLE II: Confusion matrices for the acceptance check among the experts and the CNN.
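As a quick consistency check, the reported accuracies can be recomputed from the confusion matrices in Table II via (5), taking "positive" to mean an image accepted as a true axial plane:

```python
# Recompute the accuracies in Table II via Eq. (5), with the second
# rater (or the CNN) scored against the first-named reference rater.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# (TP, TN, FP, FN) read off each confusion matrix, 105 images in total.
print(accuracy(20, 75, 8, 2))    # expert 2 vs expert 1 -> 0.905
print(accuracy(16, 69, 14, 6))   # CNN vs expert 1      -> 0.809
print(accuracy(17, 64, 13, 11))  # CNN vs expert 2      -> 0.771
```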

IV Discussion and Conclusion

Although CNNs have recently shown good performance in image recognition, they require a large amount of training data to achieve satisfactory pixel-wise classification results. Unfortunately, owing to the limitations of gathering clinical data, it is difficult to collect sufficient data to guarantee satisfactory classification over the various kinds of ultrasound images. If a CNN is trained on image data alone, without the physical characteristics of the modality, the amount of training data required increases. We attempted to avoid this problem by designing a modality-specific CNN structure and obtained a notable improvement in training performance.

We also used a CNN to evaluate the suitability of a selected image for proper AC measurement, where the suitability check was performed by analyzing the anatomical configuration of the SB and UV regions in the semantic segmentation images. With 3D ultrasound imaging systems, this CNN could be used to select the best abdominal biometry plane from volume data. Other approaches to suitability evaluation are reported in the literature [3, 4, 14].

Fig. 11: Cases whose AC fittings are inappropriate. In the first row, the fitted ellipse in (c) is properly localized but has a larger shape due to the lack of abdominal boundary information along the minor-axis direction. In the second row, a lack of AF region on the left causes an underestimate of AC, even though the SA region in the fetal abdomen is classified correctly.

The proposed method has room for further improvement. First, the abdominal region detection should be refined, because it influences not only the AC estimation but also the localization of the abdominal region used for the acceptance check. Fig. 11 shows cases in which the ellipses detected by our method are insufficient to be accepted, although the SA in the fetal abdomen is classified well. In both cases in Fig. 11, the detected ellipses have relatively acceptable positions, but a lack of AF region causes inaccurate caliper localization. As explained, in clinical situations it cannot be guaranteed that sufficient anechoic points appear in the AF region of an ultrasound image for accurate and stable ellipse detection. Therefore, it would be desirable to combine our method with a supplementary method, or to develop an advanced CNN that detects the ribs and spine, which are known to be crucial for the acceptance check and AC fitting.

Second, although we augmented the given true and false abdominal plane images, the numbers of true and false images were not balanced enough to guarantee balanced performance on the true and false cases. For false abdominal planes our acceptance check performed well, while the accuracy for true abdominal planes is lower than that for the false planes. Moreover, the given true images are not sufficient to represent the features of true abdominal planes.

Additionally, established architectures, such as U-Net, could be adopted for part of the proposed CNN architecture. Due to the limited memory of our computing environment, we had difficulty adopting and training such established architectures. With a sufficient computing environment, the performance could be improved by adopting established architectures and their pre-trained filters, an approach called "domain-transferred" deep CNN [3].

In our experiments, because of technical difficulties, only fetal ultrasound images were provided, without the probe geometry that would be available if the method were implemented in an ultrasound system. Due to the absence of this information, the performance of our framework could decrease. In order to evaluate the ultrasound propagation direction, we assumed that the probe is located at a certain point above the image (Fig. 8), and this position was applied to all provided images even though the images have different imaging ranges.

In our results, there is no performance comparison with existing methods of automatic AC measurement, because no existing method provides stable performance and quantitative results under experimental conditions similar to ours. We refer the reader to [7] for quantitative results on other parameters, such as the head and femur.

In summary, we proposed a method for the automatic estimation of AC from ultrasound images. The method shows good performance in most cases with a relatively small number of training samples. This suggests that machine learning might find a breakthrough in the medical imaging field by focusing on modality-specific CNN structures. Even though the proposed method shows some limitations in cases of oversized fetuses and images highly corrupted by shadowing artifacts, we expect the proposed automated AC measurement to contribute to accurate AC measurement, leading to accurate fetal weight estimation as well as decreased operator dependency. Furthermore, our method may serve as a building block for artificial-intelligence-based automated measurement in ultrasonography, in addition to current automation techniques.

V Acknowledgements

This work was supported by the National Institute for Mathematical Sciences (NIMS) grant funded by the Korean government (No. A21300000) and by the National Research Foundation of Korea (NRF) grants 2015R1A5A1009350 and 2017R1A2B20005661.

References

  • [1] V. Chalana, T.C. Winter, D.R. Cyr, D.R. Haynor, and Y. Kim, “Automatic fetal head measurements from sonographic images,” Academic Radiology, vol. 3, no. 8, pp. 628–635, 1996.
  • [2] F. P. Hadlock, R. B. Harrist, R. S. Sharman, R. L. Deter, and S. K. Park, “Estimation of fetal weight with the use of head, body,and femur measurements-a prospective study,” American journal of obstetrics & gynecology, vol. 151, no. 3, pp. 333–337, 1985.
  • [3] H. Chen, D. Ni, J. Qin, S. Li, X. Yang, T. Wang, and P.A. Heng, "Standard plane localization in fetal ultrasound via domain transferred deep neural networks," IEEE Journal of Biomedical & Health Informatics, vol. 19, no. 5, pp. 1627–1636, 2015.
  • [4] D. Ni, X. Yang, X. Chen, C.T. Chin, S. Chen, P.A. Heng, S. Li, J. Qin, and R. Wang, “Standard plane localization in ultrasound by radial component model and selective search,” Ultrasound in Medicine & Biology, vol. 40, no. 11, pp. 2728–2742, 2014.
  • [5] S. Campbell and D. Wilkin, "Ultrasonic measurement of fetal abdomen circumference in the estimation of fetal weight," BJOG: An International Journal of Obstetrics & Gynaecology, vol. 82, no. 9, pp. 689–697, 1975.
  • [6] J. Espinoza, S. Good, E. Russell, and W. Lee, “Does the use of automated fetal biometry improve clinical work flow efficiency?,” Journal of Ultrasound in Medicine, vol. 32, no. 5, pp. 847–850, 2013.
  • [7] S. Rueda et al., “Evaluation and comparison of current fetal ultrasound image segmentation methods for biometric measurements: a grand challenge,” IEEE Transactions on Medical Imaging, vol. 33, no. 4, pp. 797-813, 2014.
  • [8] S.D. Pathak, D.R. Haynor, and Y. Kim, “Edge-guided boundary delineation in prostate ultrasound images,” IEEE Transactions on Medical Imaging, vol. 19, no. 12, pp. 1211–1219, 2000.
  • [9] Y. Yu and S.T. Acton, “Speckle reducing anisotropic diffusion,” IEEE Transactions on Image Processing, vol. 11, no. 11, pp. 1260–1270, 2002.
  • [10] S.M. Jardim and M.A. Figueiredo, “Segmentation of fetal ultrasound images,” Ultrasound in Medicine & Biology, vol. 31, no. 2, pp. 243–250, 2005.
  • [11] W. Wang et al., “Detection and measurement of fetal abdominal contour in ultrasound images via local phase information and iterative randomized Hough transform,” Bio-medical Materials and Engineering, vol. 24, no. 1, pp. 1261–1267, 2014.
  • [12] J. Yu, Y. Wang, and P. Chen, "Fetal ultrasound image segmentation system and its use in fetal weight estimation," Med. Biol. Eng. Comput., vol. 46, no. 12, pp. 1227–1237, Dec. 2008.
  • [13] J. Nithya and M. Madheswaran, “Detection of intrauterine growth retardation using fetal abdominal circumference,” International Conference on Computer Technology and Development, 2009.
  • [14] A. C. Kumar and K. S. Shriram, "Automated scoring of fetal abdomen ultrasound scan-planes for biometry," Proc. of 12th International Symposium on Biomedical Imaging, IEEE, pp. 862–865, 2015.
  • [15] G. Carneiro, B. Georgescu, S. Good, and D. Comaniciu, “Detection and measurement of fetal anatomies from ultrasound images using a constrained probabilistic boosting tree,” IEEE Transactions on Medical Imaging, vol. 27, no. 9, pp. 1342–1355, 2008.
  • [16] H. Ravishankar, S. M. Prabhu, V. Vaidya, and N. Singhal, "Hybrid approach for automatic segmentation of fetal abdomen from ultrasound images using deep learning," International Symposium on Biomedical Imaging, 2016.
  • [17] Y. LeCun, et al., "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4, pp. 541–551, 1989.
  • [18] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  • [19] Y. Xie and Q. Ji, "A new efficient ellipse detection method," Proceedings of the 16th International Conference on Pattern Recognition, IEEE, vol. 2, pp. 957–960, 2002.
  • [20] R.A. McLaughlin, “Randomized Hough transform: improved ellipse detection with comparison.” Pattern Recognition Letters, vol. 19, no. 3, pp. 299-305, 1998.
  • [21] L. Xu, E. Oja, and P. Kultanen, “A new curve detection method: randomized Hough transform (RHT).” Pattern Recognition Letters, vol. 11, no. 5, pp. 331-338, 1990.
  • [22] N. Bennett, R. Burridge, and N. Saito, “A method to detect and characterize ellipses using the Hough transform.” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 7, pp. 652-657, 1999.
  • [23] Y. Jia et al., “Caffe: Convolutional architecture for fast feature embedding,” arXiv Preprint arXiv:1408.5093, 2014.
  • [24] L.J. Salomon et al., “Practice guidelines for performance of the routine mid-trimester fetal ultrasound scan,” Ultrasound Obstet. Gynecol., vol. 37, pp. 116–126, 2010.
  • [25] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [26] N. Srivastava, G.E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
  • [27] M. Simonovsky, Ellipse detection using 1D hough transform (http://www.mathworks.com/matlabcentral/fileexchange/33970-ellipse-detection-using-1d-hough-transform), MATLAB Central File Exchange, 2013.