Fetal sonographic (FS) examinations are widely applied in clinical settings due to their non-invasive nature, low cost, and real-time acquisition (Rueda). FS examinations consist of first-, second-, and third-trimester examinations and limited examinations (AIUM), which cover a range of critical inspections such as the evaluation of a suspected ectopic pregnancy (Chambers1990; Hilla) and confirmation of the presence of an intrauterine pregnancy (Barnhart2011; Jeve2011a; Thilaganathan2011). The screening and evaluation of fetal anatomy are critical during the second- and third-trimester examinations. The screening is usually performed by ultrasound after approximately 18 weeks' gestational (menstrual) age. According to a survey (Murphy2018), neonatal mortality in the United States in 2016 was 5.9 deaths per 1,000 live births, and birth defects were the leading cause of infant deaths, accounting for 20% of all infant deaths. In addition, congenital disabilities occur in one in every 33 babies (about 3% of all babies) born in the United States each year. Screening and evaluation of fetal anomalies can therefore provide crucial information to families prior to the anticipated birth of their child on diagnosis, underlying etiology, and potential treatment options, which can greatly improve the survival rate of the fetus. However, the physiological evaluation of fetal anomalies requires well-trained and experienced sonographers to obtain standard planes. Although a detailed quality control guideline was developed for the evaluation of standard planes (Murphy2018), the accuracy of the measurements is highly dependent on the operator's training, skill, and experience. According to a study (Murphy2018), intraobserver and interobserver variability exist in routine practice, and inconsistent image quality can lead to variances in the specific anatomic structures captured by different operators. Furthermore, in areas where medical conditions are lagging, there is a lack of well-trained doctors, which makes FS examinations impossible to perform. To this end, automatic approaches for FS image quality assessment are needed to ensure that the image is captured as required by guidelines and to provide accurate and reproducible fetal biometric measurements (Zhang).
To obtain standard planes and assess the quality of FS images, all the essential anatomical structures in the image must appear complete and prominent, with clear boundaries (AIUM). Each medical section has its own set of essential structures. In our research, we consider three medical sections: the heart section, the head section, and the abdominal section. The essential structures corresponding to these sections are given in Table 1. The list of essential anatomical structures used to evaluate image quality is defined by the guideline (AIUM) and further refined by two senior radiologists, each with more than ten years of experience in FS examination, at the West China Second Hospital Sichuan University, Chengdu, China. A comparison of standard and non-standard planes is illustrated in Fig. 1.
There are various challenges concerning the automatic quality control of FS images. As illustrated in Fig. 2, the main challenges can be divided into three types: first, the image usually suffers from noise and shadowing effects; second, similar anatomical structures can be confused due to the low resolution of the images; and third, the fetal position during scanning is unstable, which causes rotation of some anatomical structures. The first type of challenge can only be addressed by using more advanced scanning machines, but we can tackle the remaining two challenges with a more systematic approach. Specifically, we need an efficient feature extraction method that remains robust to image rotation and can discriminate between similar structures. In recent years, deep learning techniques have been widely applied in many medical imaging fields due to their stability and efficiency, such as anatomical object detection and segmentation (Ghesu2016; Zhanga; Ghesu) and brain abnormality detection (Kebir2019; Sujit). A well-designed neural network can efficiently extract features for classification and identification. In our approach, we first design a Feature Extraction Network (FEN) to extract deep-level features from FS images; we then feed the extracted features to a region proposal network (RPN) and a class prediction network (CPN) to identify regions of interest (ROIs) and perform classification simultaneously. In addition, to further improve the performance of our framework, we introduce a relation module to fully utilize the relationship between the entire image and each detected structure. In conclusion, our contributions can be summarized as follows:
An automatic fetal sonographic image quality control framework is proposed for the detection and classification of the two-dimensional fetal heart standard plane. Our model is highly robust against the interference of image rotation and similar structures, and the detection speed is quite fast to meet the clinical requirements fully.
We have introduced many recent advanced object detection technologies into our framework, such as relation module, spatial pyramid pooling, etc. The results of detection and classification are quite promising compared with state-of-the-art methods.
Our framework is generalized and can be well applied to other standard planes. We have shown the results when our framework is applied to the abdominal and head standard plane, which are quite competitive compared with other existing advanced methods.
|Section name||Essential anatomical structures|
|Head section||Cavum septi pellucidi, thalamus|
|Abdominal section||Stomach bubble, spine, umbilical vein|
|Heart section||Left ventricle, left atrium, right ventricle, right atrium|
2 Related work
In recent years, with the rapid development of computer vision technology, many intelligent automatic diagnostic techniques for FS images have been proposed. For example, Zehui Lin et al. (Lin) proposed a multi-task convolutional neural network (CNN) framework to address the problem of standard plane detection and quality assessment of fetal head ultrasound images. Within the framework, they introduced prior clinical and statistical knowledge to further reduce the false detection rate. The detection speed of this method is quite fast, and the results achieve promising performance compared with state-of-the-art methods. Zhoubing Xu et al. (Xu2018) proposed an integrated learning framework based on deep learning to simultaneously perform view classification and landmark detection of the structures in fetal abdominal ultrasound images. The automatic framework achieved classification accuracy higher than that of clinical experts, and it also reduced landmark-based measurement errors. Lingyun et al. (Wu2017) proposed a computerized FS image quality assessment scheme to assist quality control in the clinical obstetric examination of the fetal abdominal region. This method utilizes local phase features along with the original fetal abdominal ultrasound images as input to the neural network. The proposed scheme achieved competitive performance in both view classification and region localization. Cheung-Wen Chang et al. (Chang2018) proposed an automatic Mid-Sagittal Plane (MSP) assessment method for categorizing 3D fetal ultrasound images. This scheme also analyzes the relationships between the resulting MSP assessments and several factors, including image quality and fetus conditions, and it achieves a high correct rate in MSP detection. Chandan et al. (Kumar) proposed an automatic method for fetal abdomen scan-plane identification based on three critical anatomical landmarks: the spine, stomach, and vein.
In their approach, a Biometry Suitability Index (BSI) is proposed to judge whether the scan-plane can be used for biometry based on the detected anatomical landmarks. The results of the proposed method over video sequences closely matched the clinical expert's assessment of scan-plane quality for biometry. Chen et al. (Chen2015) presented a transfer learning framework for the automatic detection of different standard planes from ultrasound imaging videos. The framework performs spatio-temporal feature learning with a knowledge-transferred recurrent neural network (T-RNN), consisting of a deep hierarchical visual feature extractor and a temporal sequence learning model. Experiments show that its results outperform state-of-the-art methods. Baumgartner et al. (Baumgartner) proposed a novel framework based on convolutional neural networks to automatically detect 13 standard fetal views in freehand 2-D ultrasound data and localize the anatomical structures with bounding boxes. A notable innovation is that the network learns to localize the target anatomy using weak supervision based on image-level labels only. Namburetea et al. (Namburete2018) proposed a multi-task, fully convolutional neural network framework to address the problems of 3D fetal brain localization, alignment to a referential coordinate system, and structural segmentation. This method optimizes the network by learning features shared within the input data belonging to the correlated tasks, and it achieves a high brain overlap rate and low eye localization error. However, there is no existing automatic quality control method for fetal heart planes, and the detection accuracy of existing methods on other planes is relatively low due to outdated neural network designs. Therefore, it is desirable to propose a more efficient framework that not only provides accurate clinical assessment of the fetal heart plane but also increases detection accuracy on other planes.
3 Methods
The framework of our method is illustrated in Fig. 3. First, the original image is smoothed by a Gaussian filter and input to the FEN. Next, the FEN extracts deep-level features of the image with a convolutional neural network and passes them to the RPN and CPN, respectively. Then, the RPN locates the positions of the essential structures with the help of feature pyramid networks, and the CPN judges whether the structures meet the standard as well as predicts their class. Last, the two networks combine their information and output the final result. In this section, we briefly introduce the network structure and then elaborate on the feature extraction, ROI localization, and structure classification in detail.
3.1 Image Preprocessing
In this part, we implement two operations. To protect the personal information of the subject in FS imaging, we first use a threshold method to remove the text in the image. Then we use Gaussian filtering to reduce image noise.
Gaussian filtering is a linear smoothing filter that is suitable for eliminating Gaussian noise and is widely used in the noise reduction stage of image processing (Deng). Specifically, suppose $I$ denotes the original image matrix; then the processed image matrix $I'$ can be computed by:

$$I'(x, y) = \sum_{s} \sum_{t} G(s, t)\, I(x - s, y - t)$$

where $G$ is the two-dimensional Gaussian kernel, defined by:

$$G(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$
The two-dimensional Gaussian function is rotationally symmetric; that is, the smoothness of the filter is the same in all directions. This ensures that Gaussian smoothing does not bias features in any particular direction, which benefits the feature extraction in the following stage.
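As an illustrative sketch (the kernel size and σ below are example choices, not values from this work), the kernel $G$ and the smoothing step can be implemented directly in NumPy:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel G(x, y)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()          # normalize so the filter preserves brightness

def gaussian_smooth(image, size=5, sigma=1.0):
    """Convolve a 2-D image with the Gaussian kernel (edge padding)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

In practice a library routine (e.g. a separable Gaussian filter) would be used for speed; the loop form above simply mirrors the equations.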
3.2 Feature Extraction Network
We use state-of-the-art convolutional neural network techniques to design our feature extraction network. The convolutional neural network (CNN) has unique advantages in speech recognition and image processing thanks to its special structure of local weight sharing, which greatly reduces the number of parameters and improves recognition accuracy (Ren; Zhao; Dai). A CNN typically consists of pairs of convolutional layers, average pooling layers, and fully connected (FC) layers. In a convolutional layer, several output feature maps are obtained by convolution between the input layer and kernels. Specifically, suppose $x_j^{l}$ denotes the $j$th output feature map in layer $l$, $x_i^{l-1}$ denotes the $i$th feature map in layer $l-1$, and $k_{ij}^{l}$ denotes the kernel generating that feature map; then we can get:

$$x_j^{l} = \mathrm{ReLU}\!\left(\sum_i x_i^{l-1} * k_{ij}^{l} + b_j^{l}\right)$$

where $b_j^{l}$ is the bias term in the $l$th layer, and $\mathrm{ReLU}$ denotes the rectified linear unit, defined as $\mathrm{ReLU}(x) = \max(0, x)$. It is also worth mentioning that we use global average pooling (GAP) instead of local pooling for the pooling layers. The aim is to apply GAP to replace the FC layer, which regularizes the structure of the entire network to prevent overfitting (Lin2013). The settings of the convolutional layers are shown in Table 2:
|Layer||Kernel size||Channel depth||Stride|
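A minimal NumPy sketch of one such layer and GAP follows; it is a naive loop implementation for illustration only (real networks use optimized framework kernels), with assumed shapes `(channels, height, width)`:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x)."""
    return np.maximum(0.0, x)

def conv2d(feature_maps, kernels, bias):
    """Valid cross-correlation followed by ReLU.

    feature_maps: (C_in, H, W); kernels: (C_out, C_in, k, k); bias: (C_out,)
    """
    c_out, c_in, k, _ = kernels.shape
    _, h, w = feature_maps.shape
    oh, ow = h - k + 1, w - k + 1
    out = np.zeros((c_out, oh, ow))
    for o in range(c_out):
        for i in range(oh):
            for j in range(ow):
                out[o, i, j] = np.sum(
                    feature_maps[:, i:i + k, j:j + k] * kernels[o]) + bias[o]
    return relu(out)

def global_average_pool(feature_maps):
    """GAP: one scalar per channel, replacing a fully connected layer."""
    return feature_maps.mean(axis=(1, 2))
```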
To fully utilize the relevant features between objects and further improve detection accuracy, we introduce the relation module presented by Hu et al. Specifically, the geometry weight is first defined as:

$$w_G^{mn} = \max\{0,\, W_G \cdot \varepsilon_G(f_G^m, f_G^n)\}$$

where $f_G^m$ and $f_G^n$ are geometric features, and $\varepsilon_G$ is a dimension-lifting transformation using concatenation. After that, the appearance weight is defined as:

$$w_A^{mn} = \frac{\langle W_K f_A^m,\, W_Q f_A^n \rangle}{\sqrt{d_k}}$$

where $f_A^m$ and $f_A^n$ are the appearance features from the previous network. Then the relation weight, indicating the impact from other objects, is computed as:

$$w^{mn} = \frac{w_G^{mn} \exp(w_A^{mn})}{\sum_k w_G^{kn} \exp(w_A^{kn})}$$

Lastly, the relation feature of the whole object set with respect to the $n$th object is defined as:

$$f_R(n) = \sum_m w^{mn} \cdot (W_V \cdot f_A^m)$$

This module achieves great performance in instance recognition and duplicate removal, which increases detection accuracy significantly.
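The weights above can be sketched in NumPy as follows. This is a simplified single-head illustration under assumed shapes (the geometry weights are taken as precomputed inputs rather than derived from box coordinates), not the module's actual implementation:

```python
import numpy as np

def relation_feature(app_feats, geo_weights, W_K, W_Q, W_V, n):
    """Relation feature f_R(n) of object n with respect to all N objects.

    app_feats:   (N, d) appearance features f_A
    geo_weights: (N,) precomputed geometry weights w_G^{mn} for target n
    W_K, W_Q:    (d_k, d) key/query projections; W_V: (d_v, d) value projection
    """
    d_k = W_K.shape[0]
    keys = app_feats @ W_K.T                  # W_K f_A^m for every m
    query = W_Q @ app_feats[n]                # W_Q f_A^n
    w_A = keys @ query / np.sqrt(d_k)         # appearance weights w_A^{mn}
    # relation weights: geometry-modulated softmax over appearance weights
    unnorm = geo_weights * np.exp(w_A - w_A.max())
    w = unnorm / unnorm.sum()                 # w^{mn}
    return w @ (app_feats @ W_V.T)            # sum_m w^{mn} (W_V f_A^m)
```

Note that an object with zero geometry weight contributes nothing to the output, matching the zero-clipping in the geometry-weight definition.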
The SPP layer we use here denotes the spatial pyramid pooling (SPP) layer presented by He et al. (He). Specifically, the response map from the last convolutional layer is divided into four levels of sub-maps, from the pyramid base through the lower middle and higher middle of the pyramid to the pyramid top, and max pooling is applied to each sub-map separately. A problem with a traditional CNN used for feature extraction is the strict limit on the size of the input image: the FC layer is needed to complete the final classification and regression tasks, and since the number of neurons in the FC layer is fixed, the input image to the network must also have a fixed size. Generally, there are two ways of fixing the input image size, cropping and warping, but these operations either cause the cropped area not to cover the entire target or introduce image distortion, so applying SPP is necessary. The SPP network also enables multi-size feature extraction and is highly tolerant of target deformation.
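A sketch of the pooling step in NumPy is below. The four grid sizes `(6, 3, 2, 1)` are an assumed example (the exact bin sizes are not stated above); the key property demonstrated is that the output length is fixed regardless of the input map's spatial size:

```python
import numpy as np

def spp(feature_map, levels=(6, 3, 2, 1)):
    """Spatial pyramid pooling over a (C, H, W) map.

    At each pyramid level n the map is split into an n x n grid, each cell is
    max-pooled per channel, and all results are concatenated into one vector
    of fixed length C * sum(n * n for n in levels).  Assumes H, W >= max(levels).
    """
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # bin boundaries that always cover the whole map, whatever H and W are
        hs = np.linspace(0, h, n + 1).astype(int)
        ws = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)
```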
The design of Bottle Net borrows the idea of Residual Networks (He2016). A common problem with deep networks is that vanishing and exploding gradients are prone to occur as the depth increases. The main reason for this phenomenon is the loss of information: each convolutional or pooling layer downsamples the image, producing a lossy compression effect. As the network goes deeper, different categories of images begin to produce similar activations in the network, and this reduction in separation makes the final classification less than ideal. To let our network extract deeper features more efficiently, we add a residual structure to our model. By feeding the output of earlier layers directly into the input of a later layer, we combine the original vector data with the subsequently downsampled data, introducing a richer dimension. In this way, the network can learn more features of the image.
3.3 ROI Localization with RPN
The RPN is designed to localize the ROIs that enclose the essential structures given in Table 1. To achieve this goal, we first use a feature pyramid network (FPN) (Lin2017a) to generate candidate anchors instead of the traditional RPN used in Faster R-CNN (Ren). The FPN connects high-level features of low resolution and high semantic information with low-level features of high resolution and low semantic information in a top-down manner, so that features at all scales carry rich semantic information. The settings of the FPN are shown in Table 3.
In the training process, we use the intersection over union (IoU) metric to evaluate the quality of ROI localization:

$$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}$$

where $A$ is the computerized ROI and $B$ is the manually labeled ROI (ground truth). In the training process, we treat samples with an IoU higher than 0.5 as positive samples and samples with an IoU lower than 0.5 as negative samples.
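For axis-aligned bounding boxes, the IoU definition above reduces to a few lines (box format `(x1, y1, x2, y2)` is an assumed convention):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # |A ∩ B|
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                  # |A ∪ B|
    return inter / union if union > 0 else 0.0
```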
3.4 Class Prediction with CPN
For the different sections, we use the CPN to classify the essential structures. For the head section, the cavum septi pellucidi and thalamus are to be classified. For the abdominal section, the stomach bubble, spine, and umbilical vein are to be classified. For the heart section, the left ventricle, left atrium, right ventricle, and right atrium are to be classified. To improve classification accuracy, we choose focal loss (Lin2017) as the loss function. In the training process of a neural network, the internal parameters are adjusted by minimizing the loss function over all training samples. Focal loss enables highly accurate dense object detection in the presence of vast numbers of background examples, which suits our model. The loss function can be defined as:

$$\mathrm{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$$

where $\gamma \geq 0$ is the focusing parameter and $\alpha_t$ is a class-balancing weight. $p_t$ is defined as:

$$p_t = \begin{cases} p & \text{if } y = 1 \\ 1 - p & \text{otherwise} \end{cases}$$

where $y$ represents the ground-truth label of a sample, and $p$ represents the probability that the neural network predicts for this class.
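The binary form of this loss can be written directly from the two equations above (γ and α defaults follow the common choices in the focal loss literature, not values stated here):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class; y: label in {0, 1}.
    The (1 - p_t)**gamma factor down-weights easy, well-classified examples.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With γ = 0 the expression reduces to α-weighted cross-entropy, which is a useful sanity check.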
4 Experiments and results
In this section, we begin with a brief explanation of how the datasets for training and testing were obtained and prepared. Then a systematic evaluation scheme is proposed to test the efficacy of our method in FS examinations. The evaluation is carried out in four parts. First, we investigate the performance of ROI localization, using Mean Average Precision (mAP) and box-plots to evaluate it. Second, we quantitatively analyze the classification performance with common indicators: accuracy (ACC), specificity (Spec), sensitivity (Sen), precision (Pre), F1-score (F1), and the area under the receiver operating characteristic curve (AUC). Third, we demonstrate the accuracy of our scheme compared with experienced sonographers. Fourth, we test the running time for detecting a single FS image.
4.1 Data preparation
All the FS images used for training and testing our model were acquired at the West China Second Hospital Sichuan University from April 2018 to January 2019. The FS images were recorded with a conventional hand-held 2-D FS probe on pregnant women in the supine position, following the standard obstetric examination procedure. The fetal gestational ages of all subjects ranged from 20 to 34 weeks. All FS images were acquired with GE Voluson E8 and Philips EPIQ 7 scanners.
There are, in total, 1325 FS images of the head section, 1321 of the abdominal section, and 1455 of the heart section used for training and testing our model. The training, validation, and test sets of each section are divided at a ratio of 3:1:1. The ROI labeling of the essential structures in each section was performed by two senior radiologists, each with more than ten years of experience in FS examination, by marking the smallest circumscribed rectangle of each positive sample. Negative ROI samples are randomly collected from the background of the images.
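A 3:1:1 split of this kind can be sketched as follows (the shuffling seed and helper name are illustrative, not part of the described protocol):

```python
import random

def split_dataset(items, ratios=(3, 1, 1), seed=42):
    """Shuffle and split items into train/val/test at the given ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)   # deterministic shuffle for repeatability
    total = sum(ratios)
    n = len(items)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

For the 1325 head-section images this yields 795/265/265 images for training, validation, and testing.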
4.2 Evaluation metrics
For testing the performance of ROI localization, we first define the intersection over union (IoU) between prediction and ground truth and use box-plots to evaluate ROI localization intuitively. As illustrated before, IoU is defined as:

$$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}$$

where $A$ is the computerized ROI, and $B$ is the ground-truth (manually labeled) ROI. Second, we use average precision (AP) to quantitatively evaluate the detection results for each essential anatomical structure and mean average precision (mAP) to illustrate the overall quality of ROI localization.
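AP can be computed as the area under the precision-recall curve built from score-ranked detections. A simplified all-points sketch (the exact interpolation convention used in the experiments is not stated, so this is an assumption):

```python
import numpy as np

def average_precision(scores, labels, n_gt):
    """AP as the area under the precision-recall curve.

    scores: detection confidences; labels: 1 if the detection matches a
    ground-truth box (IoU above threshold), else 0; n_gt: number of GT boxes.
    """
    order = np.argsort(scores)[::-1]          # rank detections by confidence
    tp = np.asarray(labels, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / n_gt
    # accumulate precision over each recall increment (all-points form)
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        if r > prev_r:
            ap += p * (r - prev_r)
            prev_r = r
    return ap
```

mAP is then the mean of the per-structure AP values.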
To test the performance of the classification results, we use several popular evaluation metrics. Suppose $TP$ represents the number of true positives of a certain class, $FP$ the number of false positives, $FN$ the number of false negatives, and $TN$ the number of true negatives; then accuracy (ACC), specificity (Spec), sensitivity (Sen), precision (Pre), and F1-score (F1) are defined as follows:

$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}, \quad \mathrm{Spec} = \frac{TN}{TN + FP}, \quad \mathrm{Sen} = \frac{TP}{TP + FN}$$

$$\mathrm{Pre} = \frac{TP}{TP + FP}, \quad \mathrm{F1} = \frac{2 \cdot \mathrm{Pre} \cdot \mathrm{Sen}}{\mathrm{Pre} + \mathrm{Sen}}$$
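The five definitions above translate directly into code:

```python
def classification_metrics(tp, fp, fn, tn):
    """ACC, Spec, Sen, Pre, and F1 from the confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    spec = tn / (tn + fp)
    sen = tp / (tp + fn)
    pre = tp / (tp + fp)
    f1 = 2 * pre * sen / (pre + sen)
    return {"ACC": acc, "Spec": spec, "Sen": sen, "Pre": pre, "F1": f1}
```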
The area under the curve (AUC) is defined as the area under the receiver operating characteristic (ROC) curve, which is equivalent to the probability that a randomly chosen positive example is ranked higher than a randomly chosen negative example (Letters). To show the effectiveness of the advanced techniques we add to the framework, two ablated structures are also tested: NRM denotes the removal of the relation module, and NSPP denotes the removal of the spatial pyramid pooling (SPP) layer in the feature extraction network. By comparing the differences in classification and detection results, their impact on overall network performance becomes clear.
4.3 Results of ROI localization
To demonstrate the efficacy of our method in localizing the positions of essential anatomical structures in FS images, we carry out the experimental evaluation in two parts. First, we use box-plots to evaluate ROI localization intuitively. Second, we use average precision (AP) and mean average precision (mAP) to illustrate the quality of ROI localization quantitatively.
For the head standard plane, as illustrated in the related work, there is already a state-of-the-art method proposed for quality assessment (Lin) (denoted as Lin), so we compare its results with our method. Also, to show the effectiveness of the advanced object detection techniques we add to the network, our method is compared with other popular object detection frameworks, including SSD (Liu2016), YOLO (Redmon2016; Redmon), and Faster R-CNN (Ren). The effectiveness of the relation module added to the network is also tested, with NRM denoting the framework without the relation module.
As shown in Fig. 4, our method achieves a high IoU in all three sections. Specifically, for the head section, the medians of the IoU values for all anatomical structures are above 0.955; for the heart and abdominal sections, the medians are above 0.945 and 0.938, respectively. Moreover, the minimum IoU values for all three sections are above 0.93. As a comparison, the state-of-the-art framework for the quality assessment of fetal abdominal images proposed by Lingyun et al. (Wu2017) achieved a median of below 0.9. This demonstrates the effectiveness of our method in localizing ROIs.
Box-plots of IoU values for the three sections. The three lines on each box represent the three quartiles of the IoU values.
As shown in Table 4, we observe that our method has the highest mAP compared to the method proposed by Lin and other popular object detection frameworks. Also, we have improved the detection accuracy significantly in TV and CSP and overcome the limitation in Lin’s method. This is because our method could detect flat and smaller anatomical structures more precisely. It is worth mentioning that after adding the relation module to our network, the detection accuracy has been significantly improved in all the anatomical structures, which proves the effectiveness of this module.
|Faster R-CNN VGG16||73.56||94.65||93.41||80.59||87.35||94.78||87.39|
|Faster R-CNN Resnet50||72.48||95.4||92.78||85.47||84.71||95.31||87.69|
As shown in Table 5, since this is the first attempt to evaluate image quality in the heart section, we have only compared our method with state-of-the-art object detection frameworks. We observe that our approach has the highest average precision for all the anatomical structures. Also, as shown in Table 6, we have achieved quite promising detection accuracy. This shows that our framework generalizes well and can be applied to the quality assessment of other standard planes.
|Faster R-CNN VGG16||85.52||81.15||87.11||80.59||95.57||85.99|
|Faster R-CNN Resnet50||89.44||83.59||90.78||85.47||94.01||88.66|
|Faster R-CNN VGG16||90.25||92.15||88.72||82.54||88.42|
|Faster R-CNN Resnet50||91.29||93.59||90.85||81.34||89.27|
4.4 Results of classification accuracy
To illustrate the performance of our model in classifying the essential anatomical structures, we first use the receiver operating characteristic (ROC) curve to characterize the performance of the classifier visually; we then measure it quantitatively with several authoritative indicators: accuracy (ACC), specificity (Spec), sensitivity (Sen), precision (Pre), and F1-score (F1). Also, to show the effectiveness of our proposed network in classification, we compare our method with other popular classification networks, including AlexNet (Krizhevsky2012), VGG16, VGG19 (Simonyan2014), and ResNet50 (He2016a). A comparison with Lin's method is also carried out.
As shown in Fig. 5, the classifier achieves quite promising performance in all three sections, with the true positive rate reaching 100% while the false positive rate remains below 10%. The AUC reaches 0.96, 0.95, and 0.98 for the head, abdominal, and heart sections, respectively.
From Table 7, we can observe that the classification results of our method are superior to those of other state-of-the-art methods. Specifically, we achieve the best results with a precision of 94.63%, a specificity of 96.39%, and an AUC of 98.26%, which are better than Lin's method. The relatively inferior results in sensitivity, accuracy, and F1-score could be further improved by adding prior clinical knowledge to our framework (Lin). Tables 8 and 9 illustrate the classification results in the abdominal and heart sections. Our method achieves quite promising results on most indicators compared with existing methods, which demonstrates its effectiveness in classifying anatomical structures across all the sections.
4.5 Running Time Analysis
First, we test the running time of detecting a single FS image on a workstation equipped with a 3.60 GHz Intel Xeon E5-1620 CPU and a GP106-100 GPU. The results are given in Fig. 6. Detecting a single frame takes only 0.78 s, which is fast enough to meet clinical needs. As shown in Table 10, we compare our method with different single-task and multi-task networks in terms of average detection speed and network parameters. Although our method has many more network parameters than Faster R-CNN + ResNet50, there is not much difference in detection time; this is because our network shares many low-level features, which enables efficient detection.
|Multi-task||Faster R-CNN VGG16||0.534||130.45|
|Faster R-CNN ResNet50||0.586||27.07|
Fig. 7, Fig. 8 and Fig. 9 depict the comparison of our results with the images manually labeled by experts in the head, abdominal, and heart sections, respectively. Our method displays the classification and detection results simultaneously to assist sonographers' observation. It can be seen that our results align closely with those of professional sonographers.
5 Discussion
In this paper, an autonomous image quality assessment approach for FS images was investigated. The experimental results show that our proposed scheme achieves highly precise ROI localization and a considerable degree of accuracy for all the essential anatomical structures in the three standard planes. The conformance test shows that our results are highly consistent with those of professional sonographers, and the running time test shows that the detection speed per image is much faster than that of sonographers, which means this scheme can effectively reduce the workload of sonographers. In our proposed network, to further improve detection and classification accuracy, we also adapt recently published advanced object detection technologies to our model. The experiments show these modules are highly useful, and the overall performance is better than state-of-the-art methods such as the FS image assessment framework proposed by Zehui Lin (Lin). After the Feature Extraction Network, we also split the network into the Region Proposal Network and the Class Prediction Network. Accordingly, the features in the detection network avoid interfering with the features in the classification network, so the detection accuracy is further increased. The detection speed is also significantly improved, as classification and localization are performed simultaneously.
Although our method achieves quite promising results, there are still some limitations. First, for the training sets, we regard the FS images manually labeled by two professional sonographers as the ground truth, but manual labeling carries some accidental deviation even though both labelers have more than ten years of experience. In future studies, we will invite more professional clinical experts to label the FS images and collect more representative datasets. Second, some detection and classification errors remain in our results. This is because our evaluation criteria are rigorous, and the misdetection of a single anatomical structure can lead to a negative score for the image. Third, all the FS images were collected from GE Voluson E8 and Philips EPIQ 7 scanners; however, different types of ultrasonic instruments produce different ultrasound images, so our method may not transfer well to FS images produced by other machines.
Our proposed method further boosts the accuracy of the assessment of the two-dimensional FS standard plane. Although three-dimensional and four-dimensional ultrasound testing have become popular recently, they are mainly used to meet the needs of pregnant women and their families to view images of the baby rather than to serve diagnostic purposes. Two-dimensional ultrasound images are still the most authoritative basis for judging fetal development (AIUM). As illustrated before, there are still many challenges for the automatic assessment of 2D ultrasound images, such as shadowing effects, similar anatomical structures, and varying fetal positions. To overcome these challenges and further improve the accuracy and robustness of detection and classification, it may be useful to add prior clinical knowledge (Lin) and more advanced attention modules to the network. In the future, we will also investigate automatic selection technology for finding the standard scanning plane, which would find a standard plane containing all the essential anatomical structures without sonographers' intervention.
6 Conclusion
Quality control of FS images is significant for biometric measurements and fetal anomaly diagnosis. However, current FS examinations require well-trained and experienced sonographers and involve considerable subjectivity. To develop an autonomous quality control approach that offers objective and accurate assessments, we propose an automatic FS image quality control scheme in this paper. In our scheme, we design three networks for detection and classification based on deep learning. The proposed scheme can reduce the workload of doctors while ensuring nearly the same accuracy, and it lowers the skill requirements for sonographers, which will make FS examinations possible in areas where medical conditions are lagging.
Our proposed scheme has been evaluated with extensive experiments. The results show that our scheme is not only comparable to manual labeling by experts in localizing anatomical structures but also very accurate in classification. Furthermore, evaluating an FS image takes less than a second. In the future, we will extend our research to more fetal sections and try to propose an automatic selection technology for the standard plane of FS images.
Declaration of Competing Interest
The authors have no financial interests or personal relationships that may cause a conflict of interest.
We acknowledge West China Second Hospital Sichuan University for providing the fetal ultrasound image datasets. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.