Fetal imaging has revolutionized prenatal care, allowing for early identification of structural and functional anomalies. Rigorous evaluation of fetal brain maturity and development is particularly critical due to the high prevalence of abnormalities (an estimated 3 in 1000 pregnancies) (1). Magnetic resonance imaging (MRI) has unrivaled diagnostic efficacy in detecting neurodevelopmental abnormalities. Despite its robust clinical utility, interpretation of fetal brain MRI remains a tremendous challenge for radiologists and clinicians due to the rapidly changing architecture of the normally maturing brain.
Knowledge of the gestational age of the fetus is crucial to the accurate interpretation of fetal brain MRI. Despite its importance, our understanding of anatomical neurodevelopmental milestones in utero is limited to the gross evaluation of the cortical gyration (i.e. folding pattern of the brain’s cortical surface); and at this time, such coarse assessments are performed by a relatively “primitive” means of visual inspection and among a small pool of expert pediatric neuroradiologists at tertiary care fetal centers.
This study investigates the problem of predicting the gestational age of the fetal brain using multi-view MRI sequences. It draws inspiration from the recent development of attention-based neural networks, a class of deep-learning methods capable of producing a spatial map that highlights regions of interest for image object detection and interpretation (2). This study's contributions are as follows: (i) We demonstrate that the proposed end-to-end framework can accurately predict the gestational age of fetal brains. (ii) We present experimental evidence that attention on the weakly-supervised activation maps can significantly improve the accuracy of these predictions over a baseline model without attention. (iii) The proposed multi-view prediction model largely improves regression performance by incorporating useful information from different views.
The model pipeline, as shown in Figure 1, takes a 2D MRI scan image $x$ and predicts the chronological age $\hat{y}$. The 2D scan is selected from the center of a depth-wise stack of 2D MRI sequences for each patient. We define the objective as a regression problem, minimizing a loss $\mathcal{L}(y, \hat{y})$, where $y$ represents the true fetal brain ages and $\hat{y}$ represents our model's predictions. For the model, we use residual networks (3) as the backbone in the global branch for age regression. We use both the ResNet-18 and ResNet-50 variants, which differ in their number of layers. We integrate three images (views) into our model: axial, sagittal, and coronal orientations. We train one model for each of these views and combine the three models to form one final age prediction.
Computational analysis of fetal brain images is extremely challenging because the positions and orientations of fetal brain subregions vary across patients. Furthermore, information unrelated to the fetus (the mother's placenta and organs) acts as clutter and may negatively affect performance. These reasons motivate the use of attention activation maps to crop out the part of the image that pertains only to the fetus.
2.1 Attention to detect and localize the fetus
To extract image features pertaining to the fetal brain and to also filter out any clutter, we introduce an attention mechanism during the training of our model. As shown in Figure 1
, an attention heatmap is extracted from the feature maps after the last stage of residual blocks via max-intensity projection, i.e., max pooling across the channel dimension. The pixel-wise values in this attention map highlight the local regions to which the network learns to pay higher attention. We use the attention map for automatic segmentation by thresholding pixel values above a fixed value, cropping out the region with a rectangular bounding box that captures the largest number of thresholded pixels with the minimum perimeter, and resizing (with interpolation) the cropped image to the same size as the original image. Finally, the cropped image is fed into the local branch of our model (Figure 1). We study different ways of predicting the age: 1) using only the global branch (no attention or cropping), 2) using the local-branch predictions from the masked inputs, 3) using the fusion branch, which concatenates features from both local and global branches followed by a fully-connected layer, and 4) averaging (instead of concatenating) the global and local features.
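A minimal sketch of this heatmap-threshold-crop step, assuming PyTorch; the function name, the min-max normalization, and the full-image fallback are illustrative details not specified in the text:

```python
import torch
import torch.nn.functional as F

def attention_crop(image, feature_maps, threshold=0.3):
    """Crop the attention-highlighted region and resize it to the input size.

    image: (H, W) tensor; feature_maps: (C, h, w) tensor from the last
    residual stage. The threshold value follows the paper's setup.
    """
    # Max-intensity projection across the channel dimension -> (h, w) heatmap.
    heatmap = feature_maps.max(dim=0).values
    # Normalize to [0, 1] and upsample to the input resolution.
    heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    heatmap = F.interpolate(heatmap[None, None], size=image.shape,
                            mode="bilinear", align_corners=False)[0, 0]
    # Binarize and take the tightest bounding box around the active pixels.
    mask = heatmap > threshold
    ys, xs = mask.nonzero(as_tuple=True)
    if ys.numel() == 0:  # nothing above threshold: fall back to the full image
        return image
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    crop = image[y0:y1, x0:x1]
    # Resize the crop back to the original size for the local branch.
    return F.interpolate(crop[None, None], size=image.shape,
                         mode="bilinear", align_corners=False)[0, 0]

img = torch.rand(224, 224)
feats = torch.rand(512, 7, 7)     # e.g. last-stage ResNet-18 feature maps
out = attention_crop(img, feats)  # same spatial size as the input
```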
Figure 2(A) demonstrates learned visualizations of attention heatmaps, mask inference, and automatic sub-image cropping from our pipeline. By visual inspection, we verify that our model learns the location of fetal brains in an unsupervised manner, despite the small size of these brains relative to the surrounding environment and the absence of ground-truth locations. This is strong evidence that attention-aware models can learn local features from the correct brain regions. Next, a sub-image is cropped from the whole input image (auto-segmentation). The threshold value used to binarize the heatmap and compute the bounding box is an important hyper-parameter, which we discuss further in the next section. In our experiments, without extensive tuning of this hyper-parameter, we set the threshold to 0.3 for ResNet-18 and 0.4 for ResNet-50, respectively.
No public database of fetal MRI currently exists. We collected a robust database of 1927 fetal brain MRIs from our clinical picture archiving and communication system (PACS). Each MRI was manually interpreted by an expert pediatric neuroradiologist to extract developmentally normal studies. Among those, the optimal T2-fast sequences in the three standard planes (axial, coronal, and sagittal) were identified. A total of 741 studies that had all three MRI planes were included in this study. Fetal gestational age (in days) was calculated from the estimated date of delivery, in accordance with current obstetrical guidelines and standard of care. These ages serve as ground-truth labels and range from 125 to 273 days. The entire dataset was split into training (70%), validation (10%), and test (20%) sets.
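The 70/10/20 split can be sketched with a simple shuffled partition of study identifiers; the function name and fixed seed are illustrative assumptions, not part of the paper's protocol:

```python
import random

def split_dataset(study_ids, seed=0):
    """Shuffle and split study IDs 70/10/20 into train/validation/test."""
    ids = list(study_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n = len(ids)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# With the paper's 741 studies: 518 train, 74 validation, 149 test.
train, val, test = split_dataset(range(741))
```

Splitting by study (rather than by individual slice) keeps all planes of one patient in the same partition, avoiding leakage between training and test sets.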
As evaluation metrics, the R2 score (ranging from 0 to 1) and the mean absolute error (MAE) are used to quantify the difference between model predictions and ground-truth labels. For comparison, we show the results of the different base architectures (ResNet-18 and ResNet-50), with their different numbers of layers, in Table 1 and Table 2. The results show that increased network depth benefits the final task of age regression, as more complex feature representations are learned by the deeper layers. In addition, the ablation study evaluates various comparison methods, such as single-view versus multi-view and with or without the attention mechanism. The quantitative results in the tables indicate that both the attention mechanism and multi-view learning are beneficial to the final prediction. Specifically, among the single-view predictions, the axial and coronal planes provide more useful information, or more effective image features, for estimating fetal age. Multi-view learning largely increases the regression accuracy compared to any single plane. With the attention mechanism, the model can better capture local image features such as brain shape or contour, leading to better performance, especially when the original regression performance without attention is poor. Finally, by leveraging both the attention mechanism and multi-view learning, state-of-the-art results are achieved, with an R2 score of 0.94 and a mean absolute error of 6.8 days.
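The two evaluation metrics can be computed directly from their definitions; a minimal NumPy sketch (the helper names are illustrative, and libraries such as scikit-learn provide equivalent functions):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - (residual SS / total SS)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(y_true, y_pred):
    """Mean absolute error, in the same units as the labels (days)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy example: four gestational ages in days and their predictions.
y = [130, 180, 220, 270]
p = [135, 175, 225, 265]
print(mae(y, p))  # 5.0
```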
A qualitative visualization of this best result is shown in Fig. 2(B), which plots regression performance on the test dataset. The x-axis denotes the ground-truth labels, while the y-axis denotes the model's predicted ages for the corresponding samples. Ideally, the predictions align with the ground-truth values along the blue line $y = x$. Visual inspection confirms that the multi-view deep regression model achieves accurate fetal brain age estimation.
To better understand how the threshold value in the attention mechanism influences model learning, we conducted an analysis experiment with different thresholds for ROI cropping. The R2 score curve of test performance is shown in Fig. 2(C), obtained by training the local-branch ResNet-18 model with various threshold values in the attention mechanism. The curve shows that prediction performance degrades when the threshold is either too large or too small. A smaller threshold tends to crop a larger sub-image, pushing the local branch closer to the global branch and thus failing to extract distinctive local features. If the threshold is too large, almost no pixels survive the cut, so nearly empty sub-images are cropped and the local branch cannot learn anything.
This work proposes an end-to-end framework for efficient prediction of the gestational age of the fetal brain. In particular, experimental evidence shows that the attention mechanism and multi-view learning improve the accuracy of fetal brain age regression over a baseline model. In future work, 3D convolutional networks will be investigated further, and external cohorts from outside institutions will be used for additional validation of the proposed method.
- (1) National congenital anomaly and rare disease registration service. Public Health England (2017).
- (2) Qingji Guan, Yaping Huang, Zhun Zhong, Zhedong Zheng, Liang Zheng and Yi Yang. Diagnose like a Radiologist: Attention Guided Convolutional Neural Network for Thorax Disease Classification. arXiv:1801.09927 (2018).
- (3) Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. Deep Residual Learning for Image Recognition. CVPR 770–778, IEEE Computer Society (2016).