
An automatic COVID-19 CT segmentation network using spatial and channel attention mechanism

04/14/2020
by   Tongxue Zhou, et al.

The coronavirus disease (COVID-19) pandemic has had a devastating effect on global public health. Computed Tomography (CT) is an effective tool in the screening of COVID-19. It is of great importance to rapidly and accurately segment COVID-19 from CT to aid diagnosis and patient monitoring. In this paper, we propose a U-Net based segmentation network using an attention mechanism. As not all the features extracted from the encoders are useful for segmentation, we propose to incorporate an attention mechanism, including a spatial and a channel attention, into a U-Net architecture to re-weight the feature representation spatially and channel-wise and capture rich contextual relationships for better feature representation. In addition, the focal Tversky loss is introduced to deal with small lesion segmentation. The experimental results, evaluated on a COVID-19 CT segmentation dataset where 473 CT slices are available, demonstrate that the proposed method achieves accurate and rapid COVID-19 segmentation. The method takes only 0.29 seconds to segment a single CT slice. The obtained Dice Score, Sensitivity and Specificity are 83.1%, 86.7% and 99.3%, respectively.


1 Introduction

In December 2019, a novel coronavirus, now designated as COVID-19 by the World Health Organization (WHO), was identified as the cause of an outbreak of acute respiratory illness [25, 15]. The COVID-19 pandemic is spreading all over the world and has had a devastating effect on global public health. As a form of pneumonia, the infection causes inflammation in the alveoli, which fill with fluid or pus, making it difficult for the patient to breathe [23]. Similar to other coronaviral pneumonias such as Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS), COVID-19 can also lead to acute respiratory distress syndrome (ARDS) [7, 11]. In addition, the number of people infected by the virus is increasing rapidly. As of April 19, 2020, 2,241,359 cases of COVID-19 had been reported in over 200 countries and territories, resulting in approximately 152,551 deaths (https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports/), while there is no efficient treatment at present.

Due to the fast progression and infectious nature of the disease, it is urgent to develop tools to accurately diagnose and evaluate it. Although the real-time polymerase chain reaction (RT-PCR) assay of the sputum is considered the gold standard for diagnosis, it is time-consuming and has been reported to suffer from high false negative rates [21, 13]. In clinical practice, chest Computed Tomography (CT), as a non-invasive imaging approach, can detect certain characteristic manifestations in the lung associated with COVID-19; for example, ground-glass opacities and consolidation are the most common imaging features in pneumonia associated with SARS-CoV-2 infection. Therefore, chest CT is considered a low-cost, accurate and efficient diagnostic tool for early screening and diagnosis of COVID-19. It can be used to evaluate how severely the lungs are affected and how the patient's disease is evolving, which is helpful in making treatment decisions [12, 16, 14, 10, 26].

A number of artificial intelligence (AI) systems based on deep learning have been proposed, with quite promising results in medical image analysis [6, 17, 2, 20]. Compared to the traditional imaging workflow, which relies heavily on human labor, AI enables safer, more accurate and more efficient imaging solutions. Recent AI-empowered applications in COVID-19 mainly include dedicated imaging platforms, lung and infection region segmentation, clinical assessment and diagnosis, as well as pioneering basic and clinical research [22]. Segmentation is an essential step in AI-based COVID-19 image processing and analysis for predicting disease evolution. It delineates the regions of interest (ROIs), e.g., lung, lobes, bronchopulmonary segments, and infected regions or lesions, in the chest X-ray or CT images for further assessment and quantification [22]. There are a number of studies related to COVID-19. For example, Zheng et al. [28] proposed a weakly-supervised deep learning-based software system using 3D CT volumes to detect COVID-19. Gozes et al. [4] presented a system that utilises 2D slice analysis and 3D volume analysis to achieve the detection of COVID-19. Jin et al. [9] proposed an AI system for fast COVID-19 diagnosis, where a segmentation model is first used to obtain the lung lesion regions, and a classification model then determines whether each lesion region is COVID-19-like. Li et al. [12] developed a COVID-19 detection neural network (COVNet) to extract visual features from volumetric chest CT exams for distinguishing COVID-19 from Community Acquired Pneumonia (CAP). Chen et al. [3] proposed to use UNet++ [30] to extract valid areas and detect suspicious lesions in CT images.

U-Net [18] is the most widely used encoder-decoder network architecture for medical image segmentation, since the encoder captures the low-level and high-level features, and the decoder combines the semantic features to construct the final result. However, not all features extracted from the encoder are useful for segmentation. Therefore, it is necessary to find an effective way to fuse features; we focus on extracting the most informative features for segmentation. Hu et al. [5] introduced the Squeeze-and-Excitation (SE) block to improve the representational power of a network by modelling the interdependencies between the channels of its convolutional features. Roy et al. [19] introduced concurrent spatial and channel SE blocks (scSE), which recalibrate the feature representations spatially and channel-wise and then combine them to obtain the final feature representation. Inspired by this work, we incorporate an attention mechanism including both a spatial and a channel attention into our segmentation network to extract more informative feature representations and enhance the network performance.

In this paper, we propose a deep learning based segmentation network with an attention mechanism. A preliminary conference version appeared at ISBI 2020 [29], which focused on the multi-modal fusion issue. This journal version is a substantial extension, including (1) an automatic COVID-19 CT segmentation network; (2) a focal Tversky loss function (different from the ISBI paper), which is introduced to help segment the small COVID-19 regions; (3) an attention mechanism including a spatial and a channel attention, introduced to capture rich contextual relationships for better feature representations.

The paper is organized as follows: Section 2 offers an overview of this work and details our model, Section 3 describes the experimental setup, Section 4 presents the experimental results, and Section 5 discusses the proposed method and concludes this work.

2 Method

2.1 The proposed network architecture

Our network is mainly based on the U-Net architecture [18], in which we integrate an attention mechanism, res_dil blocks and deep supervision. The encoder of the U-Net is used to obtain the feature representations. The feature representations at each layer are fed into the attention mechanism, where they are re-weighted channel-wise and spatially to retain the most informative representations; these are finally projected by the decoder to the label space to obtain the segmentation result. In the following, we describe the main components of our model: encoder and decoder, res_dil block, deep supervision and attention mechanism. The network architecture is depicted in Fig. 1.

Figure 1: The architecture of the proposed network. The network takes a CT slice as input and directly outputs the COVID-19 region.

2.2 Encoder and decoder

The encoder is used to obtain the feature representations. Each encoder level includes a convolutional block and a res_dil block followed by a skip connection. In order to maintain the spatial information, we use a convolution with stride 2 to replace the pooling operation. All convolutions are 3 × 3 and the number of filters increases from 32 to 512. Each decoder level begins with an upsampling layer followed by a convolution to reduce the number of features by a factor of 2. Then the upsampled features are combined with the features from the corresponding level of the encoder using concatenation. After the concatenation, we use the res_dil block to increase the receptive field. In addition, we employ deep supervision [8] for the segmentation decoder by integrating segmentation layers from different levels to form the final network output, shown in Fig. 2.
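To make this concrete, the following is a minimal Keras sketch of one encoder level and one decoder level as described above (stride-2 convolution instead of pooling; upsampling, feature-halving convolution and concatenation in the decoder). It is a sketch only: activation choices and the exact placement of the res_dil block (Sec. 2.3) are assumptions, not the authors' released code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Plain convolutional block; the paper follows it with a res_dil block
    # (sketched in Sec. 2.3) and a skip connection.
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def encoder_level(x, filters):
    """One encoder level: features are kept for the decoder, and a stride-2
    convolution replaces pooling to preserve spatial information."""
    skip = conv_block(x, filters)
    down = layers.Conv2D(filters, 3, strides=2, padding="same",
                         activation="relu")(skip)
    return skip, down

def decoder_level(x, skip, filters):
    """One decoder level: upsampling, a convolution reducing the number of
    features by 2, then concatenation with the encoder features."""
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Concatenate()([x, skip])
```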

2.3 Res_dil block and deep supervision

It is likely that different receptive fields are required when segmenting different regions in an image. Since the standard U-Net cannot obtain enough semantic features due to its limited receptive field, inspired by dilated convolution [27], we propose to use residual blocks with dilated convolutions in both the encoder and the decoder to obtain features at multiple scales; the architecture of the res_dil block is shown in Fig. 2. The res_dil block can capture more extensive local information, which helps retain information and fill in details during training.
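A possible Keras rendering of the res_dil block is sketched below, assuming the IN, ReLU, dilated-convolution ordering suggested by Fig. 2 and a 1 × 1 convolution on the shortcut to match channel counts; both assumptions go beyond what the text specifies. Keras has no dedicated instance normalization layer, but GroupNormalization with groups=-1 is documented to be equivalent.

```python
import tensorflow as tf
from tensorflow.keras import layers

def res_dil_block(x, filters):
    """Residual block with dilated convolutions (rates 2 and 4), a sketch of
    the res_dil block in Fig. 2. The paper uses instance normalization (IN);
    GroupNormalization(groups=-1) is equivalent to IN in Keras."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)  # match channels
    y = layers.GroupNormalization(groups=-1)(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, dilation_rate=2, padding="same")(y)
    y = layers.GroupNormalization(groups=-1)(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, dilation_rate=4, padding="same")(y)
    return layers.Add()([shortcut, y])  # residual connection
```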

To demonstrate mathematically that the proposed res_dil block enlarges the receptive field, we let $F : \mathbb{Z}^2 \rightarrow \mathbb{R}$ be a discrete function, and let $k : \Omega_r \rightarrow \mathbb{R}$ be a discrete filter of size $(2r+1)^2$. The discrete convolution operator $*$ can be described as follows:

$(F * k)(\mathbf{p}) = \sum_{\mathbf{s} + \mathbf{t} = \mathbf{p}} F(\mathbf{s})\, k(\mathbf{t})$   (1)

Let $l$ be a dilation factor; the $l$-dilated convolution operation $*_l$ can be defined as:

$(F *_l k)(\mathbf{p}) = \sum_{\mathbf{s} + l\mathbf{t} = \mathbf{p}} F(\mathbf{s})\, k(\mathbf{t})$   (2)

We assume $F_0, F_1, \ldots, F_{n-1}$ are discrete functions, and $k_0, k_1, \ldots, k_{n-2}$ are discrete $3 \times 3$ filters. In addition, we apply the filters with exponentially increasing dilation factors $2^0, 2^1, \ldots, 2^{n-2}$. Then, the discrete function $F_{i+1}$ can be described as:

$F_{i+1} = F_i *_{2^i} k_i, \quad i = 0, 1, \ldots, n-2$   (3)

According to the definition of receptive field, the receptive field size of each element in $F_{i+1}$ is $(2^{i+2}-1) \times (2^{i+2}-1)$, which is a square of exponentially increasing size. So we can obtain a $15 \times 15$ receptive field by applying our proposed res_dil block with the dilation factors 2 and 4 (after a standard convolution), while the same stack of classical convolutions can only obtain a $7 \times 7$ receptive field, see Fig. 3.
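The receptive-field arithmetic can be checked with a few lines of Python: each convolution with kernel size k and dilation r adds (k − 1) · r to the field. The three-layer stack below (one standard convolution followed by the two dilated convolutions of the res_dil block) is our reading of Figs. 2 and 3.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of convolutions: each layer with kernel k
    and dilation r adds (k - 1) * r to the field."""
    rf = 1
    for k, r in zip(kernel_sizes, dilations):
        rf += (k - 1) * r
    return rf

# Standard conv followed by the res_dil block (dilation rates 2 and 4):
print(receptive_field([3, 3, 3], [1, 2, 4]))  # -> 15, i.e. a 15x15 field
# The same stack with classical (undilated) convolutions:
print(receptive_field([3, 3, 3], [1, 1, 1]))  # -> 7, i.e. a 7x7 field
```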

Figure 2: The architecture of our proposed res_dil block (left) and deep supervision (right). IN refers to instance normalization, Dil_conv to the dilated convolution (rate = 2 and 4, respectively). We refer to the vertical depth as level, with higher levels having higher spatial resolution. In the deep supervision part, we use the output of the res_dil block at each level in the decoder to produce the segmentation result at that level.
Figure 3: Illustration of the receptive field. $RF$ denotes the receptive field, $k$ denotes the convolution kernel size, and $r$ denotes the dilation factor. (a) a convolution network which consists of two $3 \times 3$ convolutional layers. (b) a convolution network which consists of two $3 \times 3$ dilated convolutional layers.

2.4 Attention mechanism

In a U-Net shaped network, not all the features obtained by the encoder are effective for segmentation. In addition, not only do different channels (filters) make different contributions, but different spatial locations within each channel should also be weighted differently in the feature representation for segmentation. To this end, we introduce an scSE-based attention mechanism in both the encoder and the decoder to retain the most informative feature representations channel-wise and spatially; the architecture is described in Fig. 4.

The individual feature representations from each channel, $u_i \in \mathbb{R}^{H \times W}$, are first concatenated as the input representation $U = [u_1, u_2, \ldots, u_C]$, $U \in \mathbb{R}^{H \times W \times C}$, where $C$ is the number of channels in each layer.

In the channel attention module, a global average pooling is first performed to produce a tensor $z \in \mathbb{R}^{1 \times 1 \times C}$, which represents the global spatial information of the representation, with its $k$-th element

$z_k = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_k(i, j)$   (4)

Then two fully-connected layers are applied to encode the channel-wise dependencies, $\hat{z} = W_1 (\delta(W_2 z))$, with $W_1 \in \mathbb{R}^{C \times \frac{C}{2}}$ and $W_2 \in \mathbb{R}^{\frac{C}{2} \times C}$ being the weights of the two fully-connected layers and $\delta$ the ReLU operator. $\hat{z}$ is then passed through a sigmoid layer $\sigma$ to obtain the channel-wise weights, which are applied to the input representation through multiplication to achieve the channel-wise representation $U_{cSE}$; the weight $\sigma(\hat{z}_k)$ indicates the importance of the $k$-th channel of the representation:

$U_{cSE} = [\sigma(\hat{z}_1)\, u_1, \sigma(\hat{z}_2)\, u_2, \ldots, \sigma(\hat{z}_C)\, u_C]$   (5)

In the spatial attention module, the representation can be considered as $U = [u^{1,1}, u^{1,2}, \ldots, u^{i,j}, \ldots, u^{H,W}]$, $u^{i,j} \in \mathbb{R}^{1 \times 1 \times C}$, $i \in \{1, \ldots, H\}$, $j \in \{1, \ldots, W\}$, and then a convolution operation $q = W_{sq} \star U$, with weight $W_{sq} \in \mathbb{R}^{1 \times 1 \times C \times 1}$, is used to squeeze the spatial domain and to produce a projection tensor $q \in \mathbb{R}^{H \times W}$, which represents the linearly combined representation of all channels at each spatial location. The tensor $q$ is finally passed through a sigmoid layer to obtain the space-wise weights and to achieve the spatial-wise representation $U_{sSE}$; the weight $\sigma(q_{i,j})$ indicates the importance of the spatial information $(i, j)$ of the representation:

$U_{sSE} = [\sigma(q_{1,1})\, u^{1,1}, \ldots, \sigma(q_{i,j})\, u^{i,j}, \ldots, \sigma(q_{H,W})\, u^{H,W}]$   (6)

The fused feature representation $U_f$ is obtained by adding the channel-wise representation and the space-wise representation:

$U_f = U_{cSE} + U_{sSE}$   (7)

The attention mechanism can be directly adapted to any feature representation problem, and it encourages the network to capture rich contextual relationships for better feature representations.
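A compact Keras sketch of Eqs. (4)-(7) follows; the $C/2$ bottleneck in the fully-connected layers matches the weight shapes given above, while everything else (layer objects, broadcasting via plain tensor arithmetic) is an implementation assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def scse_attention(u):
    """Concurrent spatial and channel attention (Eqs. 4-7), a sketch in the
    spirit of the scSE block of Roy et al. [19]."""
    c = u.shape[-1]
    # Channel attention (cSE): global average pooling (Eq. 4), two
    # fully-connected layers with a C/2 bottleneck, sigmoid weights (Eq. 5).
    z = layers.GlobalAveragePooling2D()(u)
    z = layers.Dense(c // 2, activation="relu")(z)
    z = layers.Dense(c, activation="sigmoid")(z)
    u_cse = u * layers.Reshape((1, 1, c))(z)
    # Spatial attention (sSE): 1x1 convolution squeezing the channels,
    # sigmoid weight per spatial location (Eq. 6).
    q = layers.Conv2D(1, 1, activation="sigmoid")(u)
    u_sse = u * q
    # Fused representation (Eq. 7): element-wise addition of the two paths.
    return u_cse + u_sse
```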

Figure 4: The architecture of the attention mechanism. The individual feature representations ($u_1$, $u_2$, …, $u_C$) are first concatenated as $U$, then recalibrated spatially and channel-wise to achieve $U_{sSE}$ and $U_{cSE}$; finally the two are added to obtain the rich fused feature representation $U_f$.

2.5 Loss function

In the medical community, the Dice Score Coefficient (DSC), defined in (8), is the most widespread metric to measure the overlap ratio of the segmented region and the ground truth, and it is widely used to evaluate segmentation performance. The Dice Loss (DL) in (9) is defined from the DSC, so that minimizing it maximizes the overlap between the prediction and the ground truth:

$DSC_c = \frac{2 \sum_{i=1}^{N} p_{ic}\, g_{ic} + \epsilon}{\sum_{i=1}^{N} p_{ic} + \sum_{i=1}^{N} g_{ic} + \epsilon}$   (8)

$DL = \sum_{c \in C} (1 - DSC_c)$   (9)

where $N$ is the number of pixels in the image, $C$ is the set of the classes, $p_{ic}$ is the probability that pixel $i$ is of the tumor class $c$ and $p_{i\bar{c}}$ is the probability that pixel $i$ is of the non-tumor class $\bar{c}$. The same is true for $g_{ic}$ and $g_{i\bar{c}}$, and $\epsilon$ is a small constant to avoid dividing by 0.
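For reference, Eqs. (8)-(9) translate into a few lines of TensorFlow; the channel-last, one-hot tensor layout is an assumption.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    """Dice loss (Eqs. 8-9) for one-hot ground truth g and predicted
    probabilities p of shape (batch, H, W, classes), summed over classes."""
    axes = (1, 2)                                   # sum over pixels per class
    num = 2.0 * tf.reduce_sum(y_true * y_pred, axes) + eps
    den = tf.reduce_sum(y_true + y_pred, axes) + eps
    dsc = num / den                                 # per-class Dice score
    return tf.reduce_sum(1.0 - dsc, axis=-1)        # DL = sum_c (1 - DSC_c)
```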

One limitation of the Dice Loss is that it penalizes false positives (FP) and false negatives (FN) equally, which results in segmentation maps with high precision but low recall. This is particularly true for highly imbalanced datasets and small regions of interest (ROI) such as COVID-19 lesions. Experimental results show that FN needs to be weighted higher than FP to improve the recall rate. The Tversky similarity index [24] is a generalization of the DSC which allows for flexibility in balancing FP and FN:

$TI_c = \frac{\sum_{i=1}^{N} p_{ic}\, g_{ic} + \epsilon}{\sum_{i=1}^{N} p_{ic}\, g_{ic} + \alpha \sum_{i=1}^{N} p_{i\bar{c}}\, g_{ic} + \beta \sum_{i=1}^{N} p_{ic}\, g_{i\bar{c}} + \epsilon}$   (10)

where the hyper-parameters $\alpha$ and $\beta$ control the trade-off between false negatives and false positives.

Another issue with the DL is that it struggles to segment small ROIs, as they do not contribute to the loss significantly. To address this, Abraham et al. [1] proposed the Focal Tversky Loss function (FTL):

$FTL = \sum_{c} (1 - TI_c)^{1/\gamma}$   (11)

where $\gamma$ varies in the range $[1, 3]$. In practice, if a pixel is misclassified with a high Tversky index, the FTL is unaffected. However, if the Tversky index is small and the pixel is misclassified, the FTL will decrease significantly. To this end, we used the FTL to train the network to help segment the small COVID-19 regions.
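A matching sketch of Eqs. (10)-(11) is below. The values α = 0.7, β = 0.3 and γ = 4/3 follow the recommendations of Abraham and Khan [1]; the paper does not state the values actually used, so treat them as assumptions.

```python
import tensorflow as tf

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3,
                       gamma=4.0 / 3.0, eps=1e-6):
    """Focal Tversky loss (Eqs. 10-11). alpha > beta weights false negatives
    more heavily; gamma in [1, 3] focuses training on hard, small ROIs."""
    axes = (1, 2)
    tp = tf.reduce_sum(y_true * y_pred, axes)            # true positives
    fn = tf.reduce_sum(y_true * (1.0 - y_pred), axes)    # false negatives
    fp = tf.reduce_sum((1.0 - y_true) * y_pred, axes)    # false positives
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)  # Tversky index
    return tf.reduce_sum(tf.pow(1.0 - ti, 1.0 / gamma), axis=-1)
```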

3 Experimental setup

3.1 Dataset and preprocessing

The two datasets used in the experiments come from the Italian Society of Medical and Interventional Radiology: the COVID-19 CT segmentation dataset (http://medicalsegmentation.com/covid19/). Dataset-1 includes 100 axial CT images from 60 patients with COVID-19. The images have been resized, greyscaled and compiled into a single NIFTI file; the image size is 512 × 512 pixels. The images have been segmented by a radiologist using three labels: ground-glass, consolidation and pleural effusion. Dataset-2 includes 9 volumes, 829 slices in total, of which 373 slices have been evaluated and segmented by a radiologist as COVID-19 cases. We resize these images to 512 × 512 pixels, the same as Dataset-1, and an intensity normalization is applied to both datasets. There is severe data imbalance in the datasets. For example, in Dataset-1, only 25 slices have pleural effusion, which is the smallest region among all the COVID-19 lesion regions (see the green region in Fig. 5). In Dataset-2, only 233 slices have consolidation, which takes up a small number of pixels in the image (see the yellow region in Fig. 5). We therefore take all the lesion labels as a single COVID-19 lesion class. Because of the small amount of data in both datasets, we combine the two datasets as our final training dataset. Some example images of the COVID-19 CT segmentation dataset are given in Fig. 5.
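A plausible preprocessing sketch is given below, assuming NIFTI volumes read with nibabel and slice-wise resizing with scikit-image; the exact pipeline, normalization and label encoding are assumptions based on the description above.

```python
import numpy as np
import nibabel as nib
from skimage.transform import resize

def load_and_preprocess(nifti_path, target_size=(512, 512)):
    """Load a NIFTI CT volume, resize each slice to 512 x 512 and apply a
    per-slice intensity normalization (a sketch of Sec. 3.1)."""
    volume = nib.load(nifti_path).get_fdata()        # shape: H x W x slices
    slices = []
    for i in range(volume.shape[-1]):
        s = resize(volume[..., i], target_size, preserve_range=True)
        s = (s - s.mean()) / (s.std() + 1e-8)        # intensity normalization
        slices.append(s.astype(np.float32))
    return np.stack(slices)                          # shape: slices x H x W

def merge_lesion_labels(mask):
    """Ground-glass, consolidation and pleural effusion (labels 1-3) are all
    treated as one COVID-19 lesion class, as described above."""
    return (mask > 0).astype(np.uint8)
```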

Figure 5: Example images of the COVID-19 CT segmentation dataset. (a) and (c): CT images from Dataset-1 and Dataset-2; (b) and (d): the ground truth of (a) and (c), respectively. Ground-glass is shown in blue, consolidation in yellow and pleural effusion in green.

3.2 Implementation details

Our network is implemented in Keras and trained on a single Nvidia Quadro P5000 GPU (16 GB). The network is trained with the focal Tversky loss and optimized using the Adam optimizer with an initial learning rate of 5e-5, decreased by a factor of 0.5 when the validation loss does not improve within 10 epochs. Early stopping is employed to avoid over-fitting if the validation loss is not improved over 50 epochs. We randomly split the dataset into 80% training and 20% testing.
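This training setup maps directly onto standard Keras callbacks, as in the sketch below; the model, the training arrays and the focal_tversky_loss sketch from Sec. 2.5 are passed in, and the epoch budget is an arbitrary placeholder.

```python
import tensorflow as tf

def compile_and_train(model, x_train, y_train, loss_fn):
    """Training configuration from Sec. 3.2: Adam with initial learning rate
    5e-5, halved after 10 stagnant epochs, early stopping after 50."""
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
                  loss=loss_fn)
    callbacks = [
        tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                             patience=10),
        tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=50,
                                         restore_best_weights=True),
    ]
    # The paper uses a random 80/20 train/test split; a validation fraction
    # is assumed here for the callbacks to monitor.
    return model.fit(x_train, y_train, validation_split=0.2,
                     epochs=500, callbacks=callbacks)  # epoch budget assumed
```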

3.3 Evaluation metrics

Segmentation accuracy determines the eventual success or failure of segmentation procedures. To measure the segmentation performance of the proposed methods, three evaluation metrics: Dice, Sensitivity and Specificity are used to obtain quantitative measurements of the segmentation accuracy.

1) Dice: it is designed to evaluate the overlap rate between the prediction results and the ground truth. Dice ranges from 0 to 1, and a better prediction result has a larger Dice value.

$Dice = \frac{2\,TP}{2\,TP + FP + FN}$   (12)

2) Sensitivity (also called the true positive rate, or recall): it measures the proportion of actual positives that are correctly identified:

$Sensitivity = \frac{TP}{TP + FN}$   (13)

3) Specificity (also called the true negative rate): it measures the proportion of actual negatives that are correctly identified:

$Specificity = \frac{TN}{TN + FP}$   (14)

where $TP$ represents the number of true positive voxels, $TN$ the number of true negative voxels, $FP$ the number of false positive voxels, and $FN$ the number of false negative voxels.
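These three metrics reduce to counting voxels; a NumPy sketch (binary masks assumed):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice, Sensitivity and Specificity (Eqs. 12-14) from binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # true positives
    tn = np.sum(~pred & ~truth)     # true negatives
    fp = np.sum(pred & ~truth)      # false positives
    fn = np.sum(~pred & truth)      # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```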

4 Experiment results

In this section, we conduct extensive comparative experiments including quantitative analysis and qualitative analysis to demonstrate the effectiveness of our proposed method.

4.1 Quantitative analysis

To assess the performance of our method, and to analyze the impact of its proposed components, we conducted an ablation study with regard to the attention mechanism and the Focal Tversky Loss (FTL); we refer to our proposed network without the attention mechanism as the baseline. The results are shown in Table 1. We can observe that the baseline trained with DL achieves a dice score, sensitivity and specificity of 80.4%, 75.7% and 99.8%, respectively. Using the focal Tversky loss helps the network focus more on the false negative voxels, which increases the dice score by 0.87% and the sensitivity by 3.43% for the baseline, and the dice score by 1.96% and the sensitivity by 13.04% for our proposed network. We can also observe in Table 1 that integrating the attention mechanism into the segmentation network boosts the performance, with an increase of 1.37% in dice score and 1.32% in sensitivity over 'Baseline + DL', and of 2.47% in dice score and 10.73% in sensitivity over 'Baseline + FTL'. The main reason is that the attention mechanism helps to emphasize the most important feature representations for segmentation. In addition, the proposed network trained with FTL combines the benefits of the attention mechanism and the FTL to outperform all other methods, with 83.1% dice score, 86.7% sensitivity and 99.3% specificity.

Model            Dice (%)   Sensitivity (%)   Specificity (%)
Baseline + DL    80.4       75.7              99.8
Baseline + FTL   81.1       78.3              99.7
Ours + DL        81.5       76.7              99.7
Ours + FTL       83.1       86.7              99.3
Table 1: Comparison of different methods on the COVID-19 CT segmentation dataset. Baseline denotes our proposed network without the attention mechanism; the best dice score and sensitivity are obtained by Ours + FTL.

4.2 Qualitative analysis

In order to evaluate the effectiveness of our model, we randomly select several examples from the COVID-19 CT segmentation dataset and visualize the results in Fig. 6. From Fig. 6, we can observe that the baseline trained with DL gives a rough segmentation result but fails to segment many small lesion regions. Applying the focal Tversky loss produces a clearly better segmentation, and the attention mechanism helps to further refine the result. The proposed network trained with FTL achieves the result closest to the ground truth. These results demonstrate that leveraging the attention mechanism and the FTL generally enhances the COVID-19 segmentation performance.

Figure 6: Segmentation results of some examples from the COVID-19 CT dataset. The first three examples contain many COVID-19 lesion regions; the last two contain few. (a) CT image, (b) baseline trained with dice loss, (c) baseline trained with focal Tversky loss, (d) proposed network trained with focal Tversky loss, (e) ground truth. The red arrows emphasize the improvement from using the focal Tversky loss (from (b) to (c)); the green arrows emphasize the improvement from applying the attention mechanism (from (c) to (d)).

5 Conclusion

In this paper, we have presented a U-Net based segmentation network using an attention mechanism. Most current segmentation networks are trained with the dice loss, which penalizes false negative and false positive voxels equally and thus yields high specificity but low sensitivity. To this end, we applied the focal Tversky loss to train the model and improve the small ROI segmentation performance. Moreover, we improved the baseline by incorporating an attention mechanism, including a spatial attention and a channel attention in each layer, to capture rich contextual relationships for better feature representations. We evaluated our proposed network on the COVID-19 CT segmentation dataset, and the experimental results demonstrate the superior performance of our method. However, the study is limited by the small dataset; in the future we would like to use a larger training dataset to refine our model and achieve more competitive results.

References

  • [1] N. Abraham and N. M. Khan (2019) A novel focal tversky loss function with improved attention u-net for lesion segmentation. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pp. 683–687. Cited by: §2.5.
  • [2] T. D. Bui, J. Lee, and J. Shin (2019) Incorporated region detection and classification using deep convolutional networks for bone age assessment. Artificial intelligence in medicine 97, pp. 1–8. Cited by: §1.
  • [3] J. Chen, L. Wu, J. Zhang, L. Zhang, D. Gong, Y. Zhao, S. Hu, Y. Wang, X. Hu, B. Zheng, et al. (2020) Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography: a prospective study. medRxiv. Cited by: §1.
  • [4] O. Gozes, M. Frid-Adar, H. Greenspan, P. D. Browning, H. Zhang, W. Ji, A. Bernheim, and E. Siegel (2020) Rapid ai development cycle for the coronavirus (covid-19) pandemic: initial results for automated detection & patient monitoring using deep learning ct image analysis. arXiv preprint arXiv:2003.05037. Cited by: §1.
  • [5] J. Hu, L. Shen, and G. Sun (2018) Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7132–7141. Cited by: §1.
  • [6] Q. Hu, L. F. d. F. Souza, G. B. Holanda, S. S. Alves, F. H. d. S. Silva, T. Han, and P. P. Rebouças Filho (2020) An effective approach for ct lung segmentation using mask region-based convolutional neural networks. Artificial Intelligence in Medicine, pp. 101792. Cited by: §1.
  • [7] C. Huang, Y. Wang, X. Li, L. Ren, J. Zhao, Y. Hu, L. Zhang, G. Fan, J. Xu, X. Gu, et al. (2020) Clinical features of patients infected with 2019 novel coronavirus in wuhan, china. The Lancet 395 (10223), pp. 497–506. Cited by: §1.
  • [8] F. Isensee, P. Kickingereder, W. Wick, M. Bendszus, and K. H. Maier-Hein (2017) Brain tumor segmentation and radiomics survival prediction: contribution to the brats 2017 challenge. In International MICCAI Brainlesion Workshop, pp. 287–297. Cited by: §2.2.
  • [9] S. Jin, B. Wang, H. Xu, C. Luo, L. Wei, W. Zhao, X. Hou, W. Ma, Z. Xu, Z. Zheng, et al. (2020) AI-assisted ct imaging analysis for covid-19 screening: building and deploying a medical ai system in four weeks. medRxiv. Cited by: §1.
  • [10] J. Lei, J. Li, X. Li, and X. Qi (2020) CT imaging of the 2019 novel coronavirus (2019-ncov) pneumonia. Radiology 295 (1), pp. 18–18. Cited by: §1.
  • [11] H. Li, S. Liu, X. Yu, S. Tang, and C. Tang (2020) Coronavirus disease 2019 (covid-19): current status and future perspective. International Journal of Antimicrobial Agents, pp. 105951. Cited by: §1.
  • [12] L. Li, L. Qin, Z. Xu, Y. Yin, X. Wang, B. Kong, J. Bai, Y. Lu, Z. Fang, Q. Song, et al. (2020) Artificial intelligence distinguishes covid-19 from community acquired pneumonia on chest ct. Radiology, pp. 200905. Cited by: §1, §1.
  • [13] T. Liang et al. (2020) Handbook of covid-19 prevention and treatment. Zhejiang: Zhejiang University School of Medicine. Cited by: §1.
  • [14] M. Ng, E. Y. Lee, J. Yang, F. Yang, X. Li, H. Wang, M. M. Lui, C. S. Lo, B. Leung, P. Khong, et al. (2020) Imaging profile of the covid-19 infection: radiologic findings and literature review. Radiology: Cardiothoracic Imaging 2 (1), pp. e200034. Cited by: §1.
  • [15] W. H. Organization et al. (2020) WHO director-general’s opening remarks at the media briefing on covid-19-11 march 2020. Geneva, Switzerland. Cited by: §1.
  • [16] F. Pan, T. Ye, P. Sun, S. Gui, B. Liang, L. Li, D. Zheng, J. Wang, R. L. Hesketh, L. Yang, et al. (2020) Time course of lung changes on chest ct during recovery from 2019 novel coronavirus (covid-19) pneumonia. Radiology, pp. 200370. Cited by: §1.
  • [17] G. Piantadosi, M. Sansone, R. Fusco, and C. Sansone (2020) Multi-planar 3d breast segmentation in mri via deep convolutional neural networks. Artificial Intelligence in Medicine 103, pp. 101781. Cited by: §1.
  • [18] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §1, §2.1.
  • [19] A. G. Roy, N. Navab, and C. Wachinger (2018) Concurrent spatial and channel ‘squeeze & excitation’in fully convolutional networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 421–429. Cited by: §1.
  • [20] B. Savelli, A. Bria, M. Molinara, C. Marrocco, and F. Tortorella (2020) A multi-context cnn ensemble for small lesion detection. Artificial Intelligence in Medicine 103, pp. 101749. Cited by: §1.
  • [21] F. Shan, Y. Gao, J. Wang, W. Shi, N. Shi, M. Han, Z. Xue, D. Shen, and Y. Shi (2020) Lung infection quantification of covid-19 in ct images with deep learning. arXiv preprint arXiv:2003.04655. Cited by: §1.
  • [22] F. Shi, J. Wang, J. Shi, Z. Wu, Q. Wang, Z. Tang, K. He, Y. Shi, and D. Shen (2020) Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19. arXiv preprint arXiv:2004.02731. Cited by: §1.
  • [23] F. Shi, L. Xia, F. Shan, D. Wu, Y. Wei, H. Yuan, H. Jiang, Y. Gao, H. Sui, and D. Shen (2020) Large-scale screening of covid-19 from community acquired pneumonia using infection size-aware classification. arXiv preprint arXiv:2003.09860. Cited by: §1.
  • [24] A. Tversky (1977) Features of similarity.. Psychological review 84 (4), pp. 327. Cited by: §2.5.
  • [25] J. T. Wu, K. Leung, and G. M. Leung (2020) Nowcasting and forecasting the potential domestic and international spread of the 2019-ncov outbreak originating in wuhan, china: a modelling study. The Lancet 395 (10225), pp. 689–697. Cited by: §1.
  • [26] Z. Ye, Y. Zhang, Y. Wang, Z. Huang, and B. Song (2020) Chest ct manifestations of new coronavirus disease 2019 (covid-19): a pictorial review. European Radiology, pp. 1–9. Cited by: §1.
  • [27] F. Yu and V. Koltun (2015) Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122. Cited by: §2.3.
  • [28] C. Zheng, X. Deng, Q. Fu, Q. Zhou, J. Feng, H. Ma, W. Liu, and X. Wang (2020) Deep learning-based detection for covid-19 from chest ct using weak label. medRxiv. Cited by: §1.
  • [29] T. Zhou, S. Ruan, and S. Canu (2020) A multi-modal fusion network based on attention mechanism for brain tumor segmentation. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI 2020), Cited by: §1.
  • [30] Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang (2018) Unet++: a nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 3–11. Cited by: §1.