COVID-19 Detection in Computed Tomography Images with 2D and 3D Approaches

05/16/2021
by   Sara Atito Ali Ahmed, et al.
Sabancı University

Detecting COVID-19 in computed tomography (CT) or radiography images has been proposed as a supplement to the definitive RT-PCR test. We present a deep learning ensemble for detecting COVID-19 infection, combining slice-based (2D) and volume-based (3D) approaches. The 2D system detects the infection on each CT slice independently and combines the slice-level predictions to obtain the patient-level decision via different methods (averaging and long short-term memory networks). The 3D system takes the whole CT volume to arrive at the patient-level decision in one step. A new high resolution chest CT scan dataset, called the IST-C dataset, is also collected in this work. The proposed ensemble, called IST-CovNet, obtains 90.80% accuracy and 0.95 AUC score in detecting COVID-19 among normal controls and other types of lung pathologies, and 93.69% accuracy and 0.99 AUC score on the publicly available MosMed dataset, which consists of COVID-19 scans and normal controls only. The system is deployed at Istanbul University-Cerrahpaşa School of Medicine.


I Introduction

COVID-19 is a highly contagious disease caused by the SARS-CoV-2 virus, which spread rapidly around the world starting in early 2020 (Zhu et al. [48]). The definitive diagnosis of COVID-19 is based on real-time reverse transcriptase polymerase chain reaction (RT-PCR) positivity for the presence of coronavirus [6, 36].

Due to the long turnaround time of RT-PCR results and the prevalence of false negative results [31], the medical community has been searching for alternative or supplementary methods, including screening chest X-ray or Computed Tomography (CT) scans of patients for patterns of pneumonia caused by the COVID-19 infection.

The chest X-ray consists of a single 2-dimensional, frontal image of the thorax; consequently, detection of COVID-19 infection in a chest X-ray presents as a typical image classification problem. A chest CT scan, on the other hand, consists of a variable number of 2-dimensional axial slices, resulting in a more challenging problem: the number of slices in the volume varies (typically 200-500), and the shape and size of lung tissue within the slice vary significantly between slices.

Detecting COVID-19 in computed tomography or X-ray images has been addressed in many studies since the beginning of the pandemic [41, 33, 11, 44, 28, 42, 29]. Some of these systems only address the 2-class problem: distinguishing between normal and COVID-19 infected parenchyma, while others aim to detect COVID-19 infection among all possible conditions (normal lung parenchyma and other lung pathologies, including other types of pneumonia). The latter, which is the problem addressed in this work, is a significantly more difficult problem as non-COVID-19 pneumonia presents similar patterns to COVID-19.

Fig. 1: IST-C dataset samples: (a) COVID-19, (b) normal lung parenchyma, (c) others (including non-COVID-19 pneumonia, tumors and emphysema). The ground glass opacities in the COVID-19 images are marked with ellipses.

We propose a deep learning ensemble (IST-CovNet) for detecting COVID-19 infections in high resolution chest CT scans, combining slice-based and volume-based approaches. The slice-based approach takes individual slices as input and outputs the COVID-19 probability for each slice. To obtain the patient-level decision from the slice-level predictions, we evaluated different classifier combination techniques, including simple averaging and Long Short-Term Memory (LSTM) networks. This system is based on transfer learning using the Inception-ResNet-V2 [40] network, extended with a novel attention mechanism [7]. The volume-based approach is based on the DeCoVNet architecture of Wang et al. [42], with slight modifications to the architecture. In both approaches, we use the pretrained U-Net [35] architecture to find the lung regions in the slice images; focusing on lung areas by masking the input with the lung mask is found to be an important step to reduce overfitting with such high-dimensional data [10]. To combine the 2D and 3D systems, we used ensemble averaging, multivariate regression and Support Vector Machines (SVMs).

A new dataset (IST-C) is collected at Istanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine (IUC), consisting of 712 chest CT scans collected from 645 patients. It includes samples from COVID-19 infected patients, as well as normal lung parenchyma and Non-COVID-19 pneumonia, tumors and emphysema patients. Figure 1 shows three samples from the IST-C dataset collected in this work, including a typical COVID-19 involvement pattern termed as ground glass opacity, along with normal lung parenchyma and other conditions including non-COVID-19 pneumonia, tumors and emphysema.

The contributions of this work are the following:

  • We have collected 712 high resolution chest CT scans from 645 patients, showing normal lung parenchyma, COVID-19 infections, and other pathologies, including non-COVID-19 pneumonia, tumors and emphysema. The IST-C dataset is made public along with our results as a benchmark at http://github.com/suverim.

  • We present a deep neural network ensemble (IST-CovNet) that combines slice-based and volume-based approaches and achieves state-of-the-art accuracies on the publicly available MosMed and IST-C datasets.

  • We compare the two commonly used approaches in COVID-19 detection systems along with relevant preprocessing, segmentation and combination alternatives, as well as proposing novel attention and combination strategies for the slice-level approach.

  • The system is deployed at Istanbul University Cerrahpaşa School of Medicine, to alert attending physicians for CT scans that show COVID-19 infections.

Dataset Description Resolution # CT Scans # Slices # COVID-19 # Normal # Others
CC-19 [26] CT scans collected from 3 different hospitals and 6 different scanners High 89 34,006 68 21 0
MosMed [32] CT scans with indicated COVID-19 severity level (4 levels) High 1,110 46,411 856 254 0
BIMCV-COVID19 [8] COVID-19 and Normal only High 2,068 314,056 1,141 927 0
COVID-CT-MD [2] COVID-19, Normal, Other High 305 45,471 170 77 61
HKBU-HPML-COVID-19 [15] COVID-19, Normal, Other. Collected from different hospitals High 6,878 406,449 2,513 1,927 2,435
IST-C (this work) CT scans from one hospital High 712 200,647 336 245 131
TABLE I: Some of the publicly available COVID-19 CT scan datasets. The first three datasets contain scans of only COVID-19 infected patients and those with normal lung parenchyma. The IST-C dataset collected in this work includes non-COVID-19 pneumonia, tumors and emphysema as well.
Class # Patients # CT volumes Total # slices Avg ± std # slices/person
COVID-19 300 336 92,905 276 ± 83
"Normal" 245 245 67,712 277 ± 67
"Other" 131 131 40,030 306 ± 98
Overall 645 712 200,647 282 ± 82
TABLE II: Overview of the IST-C dataset: COVID-19 covers all people diagnosed with the infection; "Normal" is everyone with no infection whatsoever; "Other" is all other conditions, including pneumonia, tumors and emphysema.

II Related Work

Automatic COVID-19 detection research has targeted both chest X-rays [41, 33, 11] and CT scans [44, 28, 42, 29] as input, and many systems have been published in peer-reviewed venues or on pre-print sites since the beginning of the pandemic.

Comprehensive literature reviews can be found in surveys of artificial intelligence (AI) based approaches to COVID-19 [39, 20, 34]. Among these surveys, Ozsahin et al. [34] structure their survey into three groups: systems aiming to differentiate between i) COVID-19 versus normal lung parenchyma, ii) COVID-19 versus non-COVID-19 (sometimes called COVID-19 negative), consisting of both normal lung parenchyma and other types of pneumonia, and iii) COVID-19 versus other types of pneumonia. Systems included in this survey report the accuracy and/or the Area Under the Curve (AUC) score of the Receiver Operating Characteristic (ROC) curve. State-of-the-art results are above 90% accuracy and 0.95 AUC for the first problem (i), and approximately 88% accuracy and 0.90 AUC for the second problem (ii).

AI-based COVID-19 detection approaches are two-fold: 2D or slice-based approaches take a single slice image as input and obtain a score for the individual slice [33], while 3D or volume-based approaches take the whole volume (sequence of slices) as input and produce a single score for the patient [44, 28, 42, 11]. In slice-based models, the output scores of slices are often combined by averaging to obtain the patient-level scores and decisions. Among volume-based approaches, most systems use an adaptive pooling operation to combine slice-level features into a patient-level decision [28, 42], while others use a more implicit combination via Recurrent Neural Networks (RNNs) [11]. An advantage of 2D models is their direct interpretability, while 3D models are potentially more powerful as they leverage end-to-end optimization rather than a two-stage process of obtaining the patient score from slice-level scores.

In the remainder of this section, we focus on a subset of the literature due to space limitations, reporting systems that analyze CT scans (not X-rays), address the problem of separating COVID-19 samples from all non-COVID-19 samples (not just normal lung parenchyma), and appear on peer-reviewed venues.

Li et al. [28] developed a model called COVNet, based on a ResNet [13] backbone. The varying number of CT slices are input into parallel branches that share weights, and the deep features extracted from each are combined by a max-pooling operation. They report a 0.96 AUC score on the 3-class classification problem of distinguishing between normal lung parenchyma, COVID-19 and other lung pathologies.

Wang et al. [42] use the pretrained U-Net [35] architecture to segment lung regions and obtain the lung mask volume. The proposed DeCoVNet then takes the whole CT volume along with the corresponding lung mask volume as input and outputs a patient-level probability for COVID-19. The variable number of slices is handled by an adaptive maxpool operation. The authors report 0.91 accuracy and a 0.959 AUC score on the 2-class problem of separating COVID-19 positive cases from all others (non-COVID-19, including other pneumonia).

Hammoudi et al. [11] split a chest X-ray into patches and after obtaining patch-level predictions using deep convolutional networks, they use bidirectional recurrent networks to combine them to predict patient health status.

Liu et al. [29] fine-tune well-known deep neural networks for the primary task of detecting COVID-19 and the auxiliary task of identifying the different types of COVID-19 patterns (e.g. ground glass opacities, crazy paving appearance, air bronchograms) observed in the slice image. They report that the auxiliary task helps with the detection performance, which reaches 89.0% accuracy.

Harmon et al. [12] test the performance of a baseline deep neural network approach in a multi-center study. The approach consists of lung segmentation using AH-Net [30] and the classification of segmented 3D lung regions by pretrained DenseNet121 [19]. On a 1,337-patient test set they report an accuracy of 0.908 and AUC score of 0.949.

Among systems that report on the MosMed dataset, Jin et al. [22] propose a deep learning slice-based approach employing ResNet-152 [13] architecture. The developed model achieved comparable performance to experienced radiologists with an AUC score of 0.93.

He et al. [14] proposed a differentiable neural architecture search framework for the classification of 3D chest CT scans, using the Gumbel-Softmax technique [21] to improve the search efficiency. Their experimental results show that the automatically searched model outperforms three state-of-the-art 3D models, achieving an accuracy of 82.29% on the MosMed dataset.

Fig. 2: Segmentation network U-Net [35]: the input is a slice image and the output is the corresponding lung mask.

III IST-C Dataset

While there are many works on automatic detection of COVID-19 infection in X-ray or CT images, there are only a handful of publicly accessible COVID-19 datasets. The CT scan datasets we found in the literature at the time of preparing this manuscript are shown in Table I. Note that three of these datasets, CC-19 [26], MosMed [32] and BIMCV-COVID19 [8], contain only COVID-19 and normal lung parenchyma. In MosMed, the COVID-19 samples are additionally labelled with the severity of the infection at 4 levels (CT-1 to CT-4).

The lack of publicly available datasets results in researchers collecting and reporting on their own datasets, rendering comparisons between different approaches difficult. To address this issue, we have collected a new open-source dataset, called IST-C, retrospectively from patients admitted to the Radiology department of Cerrahpaşa Faculty of Medicine from March 2020 to August 2020. The collected dataset consists of 336 chest CT scans from COVID-19 infected patients, along with 245 scans showing normal lung parenchyma and 131 scans from non-COVID-19 pneumonia, tumor and emphysema patients. These last two groups will be referred to simply as "Normal" and "Other" from here on. The detailed statistics of the dataset are shown in Table II.

The collected CT scans, in DICOM format, consist of 16-bit gray scale images. Each scan is accompanied by a set of personal attributes, such as patient ID, age, gender, location and date (not used in this work).

The annotation of this dataset is at CT scan level: the CT of a patient as a whole is labelled as COVID-19, ”Normal”, or ”Other” by expert radiologists at Istanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine. In the remainder of this article, we refer to patient-level instead of CT-level decisions, even though some patients may have more than one CT.

Sample images extracted from COVID-19, ”Normal” and ”Other” classes are shown in Figure 1. The anonymized dataset is now shared publicly at http://github.com/suverim.

IV Preprocessing

Pixel values of the images in the CT dataset are in Hounsfield Units (HU), a radiodensity measurement scale that maps distilled water to 0 HU and air to -1000 HU. The highest HU values are obtained from bones and metal implants in the body, while lung regions typically fall well below 0 HU.

Similar to the literature, we process the chest CT scans by clipping values above an upper threshold and linearly normalizing the remaining range to [0, 1].
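This clip-and-rescale step can be sketched as follows. The window bounds below are illustrative assumptions (the exact thresholds are not specified above), and `normalize_hu` is a hypothetical helper name:

```python
import numpy as np

def normalize_hu(slice_hu, hu_min=-1000.0, hu_max=600.0):
    """Clip a CT slice given in Hounsfield Units to [hu_min, hu_max]
    and linearly rescale the result to [0, 1]. The window bounds are
    illustrative; the paper does not state its exact clipping values."""
    clipped = np.clip(slice_hu.astype(np.float32), hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)
```

Any HU value at or below the lower bound maps to 0, and any value at or above the upper bound maps to 1, so bones and metal implants no longer dominate the intensity range.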

The slice images are resized to match the input size of the respective deep networks (299×299 for the slice-based Inception-ResNet-V2 system). For the 3D approach, we also reduced the slice count by half, so that a whole CT volume consisting of up to around 500 slice images fits in GPU memory. This reduction is done only for the IST-C dataset, where the number of slices per CT scan is high (Table II).

V Lung Segmentation

Lung shapes vary greatly within a chest CT scan, as can be seen in Figure 1. With the aim of focusing on the lung areas, we used the pretrained U-Net network to segment lung regions from non-lung areas.

The U-Net architecture was first proposed by Ronneberger et al. [35] for biomedical image segmentation in general, and was trained specifically for lungs by Johannes et al. [24]. Since then, it has been used extensively for detecting lung regions in the diagnosis of lung health [44, 28, 42]. The U-Net network, shown in Figure 2, is named after the U-shape formed by the encoder branch, consisting of convolutional layers, and the decoder branch, consisting of deconvolution operations. The network also has skip connections in each layer, carrying the output of earlier layers to later layers.

Lung segmentation is applied to individual slices in the CT volume. The output for each slice is the corresponding binary segmentation mask, separating lung areas (including air pockets, tumors and effusions in lung regions) from background or other organs, as shown in Figure 3. The segmentation extracts left and right lungs separately, although this information is not used in our model.

Lung segmentation with U-Net is very successful, as reported in [24] and also observed in our case. Nonetheless, in order not to miss infected regions, we dilated the masks with a 10-pixel structuring disk. Sample slices from the IST-C dataset and corresponding lung masks obtained by U-Net and the dilated masks are shown in Figure 3.
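The 10-pixel disk dilation can be sketched as follows. This is a minimal NumPy implementation for illustration; `dilate_lung_mask` is an assumed helper name, and `scipy.ndimage.binary_dilation` with a disk-shaped structuring element would serve equally well:

```python
import numpy as np

def dilate_lung_mask(mask, radius=10):
    """Dilate a binary lung mask with a disk-shaped structuring element,
    so that infected regions touching the lung boundary are not cut off.
    Equivalent to morphological dilation with a disk of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x**2 + y**2) <= radius**2
    # Pad so that shifting never wraps mask content around the borders.
    padded = np.pad(mask.astype(bool), radius)
    out = np.zeros_like(padded)
    # OR together the mask shifted by every offset inside the disk.
    for dy, dx in np.argwhere(disk) - radius:
        out |= np.roll(np.roll(padded, dy, axis=0), dx, axis=1)
    return out[radius:-radius, radius:-radius]
```

The dilated mask strictly contains the original mask, growing it by the disk radius in every direction.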

Fig. 3: Sample slice images along with their segmentation masks as obtained by U-Net and dilated masks.
Fig. 4: The base network and the inserted attention-based layer. The attention layer takes the feature maps as input and estimates the attention map, which is then used to attend to the original features after a sigmoid activation.

VI Slice-based Approach

In this approach, CT slices are analyzed independently, before combining them to obtain patient-level predictions.

VI-A Base Model

To construct the base network architecture, we employed the Inception-ResNet-V2 architecture [40], one of the top-ranked architectures of the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [38]. The architecture has been used successfully in various image classification and object detection tasks [3, 27].

The Inception-ResNet-V2 network is an advanced convolutional neural network that combines the inception module with residual connections, as in ResNet [13], to increase the efficiency of the network. It consists of three main reduction modules, each followed by a stack of inception blocks; the spatial size of the feature maps is reduced after each reduction module.

Training a large deep learning network from scratch is time consuming and requires a tremendous amount of training data. Therefore, our approach is based on fine-tuning a pre-trained Inception-ResNet-V2 model, that is originally trained on the ImageNet dataset with 1.2 million hand-labeled images of 1,000 different object classes.

VI-B Attention Mechanism

To investigate the predictions of the trained base model, we applied Class Activation Mapping (CAM) [47] to some of the images from the validation set. Observing that the attention of the network is not always directed to the area of interest (lung tissue) in misclassified images, we decided to use attention maps to guide the network to the regions that are important for the problem at hand. Attention mechanisms have been successfully applied in many computer vision tasks, including fine-grained image recognition [46] and face attribute classification [5].

We add an attention map block inserted into the backbone of our base network, as shown in Figure 4. The input to the attention layer is a convolutional feature map F of size H×W×C, where H, W and C are the height, width and number of channels, respectively. The output of the attention module is the masked feature map F′ = F ⊙ σ(A), obtained via element-wise multiplication of the feature maps F with the sigmoid-attenuated (σ) attention map A.

Unlike the standard approach of learning the attention layer fully within the network, the approach used in this work is suggested to be an explainable and modular approach [7]. It makes the assumption that an attention map can be represented as a linear combination of a set of basis vectors:

A = Ā + Σₖ wₖ Bₖ

where Ā is the average attention map over the h × w image grid (h and w being the height and width of the images); the Bₖ are the columns of the basis matrix B; and the wₖ are the coefficients.

The average lung map Ā and the 12 basis vectors Bₖ are obtained by applying Principal Component Analysis to the lung masks produced by the U-Net segmentation network (explained in Section V). The 12 basis vectors, which retain approximately 75% of the variance, are shown in Figure 5.
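As a minimal sketch of this construction, the basis can be obtained by PCA over flattened lung masks and an attention map reconstructed from the mean mask plus a weighted combination of the leading eigenvectors. The synthetic random masks below are stand-ins for real U-Net outputs, and the names are illustrative:

```python
import numpy as np

# Synthetic stand-in for a stack of flattened binary lung masks.
rng = np.random.default_rng(0)
masks = (rng.random((200, 16 * 16)) > 0.5).astype(float)

mean_mask = masks.mean(axis=0)
# Principal axes via SVD of the mean-centered data matrix (PCA).
_, _, vt = np.linalg.svd(masks - mean_mask, full_matrices=False)
basis = vt[:12]  # B: 12 x (h*w) leading eigenvectors

def attention_map(w, shape=(16, 16)):
    """A = mean mask + sum_k w_k * B_k, reshaped to the image grid."""
    return (mean_mask + w @ basis).reshape(shape)
```

With all coefficients zero the reconstruction is exactly the mean mask; the network's attention coefficient block supplies the weights that deform it toward the current input.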

To obtain the attention map coefficients wₖ, an additional convolutional block is inserted into the network, taking the feature maps F as input, as shown in Figure 4. The convolutional block consists of a separable convolutional layer (a depth-wise convolution performed independently over each channel of the input, followed by a pointwise convolution), batch normalization, and a ReLU activation function. The outputs of this convolutional block (or attention coefficient block) are the weights wₖ, which form the coefficients in the linear basis-vector representation.

Fig. 5: (a) Mean mask Ā and (b) the first 12 eigenvectors.

VI-C Implementation Details

The Inception-ResNet-V2 network used as the base model in the slice-based approach was chosen for its relatively small size and good performance. The network has an RGB image input size of 299×299×3. The output layer of the model is replaced with a fully connected layer with two output units representing the given classes: COVID-19 vs. non-COVID-19 (including "Normal" and "Other" samples). All layers of the classification network are fine-tuned and optimized using the categorical cross-entropy loss function.

For the attention-based model, we added the attention layer after the first reduction block, as shown in Figure 4. The attention layer is trained in an unsupervised manner, without a separate attention loss. Even in the absence of attention map supervision, we found that the attention module learns the discriminative regions automatically.

The implementation uses the Inception-ResNet-V2 model provided in the Matlab deep learning toolbox. In addition, several commonly used data augmentation techniques are applied during training, namely rotation, translation and scaling.

Throughout this work, the initial learning rate is set to 1e-5 and the Adam algorithm is used for parameter optimization. Training takes several minutes per epoch for the IST-C and MosMed datasets on an 8GB Nvidia GeForce RTX 2080 GPU.

VI-D Combining Slice-level Predictions

The straightforward way to obtain the patient-level decision is to combine the predictions of the slice-based model by simple averaging of the slice-level predictions. This is evaluated as the base model for obtaining the patient-level score.

However, simple averaging does not take into account the information about the characteristics of COVID-19 infection, such as the fact that the patterns are often seen in the lower parts of the lungs. To learn this type of information about the slice sequence and to also handle the variable length of the slice sequence, we also used Recurrent Neural Networks (RNNs) as an alternative [37].

We used a Long Short-Term Memory (LSTM) network [18], one of the most powerful types of recurrent network. The input to the network consists of deep features corresponding to each slice in the CT volume, extracted from the last pooling layer of the slice-based CNN model with the attention module (discussed in Section VI). The LSTM learns to combine the slice-level features to obtain patient-level predictions.

The LSTM architecture consists of three layers: i) a bidirectional LSTM layer followed by a dropout layer to reduce overfitting; ii) another bidirectional LSTM layer; and iii) a fully connected layer with an output size corresponding to the number of classes (2 or 3 in our case).

It is important to note that the number of slices in the CT volumes varies substantially, which can introduce a large amount of padding into the training process of the LSTMs and consequently hurt the classification accuracy. To overcome this issue, we normalized each CT sequence to 282 slices (the mean slice count across the IST-C dataset) by either dropping or replicating slices, depending on the length of the volume. After normalization, each slice of the CT volume is passed to the trained CNN model for feature extraction, and the LSTM model is trained on the resulting sequence of feature vectors.
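The length normalization above can be sketched as uniform index resampling, which drops slices from long volumes and replicates slices in short ones. This is one reasonable realization; the exact drop/replicate scheme is not detailed, and `normalize_slice_count` is an illustrative name:

```python
import numpy as np

def normalize_slice_count(features, target=282):
    """Resample a (num_slices, feat_dim) feature sequence to a fixed
    length by uniformly dropping or replicating slices. Indices are
    spread evenly over the original sequence, so short volumes repeat
    slices and long volumes skip some."""
    n = features.shape[0]
    idx = np.round(np.linspace(0, n - 1, target)).astype(int)
    return features[idx]
```

After this step every CT volume yields a fixed-length sequence, so the LSTM sees no padding at all.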

VII Volume-based Approach

The 3D volume-based approach takes the whole CT volume as input and outputs the patient-level decision in a single processing step. It uses the lung segmentation volume obtained by U-Net (described in Section V), followed by a classification network based on DeCoVNet [42].

The segmentation network (U-Net) takes as input a single slice of the chest CT and outputs a binary mask indicating the lung region. The classification network subsequently takes the CT volume and the corresponding binary mask volume and outputs the patient-level scores.

VII-A Classification Network

The classification network used in our work is based on DeCoVNet that has been proposed by Wang et al. [42]. We have made some modifications to this network, without significantly changing its architecture.

Fig. 6: Architecture of the classification network, which is based on DeCoVNet [42].

The network consists of three consecutive blocks, (1) Stem, (2) ResBlocks, (3) Classifier, as shown in Figure 6 and detailed in Table III. The stem block consists of a convolutional layer with a receptive field of size 5×7×7 (depth, height, width), as used in the well-known AlexNet [25] and ResNet [13] networks. The convolutional layer is followed by a batchnorm layer and a pooling layer. We evaluated both a single-channel input, consisting of the slice image with the lung mask applied, and the 2-channel input, consisting of the input slice and its lung mask, as in the original network. As we expected, the 2-channel approach led to less efficient training and did not bring accuracy gains.

The second stage of the network consists of two 3D residual blocks (ResBlocks), with a maxpool operation in between to reduce the volume depth by half (64×T/2×64×64). In each block there are two kernels, 3×1×1 and 1×3×3 (depth, height, width), with a stride of 1 in each dimension and padding of 1 wherever needed. The output volume is of size 128×T/2×64×64 (see Table III). This block is adopted without any modification.

The third block, called the Progressive Classifier, starts with an adaptive maxpool operation that handles the variable number of slices and outputs 128×16 feature maps of size 32×32. It is followed by three convolution layers and pooling operations, and finally a fully connected output layer with softmax activation.

The main modification in this block is to enrich the feature representation. The original DeCoVNet had a global max pooling layer with 32 nodes in the penultimate layer. We extended the Progressive Classifier block by adding a new layer of concatenated features, obtained by a global max pool operation after each of the three 3D convolutional layers. More specifically, from a convolutional layer with an F×D×H×W output volume, the global max pooling operation outputs a vector of size F. The resulting 192-dimensional (96+48+48) feature vector is fully connected to the output layer (2 nodes with softmax activation), as shown in Figure 6. We thus increased the penultimate layer size from 32 to 192.

This feature representation was inspired by the work in [4], where the authors proposed to approximate a deep learning ensemble by replicating the output layer with connections from earlier layers and extending the loss function to include all the loss terms. The classification network architecture is given in Table III.
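The concatenated global max pooling features can be sketched as follows, using the channel widths from Table III (96, 48 and 48). This is a NumPy illustration of the pooling arithmetic only; `multiscale_features` is an illustrative name:

```python
import numpy as np

def multiscale_features(conv_outputs):
    """Global max pooling over the (D, H, W) axes of each 3D conv
    output (F x D x H x W), with the per-layer F-dimensional vectors
    concatenated into a single feature vector. With widths 96, 48 and
    48 this yields the 192-dimensional penultimate representation."""
    return np.concatenate([c.max(axis=(1, 2, 3)) for c in conv_outputs])

# Shapes taken from Table III: 96x16x32x32, 48x4x16x16, 48x4x16x16.
feats = multiscale_features([
    np.zeros((96, 16, 32, 32)),
    np.zeros((48, 4, 16, 16)),
    np.zeros((48, 4, 16, 16)),
])
```

Because each layer contributes its channel-wise maxima regardless of the spatial and depth extents, the concatenated vector has a fixed size even though the input volume depth T varies.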

Operation Output
Stem Conv3d@5×7×7 16×T×64×64
ResBlocks ResBlock@3×1×1 & 1×3×3 64×T×64×64
MaxPool3d 64×T/2×64×64
ResBlock@3×1×1 & 1×3×3 128×T/2×64×64
Progressive AdaptiveMaxPool3d 128×16×32×32
Classifier Conv3d@3×3×3 96×16×32×32
GlobalPool3d 96×1×1×1
———— 2nd Block ————
MaxPool3d 96×4×16×16
Conv3d@3×3×3 48×4×16×16
Dropout3d (p=0.5) 48×4×16×16
GlobalPool3d 48×1×1×1
———— 3rd Block ————
MaxPool3d 48×4×16×16
Conv3d@3×3×3+ReLU 48×4×16×16
GlobalPool3d 48×1×1×1
FullyConnected 2
TABLE III: The 3D classification network architecture. The residual blocks have two kernels; the three GlobalPool3d outputs (96+48+48) are concatenated to form the 192-dimensional penultimate layer.

VII-B Implementation Details

We trained the network in two steps. First, as pretraining, we trained with the 1,110-sample MosMed dataset, which contains only COVID-19 and healthy classes; this step increases the amount of data used to learn the network weights. In the second stage, we fine-tuned the network on the IST-C dataset, which also contains samples of other pneumonia conditions.

In both training stages the settings are the same: the loss function is categorical cross-entropy and the optimizer is Adam with a 1e-5 learning rate. The batch size is one due to memory constraints, as the Nvidia RTX 2080 graphics card can only process a single batch at a time. We also used the same data augmentation as DeCoVNet: scaling, rotation and translation.

All 3D systems were run for 200 epochs and validation set accuracy was observed. The optimal weights were chosen as those giving the highest validation set results and applied to the test set to get probability distribution over the COVID-19/Non-COVID-19 classes.

The 3D systems take around 8 minutes per epoch on the IST-C dataset and 4 minutes per epoch on the MosMed dataset on an 8GB Nvidia RTX 2080.

VIII Combining Multiple Systems

After training the 2D and 3D systems, we combine their outputs (patient-level predictions) to obtain the final prediction. Note that Section VI-D, in contrast, discusses the combination of slice-level predictions to obtain patient-level predictions within the 2D approach.

The 2D (slice-based) approach is realized with or without the attention mechanism and with different combination mechanisms for obtaining the patient-level decision. Similarly, the 3D (volume-based) approach is realized with a 1-channel input, where the input is masked with the lung mask, or with a 2-channel input as in the original DeCoVNet [42].

The combination methods we evaluated were averaging, multivariate linear regression and Support Vector Machines (SVMs). However, we report only the ensemble averaging results, because multivariate regression essentially assigned the same weights to the two combined systems, and the SVM did not bring improvements noticeable enough to justify the more complex combination method.
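Patient-level ensemble averaging amounts to averaging the class-probability vectors of the two systems and taking the class with the highest mean probability (a minimal sketch; `ensemble_average` is an illustrative name):

```python
import numpy as np

def ensemble_average(p_2d, p_3d):
    """Average the patient-level class probabilities of the 2D and 3D
    systems; the predicted class is the argmax of the mean vector."""
    p = (np.asarray(p_2d, dtype=float) + np.asarray(p_3d, dtype=float)) / 2.0
    return p, int(np.argmax(p))

# Example: the 2D system leans COVID-19, the 3D system leans non-COVID-19.
probs, label = ensemble_average([0.3, 0.7], [0.6, 0.4])
```

Unweighted averaging matches the finding above that regression assigned essentially equal weights to the two systems.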

Ix Experimental Evaluation

We evaluated the 2D and 3D approaches to COVID-19 detection using the IST-C and MosMed datasets that are described in Section III.

We split the IST-C database into training/validation/testing sets. For the "COVID-19" class, volumes are used for testing and the rest for training and validation. For the "Normal" and "Others" classes, and volumes are used for testing, respectively. In total, we assigned volumes for testing and for training and validation. The MosMed dataset was split randomly into train and test sets, with an 80-20% split.

We first trained both systems (2D and 3D) on the MosMed training set and tested the ensemble on the MosMed test set. For IST-C, the 2D system was trained using only the training portion of the IST-C dataset, while the 3D system was first pretrained with MosMed and then fine-tuned on the IST-C training set.

We performed extensive evaluations comparing different preprocessing, segmentation, architecture and ensemble methods. However, for the sake of clarity, we report only the most important experiments, using accuracy and AUC scores, in line with the literature.

Model                               Accuracy (%)    AUC
2D - Base Network + Averaging       80.80 ± 4.88    0.87
2D - Base + Attention + Averaging   85.60 ± 4.35    0.90
2D - Base + Attention + LSTM        87.20 ± 4.14    0.89
3D - DeCoVNet [42]                  78.00 ± 5.14    0.78
3D - two-channels                   81.45 ± 4.82    0.86
3D - one-channel                    87.20 ± 4.14    0.90
Ensemble - Averaging (IST-CovNet)   90.80 ± 3.58    0.95
TABLE IV: Test set performance for the IST-C dataset where the 2D systems were trained with only IST-C and the 3D systems were trained with MosMed and IST-C training subsets. Bold figures indicate the best accuracy in slice-based or volume-based approaches.
Model                               Accuracy (%)    AUC
Jin et al. (2D) [22]                -               0.93
He et al. (3D) [14]                 82.29           -
3D - DeCoVNet [42]                  82.43           0.82
2D - Base + Attention + Averaging   90.09 ± 3.70    0.96
2D - Base + Attention + LSTM        91.89 ± 3.38    0.95
3D - one-channel                    93.24 ± 3.11    0.96
Ensemble - Averaging (IST-CovNet)   93.69 ± 3.01    0.99
TABLE V: Test set performance for the MosMed dataset [32] that contains only COVID-19 and Healthy scans. Our approaches were trained using only the MosMed training subset. Bold figures indicate the best accuracy in slice-based or volume-based approaches.
Model                               Accuracy (%)    AUC
COVID-FACT [9]                      91.83           -
CT-CAPS [17]                        89.80           0.93
Deep-CT-Net [16]                    86.00           0.886
Ensemble Averaging (IST-CovNet)     87.86 ± 4.05    0.92
TABLE VI: Inter-operability results: Test set performance for the COVID-CT-MD dataset [2] where our ensemble system was trained using only the MosMed and IST-C datasets.

Ix-a COVID-19 vs. Non-COVID-19

The results for the 2-class problem (distinguishing COVID-19 from all Non-COVID-19 scans) are given in Tables IV and V for IST-C and MosMed respectively.

The best result obtained on the IST-C dataset is 90.80% accuracy and a 0.95 AUC score, with ensemble averaging of the best 2D and best 3D systems. The results obtained on the MosMed dataset, which contains only COVID-19 and Normal classes, are better, given the relatively simpler two-class problem. Our ensemble achieved 93.69% accuracy and a 0.99 AUC score, roughly 10 percentage points higher in accuracy than the state of the art, as indicated in Table V. More importantly, the AUC score is 0.06 higher than the best reported AUC score [23].

One of the motivations of this work was to compare the effectiveness of 2D and 3D approaches. Considering the results given in Tables IV and V, we see that the best 2D and 3D approaches have the same accuracy on the IST-C dataset (87.20%), while the 3D system is slightly better on the MosMed dataset (93.24% vs 91.89%).

As for the improvements brought by novel components of our systems, the attention layer increased the accuracy significantly (85.60% vs 80.80% on IST-C), and the use of the LSTM brings another 1-1.2 percentage points of improvement in accuracy for both datasets, compared to averaging the slice-level predictions to obtain the patient-level prediction.

For the 3D approach, we observed that the 2-channel input used in DeCoVNet achieves significantly lower accuracy (81.45% vs 87.20%), probably due to the difficulty in training the first layer weights. The supplied code for DeCoVNet [42] also achieved lower results compared to our modified version (78.00% vs 87.20% for IST-C and 82.43% vs 93.24% for MosMed).

The accuracy values in Tables IV-VI are given together with 95% confidence intervals, computed using the Wilson score interval method [43] for samples. ROC curves for the IST-C and MosMed datasets are given in Figure 7.
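The Wilson score interval used for these confidence bounds can be computed directly; a minimal sketch, where `p_hat` is the observed accuracy, `n` the test-set size (supplied by the caller), and z = 1.96 for a 95% interval:

```python
from math import sqrt

def wilson_interval(p_hat, n, z=1.96):
    """Wilson score confidence interval for a proportion (default: 95%)."""
    denom = 1.0 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1.0 - p_hat) / n + z * z / (4 * n * n))
    return center - half, center + half
```

Unlike the normal-approximation interval, the Wilson interval stays within [0, 1] even for accuracies near 100%.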

Ix-B Inter-Operability

To study the inter-operability of systems across different tomography equipment and settings, we tested the system trained on the MosMed and IST-C datasets on the COVID-CT-MD dataset [2]. The accuracy and AUC results shown in Table VI (87.86% and 0.92) are better than one of the state-of-the-art results on that dataset [17] and indicate only a small accuracy decrease compared to the IST-C results (90.80% vs 87.86%).

(a)
(b)
Fig. 7: ROC curves of the trained models on (a) IST-C dataset and (b) MosMed dataset.
Fig. 8: COVID-19 predicted probability distribution for the IST-C dataset, using the ensemble.

Ix-C Prediction Scores Distribution

The system is designed to alert the attending physicians in case of a sufficiently high COVID-19 probability. Hence, we also considered the COVID-19 prediction distribution of the ensemble, shown in Figure 8. An adjustable threshold (e.g. 0.3-0.4) can be set to alert the attending physician, at the cost of some increase in false positives.

At a 0.3 threshold, we obtain 95.0% sensitivity (true positive rate) and 80.0% specificity (1 - false positive rate) on the IST-C test set. The ROC curves of the ensemble for the IST-C and MosMed datasets are given in Figure 7.
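Computing sensitivity and specificity at an adjustable alert threshold is straightforward; a minimal sketch (names are illustrative, with labels 1 = COVID-19 and 0 = Non-COVID-19):

```python
def sensitivity_specificity(scores, labels, threshold=0.3):
    """Sensitivity and specificity when alerting at a COVID-19
    probability threshold, given per-patient scores and true labels."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the threshold over [0, 1] and plotting sensitivity against 1 - specificity traces exactly the ROC curves shown in Figure 7.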

Fig. 9: Samples of segmentation errors (a) slice image (b) corresponding lung masks. Problematic areas are indicated with red arrows and are often missed lung tissue due to infection or tumors.

Ix-D Lung Segmentation Results

Regarding lung segmentation accuracy, Hofmanninger et al. [24] report 97-98% Dice similarity scores, measuring the overlap between the U-Net-generated mask and the ground truth, on different test datasets involving multiple lung pathologies. While their test datasets also included the ground-glass opacities observed in COVID-19 cases, we evaluated the segmentation network specifically for the COVID-19 detection problem by visually checking the segmentation results of 5 slices sampled at regular intervals from each of 1,156 CT scans (all COVID-19 patients from the IST-C and MosMed datasets), for a total of 5,783 slice images. We found around 11 serious segmentation errors, corresponding to roughly 0.19%, which is in line with [24]. Samples of these images are given in Figure 9, where lung areas mistakenly treated as background are highlighted. Noting that the errors occur only in some of the slices within a CT scan, we conclude that U-Net provides a successful segmentation, suitable for COVID-19 detection.
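The Dice similarity score referenced above measures mask overlap; a minimal version for binary masks represented as flat 0/1 sequences (a sketch for illustration, not the evaluation code of [24]):

```python
def dice_score(mask_a, mask_b):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * inter / total
```

A score of 1.0 means a perfect overlap between the predicted mask and the ground truth; the 97-98% reported in [24] thus indicates near-perfect segmentation.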

Ix-E Discussion

While our 3D approach is based on DeCoVNet [42], we were able to outperform its results on both datasets thanks to the changes made to the model. In particular, using only one input channel leads to more efficient training, especially since the U-Net lung segmentation is very accurate; enriching the network architecture also contributed to higher accuracy.

Similarly, even though the 2D system is based on fine-tuning a pretrained deep network, the use of the novel attention mechanism and of LSTMs to combine slice-level features brings significant improvements over the base network and the standard approach of averaging slice predictions. We are aware of one other work that combines a deep network with LSTMs for COVID-19 prediction: Hammoudi et al. [11] use bidirectional LSTMs to predict patient health status by combining the predictions made by a deep network for image patches of an X-ray.

X Conclusion

In addition to presenting a state-of-the-art system, we provide an evaluation of different 2D and 3D approaches on two datasets and discuss the effects of the relevant preprocessing, segmentation and classifier combination steps on performance.

The collected dataset (IST-C) is made public to contribute to the literature as a challenging new dataset that consists of high resolution chest CT scans from a variety of conditions.

This work was motivated by the goal of helping combat the pandemic; the developed system (IST-CovNet) is deployed and in use at Istanbul University Cerrahpaşa School of Medicine, to flag suspected COVID-19 cases while the patient is still in the tomography room.

References

  • [1] P. Afshar, S. Heidarian, N. Enshaei, F. Naderkhani, M. J. Rafiee, A. Oikonomou, F. B. Fard, K. Samimi, K. N. Plataniotis, and A. Mohammadi (2020) (Website) External Links: 2009.14623
  • [2] P. Afshar, S. Heidarian, N. Enshaei, F. Naderkhani, M. J. Rafiee, A. Oikonomou, F. B. Fard, K. Samimi, K. N. Plataniotis, and A. Mohammadi (2020) COVID-CT-MD: COVID-19 computed tomography (CT) scan dataset applicable in machine learning and deep learning. External Links: 2009.14623 Cited by: TABLE I, §IX-B, TABLE VI.
  • [3] S. A. A. Ahmed, B. Yanikoglu, C. Zor, M. Awais, and J. Kittler (2020) Skin lesion diagnosis with imbalanced ECOC ensembles. In Int. Conf. on Machine Learning, Optimization, and Data Science. Cited by: §VI-A.
  • [4] S. A. A. Ahmed and B. Yanikoglu (2019) Within-network ensemble for face attributes classification. In Int. Conf. on Image Analysis and Processing, pp. 466–476. Cited by: §VII-A.
  • [5] S. A. Aly and B. Yanikoglu (2018) Multi-label networks for face attributes classification. In 2018 IEEE Int. Conf. on Multimedia & Expo Workshops (ICMEW), pp. 1–6. Cited by: §VI-B.
  • [6] V. M. Corman, O. Landt, M. Kaiser, R. Molenkamp, A. Meijer, D. K. Chu, T. Bleicker, S. Brünink, J. Schneider, M. L. Schmidt, et al. (2020) Detection of 2019 novel coronavirus (2019-ncov) by real-time RT-PCR. Eurosurveillance 25 (3), pp. 2000045. Cited by: §I.
  • [7] H. Dang, F. Liu, J. Stehouwer, X. Liu, and A. K. Jain (2020) On the detection of digital face manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5781–5790. Cited by: §I, §VI-B.
  • [8] M. de la Iglesia Vayá, J. M. Saborit, J. A. Montell, A. Pertusa, A. Bustos, M. Cazorla, J. Galant, X. Barber, D. Orozco-Beltrán, F. García-García, M. Caparrós, G. González, and J. M. Salinas (2020) BIMCV COVID-19+: a large annotated dataset of RX and CT images from COVID-19 patients. External Links: 2006.01174 Cited by: TABLE I, §III.
  • [9] M. Dialameh, A. Hamzeh, H. Rahmani, A. R. Radmard, and S. Dialameh (2020) Screening COVID-19 based on CT/CXR images & building a publicly available CT-scan dataset of COVID-19. External Links: 2012.14204 Cited by: TABLE VI.
  • [10] N. Gupta, A. Kaul, D. Sharma, et al. (2020) Deep learning assisted COVID-19 detection using full CT-scans. TechRxiv. Cited by: §I.
  • [11] K. Hammoudi, H. Benhabiles, M. Melkemi, F. Dornaika, I. Arganda-Carreras, D. Collard, and A. Scherpereel (2020) Deep learning on chest X-ray images to detect and evaluate pneumonia cases at the era of COVID-19. arXiv preprint arXiv:2004.03399. Cited by: §I, §II, §II, §II, §IX-E.
  • [12] S. A. Harmon, T. H. Sanford, S. Xu, E. B. Turkbey, H. Roth, Z. Xu, D. Yang, A. Myronenko, V. Anderson, A. Amalou, et al. (2020) Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets. Nature Communications 11 (1), pp. 1–7. Cited by: §II.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §II, §VI-A, §VII-A.
  • [14] X. He, S. Wang, X. Chu, S. Shi, J. Tang, X. Liu, C. Yan, J. Zhang, and G. Ding (2021) Automated model design and benchmarking of 3D deep learning models for COVID-19 detection with chest CT scans. arXiv preprint arXiv:2101.05442. Cited by: §II, TABLE V.
  • [15] X. He, S. Wang, S. Shi, X. Chu, J. Tang, X. Liu, C. Yan, J. Zhang, and G. Ding (2020) Benchmarking deep learning models and automated model design for COVID-19 detection with chest CT scans. medRxiv. External Links: Document, Link, https://www.medrxiv.org/content/early/2020/06/17/2020.06.08.20125963.full.pdf Cited by: TABLE I.
  • [16] S. Heidarian, P. Afshar, N. Enshaei, F. Naderkhani, A. Oikonomou, S. F. Atashzar, F. B. Fard, K. Samimi, K. N. Plataniotis, A. Mohammadi, and M. J. Rafiee (2020) COVID-FACT: a fully-automated capsule network-based framework for identification of COVID-19 cases from chest CT scans. External Links: 2010.16041 Cited by: TABLE VI.
  • [17] S. Heidarian, P. Afshar, A. Mohammadi, M. J. Rafiee, A. Oikonomou, K. N. Plataniotis, and F. Naderkhani (2020) CT-CAPS: feature extraction-based automated framework for covid-19 disease identification from chest CT scans using capsule networks. External Links: 2010.16043 Cited by: §IX-B, TABLE VI.
  • [18] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780. Cited by: §VI-D.
  • [19] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 4700–4708. Cited by: §II.
  • [20] Md. M. Islam, F. Karray, R. Alhajj, and J. Zeng (2020)(Website) External Links: 2008.04815 Cited by: §II.
  • [21] E. Jang, S. Gu, and B. Poole (2016) Categorical reparameterization with Gumbel-softmax. arXiv preprint arXiv:1611.01144. Cited by: §II.
  • [22] C. Jin, W. Chen, Y. Cao, Z. Xu, Z. Tan, X. Zhang, L. Deng, C. Zheng, J. Zhou, H. Shi, et al. (2020) Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nature Communications 11 (1), pp. 1–14. Cited by: §II, TABLE V.
  • [23] S. Jin, B. Wang, H. Xu, C. Luo, L. Wei, W. Zhao, X. Hou, W. Ma, Z. Xu, Z. Zheng, et al. (2020) AI-assisted CT imaging analysis for COVID-19 screening: building and deploying a medical AI system in four weeks. MedRxiv. Cited by: §IX-A.
  • [24] J. Hofmanninger, J. Pan, S. Röhrich, H. Prosch, and G. Langs (2020) Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem. European Radiology Experimental 4 (1). Cited by: §V, §V, §IX-D.
  • [25] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, pp. 1097–1105. Cited by: §VII-A.
  • [26] R. Kumar, A. A. Khan, S. Zhang, W. Wang, Y. Abuidris, W. Amin, and J. Kumar (2020) Blockchain-federated-learning and deep learning models for COVID-19 detection using CT imaging. arXiv preprint arXiv:2007.06537. Cited by: TABLE I, §III.
  • [27] W. Lee, J. Na, and G. Kim (2019) Multi-task self-supervised object detection via recycling of bounding box annotations. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 4984–4993. Cited by: §VI-A.
  • [28] L. Li, L. Qin, Z. Xu, Y. Yin, X. Wang, B. Kong, J. Bai, Y. Lu, Z. Fang, Q. Song, et al. (2020) Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology. Cited by: §I, §II, §II, §II, §V.
  • [29] B. Liu, X. Gao, M. He, L. Liu, and G. Yin (2020) A fast online COVID-19 diagnostic system with chest CT scans. In Proceedings of KDD, Cited by: §I, §II, §II.
  • [30] S. Liu, D. Xu, S. K. Zhou, O. Pauly, S. Grbic, T. Mertelmeier, J. Wicklein, A. Jerebko, W. Cai, and D. Comaniciu (2018) 3D anisotropic hybrid network: transferring convolutional features from 2D images to 3D anisotropic volumes. In Int. Conf. on Medical Image Computing and Computer-Assisted Intervention, pp. 851–858. Cited by: §II.
  • [31] Q. Long, X. Tang, Q. Shi, Q. Li, H. Deng, J. Yuan, J. Hu, W. Xu, Y. Zhang, F. Lv, et al. (2020) Clinical and immunological assessment of asymptomatic SARS-CoV-2 infections. Nature Medicine 26 (8), pp. 1200–1204. Cited by: §I.
  • [32] S. Morozov, A. Andreychenko, N. Pavlov, A. Vladzymyrskyy, N. Ledikhova, V. Gombolevskiy, I. A. Blokhin, P. Gelezhe, A. Gonchar, and V. Y. Chernina (2020) Mosmeddata: chest CT scans with COVID-19 related findings dataset. arXiv preprint arXiv:2005.06465. Cited by: TABLE I, §III, TABLE V.
  • [33] A. Narin, C. Kaya, and Z. Pamuk (2020) Automatic detection of coronavirus disease (covid-19) using X-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849. Cited by: §I, §II, §II.
  • [34] I. Ozsahin, B. Sekeroglu, M. S. Musa, M. T. Mustapha, and D. U. Ozsahin (2020) Review on diagnosis of COVID-19 from chest CT images using artificial intelligence. Computational and Mathematical Methods in Medicine 9756518. Cited by: §II.
  • [35] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In Int. Conf. on Medical Image Computing and Computer-assisted Intervention, pp. 234–241. Cited by: §I, Fig. 2, §II, §V.
  • [36] G. D. Rubin, C. J. Ryerson, L. B. Haramati, N. Sverzellati, J. P. Kanne, S. Raoof, N. W. Schluger, A. Volpi, J. Yim, I. B. Martin, et al. (2020) The role of chest imaging in patient management during the COVID-19 pandemic: a multinational consensus statement from the fleischner society. Chest 158 (1), pp. 106–116. Cited by: §I.
  • [37] D. E. Rumelhart, G. E. Hinton, and R. J. Williams (1986) Learning representations by back-propagating errors. Nature 323 (6088), pp. 533–536. Cited by: §VI-D.
  • [38] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet Large Scale Visual Recognition Challenge. Int. Journal of Computer Vision (IJCV) 115 (3), pp. 211–252. Cited by: §VI-A.
  • [39] F. Shi, J. Wang, J. Shi, Z. Wu, Q. Wang, Z. Tang, K. He, Y. Shi, and D. Shen (2020) Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19. IEEE Reviews in Biomedical Engineering (), pp. 1–1. Cited by: §II.
  • [40] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi (2017) Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Thirty-First AAAI Conf. on Artificial Intelligence. Cited by: §I, §II, §VI-A.
  • [41] L. Wang and A. Wong (2020) COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. arXiv preprint arXiv:2003.09871. Cited by: §I, §II.
  • [42] X. Wang, X. Deng, Q. Fu, Q. Zhou, J. Feng, H. Ma, W. Liu, and C. Zheng (2020) A weakly-supervised framework for COVID-19 classification and lesion localization from chest CT. IEEE Transactions on Medical Imaging 39 (8), pp. 2615–2625. Cited by: §I, §I, §II, §II, §II, §V, Fig. 6, §VII-A, §VII, §VIII, §IX-A, §IX-E, TABLE IV, TABLE V.
  • [43] E. B. Wilson (1927) Probable inference, the law of succession, and statistical inference. Journal of the American Statistical Association 22 (158), pp. 209–212. External Links: https://www.tandfonline.com/doi/pdf/10.1080/01621459.1927.10502953 Cited by: §IX-A.
  • [44] X. Xu, X. Jiang, C. Ma, P. Du, X. Li, S. Lv, L. Yu, Q. Ni, Y. Chen, J. Su, et al. (2020) A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering 6 (10), pp. 1122–1129. Cited by: §I, §II, §II, §V.
  • [45] J. Zhao, Y. Zhang, X. He, and P. Xie (2020) Covid-CT-dataset: a CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865.
  • [46] H. Zheng, J. Fu, T. Mei, and J. Luo (2017) Learning multi-attention convolutional neural network for fine-grained image recognition. In Proceedings of the IEEE Int. Conf. on computer vision, pp. 5209–5217. Cited by: §VI-B.
  • [47] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition, pp. 2921–2929. Cited by: §VI-B.
  • [48] N. Zhu, D. Zhang, W. Wang, X. Li, B. Yang, J. Song, X. Zhao, B. Huang, W. Shi, R. Lu, et al. (2020) A novel coronavirus from patients with pneumonia in China, 2019. New England Journal of Medicine. Cited by: §I.