An artificial intelligence system for predicting the deterioration of COVID-19 patients in the emergency department
During the COVID-19 pandemic, rapid and accurate triage of patients at the emergency department is critical to inform decision-making. We propose a data-driven approach for automatic prediction of deterioration risk using a deep neural network that learns from chest X-ray images and a gradient boosting model that learns from routine clinical variables. Our AI prognosis system, trained using data from 3,661 patients, achieves an AUC of 0.786 (95% CI: 0.742-0.827) when predicting deterioration within 96 hours. The deep neural network extracts informative areas of chest X-ray images to assist clinicians in interpreting the predictions, and performs comparably to two radiologists in a reader study. In order to verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at NYU Langone Health during the first wave of the pandemic, which produced accurate predictions in real time. In summary, our findings demonstrate the potential of the proposed system for assisting front-line physicians in the triage of COVID-19 patients.
In recent months, there has been a surge in patients presenting to the emergency department (ED) with respiratory illnesses associated with SARS CoV-2 infection (COVID-19) baugh2020creating ; debnath2020machine . Evaluating the risk of deterioration of these patients to perform triage is crucial for clinical decision-making and resource allocation whiteside2020redesigning . While ED triage is difficult under normal circumstances dorsett2020point ; mckenna2019emergency , during a pandemic, strained hospital resources increase the challenge warner2020stop ; debnath2020machine . This is compounded by our incomplete understanding of COVID-19. Data-driven risk evaluation based on artificial intelligence (AI) could, therefore, play an important role in streamlining ED triage.
As the primary complication of COVID-19 is pulmonary disease, such as pneumonia cozzi2020chest , chest X-ray imaging is a first-line triage tool for COVID-19 patients. Although other imaging modalities, such as computed tomography (CT), provide higher resolution, chest X-ray images are less costly, deliver a lower radiation dose, and are easier to obtain without incurring the risk of contaminating imaging equipment and disrupting radiologic services american2020acr . In addition, abnormalities in the chest X-ray images of COVID-19 patients have been found to mirror abnormalities in CT scans wong2020frequency . Consequently, chest X-ray imaging is considered a key tool in assessing COVID-19 patients rubin2020role . Unfortunately, although knowledge of the disease is rapidly evolving, understanding of the correlation between pulmonary parenchymal patterns visible in chest X-ray images and clinical deterioration is limited. This motivates the use of machine learning approaches for risk stratification based on chest X-ray imaging, which may be able to learn such correlations automatically from data.
The majority of related previous works using imaging data of COVID-19 patients concentrate more on diagnosis than prognosis kundu2020might ; khan2020coronet ; ucar2020covidiagnosis ; li2020artificial ; ozturk2020automated ; wang2020fully ; zhang2020clinically ; singh2020classification . Prognostic models have a number of potential real-life applications, such as: consistently defining and triaging sick patients, alerting bed management teams on expected demands, providing situational awareness across teams of individual patients, and more general resource allocation kundu2020might . Prior methodology for the prognosis of COVID-19 patients via machine learning mainly uses routinely collected clinical variables wynants2020prediction ; debnath2020machine such as vital signs and laboratory tests, which have long been established as strong predictors of deterioration news ; shamout2019deep . Some studies have proposed scoring systems for chest X-ray images to assess the severity and progression of lung involvement using deep learning li2020automated , or more commonly, through manual clinical evaluation borghesi2020covid ; toussie2020clinical ; cozzi2020chest . In general, the role of deep learning for the prognosis of COVID-19 patients using chest X-ray imaging has not yet been fully established.
In this work, we present an AI system that performs an automatic evaluation of deterioration risk, based on chest X-ray imaging, combined with other routinely collected non-imaging clinical variables. The goal is to provide support for critical clinical decision-making involving patients arriving at the ED in need of immediate care debnath2020machine ; fernandes2020clinical . We designed our system to satisfy a clinical need of frontline physicians. We were able to build it due to the availability of a large-scale chest X-ray image dataset. The system is based on chest X-ray imaging, which is already being employed as a first-line triage tool in hospitals cozzi2020chest , while also incorporating other routinely collected non-imaging clinical variables that are known to be strong predictors of deterioration.
Our system is able to accurately predict the deterioration risk on a test set of new patients. It achieves an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI: 0.742-0.827), and an area under the precision recall curve (PR AUC) of 0.517 (95% CI: 0.434, 0.605) for prediction of deterioration within 96 hours. Additionally, its estimated probability of the temporal risk evolution discriminates effectively between patients, and is well-calibrated. The imaging-based model achieves a comparable AUC to two experienced chest radiologists in a reader study, highlighting the potential of our data-driven approach. In order to verify our system’s performance in a real clinical setting, we silently deployed a preliminary version of it at NYU Langone Health during the first wave of the pandemic, demonstrating that it can produce accurate predictions in real-time. Overall, these results strongly suggest that our system is a viable and valuable tool to inform triage of COVID-19 patients.
Our AI system was developed and evaluated using a dataset collected at NYU Langone Health between March 3, 2020 and June 28, 2020. This study was approved by the Institutional Review Board (ID# i20-00858).
The dataset consists of chest X-ray images collected from patients who tested positive for COVID-19 using the polymerase chain reaction (PCR) test, along with the clinical variables recorded closest to the time of image acquisition (e.g. vital signs, laboratory test results, and patient characteristics). The training set, consisting of 5,617 chest X-ray images, was used for model development and hyperparameter tuning, while the test set, consisting of 832 images, was used to report the final results. The training and the test sets were disjoint, with no patient overlap. Table 1 summarizes the overall demographics and characteristics of the patient cohort in the training and test sets. Supplementary Table 1 summarizes the associated clinical variables included in the dataset.
We define deterioration, the target to be predicted by our models, as the occurrence of one of three adverse events: intubation, admission to the intensive care unit (ICU), and in-hospital mortality. If multiple adverse events occurred, we only consider the time of the first event. Figure 2.a shows examples of chest X-ray images collected from different patients. Although the patient in example 5 had less severe parenchymal findings than patients in examples 3 and 4, the patient was intubated within 24 hours compared to 48 and 96 hours in examples 3 and 4. This highlights the difficulty of assessing the risk of deterioration using only chest X-ray images, since the extent of visible parenchymal disease is not fully predictive of the time of deterioration.
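The composite-outcome definition above (the first of intubation, ICU admission, or in-hospital mortality, binarized per prediction window) can be sketched as follows; the function name and input format are our own illustrative choices:

```python
import numpy as np

def deterioration_labels(event_times_h, windows_h=(24, 48, 72, 96)):
    """Binary deterioration labels for one exam.

    event_times_h: hours from exam time to each recorded adverse event
    (intubation, ICU admission, in-hospital mortality); empty if none
    occurred. Only the first event counts, per the paper's definition.
    """
    first = min(event_times_h) if len(event_times_h) else np.inf
    return {w: int(first <= w) for w in windows_h}
```

For instance, a patient intubated at 30 hours and deceased at 70 hours is labeled negative for the 24-hour window and positive for the 48-, 72-, and 96-hour windows.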
| Characteristic | Training set | Test set |
| --- | --- | --- |
| Females, n (%) | 1,206 (41.0) | 305 (42.5) |
| Age (years), mean (SD) | 62.9 (17.2) | 64.9 (17.2) |
| BMI (kg/m²), mean (SD) | 29.4 (7.0) | 29.5 (8.6) |
| Adverse events, n | 1,311 | 594 |
| ICU admission, n | 387 | 113 |
| Composite outcome, n | 730 | 225 |
| Chest X-ray exams, n | 5,224 | 770 |
| Composite outcome within 24 hours, n (%) | 349 (6.7%) | 74 (9.6%) |
| Composite outcome within 48 hours, n (%) | 553 (10.6%) | 101 (13.1%) |
| Composite outcome within 72 hours, n (%) | 735 (14.1%) | 130 (16.9%) |
| Composite outcome within 96 hours, n (%) | 876 (16.8%) | 156 (20.3%) |
| Total number of images, n | 5,617 | 832 |
Table 2 summarizes the performance of all the models in terms of the AUC and PR AUC for the prediction of deterioration within 24, 48, 72, and 96 hours from the time of the chest X-ray exam. The receiver operating characteristic curves and precision-recall curves can be found in Supplementary Figure 4. Our ensemble model consisting of COVID-GMIC and COVID-GBM achieves the best AUC performance across all time windows compared to COVID-GMIC and COVID-GBM individually. This highlights the complementary role of chest X-ray images and routine clinical variables in predicting deterioration. The weighting of the predictions of COVID-GMIC and COVID-GBM was optimized on the validation set, as shown in Supplementary Figure 2.b. Similarly, the ensemble of COVID-GMIC and COVID-GBM outperforms all models across all time windows in terms of the PR AUC, except for the 96 hours window.
**Test set (n=832)**

| Model | AUC, 24 h | AUC, 48 h | AUC, 72 h | AUC, 96 h | PR AUC, 24 h | PR AUC, 48 h | PR AUC, 72 h | PR AUC, 96 h |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| COVID-GMIC | (0.692, 0.796) | (0.683, 0.788) | (0.701, 0.797) | (0.727, 0.813) | (0.164, 0.321) | (0.254, 0.421) | (0.337, 0.499) | (0.446, 0.613) |
| COVID-GBM | (0.627, 0.754) | (0.661, 0.766) | (0.661, 0.766) | (0.691, 0.781) | (0.140, 0.281) | (0.225, 0.395) | (0.296, 0.465) | (0.363, 0.532) |
| COVID-GMIC + COVID-GBM | (0.713, 0.818) | (0.700, 0.798) | (0.720, 0.814) | (0.742, 0.827) | (0.187, 0.336) | (0.254, 0.427) | (0.351, 0.533) | (0.434, 0.605) |

**Reader study dataset (n=200)**

| Model/Reader | AUC, 24 h | AUC, 48 h | AUC, 72 h | AUC, 96 h | PR AUC, 24 h | PR AUC, 48 h | PR AUC, 72 h | PR AUC, 96 h |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Radiologist A | (0.521, 0.707) | (0.559, 0.719) | (0.612, 0.764) | (0.666, 0.806) | (0.251, 0.475) | (0.381, 0.613) | (0.535, 0.744) | (0.650, 0.827) |
| Radiologist B | (0.544, 0.727) | (0.556, 0.720) | (0.578, 0.728) | (0.640, 0.777) | (0.268, 0.501) | (0.360, 0.585) | (0.479, 0.688) | (0.603, 0.792) |
| Radiologist A + Radiologist B | 0.642 (0.555, 0.729) | 0.663 (0.580, 0.737) | 0.692 (0.618, 0.763) | 0.741 (0.673, 0.804) | 0.403 (0.286, 0.534) | 0.499 (0.385, 0.618) | 0.609 (0.507, 0.726) | 0.740 (0.649, 0.830) |
| COVID-GMIC | (0.550, 0.730) | (0.621, 0.775) | (0.681, 0.817) | (0.746, 0.866) | (0.282, 0.527) | (0.435, 0.671) | (0.572, 0.788) | (0.698, 0.879) |
| COVID-GBM | (0.624, 0.776) | (0.644, 0.790) | (0.679, 0.816) | (0.724, 0.847) | (0.304, 0.563) | (0.434, 0.680) | (0.566, 0.778) | (0.724, 0.870) |
| COVID-GMIC + COVID-GBM | (0.617, 0.779) | (0.629, 0.771) | (0.705, 0.837) | (0.753, 0.875) | (0.305, 0.543) | (0.399, 0.636) | (0.604, 0.811) | (0.718, 0.881) |
Performance of the outcome classification task on the held-out test set, and on the subset of the test set used in the reader study. We include 95% confidence intervals estimated by 1,000 iterations of the bootstrap method efron1994introduction . The optimal weights assigned to the COVID-GMIC prediction in the COVID-GMIC and COVID-GBM ensemble were derived by optimizing the AUC on the validation set, as described in Supplementary Figure 2.b. The ensemble of COVID-GMIC and COVID-GBM, denoted as 'COVID-GMIC + COVID-GBM', achieves the best performance across all time windows in terms of the AUC and PR AUC, except for the PR AUC in the 96 hours task. In the reader study, our main finding is that COVID-GMIC outperforms radiologists A & B, who have 3 and 17 years of experience respectively, across time windows longer than 24 hours. Note that the radiologists did not have access to clinical variables, and as such their performance is not directly comparable to the COVID-GBM model; we include it only for reference. The area under the precision-recall curve is sensitive to class distribution, which explains the large differences between the scores on the test set and the reader study subset.
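The 1,000-iteration bootstrap used for the confidence intervals above can be sketched as follows; the rank-based AUC helper and the percentile-interval choice are our own illustrative implementation, not the paper's code:

```python
import numpy as np

def auc(y_true, y_score):
    """AUC via the rank (Mann-Whitney U) formulation; assumes no
    cross-class ties in y_score."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI, resampling exams with replacement."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if 0 < y_true[idx].sum() < len(idx):  # need both classes present
            stats.append(auc(y_true[idx], y_score[idx]))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

The same resampling loop applies to the PR AUC by swapping in an average-precision metric.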
To illustrate the interpretability of COVID-GMIC, we show in Figure 3 the saliency maps for all time windows (24, 48, 72, and 96 hours) computed for four examples from the test set. Across all four examples, the saliency maps highlight regions that contain visual patterns such as airspace opacities and consolidation, which are correlated with clinical deterioration li2020automated ; toussie2020clinical . These saliency maps are utilized to guide the extraction of six region-of-interest (ROI) patches cropped from the entire image, each of which is assigned a score indicating its relevance to the prediction task. We also note that in the last example, the saliency maps highlight the right mid-to-lower paramediastinal region and the left mid-lung periphery, while missing the dense consolidation in the periphery of the right upper lobe. This suggests that COVID-GMIC emphasizes only the most informative regions in the image, while human experts can provide a more holistic interpretation covering the entire image. It might, therefore, be useful to enhance GMIC through a classifier-agnostic mechanism zolna2020classifier , which finds all the useful evidence in the image, instead of solely the most discriminative part. We leave this for future work.
We compared the performance of COVID-GMIC with two chest radiologists from NYU Langone Health (with 3 and 17 years of experience) in a reader study with a sample of 200 frontal chest X-ray exams from the test set. We used stratified sampling to improve the representation of patients with a negative outcome in the reader study dataset. We describe the design of the reader study in more detail in the Methods section.
As shown in Table 2, our main finding is that COVID-GMIC achieves a comparable performance to radiologists across all time windows in terms of AUC and PR AUC, and outperforms radiologists for 48, 72, and 96 hours. For example, COVID-GMIC achieves an AUC of 0.808 (95% CI: 0.746-0.866), compared to an average AUC of 0.741 for the two radiologists, in the 96 hours prediction task. We hypothesize that COVID-GMIC outperforms radiologists on this task due to the currently limited clinical understanding of which pulmonary parenchymal patterns, rather than the severity of lung involvement, predict clinical deterioration toussie2020clinical . Supplementary Figure 5 shows AUC and PR AUC curves across all time windows.
We use a modified version of COVID-GMIC, referred to hereafter as COVID-GMIC-DRC, to generate discretized deterioration risk curves (DRCs), which predict the evolution of the deterioration risk based on chest X-ray images. Figure 4.a shows the DRCs for all the patients in the test set. The DRC represents the probability that the first adverse event occurs before time t, where t is equal to 3, 12, 24, 48, 72, 96, 144, and 192 hours. The mean DRC of patients who deteriorate (red bold line) is significantly higher than the mean DRC of patients who are discharged without experiencing any adverse events (blue bold line). We evaluate the performance of the model using the concordance index, which is computed on patients in the test set who experienced adverse events. For a fixed time t, the index equals the fraction of pairs of patients in the test data for which the patient with the higher DRC value at t experiences an adverse event earlier. For t equal to 96 hours, the concordance index is 0.713 (95% CI: 0.682-0.747), which demonstrates that COVID-GMIC-DRC can effectively discriminate between patients. Other values of t yield similar results, as reported in Supplementary Table 5.
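The pairwise concordance index described above can be sketched as follows; per the paper's evaluation, the caller passes only patients with observed adverse events, and the quadratic-time loop is an illustrative simplification:

```python
import numpy as np

def concordance_index(event_times, risk_scores):
    """Fraction of comparable patient pairs for which the patient with the
    higher risk score (e.g. DRC value at a fixed time t) experiences an
    adverse event earlier. Pairs tied in time or score are skipped."""
    t = np.asarray(event_times, float)
    r = np.asarray(risk_scores, float)
    concordant = comparable = 0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            if t[i] == t[j] or r[i] == r[j]:
                continue
            comparable += 1
            # the earlier event should carry the higher risk score
            concordant += (t[i] < t[j]) == (r[i] > r[j])
    return concordant / comparable
```

A value of 1.0 means risk ordering perfectly matches event ordering; 0.5 corresponds to random ranking.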
Figure 4.b shows a reliability plot, which evaluates the calibration of the probabilities encoded in the DRCs. The diagram compares the values of the estimated DRCs for the patients in the test set with empirical probabilities that represent the true frequency of adverse events. To compute the empirical probabilities, we divided the patients into deciles according to the value of the DRC at each time t. We then computed the fraction of patients in each decile that suffered adverse events up to time t. This fraction is plotted against the mean DRC of the patients in the decile. The diagram shows that these values are similar across the different values of t, meaning the model is well-calibrated (for comparison, perfect calibration would correspond to the diagonal black dashed line).
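The decile-based reliability computation can be sketched as follows (function and variable names are our own; the paper's plot repeats this per time t):

```python
import numpy as np

def reliability_points(pred_probs, outcomes, n_bins=10):
    """Points for a reliability plot: mean predicted risk vs. observed
    event frequency within each decile of predicted risk."""
    p = np.asarray(pred_probs, float)
    y = np.asarray(outcomes, float)
    order = np.argsort(p)              # sort patients by predicted risk
    bins = np.array_split(order, n_bins)  # ten (near-)equal-sized deciles
    return [(p[b].mean(), y[b].mean()) for b in bins if len(b)]
```

Plotting the resulting (mean prediction, event frequency) pairs against the diagonal visualizes miscalibration directly.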
Our long-term goal is to deploy our system in existing clinical workflows to assist clinicians. The clinical implementation of machine learning models is a very challenging process, both from technical and organizational standpoints baier2019challenges . To test the feasibility of deploying the AI system in the hospital, we silently deployed a preliminary version of our AI system in the hospital system and let it operate in real time beginning on May 22, 2020. The deployed version includes 15 models that are based on DenseNet-121 architectures and use only chest X-ray images. The models were developed to predict deterioration within 96 hours using a subset of our data collected prior to deployment from 3,425 patients. The models were serialized and served with TensorFlow Serving components tensorflow2015-whitepaper on an Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz; no GPUs were used. Images are preprocessed as explained in the Methods section. Our system produces predictions essentially in real time: it takes approximately two seconds to extract an image from the DICOM receiver (C-STORE), apply the image preprocessing steps, and obtain the prediction of a model as a TensorFlow tensorflow2015-whitepaper output.
Of the 375 exams collected between May 22, 2020 and June 24, 2020, 38 exams were associated with a positive 96-hour deterioration outcome. An ensemble of the deployed models, obtained by averaging their predictions, achieved an AUC of 0.717 (95% CI: 0.622-0.801) and a PR AUC of 0.289 (95% CI: 0.181-0.465). These results are comparable to those obtained on a retrospective test set used for evaluation before deployment, which are 0.748 (95% CI: 0.708-0.790) AUC and 0.365 (95% CI: 0.313-0.465) PR AUC. The decrease in accuracy may indicate changes in the patient population as the pandemic progressed.
In this work, we present an AI system that is able to predict deterioration of COVID-19 patients presenting to the ED, where deterioration is defined as the composite outcome of mortality, intubation, or ICU admission. The system aims to provide clinicians with a quantitative estimate of the risk of deterioration, and of how it is expected to evolve over time, in order to enable efficient triage and prioritization of patients at high risk of deterioration. The tool may be of particular interest for pandemic hotspots where triage at admission is critical to allocate limited resources such as hospital beds.
Recent studies have shown that chest X-ray images are useful for the diagnosis of COVID-19 wynants2020prediction ; narin2020automatic ; khan2020coronet ; ucar2020covidiagnosis ; ozturk2020automated . Our work supplements those studies by demonstrating the significance of this imaging modality for COVID-19 prognosis. Additionally, our results suggest that chest X-ray images and routinely collected clinical variables contain complementary information, and that it is best to use both to predict clinical deterioration. This builds upon existing prognostic research, which typically focuses on developing risk prediction models using non-imaging variables extracted from electronic health records shamout2020review ; wynants2020prediction . In Supplementary Table 4, we demonstrate that our models’ performance can be improved by increasing the dataset size. The current dearth of prognosis models that use both imaging and clinical variables may partly be due to the limited availability of large-scale datasets including both data types and outcome labels, which is a key strength of our study. In order to assess the clinical benefits of our approach, we conducted a reader study, and the results indicate that the proposed system can perform comparably to radiologists. This highlights the potential of data-driven tools for assisting the interpretation of X-ray images.
The proposed deep learning model, COVID-GMIC, provides visually intuitive saliency maps to help clinicians interpret the model predictions ahmad2018interpretable . Existing works on COVID-19 often use external gradient-based algorithms, such as gradCAM selvaraju2017grad , to interpret deep neural network classifiers song2019current ; brunese2020explainable ; paul2020generalizability . However, visualizations generated by gradient-based methods are sensitive to minor perturbation in input images, and could yield misleading interpretations adebayo2018sanity . In contrast, COVID-GMIC has an inherently interpretable architecture that better retains localization information of the more informative regions in the input images.
We performed prospective validation of an early version of our system through silent deployment in a hospital which uses the Epic electronic health record system. The results suggest that the implementation of our AI system in the existing clinical workflows is feasible. Our model does not incur any overhead operational costs on data collection, since chest X-ray images are routinely collected from COVID-19 patients. Additionally, the model can process the image efficiently in real-time, without requiring extensive computational resources such as GPUs. This is an important strength of our study, since very few studies have implemented and prospectively validated risk prediction models in general brajer2020prospective . To the best of our knowledge, our study is the first to do so for the prognosis of COVID-19 patients.
Our approach has some limitations that will be addressed in future work. The silent deployment was based only on the model that processes chest X-ray exams, and did not include routine clinical variables, nor any interventions. The performance of this model dropped from an AUC of 0.748 (95% CI: 0.708- 0.790) during retrospective evaluation to 0.717 (95% CI: 0.622-0.801) during prospective validation, suggesting that the model may need to be fine-tuned as additional data is collected. In addition, further validation is required to assess whether the system can improve key performance measures, such as patient outcomes, through prospective and external validation across different hospitals and electronic health records systems.
Our system currently considers two data types, which are chest X-ray images and clinical variables. Incorporating additional data from patient health records may further improve its performance. For example, the inclusion of presenting symptoms using natural language processing has been shown to improve the performance of a risk prediction model in the ED fernandes2020clinical . Although we focus on chest X-ray images because pulmonary disease is the main complication associated with COVID-19, COVID-19 patients may also suffer poor outcomes due to non-pulmonary complications, such as thromboembolic events, stroke, and pediatric inflammatory syndromes lodigiani2020venous ; oxley2020large ; viner2020kawasaki . This could explain some of the false negatives incurred by our system; therefore, incorporating other types of data that reflect non-pulmonary complications may also improve prognostic accuracy.
Our system was developed and evaluated using data collected from NYU Langone Health in New York, USA. Therefore, it is possible that our models overfit to the patient demographics and the specific configurations of the imaging acquisition devices in our dataset.
Our findings show the promise of data-driven AI systems in predicting the risk of deterioration for COVID-19 patients, and highlight the importance of designing multi-modal AI systems capable of processing different types of data. We anticipate that such tools will play an increasingly important role in supporting clinical decision-making in the future.
In this section, we first introduce our data collection and preprocessing pipeline. We then formulate the adverse event prediction task and present our multi-modal approach which utilizes both chest X-ray images and clinical variables. Next, we formally define deterioration risk curve (DRC) and introduce our X-ray image-based approach to estimate DRC. Subsequently, we summarize the technical details of model training and implementation. Lastly, we describe the design of the reader study.
We extracted a dataset of 19,957 chest X-ray exams collected from 4,772 patients who tested positive for COVID-19 between March 2, 2020, and May 13, 2020. We applied inclusion and exclusion criteria that were defined in collaboration with clinical experts, as shown in Figure 2.b. Specifically, we excluded 783 exams that were not linked to any radiology report, nine exams that were not linked to any encounter information, and 5,213 exams from patients who were still hospitalized by May 13, 2020. To ensure that our system predicts deterioration prior to its occurrence, we excluded 6,260 exams that were collected after an adverse event and 187 exams of already intubated patients. The final dataset consists of 7,502 chest X-ray exams corresponding to 4,204 unique patients. We split the dataset at the patient level such that exams from the same patient exclusively appear either in the training or test set. In the training set, we included exams that were collected both in the ED and during inpatient encounters. Since the intended clinical use of our model is in the ED, the test set only includes exams collected in the ED. This resulted in 5,224 exams (5,617 images) in the training set and 770 exams (832 images) in the test set. We included both frontal and lateral images; however, there were fewer than 50 lateral images in the entire dataset.
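The patient-level split described above, which guarantees that no patient's exams appear in both training and test sets, can be sketched as follows (the test fraction and helper names are illustrative, not the paper's values):

```python
import random
from collections import defaultdict

def patient_level_split(exam_patient_ids, test_frac=0.15, seed=0):
    """Split exam indices so that all exams of a patient land on the same
    side, preventing patient overlap between training and test sets."""
    by_patient = defaultdict(list)
    for idx, pid in enumerate(exam_patient_ids):
        by_patient[pid].append(idx)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n_test = max(1, round(test_frac * len(patients)))
    test_patients = patients[:n_test]
    train = [i for p in patients[n_test:] for i in by_patient[p]]
    test = [i for p in test_patients for i in by_patient[p]]
    return train, test
```

Splitting at the exam level instead would leak patient-specific appearance into the test set and inflate the reported metrics.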
The data used to evaluate the models during deployment consist of 375 exams from 217 patients collected between May 22, 2020 and June 24, 2020. The exams were filtered based on the same criteria described above. Among the 375 exams, 25 chest X-ray exams were collected from patients who were admitted to the ICU within 96 hours, and six exams were collected from patients who were intubated within 96 hours.
After extracting the images from DICOM files, we applied the following preprocessing procedure. We first thresholded and normalized pixel values, and then cropped the images to remove any zero-valued pixels surrounding the image. Then, we unified the dimensions of all images by cropping outside the center and rescaling. We performed data augmentation by applying random horizontal flipping, random rotation (-45 to 45 degrees), and random translation. Supplementary Figure 1 shows the distribution of the size of the images prior to data augmentation, as well as examples of images before and after preprocessing.
In addition to the chest X-ray images, we extracted clinical variables for patients including patient demographics (age, weight, and body mass index), vital signs (heart rate, systolic blood pressure, diastolic blood pressure, temperature, and respiratory rate), and 25 lab test variables listed in Supplementary Table 1. All vital signs were collected prior to the chest X-ray exam.
Our main goal is to predict clinical deterioration within four time windows of 24, 48, 72, and 96 hours. We frame this as a classification task with binary labels indicating clinical deterioration of a patient within each of the four time windows. The probability of deterioration is estimated using two types of data associated with the patient: a chest X-ray image, and routine clinical variables. We use two different machine learning models for this task: COVID-GMIC to process chest X-ray images, and COVID-GBM to process clinical variables. For each time window t, both models produce a probability estimate of clinical deterioration.
In order to combine the predictions from COVID-GMIC and COVID-GBM, we employ the technique of model ensembling dietterich2000ensemble . Specifically, for each example, we compute a multi-modal prediction as a linear combination of the two predictions:

ŷ_ensemble = λ · ŷ_COVID-GMIC + (1 − λ) · ŷ_COVID-GBM,

where λ ∈ [0, 1] is a hyperparameter. We selected the best λ by optimizing the average of the AUC and PR AUC on the validation set. In Supplementary Figure 2.b, we show the validation performance for varying λ.
The goal of the clinical variables model is to predict the risk of deterioration at the time the patient's vital signs are measured. Thus, each prediction was computed using a set of vital sign measurements, in addition to the patient's most recent laboratory test results, age, weight, and body mass index (BMI). The laboratory test results were represented as maximum and minimum statistics of all values collected within 12 hours prior to the time of the vital sign measurement. The feature sets of vital signs and laboratory tests were then processed using a gradient boosting model ke2017lightgbm , which we refer to as COVID-GBM. For the final ensemble prediction, we combined the COVID-GMIC prediction with the COVID-GBM prediction computed using the most recently collected clinical variables prior to the chest X-ray exam. In cases where no clinical variables were collected prior to the chest X-ray, we imputed the COVID-GBM prediction with the mean of the predictions assigned to the validation set.
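The 12-hour min/max laboratory summary described above can be sketched as follows; the event-tuple format and feature naming are our own illustrative choices:

```python
from datetime import datetime, timedelta

def lab_features(lab_events, exam_time, window_h=12):
    """Min/max summary of each lab analyte over the window_h hours before
    the vital-sign measurement (or exam) time.

    lab_events: iterable of (analyte_name, timestamp, value) tuples.
    Analytes with no in-window value are simply absent from the output.
    """
    lo = exam_time - timedelta(hours=window_h)
    feats = {}
    for name, t, value in lab_events:
        if lo <= t <= exam_time:
            mn, mx = feats.get(name, (value, value))
            feats[name] = (min(mn, value), max(mx, value))
    return {f"{n}_{stat}": v for n, (mn, mx) in feats.items()
            for stat, v in (("min", mn), ("max", mx))}
```

The resulting dictionary can be concatenated with vital signs, age, weight, and BMI into the feature vector fed to the gradient boosting model.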
We process chest X-ray images using a deep convolutional neural network model, which we call COVID-GMIC, based on the GMIC model shen2019globally ; shen2020interpretable . COVID-GMIC has two desirable properties. First, COVID-GMIC generates interpretable saliency maps that highlight regions in the X-ray images that correlate with clinical deterioration. Second, it possesses a local module that is able to utilize high-resolution information in a memory-efficient manner. This avoids aggressive downsampling of the input image, a technique that is commonly used on natural images he2016deep ; huang2017densely , which may distort and blur informative visual patterns in chest X-ray images such as basilar opacities and pulmonary consolidation. In Supplementary Table 2, we demonstrate that COVID-GMIC achieves comparable results to DenseNet-121, a neural network model that is not interpretable by design, but is commonly used for chest X-ray analysis rajpurkar2017chexnet ; allaouzi2019novel ; liu2019sdfn ; guan2020multi .
The architecture of COVID-GMIC is schematically depicted in Figure 1.b. COVID-GMIC processes an input X-ray image in three steps. First, the global module helps COVID-GMIC learn an overall view of the X-ray image. Within this module, COVID-GMIC utilizes a global network to extract feature maps whose resolution is chosen to be coarser than the resolution of the input image. For each time window t, we apply a convolution layer with sigmoid activation to transform the feature maps into a saliency map A_t that highlights regions on the X-ray image which correlate with clinical deterioration. (For visualization purposes, we apply nearest neighbor interpolation to upsample the saliency maps to match the resolution of the original image.) Each element of A_t represents the contribution of the corresponding spatial location to predicting the onset of adverse events within time window t. In order to train the global network, we use an aggregation function to transform the saliency maps for all time windows into classification predictions:

ŷ_global(t) = (1/k) Σ_{(i,j) ∈ R_t} A_t[i, j],

where R_t denotes the set containing the locations of the k largest values in A_t, and k is a hyperparameter.
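The top-k aggregation that turns a saliency map into a classification score can be sketched as (function name is our own):

```python
import numpy as np

def topk_aggregate(saliency_map, k):
    """Average of the k largest saliency values: the aggregation function
    that converts a saliency map into a global prediction score."""
    flat = np.sort(saliency_map.ravel())[::-1]  # descending
    return flat[:k].mean()
```

Averaging only the top-k responses lets the global module train from image-level labels while staying sensitive to small, localized findings.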
The local module enables COVID-GMIC to selectively focus on a small set of informative regions. As shown in Figure 1, COVID-GMIC utilizes the saliency maps, which contain the approximate locations of informative regions, to retrieve six image patches from the input X-ray image, which we call region-of-interest (ROI) patches. Figure 3 shows some examples of ROI patches. To utilize high-resolution information within each ROI patch, COVID-GMIC applies a local network f_l, parameterized as a ResNet-18 he2016deep , which produces a feature vector h_k from each ROI patch. The predictive value of each ROI patch might vary significantly. Therefore, we utilize the gated attention mechanism ilse2018attention to compute an attention score α_k that indicates the relevance of each ROI patch for the prediction task. To aggregate information from all ROI patches, we compute an attention-weighted representation:

z = Σ_k α_k h_k.
The representation z is then passed into a fully connected layer with sigmoid activation to generate a prediction ŷ_local^(t). We refer readers to Shen et al. shen2020interpretable for further details.
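As an illustration, the gated attention computation can be sketched in plain Python; the tiny weight matrices V and U, the vector w, and the toy patch features are invented for the example and are not the trained parameters:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_attention(patch_features, V, U, w):
    """Gated attention over ROI-patch feature vectors (Ilse et al.).

    Each patch gets an unnormalized score w . (tanh(V h) * sigmoid(U h));
    a softmax turns the scores into attention weights that sum to one.
    """
    scores = []
    for h in patch_features:
        gated = [math.tanh(dot(v_row, h)) * sigmoid(dot(u_row, h))
                 for v_row, u_row in zip(V, U)]
        scores.append(dot(w, gated))
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    # Attention-weighted combination of the patch features.
    dim = len(patch_features[0])
    z = [sum(a * h[i] for a, h in zip(alphas, patch_features))
         for i in range(dim)]
    return alphas, z

# Three 2-dimensional "patch features" and hand-set 2x2 gating weights.
features = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
V = [[0.3, -0.1], [0.2, 0.4]]
U = [[0.1, 0.2], [-0.3, 0.5]]
w = [1.0, -1.0]
alphas, z = gated_attention(features, V, U, w)
```

The softmax normalization guarantees that the attention weights form a convex combination, so z always lies in the span of the patch features.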
The fusion module combines both global and local information to compute a final prediction. We apply global max pooling to the feature maps h_g and concatenate the result with the attention-weighted representation z, combining information from both the saliency maps and the ROI patches. The concatenated representation is then fed into a fully connected layer with sigmoid activation to produce the final prediction ŷ_fusion^(t).
In our experiments, we chose the input resolution for which COVID-GMIC achieves the best validation performance (Supplementary Table 2 compares the candidate resolutions). We parameterize f_g as a ResNet-18 he2016deep , which yields the feature maps h_g. During training, we optimize the loss function:

L(y, ŷ) = Σ_t [ BCE(y^(t), ŷ_global^(t)) + BCE(y^(t), ŷ_local^(t)) + BCE(y^(t), ŷ_fusion^(t)) ] + λ Σ_t ||S^(t)||_1,
where BCE denotes binary cross-entropy and λ is a hyperparameter controlling the relative weight of an L1-norm regularization term that promotes sparsity of the saliency maps. During inference, we use ŷ_fusion^(t) as the final prediction generated by the model.
The deterioration risk curve (DRC) represents the evolution of the deterioration risk over time for each patient. Let T denote the time of the first adverse event. The DRC is defined as a discretized curve that equals the probability of the first adverse event occurring before time t_i, DRC(t_i) = P(T < t_i), where t_1 = 3, t_2 = 12, t_3 = 24, t_4 = 48, t_5 = 72, t_6 = 96, t_7 = 144, and t_8 = 192 (all times are in hours).
Following recent work on survival analysis via deep learning gensheimer2018scalable , we parameterize the DRC using a vector of conditional probabilities p = (p_1, …, p_8). The i-th entry of this vector, p_i, is equal to the conditional probability of the adverse event happening before time t_i given that no adverse event occurred before time t_{i-1}, that is:

p_i = P(T < t_i | T ≥ t_{i-1}).

(The parameters in our implementation are the complementary probabilities 1 − p_i, which is a mathematically equivalent parameterization. We also include an additional parameter to account for patients whose first adverse event occurs after 192 hours.)
Given an estimate of the conditional probabilities p_i, the DRC can be computed by applying the chain rule:

DRC(t_i) = P(T < t_i) = 1 − ∏_{j=1}^{i} (1 − p_j).
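A minimal sketch of this chain-rule computation (with an invented conditional-probability vector) is:

```python
def deterioration_risk_curve(cond_probs):
    """Cumulative risk P(T < t_i) from conditional probabilities p_i.

    Each p_i is P(T < t_i | T >= t_{i-1}); multiplying the complements
    (1 - p_j) gives the probability of surviving past t_i event-free.
    """
    drc = []
    survival = 1.0  # probability of no adverse event so far
    for p in cond_probs:
        survival *= 1.0 - p
        drc.append(1.0 - survival)
    return drc

# Toy conditional probabilities for the eight time points (3h, ..., 192h);
# the resulting curve is non-decreasing and bounded above by 1.
p = [0.01, 0.02, 0.05, 0.05, 0.04, 0.03, 0.02, 0.02]
drc = deterioration_risk_curve(p)
```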
We use the GMIC model to estimate the conditional probabilities from chest X-ray images. We refer to this model as COVID-GMIC-DRC. As explained in the previous section, the GMIC model has three different outputs corresponding to the global module, the local module, and the fusion module. When estimating conditional probabilities for the eight time intervals, we denote these outputs by p̂_global, p̂_local, and p̂_fusion. During inference, we use the output of the fusion module, p̂_fusion, as the final prediction of the conditional-probability vector p. We parameterize f_g as a ResNet-34 he2016deep . The results of an ablation study that evaluates the impact of the input resolution, and that compares COVID-GMIC-DRC to a model based on the DenseNet-121 architecture, are shown in Supplementary Tables 2 and 5. During training, we minimize the following loss function defined on a single example:

L = ℓ(p̂_global, y) + ℓ(p̂_local, y) + ℓ(p̂_fusion, y) + λ Σ_i ||S^(i)||_1,
where ℓ is the negative log-likelihood of the conditional probabilities. For a patient who had an adverse event between t_{i−1} and t_i (with t_0 = 0), this negative log-likelihood is given by

ℓ = −log p_i − Σ_{j=1}^{i−1} log(1 − p_j).
The framework can easily incorporate censored data, corresponding to patients whose information is not available after a certain point. The negative log-likelihood corresponding to a patient who has no information after t_i and no adverse events before t_i equals

ℓ = −Σ_{j=1}^{i} log(1 − p_j).
Note that each p_i is estimated only using patients that have data available up to t_{i−1}. The total negative log-likelihood of the training set is equal to the sum of the individual negative log-likelihoods corresponding to each patient, which makes it possible to perform minimization efficiently via stochastic gradient descent. In contrast, deep learning models for survival analysis based on Cox proportional hazards regression cox1984analysis require using the whole dataset to perform model updates ching2018cox ; katzman2018deepsurv ; liang2020early , which is computationally infeasible when processing large image datasets.
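The two per-patient likelihood terms can be sketched as follows; the function names and the 1-based interval index are our own notation for illustration:

```python
import math

def nll_event(cond_probs, i):
    """Negative log-likelihood for a first adverse event in (t_{i-1}, t_i].

    The event term is -log p_i, plus -log(1 - p_j) for every earlier
    interval the patient survived event-free (1-based index i).
    """
    return (-math.log(cond_probs[i - 1])
            - sum(math.log(1.0 - p) for p in cond_probs[:i - 1]))

def nll_censored(cond_probs, i):
    """Negative log-likelihood for a patient censored after t_i, no event."""
    return -sum(math.log(1.0 - p) for p in cond_probs[:i])
```

A useful sanity check: exponentiating the negated terms recovers the event probabilities, which (together with the censored tail) sum to one.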
In this section, we discuss the experimental setup used for COVID-GMIC, COVID-GMIC-DRC, and COVID-GBM. The chest X-ray image models were implemented in PyTorch NEURIPS2019_9015 and trained using NVIDIA Tesla V100 GPUs. The clinical variables models were implemented using the Python library LightGBM ke2017lightgbm .
We initialized the weights of COVID-GMIC and COVID-GMIC-DRC by pretraining them on the ChestX-ray14 dataset wang2017chestx (Supplementary Table 3 compares the performance of different initialization strategies). We used Adam kingma2014adam with a minibatch size of eight to train the models on our data. We applied data augmentation during training and testing, but not during validation. During testing, we augmented each image ten times and averaged the corresponding outputs to produce the final prediction.
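The test-time augmentation step can be sketched generically; the toy model and augmentations below are stand-ins for the trained network and the image transformations we use:

```python
def tta_predict(model, image, augmentations, n=10):
    """Average model predictions over up to n augmented copies of an image."""
    preds = [model(augment(image)) for augment in augmentations[:n]]
    return sum(preds) / len(preds)

# Stand-in "model": mean pixel intensity of a 2-D image (list of rows).
def toy_model(image):
    pixels = [v for row in image for v in row]
    return sum(pixels) / len(pixels)

# Stand-in augmentations: identity and a horizontal flip.
augs = [lambda im: im, lambda im: [row[::-1] for row in im]]
image = [[0.0, 1.0], [1.0, 0.0]]
avg_pred = tta_predict(toy_model, image, augs)
```

Averaging over augmented copies reduces the variance of the prediction when the model is not perfectly invariant to the augmentations.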
We optimized the hyperparameters using random search bergstra2012random . For COVID-GMIC, we searched for the learning rate and the regularization hyperparameter λ on a logarithmic scale, and for the pooling threshold K on a linear scale. For COVID-GMIC-DRC, based on preliminary experiments, we fixed the learning rate, and searched for the regularization hyperparameter λ on a logarithmic scale and for the pooling threshold K. For COVID-GBM, we searched for the learning rate and the number of estimators on a logarithmic scale, and for the number of leaves on a linear scale. For each hyperparameter configuration, we performed Monte Carlo cross-validation xu2001monte , in which part of the data is randomly sampled for training and the rest is used for validation. We performed cross-validation using three different random splits for each hyperparameter configuration. We then selected the top three hyperparameter configurations based on the average validation performance across the three splits. Finally, we combined the nine models from the top three hyperparameter configurations by averaging their predictions on the held-out test set to evaluate the performance. This procedure is formally described in Supplementary Algorithm 1.
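The Monte Carlo cross-validation splits and the final prediction averaging can be sketched as follows; the toy "models" stand in for networks trained on each split, and the 80/20 split fraction is illustrative:

```python
import random

def monte_carlo_splits(n_examples, n_splits, train_frac, seed=0):
    """Random train/validation partitions for Monte Carlo cross-validation."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_splits):
        order = list(range(n_examples))
        rng.shuffle(order)
        cut = int(train_frac * n_examples)
        splits.append((order[:cut], order[cut:]))
    return splits

def ensemble_prediction(models, x):
    """Average the predictions of several trained models on one input."""
    return sum(model(x) for model in models) / len(models)

splits = monte_carlo_splits(n_examples=10, n_splits=3, train_frac=0.8)
# Toy "trained models"; a real run would train one model per split and
# keep those from the top-scoring hyperparameter configurations.
models = [lambda x: 0.2, lambda x: 0.4, lambda x: 0.6]
pred = ensemble_prediction(models, x=None)
```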
The reader study consists of 200 frontal chest X-ray exams from the test set. We selected one exam per patient to increase the diversity of exams. We used stratified sampling to ensure that a sufficient number of exams in the study corresponded to the least common outcome (patients with adverse outcomes in the next 24 hours). Specifically, we oversampled exams of patients who developed an adverse event: the first 100 exams were sampled only from test-set patients who had an adverse outcome within the first 96 hours, and the other 100 exams were sampled from the remaining patients in the test set. The radiologists were asked to assign an overall probability of deterioration to each exam for every time window of evaluation.
The authors would like to thank Mario Videna, Abdul Khaja and Michael Constantino for supporting our computing environment, Philip P. Rodenbough (the NYUAD Writing Center) and Catriona C. Geras for revising the manuscript, and Boyang Yu, Jimin Tan, Kyunghyun Cho and Matthew Muckley for useful discussions. We also gratefully acknowledge the support of Nvidia Corporation with the donation of some of the GPUs used in this research. This work was supported in part by grants from the National Institutes of Health (P41EB017183, R01LM013316) and the National Science Foundation (HDR-1922658, HDR-1940097).
FES, YS, NW, AK, JP and TM designed and conducted the experiments with neural networks. FES, NW, JP, SJ and TM built the data preprocessing pipeline. FES, NR and BZ designed the clinical variables model. SJ conducted the reader study and analyzed the data. SD and MC conducted literature search. YL, DW, BZ and YA collected the data. DK, LA and WM analyzed the results from a clinical perspective. YA, CFG and KJG supervised the execution of all elements of the project. All authors provided critical feedback and helped shape the manuscript.
The authors declare no competing interests.
LightGBM: A highly efficient gradient boosting decision tree. In Adv. Neural Inf. Process. Syst., 3146–3154 (2017).
Proceedings of the IEEE International Conference on Computer Vision, 618–626 (2017).
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778 (2016).
A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 1345–1359 (2009).
The statistics of the clinical variables that were used to develop the COVID-GBM models are listed in Table 1. The raw laboratory test variables were further processed to extract the minimum and maximum statistics.
| Variable, unit | Training Set | Test Set |
| --- | --- | --- |
| Heart rate, beats per minute | 93.7 (25.0) | 93.5 (27.0) |
| Respiratory rate, breaths per minute | 22.4 (7.0) | 23.4 (7.0) |
| Temperature, °F | 99.4 (1.9) | 99.4 (1.9) |
| Systolic blood pressure, mmHg | 130.7 (30.0) | 129.8 (29.3) |
| Diastolic blood pressure, mmHg | 75.9 (17.0) | 76.0 (18.0) |
| Oxygen saturation, % | 94.1 (4.0) | 93.8 (5.0) |
| Albumin, g/dL | 3.5 (0.9) | 3.5 (0.9) |
| ALT, U/L | 49.8 (32.0) | 52.2 (36.0) |
| AST, U/L | 67.3 (37.0) | 69.7 (43.0) |
| Total bilirubin, mg/dL | 0.7 (0.4) | 0.7 (0.4) |
| Blood urea nitrogen, mg/dL | 25.9 (17.0) | 26.4 (18.0) |
| Calcium, mg/dL | 8.7 (0.8) | 8.7 (0.8) |
| Chloride, mEq/L | 101.1 (7.0) | 101.6 (7.0) |
| Creatinine, mg/dL | 1.6 (0.7) | 1.6 (0.7) |
| D-dimer, ng/mL | 1,321.6 (535.5) | 1,146.3 (618.5) |
| Eosinophils, % | 0.4 (0.0) | 0.4 (0.0) |
| Eosinophils, n | 0.03 (0.00) | 0.03 (0.00) |
| Hematocrit, % | 38.9 (7.3) | 38.9 (7.5) |
| LDH, U/L | 412.8 (207.0) | 404.0 (213.0) |
| Lymphocytes, % | 14.1 (10.0) | 14.9 (11.0) |
| Lymphocytes, n | 1.0 (0.7) | 1.0 (0.7) |
| Platelet volume, fL | 10.6 (1.4) | 10.6 (1.4) |
| Neutrophils, n | 6.4 (4.0) | 6.3 (3.8) |
| Neutrophils, % | 76.6 (14.0) | 75.9 (13.0) |
| Platelet, n | 226.1 (114.0) | 223.7 (103.0) |
| Potassium, mmol/L | 4.2 (0.8) | 4.2 (0.8) |
| Procalcitonin, ng/mL | 1.9 (0.3) | 1.9 (0.4) |
| Total protein, g/dL | 7.1 (1.1) | 7.2 (1.0) |
| Sodium, mmol/L | 136.2 (6.0) | 136.6 (7.0) |
| Troponin, ng/mL | 0.2 (0.1) | 0.2 (0.1) |
The average importance of the top ten features computed by the COVID-GBM models is shown in Supplementary Figure 2.a. The importance of a feature is measured by the number of times the feature is used to split the data across all trees in a single COVID-GBM model. Age is amongst the top ten features across all time windows, which is consistent with existing findings that mortality is more common amongst elderly COVID-19 patients than amongst younger patients liu2020clinical . The inclusion of the vital sign variables amongst the top ten features across all models is also aligned with existing research suggesting that they are strong indicators of deterioration news .
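To make the split-count definition of importance concrete, here is a toy sketch; the dictionary encoding of trees and the feature names are invented for illustration and do not reflect LightGBM's internal representation (LightGBM exposes these counts directly through its feature-importance API):

```python
from collections import Counter

def split_count_importance(trees):
    """Count how many times each feature is used to split, across all trees."""
    counts = Counter()
    for tree in trees:
        stack = [tree]
        while stack:
            node = stack.pop()
            if "feature" in node:  # internal node: record the split, recurse
                counts[node["feature"]] += 1
                stack.append(node["left"])
                stack.append(node["right"])
    return counts

# Two toy trees; leaves are empty dicts. "age" splits twice in total,
# "oxygen_saturation" once.
leaf = {}
tree1 = {"feature": "age",
         "left": {"feature": "oxygen_saturation", "left": leaf, "right": leaf},
         "right": leaf}
tree2 = {"feature": "age", "left": leaf, "right": leaf}
importance = split_count_importance([tree1, tree2])
```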
DenseNet huang2017densely is a deep neural network architecture which consists of dense blocks in which each layer is directly connected to every other layer in a feed-forward fashion. It achieves strong performance on benchmark natural image datasets, such as CIFAR-10/100 krizhevsky2009learning and ILSVRC 2012 (ImageNet) imagenet_cvpr09 , while being computationally efficient. Here we compare COVID-GMIC to a specific variant of DenseNet, DenseNet-121, which has been applied to process chest X-ray images in the literature rajpurkar2017chexnet ; allaouzi2019novel ; liu2019sdfn ; guan2020multi .
The model assumes a fixed input size. We applied DenseNet-121-based models to predict deterioration and also to compute deterioration risk curves. We initialized the models with weights pretrained on the ChestX-ray14 dataset wang2017chestx , provided at https://github.com/arnoweng/CheXNet. We used weight decay in the optimizer. To perform the hyperparameter search, we sampled the learning rate and the rate of weight decay per step uniformly on a logarithmic scale.
For adverse event prediction, the DenseNet-121 based model yielded test AUCs of 0.687 (95% CI: 0.621 - 0.749), 0.709 (95% CI: 0.653 - 0.757), 0.710 (95% CI: 0.660 - 0.763), and 0.736 (95% CI: 0.691 - 0.782), and PR AUCs of 0.216 (95% CI: 0.155 - 0.317), 0.315 (95% CI: 0.239 - 0.419), 0.373 (95% CI: 0.300 - 0.464), and 0.454 (95% CI: 0.384 - 0.542) for the 24, 48, 72, and 96 hour time windows, respectively. The deterioration risk curves produced by the DenseNet-121 based models and the corresponding reliability plot are presented in Figure 3.
Prior work on deep learning for medical images geras2017high reports that using high-resolution input images can improve performance. In this section, we analyze the impact of image resolution on our tasks of interest by comparing several input image sizes. We pretrain all models on the ChestX-ray14 dataset wang2017chestx and then fine-tune them on our dataset. Results on the test set are reported in Supplementary Table 2.
The DenseNet-121 based model achieves its best AUCs and its best concordance index at intermediate input resolutions; further increasing the resolution does not improve performance. COVID-GMIC achieves its best AUCs at the highest input image resolution, while achieving its best concordance index at a lower resolution. While a further increase in performance may be possible, we did not consider any larger image resolutions because the computational cost would become prohibitively high.
| Model | 24 hours (AUC / PR AUC) | 48 hours (AUC / PR AUC) | 72 hours (AUC / PR AUC) | 96 hours (AUC / PR AUC) | Concordance index, 96 hours | Concordance index, average |
| --- | --- | --- | --- | --- | --- | --- |
| DenseNet-121 | 0.663 (0.593, 0.724) / 0.214 (0.144, 0.309) | 0.688 (0.627, 0.743) / 0.300 (0.224, 0.402) | 0.700 (0.647, 0.751) / 0.370 (0.292, 0.461) | 0.728 (0.675, 0.771) / 0.453 (0.373, 0.542) | 0.700 (0.666, 0.733) | 0.700 (0.664, 0.728) |
| | 0.698 (0.632, 0.763) / 0.218 (0.153, 0.310) | 0.721 (0.668, 0.778) / 0.310 (0.238, 0.413) | 0.719 (0.670, 0.773) / 0.390 (0.318, 0.486) | 0.748 (0.701, 0.795) / 0.469 (0.392, 0.562) | 0.701 (0.664, 0.736) | 0.698 (0.662, 0.733) |
| | 0.682 (0.615, 0.747) / 0.208 (0.149, 0.305) | 0.710 (0.656, 0.762) / 0.318 (0.238, 0.422) | 0.709 (0.654, 0.762) / 0.383 (0.307, 0.480) | 0.732 (0.684, 0.778) / 0.441 (0.366, 0.529) | 0.705 (0.673, 0.739) | 0.701 (0.669, 0.735) |
| | 0.680 (0.618, 0.741) / 0.180 (0.130, 0.259) | 0.709 (0.655, 0.761) / 0.278 (0.212, 0.371) | 0.716 (0.666, 0.766) / 0.369 (0.296, 0.469) | 0.739 (0.691, 0.784) / 0.441 (0.366, 0.529) | 0.701 (0.668, 0.734) | 0.696 (0.663, 0.728) |
| COVID-GMIC | 0.664 (0.594, 0.735) / 0.202 (0.144, 0.303) | 0.688 (0.629, 0.746) / 0.263 (0.200, 0.354) | 0.699 (0.648, 0.747) / 0.342 (0.270, 0.431) | 0.728 (0.682, 0.772) / 0.424 (0.356, 0.505) | 0.712 (0.680, 0.745) | 0.707 (0.673, 0.739) |
| | 0.700 (0.635, 0.765) / 0.210 (0.154, 0.298) | 0.714 (0.659, 0.767) / 0.300 (0.230, 0.395) | 0.714 (0.662, 0.757) / 0.389 (0.314, 0.481) | 0.733 (0.686, 0.776) / 0.443 (0.371, 0.532) | 0.713 (0.679, 0.748) | 0.708 (0.675, 0.742) |
| | 0.695 (0.627, 0.760) / 0.200 (0.142, 0.279) | 0.716 (0.661, 0.767) / 0.302 (0.230, 0.394) | 0.717 (0.663, 0.764) / 0.374 (0.301, 0.459) | 0.738 (0.692, 0.780) / 0.439 (0.368, 0.522) | 0.686 (0.652, 0.722) | 0.685 (0.653, 0.722) |
In data-scarce applications, it is crucial to pretrain deep neural networks on a related task for which a large dataset is available, prior to fine-tuning on the task of interest pan2009survey ; yosinski2014transferable . Given the relatively small number of COVID-19 positive cases in our dataset, we investigate the impact of different weight initialization strategies on our results. Specifically, we compare three strategies: (1) random initialization following He et al. he2015delving ; (2) initialization with weights from models trained on natural images (ImageNet imagenet_cvpr09 ); and (3) initialization with weights from models trained on chest X-ray images (the ChestX-ray14 dataset wang2017chestx ). We apply the initialization procedure to all layers except the last fully connected layer, which is always initialized randomly. We then fine-tune the entire network on our COVID-19 task.
Based on the results shown in Supplementary Table 3, fine-tuning the network from weights pretrained on the ChestX-ray14 dataset is the most effective strategy for COVID-GMIC. This dataset contains over 100,000 chest X-ray images from more than 30,000 patients, including many with advanced lung disease. The images are paired with labels representing fourteen common thoracic observations: atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, and hernia. By pretraining a model to detect these conditions, we hypothesize that the model learns a representation that is useful for our downstream task of COVID-19 prognosis.
| Model | Initialization | 24 hours (AUC / PR AUC) | 48 hours (AUC / PR AUC) | 72 hours (AUC / PR AUC) | 96 hours (AUC / PR AUC) | Concordance index, 96 hours | Concordance index, average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DenseNet-121 | Random | 0.687 (0.621, 0.749) / 0.178 (0.134, 0.251) | 0.699 (0.644, 0.750) / 0.258 (0.201, 0.339) | 0.693 (0.639, 0.744) / 0.326 (0.264, 0.416) | 0.705 (0.658, 0.750) / 0.386 (0.323, 0.474) | 0.649 (0.612, 0.684) | 0.648 (0.611, 0.683) |
| | ImageNet | 0.701 (0.639, 0.761) / 0.206 (0.152, 0.295) | 0.722 (0.668, 0.776) / 0.299 (0.232, 0.401) | 0.719 (0.670, 0.772) / 0.365 (0.294, 0.466) | 0.745 (0.701, 0.789) / 0.444 (0.375, 0.539) | 0.686 (0.652, 0.720) | 0.683 (0.651, 0.715) |
| | ChestX-ray14 | 0.687 (0.619, 0.758) / 0.216 (0.155, 0.317) | 0.709 (0.653, 0.767) / 0.315 (0.239, 0.419) | 0.710 (0.660, 0.763) / 0.373 (0.300, 0.464) | 0.736 (0.691, 0.782) / 0.454 (0.384, 0.542) | 0.705 (0.673, 0.739) | 0.701 (0.669, 0.735) |
| COVID-GMIC | Random | 0.675 (0.607, 0.741) / 0.174 (0.125, 0.247) | 0.671 (0.617, 0.728) / 0.227 (0.177, 0.308) | 0.686 (0.640, 0.732) / 0.290 (0.235, 0.366) | 0.708 (0.664, 0.748) / 0.352 (0.294, 0.428) | 0.643 (0.608, 0.680) | 0.640 (0.607, 0.676) |
| | ImageNet | 0.694 (0.631, 0.753) / 0.195 (0.138, 0.280) | 0.709 (0.657, 0.761) / 0.258 (0.197, 0.351) | 0.724 (0.673, 0.769) / 0.347 (0.278, 0.431) | 0.737 (0.692, 0.778) / 0.433 (0.360, 0.512) | 0.684 (0.651, 0.716) | 0.680 (0.649, 0.711) |
| | ChestX-ray14 | 0.695 (0.626, 0.757) / 0.200 (0.142, 0.283) | 0.716 (0.659, 0.768) / 0.302 (0.228, 0.400) | 0.717 (0.665, 0.762) / 0.374 (0.302, 0.463) | 0.738 (0.690, 0.783) / 0.439 (0.368, 0.532) | 0.713 (0.679, 0.748) | 0.708 (0.675, 0.742) |
We evaluated the impact of the sample size used for training our machine learning models. Specifically, we trained our models on subsets of the training data obtained by randomly sampling 12.5%, 25%, and 50% of the exams. Table 4 presents the AUCs, PR AUCs, and concordance indices achieved on the test set. The performance of COVID-GMIC and COVID-GBM improves as the number of images and clinical variables used for training increases, which highlights the importance of using a large dataset.
| Model | Training data | 24 hours (AUC / PR AUC) | 48 hours (AUC / PR AUC) | 72 hours (AUC / PR AUC) | 96 hours (AUC / PR AUC) | Concordance index, 96 hours | Concordance index, average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DenseNet-121 | 12.5% | 0.608 (0.538, 0.686) / 0.182 (0.123, 0.270) | 0.653 (0.595, 0.712) / 0.265 (0.198, 0.353) | 0.672 (0.622, 0.727) / 0.336 (0.271, 0.424) | 0.703 (0.657, 0.752) / 0.415 (0.344, 0.500) | 0.675 (0.642, 0.710) | 0.670 (0.637, 0.704) |
| | 25% | 0.638 (0.568, 0.706) / 0.174 (0.121, 0.258) | 0.678 (0.619, 0.735) / 0.266 (0.205, 0.362) | 0.682 (0.630, 0.736) / 0.327 (0.261, 0.415) | 0.711 (0.664, 0.760) / 0.408 (0.341, 0.495) | 0.676 (0.643, 0.711) | 0.671 (0.638, 0.705) |
| | 50% | 0.672 (0.607, 0.739) / 0.214 (0.150, 0.319) | 0.699 (0.646, 0.754) / 0.303 (0.233, 0.397) | 0.698 (0.649, 0.750) / 0.351 (0.285, 0.437) | 0.725 (0.681, 0.771) / 0.433 (0.365, 0.517) | 0.694 (0.660, 0.728) | 0.691 (0.657, 0.725) |
| | 100% | 0.687 (0.621, 0.753) / 0.216 (0.154, 0.317) | 0.709 (0.654, 0.763) / 0.315 (0.239, 0.417) | 0.710 (0.658, 0.761) / 0.373 (0.298, 0.475) | 0.736 (0.689, 0.781) / 0.454 (0.377, 0.552) | 0.705 (0.673, 0.739) | 0.701 (0.669, 0.735) |
| COVID-GMIC | 12.5% | 0.640 (0.577, 0.703) / 0.145 (0.110, 0.206) | 0.672 (0.618, 0.723) / 0.231 (0.179, 0.316) | 0.677 (0.626, 0.723) / 0.318 (0.249, 0.406) | 0.695 (0.652, 0.738) / 0.384 (0.319, 0.474) | 0.673 (0.640, 0.706) | 0.668 (0.635, 0.701) |
| | 25% | 0.661 (0.598, 0.724) / 0.177 (0.125, 0.263) | 0.672 (0.618, 0.728) / 0.254 (0.196, 0.346) | 0.677 (0.631, 0.727) / 0.327 (0.266, 0.416) | 0.693 (0.648, 0.737) / 0.395 (0.329, 0.477) | 0.689 (0.655, 0.723) | 0.680 (0.646, 0.714) |
| | 50% | 0.646 (0.577, 0.716) / 0.164 (0.116, 0.238) | 0.681 (0.622, 0.738) / 0.266 (0.199, 0.360) | 0.687 (0.632, 0.739) / 0.351 (0.274, 0.445) | 0.716 (0.668, 0.763) / 0.424 (0.346, 0.516) | 0.699 (0.665, 0.734) | 0.690 (0.658, 0.723) |
| | 100% | 0.695 (0.626, 0.753) / 0.200 (0.142, 0.276) | 0.716 (0.663, 0.769) / 0.302 (0.230, 0.395) | 0.717 (0.667, 0.767) / 0.374 (0.297, 0.461) | 0.738 (0.693, 0.782) / 0.439 (0.363, 0.521) | 0.713 (0.679, 0.748) | 0.708 (0.675, 0.742) |
| COVID-GBM | 12.5% | 0.674 (0.612, 0.739) / 0.262 (0.180, 0.371) | 0.699 (0.645, 0.751) / 0.297 (0.228, 0.395) | 0.710 (0.659, 0.754) / 0.395 (0.318, 0.480) | 0.708 (0.661, 0.753) / 0.439 (0.362, 0.517) | | |
| | 25% | 0.688 (0.636, 0.748) / 0.175 (0.130, 0.248) | 0.716 (0.667, 0.766) / 0.319 (0.237, 0.411) | 0.733 (0.688, 0.777) / 0.385 (0.309, 0.466) | 0.739 (0.694, 0.783) / 0.476 (0.407, 0.550) | | |
| | 50% | 0.743 (0.690, 0.787) / 0.210 (0.157, 0.301) | 0.752 (0.702, 0.797) / 0.325 (0.252, 0.425) | 0.749 (0.703, 0.792) / 0.418 (0.341, 0.510) | 0.751 (0.706, 0.791) / 0.482 (0.407, 0.568) | | |
| | 100% | 0.747 (0.692, 0.798) / 0.230 (0.167, 0.322) | 0.739 (0.685, 0.791) / 0.325 (0.253, 0.425) | 0.750 (0.704, 0.794) / 0.408 (0.334, 0.502) | 0.770 (0.728, 0.811) / 0.523 (0.439, 0.611) | | |
We visualize the receiver operating characteristic (ROC) and precision-recall (PR) curves on the test set in Supplementary Figure 4. In panel a, we group the results by predictive model (COVID-GMIC, COVID-GBM, and the ensemble of both), while in panel b, we group them by the time window of the task (i.e., 24, 48, 72, and 96 hours). In Supplementary Figure 5, we visualize the ROC and PR curves on the test set considered in the reader study.
In Supplementary Table 5, we show the concordance index results across all time intervals for the best DenseNet-121 and COVID-GMIC-DRC models.
|Time (in hours)||3||12||24||48||72||96||144||192||Ave.|
We describe the model selection procedure used throughout the paper in Algorithm 1. For the ablation study in Table 4, we control the size of the dataset by setting the corresponding parameter to 12.5%, 25%, and 50%. Specifically, in that experiment, we randomly sampled the corresponding fraction of the training set as the “universe” that our model used for training and validation.