In the United Kingdom, the Care Quality Commission recently reported that – over the preceding 12 months – a total of 23,000 chest X-rays (CXRs) were not formally reviewed by a radiologist or clinician at Queen Alexandra Hospital alone. Furthermore, three patients with lung cancer suffered significant harm because their CXRs had not been properly assessed. The Queen Alexandra Hospital is probably not the only hospital struggling to provide expert readings for every CXR. Growing populations and increasing life expectancies are expected to drive a further increase in demand for CXR readings.
In computer vision, deep learning has already shown its power for image classification with superhuman accuracy [2, 3, 4, 5]. In addition, the medical image processing field is actively exploring deep learning. However, one major problem in the medical domain is the limited availability of large datasets with reliable ground-truth annotations. Therefore, transfer-learning approaches, as proposed by Bar et al., are often considered to overcome such problems.
Two larger X-ray datasets have recently become available: the CXR dataset from Open-i and the ChestX-ray14 dataset from the National Institutes of Health (NIH) Clinical Center. Figure 1 illustrates four selected examples from ChestX-ray14. Due to its size, ChestX-ray14 – consisting of 112,120 frontal CXR images from 30,805 unique patients – has attracted considerable attention in the deep learning community. Triggered by the work of Wang et al., who applied convolutional neural networks (CNNs) from the computer vision domain, several research groups have begun to address the application of CNNs to CXR classification. Yao et al. presented a combination of a CNN and a recurrent neural network to exploit label dependencies. As a CNN backbone, they used a DenseNet model which was adapted and trained entirely on X-ray data. Li et al. presented a framework for pathology classification and localization using CNNs. More recently, Rajpurkar et al. proposed transfer learning with fine-tuning, using a DenseNet-121, which raised the AUC results on ChestX-ray14 for multi-label classification even higher.
Unfortunately, a faithful comparison of approaches remains difficult, as most reported results were obtained with differing experimental setups. These differ (among others) in the employed network architecture, loss function, and data augmentation. In addition, differing dataset splits were used, and only Li et al. reported 5-fold cross-validated results. In contrast, our experiments (Sec. 3) demonstrate that the performance of a network depends significantly on the selected split. To enable a fair comparison, Wang et al. later released an official split, for which Yao et al. and Guendel et al. reported results. Guendel et al. hold the state-of-the-art results in all fourteen classes with a location-aware DenseNet-121.
To provide better insights into the effects of distinct design decisions for deep learning, we perform a systematic evaluation using a 5-fold re-sampling scheme. We empirically analyze three major topics:
weight initialization, pre-training and transfer learning (Section 2.1)
network architectures such as ResNet-50 with large input size (Section 2.2)
non-image features such as age, gender, and view position (Section 2.3)
Prior work on ChestX-ray14 has been limited to the analysis of image data. In clinical practice, however, radiologists employ a broad range of additional features during diagnosis. To leverage the complete information of the dataset (i.e. age, gender, and view position), we propose in Section 2.3 a novel architecture integrating this information in addition to the learned image representation.
In the following, we cast pathology detection as a multi-label classification problem. All images $x_i$ are associated with a ground-truth label vector $y_i$, while we seek a classification function that minimizes a specific loss function using the training sample-label pairs $(x_i, y_i)$. We encode the label for each image as a binary vector $y \in \{0,1\}^M$. Since we encode "No Finding" as an explicit additional label next to the fourteen pathologies, we have $M = 15$ labels. After an initial investigation of weighted loss functions such as positive/negative balancing and class balancing, we noticed no significant difference and decided to employ the class-averaged binary cross entropy (BCE) as our objective:

$$\mathrm{BCE}(y, \hat{y}) = \frac{1}{M} \sum_{m=1}^{M} \left[ -y_m \log(\hat{y}_m) - (1 - y_m) \log(1 - \hat{y}_m) \right],$$

where $\hat{y}_m$ denotes the sigmoid output of the network for label $m$.
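As a concrete reference, the class-averaged BCE for a single sample can be sketched in plain Python. This is a minimal illustration of the objective, not the framework implementation used for training:

```python
import math

def class_averaged_bce(y_true, y_pred, eps=1e-7):
    """Class-averaged binary cross entropy for one multi-label sample.

    y_true: list of 0/1 ground-truth labels (length M, here M = 15).
    y_pred: list of sigmoid outputs in (0, 1), same length.
    """
    assert len(y_true) == len(y_pred)
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)
```

Averaging over the labels (rather than summing) keeps the loss magnitude independent of the number of classes.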
Prior work on the ChestX-ray14 dataset concentrates primarily on ResNet-50 and DenseNet-121 architectures. Due to its outstanding performance in the computer vision domain, we focus in our experiments on the ResNet-50 architecture. To adapt the network to the new task, we replace the last dense layer of the original architecture with a new dense layer matching the number of labels, and add a sigmoid activation function for our multi-label problem (see Table 1).
2.1 Weight Initialization and Transfer Learning
We investigate two distinct initialization strategies for the ResNet-50. First, we follow the scheme described by He et al., where the network parameters are initialized with random values and the model is thus trained from scratch. Second, we initialize the network with pre-trained weights, where knowledge is transferred from a different domain and task. Furthermore, we distinguish between off-the-shelf (OTS) and fine-tuning (FT) in the transfer-learning approach.
A major drawback in medical image processing with deep learning is the limited size of datasets compared to the computer vision domain. Hence, training a CNN from scratch is often not feasible. One solution is transfer learning. Following the notation of Pan et al., a source domain $D_S$ with task $T_S$ and a target domain $D_T$ with task $T_T$ are given, with $D_S \neq D_T$ and/or $T_S \neq T_T$. In transfer learning, the knowledge gained in $D_S$ and $T_S$ is used to help learning a prediction function $f_T(\cdot)$ in $D_T$. In the off-the-shelf approach, the pre-trained network is used as a feature extractor, and only the weights of the last (classifier) layer are adapted. In fine-tuning, one chooses to re-train one or more layers with samples from the new domain. For both approaches, we use the weights of a ResNet-50 network trained on ImageNet as a starting point. In our fine-tuning experiment, we re-trained all conv-layers, as shown in Table 1.
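The three initialization strategies differ only in which parameters receive gradient updates. A minimal sketch of this policy, using layer names that follow Table 1 (the grouping into five named stages is our simplification):

```python
# Coarse layer grouping of a ResNet-50, following the naming in Table 1.
RESNET50_LAYERS = ["conv1", "conv2_x", "conv3_x", "conv4_x", "conv5_x", "dense"]

def trainable_layers(strategy):
    """Return which layer groups receive gradient updates under each strategy."""
    if strategy == "off-the-shelf":   # pre-trained net as a fixed feature extractor
        return ["dense"]              # only the new 15-d classifier layer is trained
    if strategy == "fine-tuning":     # pre-trained init, but all layers re-trained
        return list(RESNET50_LAYERS)
    if strategy == "from-scratch":    # random init, everything is trained
        return list(RESNET50_LAYERS)
    raise ValueError("unknown strategy: %s" % strategy)
```

In a deep learning framework, the same effect is obtained by disabling gradients for all parameters outside the returned set.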
|Layer name|Output size|Original 50-layer|Off-the-shelf|Fine-tuned|
|conv1||7×7, 64-d, stride 2|same|same|
|pooling1||3×3, 64-d, max pool, stride 2|same|same|
|pooling2||7×7, 2048-d, average pool, stride 1|same|same|
|dense||1000-d, dense-layer|15-d, dense-layer|15-d, dense-layer|
|loss||1000-d, softmax|15-d, sigmoid, BCE|15-d, sigmoid, BCE|
Architecture of the original, off-the-shelf, and fine-tuned ResNet-50. In our experiments, we use the ResNet-50 architecture, and this table shows the differences between the original architecture and ours (off-the-shelf and fine-tuned ResNet-50). If there is no difference to the original network, "same" is written in the table. The violet and bold text emphasizes which parts of the network are changed for our application. All layers employ automatic padding (i.e. depending on the kernel size) to keep the spatial size the same. The conv3_0, conv4_0, and conv5_0 layers perform a down-sampling of the spatial size with a stride of 2.
2.2 Network Architectures

In addition to the original ResNet-50 architecture, we employ two variants. First, we reduce the number of input channels to one (the ResNet-50 is designed for processing RGB images from the ImageNet dataset), which should facilitate the training of an X-ray-specific CNN. Second, we increase the input size by a factor of two (i.e. from the default 224×224 to 448×448 pixels). To keep the model architectures similar, we only add a new max-pooling layer after the first bottleneck block. This max-pooling layer has the same parameters as the "pooling1" layer (i.e. 3×3 kernel, stride 2, and padding). Our changes are illustrated in the image branch of Figure 2. A higher effective resolution could be beneficial for the detection of small structures which could be indicative of a pathology (e.g. masses and nodules). In the following, we use the postfixes "-1channel" and "-large" to refer to our model changes.
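The purpose of the extra stride-2 max pool can be checked with a small spatial-size calculation. The sketch below assumes the standard ResNet-50 trunk with a total stride of 32 and the default 224-px input doubled to 448 px for the "-large" variant:

```python
def feature_map_size(input_px, extra_pool=False):
    """Spatial size of the feature map entering the global average pool.

    The standard ResNet-50 trunk downsamples by a total stride of 32
    (conv1, pooling1, and conv3_0/conv4_0/conv5_0 each contribute 2x).
    The '-large' variant adds one extra stride-2 max pool.
    """
    stride = 32 * (2 if extra_pool else 1)
    return input_px // stride

# 224-px input, standard net      -> 7x7 feature map
# 448-px input with the extra pool -> also 7x7, so all later layers
# keep their original spatial dimensions.
```

This is why only a single pooling layer needs to be added: the doubled input resolution is absorbed before the deeper blocks, leaving the rest of the architecture unchanged.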
Finally, we investigate different model depths with the best-performing setup. First, we implement a shallower ResNet-38, where we reduce the number of bottleneck blocks for conv2_x, conv3_x, and conv4_x to two, two, and three, respectively. Second, we also test a ResNet-101, where the number of conv4_x blocks is increased from 5 to 22 compared to the ResNet-50.
2.3 Non-Image Features
ChestX-ray14 contains information about patient age, gender, and view position (i.e. whether the X-ray image is acquired posterior-anterior (PA) or anterior-posterior (AP)). Radiologists use information beyond the image to decide which pathologies are present. The view position changes the expected position of organs in the X-ray image (i.e. PA images are horizontally flipped compared to AP images). In addition, organs (e.g. the heart) are magnified in an AP projection, as the distance to the detector is increased.
As illustrated in Figure 2, we concatenate the image feature vector (i.e. the 2048-dimensional output of the last pooling layer) with the new non-image feature vector. View position and gender are each encoded as a binary value, and the age is linearly scaled in order to avoid a bias towards features with a large range of values. In our experiments, we use the postfix "-meta" to refer to our model architecture with non-image features.
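A minimal sketch of such a non-image feature encoding is given below. The concrete binary codes and the age normalizer are illustrative assumptions, not the exact values of the paper; only the scheme (binary codes plus linearly scaled age, concatenated with the image features) follows the text:

```python
def encode_non_image(age_years, gender, view_position, age_max=100.0):
    """Encode (age, gender, view position) as a small numeric vector.

    Assumptions for illustration: gender 'M'/'F' -> 1/0, view position
    'AP'/'PA' -> 1/0, and age linearly scaled by a hypothetical maximum.
    """
    g = 1.0 if gender == "M" else 0.0
    vp = 1.0 if view_position == "AP" else 0.0
    age = min(age_years / age_max, 1.0)  # linear scaling into [0, 1]
    return [g, vp, age]

def fuse_features(image_features, meta):
    """Concatenate the image feature vector with the non-image vector,
    as done before the final dense layer of the '-meta' models."""
    return list(image_features) + list(meta)
```

Scaling the age keeps all fused inputs in a comparable numeric range, so the dense layer is not biased towards the feature with the largest magnitude.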
2.4 ChestX-ray14 Dataset
To evaluate our approaches for multi-label pathology classification, the entire corpus of ChestX-ray14 (Figure 1) is employed. In total, the dataset contains 112,120 frontal chest X-rays from 30,805 patients. The dataset does not include the original DICOM images; instead, Wang et al. performed a simple preprocessing based on the encoded display settings, while the pixel depth was reduced to 8-bit. In addition, each image was resized to 1024×1024 pixels without preserving the aspect ratio. In Table 2 and Figure 3, we show the distribution of each class and the statistics for the non-image information. The prevalence of individual pathologies is generally low, as shown in Table 2(a), while the distribution of patient gender and view position is quite even, with ratios of 1.3 and 1.5, respectively (see Table 2(b)). The histogram in Figure 3 shows the distribution of patient age in ChestX-ray14.
To determine whether the provided non-image features contain information relevant for disease classification, we performed an initial experiment. We trained a very simple multi-layer perceptron (MLP) classifier with only the three non-image features as input. The MLP classifier achieves a low average AUC of 0.61, but this still indicates that the non-image features could help to improve classification results when provided to our novel model architecture.
3 Experiments and Results
For an assessment of the generalization performance, we perform a 5-times re-sampling scheme. Within each split, the data is divided into 70% training, 10% validation, and 20% testing. When working with deep learning, hyper-parameter tuning without a validation set and/or cross-validation can easily result in over-fitting. Since individual patients have multiple follow-up acquisitions, all data from a patient is assigned to a single subset only. This leads to a varying number of patients per split (e.g. split two has 5,817 patients and 22,420 images, whereas split five has 6,245 patients and the same number of images). We estimate the average validation loss over all re-samples to determine the best models. Finally, our results are calculated for each fold on the test set and averaged afterwards.
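The patient-wise assignment described above can be sketched as follows. The split is drawn over patients, not images, so the image fractions per subset only approximate the 70/10/20 targets (which is exactly why the per-split patient counts vary):

```python
import random

def patient_wise_split(image_to_patient, fractions=(0.7, 0.1, 0.2), seed=0):
    """Assign ALL images of a patient to exactly one subset, so follow-up
    acquisitions of the same patient never leak across train/val/test."""
    patients = sorted(set(image_to_patient.values()))
    random.Random(seed).shuffle(patients)
    n = len(patients)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    train_p = set(patients[:n_train])
    val_p = set(patients[n_train:n_train + n_val])
    split = {}
    for img, pat in image_to_patient.items():
        split[img] = ("train" if pat in train_p else
                      "val" if pat in val_p else "test")
    return split
```

Re-running with different seeds yields the independent re-samples of the evaluation scheme.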
To allow a fair comparison with other groups, we conduct an additional evaluation using the best-performing architecture with different depths on the official split of Wang et al. in Section 3.1.
Implementation: In all experiments, we use a fixed setup. To extend ChestX-ray14, we use the same geometric data augmentation as in the work of Szegedy et al. At training, we sample variously sized patches of the image, with sizes between 8% and 100% of the image area. The aspect ratio is distributed evenly between 3/4 and 4/3. In addition, we employ random rotations and horizontal flipping. For validation and testing, we rescale the images for the small and large spatial size, respectively, and use the center crop as input image. As in the work of He et al., dropout is not employed. As optimizer, we use ADAM with its default parameters β1 = 0.9 and β2 = 0.999. The learning rate is set separately for transfer learning and training from scratch. While training, we reduce the learning rate by a factor of 2 when the validation loss does not improve. Due to model architecture variations, we use batch sizes of 16 and 8 for transfer learning and training from scratch with a large input size, respectively. The models are implemented in CNTK and trained on GTX 1080 GPUs.
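The Inception-style crop sampling referenced above (patch area between 8% and 100% of the image, aspect ratio in [3/4, 4/3], as in Szegedy et al.) can be sketched as a crop-size sampler; the retry count and fallback are our illustrative choices:

```python
import math
import random

def sample_crop(height, width, area_range=(0.08, 1.0),
                ratio_range=(3 / 4, 4 / 3), rng=random):
    """Sample a crop size (h, w) following Inception-style augmentation:
    patch area in [8%, 100%] of the image, aspect ratio in [3/4, 4/3]."""
    for _ in range(10):  # retry a few times, then fall back to the full image
        area = rng.uniform(*area_range) * height * width
        ratio = rng.uniform(*ratio_range)
        w = int(round(math.sqrt(area * ratio)))
        h = int(round(math.sqrt(area / ratio)))
        if 0 < h <= height and 0 < w <= width:
            return h, w
    return height, width
```

The sampled patch is then resized to the network input resolution, giving scale and aspect-ratio jitter on top of the rotations and flips.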
Results: Table 3 summarizes the outcome of our evaluation, and we show state-of-the-art reference results in Figure 5. In total, we evaluate eight different experimental setups with varying weight initialization schemes and network architectures, as well as with and without non-image features. We perform an ROC analysis using the area under the curve (AUC) for all pathologies, compare the classifier scores by Spearman's pairwise rank correlation coefficient, and employ the state-of-the-art method Gradient-weighted Class Activation Mapping (Grad-CAM) to gain more insight into our CNNs. Grad-CAM is a method for visually explaining the predictions of CNN models. It highlights the regions of the input image that are important for a specific classification result by using the gradient of the final convolutional layer.
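For reference, the AUC used throughout the evaluation can be computed directly via its rank (Mann-Whitney) formulation; this simple version assumes no tied scores:

```python
def auc(scores, labels):
    """ROC AUC as the probability that a random positive sample is
    scored higher than a random negative one (no tie handling)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1 for p in pos for n in neg if p > n)
    return wins / (len(pos) * len(neg))
```

In practice, library routines with proper tie handling would be used; this sketch only makes the metric concrete.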
The results indicate a high variability of the outcome with respect to the selected dataset split. Especially for "Hernia", the class with the smallest number of positive samples, we observe a standard deviation of up to 0.05. As a result, an assessment of existing approaches and a comparison of their performance is difficult, since prior work focuses mostly on a single (random) split.
With respect to the different initialization schemes, we already observe reasonable results for OTS networks that are optimized on natural images. Using fine-tuning techniques, the results are improved considerably in terms of average AUC. A complete training of the ResNet-50-1channel using CXRs results in a rather comparable performance. Only the high-resolution variant, ResNet-50-large, outperforms the FT approach in average AUC. In particular, an improvement is observed for smaller pathologies like nodules and masses (i.e. 0.018 and 0.006 AUC increase, respectively), while for other pathologies a similar or slightly lower performance is estimated.
Finally, all our experiments with non-image features slightly increase the average AUC compared to their counterparts without non-image features. Our from-scratch trained ResNet-50-large-meta yields the best overall performance in terms of average AUC.
To better understand why the non-image features only slightly increased the AUC for our fine-tuned and from-scratch trained models, we investigated how well the non-image features can be predicted from the extracted image features. We used our from-scratch trained model (i.e. ResNet-50-large) as a feature extractor and trained three models to predict the patient age, patient gender, and view position (VP) – i.e. ResNet-50-large-age, ResNet-50-large-gender, and ResNet-50-large-VP. We employed the same training setup as in our previous experiments. First, our ResNet-50-large-VP model predicts the correct VP with a very high AUC (i.e. we encoded AP as true and PA as false). After choosing the optimal threshold based on the Youden index, we calculated high values for both sensitivity and specificity. Second, the ResNet-50-large-gender model also predicts the patient gender very precisely, with a high AUC as well as high sensitivity and specificity. Finally, to evaluate the performance of ResNet-50-large-age, we report the mean absolute error (MAE) with standard deviation, because age prediction is a regression task. These results show that the image features already encode information about the non-image features. This might be the reason why our proposed model architecture with the non-image features at hand did not increase the performance by a large margin.
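The Youden-index threshold selection used above maximizes J = sensitivity + specificity − 1 over all candidate thresholds. A minimal sketch:

```python
def youden_threshold(scores, labels):
    """Pick the score threshold maximizing Youden's J = sens + spec - 1.

    scores: classifier outputs; labels: 0/1 ground truth.
    Returns (best_threshold, best_J).
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1.0  # sensitivity + specificity - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

The selected threshold then yields the reported sensitivity/specificity operating point.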
Furthermore, we investigated the similarity between the trained models in terms of their predictions. To this end, Spearman's rank correlation coefficient was computed for the predictions of all model pairs and averaged over the folds. The pairwise correlation coefficients for the models are given in Table 4. Based on the degree of correlation, three groups can be identified. First, we note that the from-scratch models (i.e. "1channel" and "large") without non-image features have the highest correlation of 0.93 amongst each other, followed by the fine-tuned models with 0.81 and 0.80 for "1channel" and "large", respectively. Second, the OTS model surprisingly has a higher correlation with the from-scratch models than with the fine-tuned models. Third, for models with non-image features, no such correlation is observed; their values lie between 0.32 and 0.47. This indicates that models trained exclusively on X-ray data not only achieve the highest accuracy, but are also the most consistent.
|Without non-image features||With non-image features|
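The pairwise model comparison uses Spearman's rank correlation, which is the Pearson correlation computed on the ranks of the predictions. A minimal sketch (assuming no tied values, which standard library routines would handle):

```python
def spearman_rho(a, b):
    """Spearman's rank correlation of two equally long score lists
    (simple version: assumes no ties)."""
    def ranks(x):
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0] * len(x)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    ra, rb = ranks(a), ranks(b)
    n = len(a)
    mean = (n - 1) / 2.0
    cov = sum((x - mean) * (y - mean) for x, y in zip(ra, rb))
    var = sum((x - mean) ** 2 for x in ra)
    return cov / var
```

Because it operates on ranks, the coefficient compares how similarly two models order the test samples, independent of their absolute score calibration.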
While our proposed network architecture achieves high AUC values in many categories of the ChestX-ray14 dataset, the applicability of such a technology in a clinical environment depends considerably on the availability of data for model training and evaluation. In particular for the NIH dataset, the reported label noise and the medical interpretation of the labels are an important issue. As mentioned by Luke Oakden-Rayner, the class "pneumothorax" in the ChestX-ray14 dataset is often labeled for already treated cases (i.e. a drain used to treat the pneumothorax is visible in the image). We employ Grad-CAM to investigate whether our trained CNN picked up the drain as a main feature for "pneumothorax". Grad-CAM visualizes the areas most responsible for the final prediction as a heatmap. In Figure 4, we show two examples from our test set where the highest activations are around the drain. This indicates that the network learned not only to detect an acute pneumothorax but also the presence of chest drains. Therefore, the utility of the ChestX-ray14 dataset for the development of clinical applications remains an open issue.
3.1 Comparison to other approaches
In our evaluation, we noticed a considerable spread of the results in terms of AUC values. Next to the employed data splits, this could be attributed to the (random) initialization of the models and the stochastic nature of the optimization process.
When ChestX-ray14 was made publicly available, only the images and no official dataset split were released. Hence, researchers started to train and test their proposed methods on their own dataset splits. We noticed a large diversity in performance across the different splits of our re-sampling. Therefore, a direct comparison to other groups might be misleading with regard to state-of-the-art results. For example, Rajpurkar et al. reported state-of-the-art results for all 14 classes on their own split. In Figure 5, we compare our best-performing model architecture from the re-sampling experiments (i.e. ResNet-50-large-meta) to Rajpurkar et al. and other groups. For our model, we plot the minimum and maximum AUC over all re-samplings as error bars to illustrate the effect of random splitting. We achieve state-of-the-art results for "Effusion" and "Consolidation" when directly comparing our AUC (i.e. averaged over the 5-times re-sampling) to former state-of-the-art results. Comparing the maximum AUC over all re-sampling splits results in state-of-the-art performance for "Effusion", "Pneumonia", "Consolidation", "Edema", and "Hernia", and indicates that a comparison between groups without the same split might be inconclusive.
Later, Wang et al. released an official split of the ChestX-ray14 dataset. For a fair comparison with other groups, we report results on this split for our best-performing architecture with different depths – ResNet-38-large-meta, ResNet-50-large-meta, and ResNet-101-large-meta – in Table 5. First, we compare our results to Wang et al. and Yao et al., because Guendel et al. used an additional dataset – the PLCO dataset – with 185,000 images. While the ResNet-101-large-meta already has a higher average AUC, with a higher individual AUC in 12 out of 14 classes, its performance is lower than that of our ResNet-38-large-meta and ResNet-50-large-meta. Reducing the number of layers increased the average AUC for both ResNet-50-large-meta and ResNet-38-large-meta. Hence, our results indicate that training a model with fewer parameters on ChestX-ray14 is beneficial for the overall performance. Second, Guendel et al. reported state-of-the-art results for the official split in all 14 classes. While our ResNet-38-large-meta is trained with 185,000 fewer images, it still achieves state-of-the-art results for "Emphysema", "Edema", "Hernia", "Consolidation", and "Pleural Thickening", and only a slightly lower average AUC.
|Pathology||Wang et al.||Yao et al.||Guendel et al.||ResNet-38||ResNet-50||ResNet-101|
4 Discussion and Conclusion
We present a systematic evaluation of different approaches for CNN-based X-ray classification on ChestX-ray14. While satisfactory results are obtained with networks optimized on the ImageNet dataset, the best overall results are achieved by the model that is exclusively trained with CXRs and incorporates non-image data (i.e. view position, patient age, and gender).
Our optimized ResNet-38-large-meta architecture achieves state-of-the-art results in five out of fourteen classes on the official split, compared to Guendel et al., who hold state-of-the-art results in all fourteen classes. For other classes, even higher scores are reported in the literature (see e.g. Rajpurkar et al.). However, a comparison of the different CNN methods with respect to their performance is inherently difficult, as most evaluations have been performed on individual (random) partitions of the dataset. We observed substantial variability in the results when different splits are considered. This becomes especially apparent for "Hernia", the class with the fewest samples in the dataset (see also Figure 5).
While the obtained results suggest that the training of deep neural networks in the medical domain is a viable option as more and more public datasets become available, the practical use of deep learning in clinical practice is still an open issue. In particular for the ChestX-ray14 dataset, the rather high label noise of 10% makes an assessment of the true network performance difficult. Therefore, a clean test set without label noise is needed for the evaluation of clinical impact. As discussed by Oakden-Rayner, the quality of the (automatically generated) labels and their precise medical interpretation may be a limiting factor, in addition to the presence of treated findings. Our Grad-CAM results support Oakden-Rayner's concerns about the "pneumothorax" label. In a clinical setting, i.e. for the detection of critical findings, the focus would be on the reliable identification of acute cases of pneumothorax, while a network trained on ChestX-ray14 would also respond to cases with a chest drain.
Future work will include the investigation of other model architectures, new architectures for leveraging label dependencies, and the incorporation of segmentation information.
-  Care Quality Commission. Queen Alexandra Hospital quality report (2017). Available at https://www.cqc.org.uk/location/RHU03.
-  Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, 1097–1105 (2012).
-  Szegedy, C. et al. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–9 (2015). DOI 10.1109/CVPR.2015.7298594.
-  Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
-  He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778 (2016). DOI 10.1109/CVPR.2016.90.
-  Bar, Y. et al. Chest pathology detection using deep learning with non-medical training. In 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), 294–297 (Citeseer, 2015).
-  Demner-Fushman, D. et al. Preparing a collection of radiology examinations for distribution and retrieval. Journal of the American Medical Informatics Association 23, 304–310 (2015).
-  Wang, X. et al. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3462–3471 (2017). DOI 10.1109/CVPR.2017.369.
-  Yao, L. et al. Learning to diagnose from scratch by exploiting dependencies among labels. arXiv preprint arXiv:1710.10501 (2017).
-  Huang, G., Liu, Z., v. d. Maaten, L. & Weinberger, K. Q. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2261–2269 (2017). DOI 10.1109/CVPR.2017.243.
-  Li, Z. et al. Thoracic disease identification and localization with limited supervision. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8290–8299 (2018). DOI 10.1109/CVPR.2018.00865.
-  Rajpurkar, P. et al. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225 (2017).
-  Yao, L., Prosky, J., Poblenz, E., Covington, B. & Lyman, K. Weakly supervised medical diagnosis and localization from multiple resolutions. arXiv preprint arXiv:1803.07703 (2018).
-  Guendel, S. et al. Learning to recognize abnormalities in chest x-rays with location-aware dense networks. arXiv preprint arXiv:1803.04565 (2018).
-  He, K., Zhang, X., Ren, S. & Sun, J. Identity mappings in deep residual networks. In Computer Vision – ECCV 2016, 630–645 (Springer International Publishing, 2016).
-  Pan, S. J. & Yang, Q. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22, 1345–1359 (2010). DOI 10.1109/TKDE.2009.191.
-  Yosinski, J., Clune, J., Bengio, Y. & Lipson, H. How transferable are features in deep neural networks? In Advances in neural information processing systems, 3320–3328 (2014).
-  Razavian, A. S., Azizpour, H., Sullivan, J. & Carlsson, S. Cnn features off-the-shelf: An astounding baseline for recognition. In 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 512–519 (2014). DOI 10.1109/CVPRW.2014.131.
-  Russakovsky, O. et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115, 211–252 (2015).
-  Molinaro, A. M., Simon, R. & Pfeiffer, R. M. Prediction error estimation: a comparison of resampling methods. Bioinformatics 21, 3301–3307 (2005). DOI 10.1093/bioinformatics/bti499.
-  Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, 1929–1958 (2014).
-  Kingma, D. P. & Ba, J. L. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR) (2015).
-  Selvaraju, R. R. et al. Grad-cam: Visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV), 618–626 (2017). DOI 10.1109/ICCV.2017.74.
-  Oakden-Rayner, L. Exploring the ChestXray14 dataset: Problems (2017). Available at https://lukeoakdenrayner.wordpress.com/2017/12/18/.
-  Gohagan, J. K., Prorok, P. C., Hayes, R. B. & Kramer, B.-S. The prostate, lung, colorectal and ovarian (PLCO) cancer screening trial of the national cancer institute: history, organization, and status. Controlled clinical trials 21, 251S–272S (2000).