Deep Interactive Learning: An Efficient Labeling Approach for Deep Learning-Based Osteosarcoma Treatment Response Assessment

Osteosarcoma is the most common malignant primary bone tumor. Standard treatment includes pre-operative chemotherapy followed by surgical resection. The response to treatment, as measured by the ratio of necrotic tumor area to overall tumor area, is a known prognostic factor for overall survival. This assessment is currently done manually by pathologists reviewing glass slides under the microscope, which may not be reproducible due to its subjective nature. Convolutional neural networks (CNNs) can be used for automated segmentation of viable and necrotic tumor on osteosarcoma whole slide images. One bottleneck for supervised learning is that large amounts of accurate annotations are required for training, which is a time-consuming and expensive process. In this paper, we describe Deep Interactive Learning (DIaL) as an efficient labeling approach for training CNNs. After an initial labeling step, annotators only need to correct mislabeled regions from previous segmentation predictions to improve the CNN model until satisfactory predictions are achieved. Our experiments show that our CNN model, trained with only 7 hours of annotation using DIaL, can successfully estimate ratios of necrosis within the expected inter-observer variation of this non-standardized manual surgical pathology task.

1 Introduction

Osteosarcoma is the most common bone cancer, occurring primarily in adolescents with a second, smaller peak in older adults [16]. Pre-operative chemotherapy followed by surgery is the standard treatment for osteosarcoma. The ratio of necrotic tumor to overall tumor after neoadjuvant chemotherapy is a well-known prognostic factor and correlates with patients' survival [13, 19]: for patients with localized disease who have undergone complete resection, if the ratio of tumor necrosis is greater than 90%, the 5-year survival is higher than 80%. Currently, the ratio of tumor necrosis is estimated manually by pathologists through microscopic review of multiple glass slides from resected specimens.

Computational pathology has provided automated and reproducible techniques to analyze digitized histopathology images [10], especially with convolutional neural networks (CNNs) [22]. Arunachalam et al. showed that a patch-level classification CNN composed of three convolutional layers and two fully-connected layers can identify viable tumor, necrotic tumor, and non-tumor in osteosarcoma [1]. For more accurate analysis, fully convolutional networks were developed for pixel-wise classification, also known as semantic segmentation [15]. U-Net, which segments cellular structures in microscopy images, was described in [18]. More recently, the Deep Multi-Magnification Network (DMMN) was introduced for multi-class tissue segmentation of histopathology images by looking at patches at multiple magnifications, and has shown excellent segmentation performance in breast cancer [12].

Performance of these supervised machine learning methods depends heavily on the amount and quality of annotations. Public annotated datasets are generally available for common cancer types such as breast cancer [7, 2] and have been widely used for training CNNs [24, 14]. For rare cancers such as osteosarcoma, fresh manual annotations by pathologists with specialized expertise are required. Such annotations take a lot of time from busy professionals, so reducing the annotation burden is paramount. To reduce annotation time, interactive learning has been developed: it allows annotators to "interact" with a machine learning model by correcting its predictions to improve its performance until the predictions are satisfactory [8, 20]. ilastik, an interactive segmentation toolkit for biomedical images, was introduced in [21, 4]; it uses random forest classifiers [6] for segmentation. QuPath [3] was developed to interactively analyze gigapixel whole slide images, where segmentation is also based on random forest classifiers [6].

In this paper, we propose Deep Interactive Learning (DIaL), integrating the concept of interactive learning into a deep learning framework for multi-class tissue segmentation of histopathology images and treatment response assessment in osteosarcoma. To evaluate our segmentation model, we estimate the necrosis ratio at the case level by counting the numbers of pixels predicted as viable tumor and necrotic tumor by the segmentation model, and compare it with the ratio from pathology reports. We observe that our CNN model can estimate the necrosis ratio within the expected inter-observer variation of this non-standardized manual surgical pathology task. Note that the total labeling time with DIaL was approximately 7 hours.

2 Proposed Method

Figure 1: Block diagram of the proposed method. First, initial annotation is done on training whole slide images (WSIs), where characteristic features of each class are partially annotated. The annotated regions are used to train a Deep Multi-Magnification Network [12]. Segmentation is then done on the same training WSIs so that any mislabeled regions containing challenging or rare features can be corrected. These corrected regions are added to the training set to finetune the model. This training-segmentation-correction iteration, denoted as Deep Interactive Learning (DIaL), is repeated until the annotators are satisfied with the segmentation predictions. The final model is used to segment testing WSIs to assess treatment response.

It is necessary to manually label osteosarcoma whole slide images (WSIs) to supervise a segmentation convolutional neural network (CNN) for automated treatment response assessment. Labeling WSIs exhaustively would be ideal, but it requires a tremendous amount of labeling time. Partial labeling approaches have been introduced to reduce labeling time [5, 12], but challenging or rare morphological features can be missed. We propose Deep Interactive Learning (DIaL) to efficiently annotate both characteristic and challenging features on WSIs and thereby achieve strong segmentation performance. Our block diagram is shown in Figure 1. First, initial annotation is done partially, mainly on characteristic features of the classes. During DIaL, training a CNN, segmentation prediction, and correction of mislabeled regions are repeated to improve segmentation performance until the annotators are satisfied with the segmentation predictions on the training images. Note that challenging or rare features are labeled during the correction step. When training of the CNN is finalized, the CNN is used to segment viable tumor and necrotic tumor on testing cases to assess treatment response.

2.1 Initial Annotation

Figure 2: The convolutional neural network designed in this paper predicts 7 classes: (a) viable tumor, (b) necrosis with bone, (c) necrosis without bone, (d) normal bone, (e) normal tissue, (f) cartilage, and (g) blank. Our goal is to accurately segment viable tumor and necrotic tumor on osteosarcoma whole slide images for automated treatment response assessment.

Initial annotation of characteristic features of each class is done to train an initial CNN model. In this work, annotators label 7 morphologically distinct classes, shown in Figure 2: viable tumor, necrosis with bone, necrosis without bone, normal bone, normal tissue, cartilage, and blank. Note that the initial annotation only partially covers the training images.
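
To keep the code sketches in this section concrete, the following hypothetical class-index convention is used; the names, indices, and unlabeled sentinel are our own assumptions, not values taken from the authors' implementation.

```python
# Hypothetical class-index convention used only in the sketches below;
# the indices in the actual implementation may differ.
CLASSES = [
    "viable_tumor",           # 0
    "necrosis_with_bone",     # 1
    "necrosis_without_bone",  # 2
    "normal_bone",            # 3
    "normal_tissue",          # 4
    "cartilage",              # 5
    "blank",                  # 6
]
NUM_CLASSES = len(CLASSES)    # C = 7
UNLABELED = 255               # assumed sentinel for unannotated pixels (ignored during training)
```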

2.2 Deep Interactive Learning

During initial annotation, challenging or rare features may not be included in the training set, which can lead to mislabeled predictions. These challenging features can be added to the training set through Deep Interactive Learning (DIaL) by repeating training/finetuning, segmentation, and correction. These three steps are repeated until the annotators are satisfied with the segmentation predictions on the training images.
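
As a rough sketch, the DIaL iteration can be organized as follows; the training, segmentation, and correction callables are placeholders standing in for the steps of Sections 2.2.1-2.2.4, not the authors' actual API.

```python
from typing import Any, Callable, List

def deep_interactive_learning(
    train: Callable[[Any, List[Any], int], Any],   # (model, patches, epochs) -> model
    segment: Callable[[Any], List[Any]],           # model -> predictions on training WSIs
    correct: Callable[[List[Any]], List[Any]],     # predictions -> corrected patches ([] if satisfied)
    model: Any,
    initial_patches: List[Any],
) -> Any:
    """Sketch of the DIaL loop: initial training, then repeated segmentation,
    manual correction, and finetuning until the annotators are satisfied."""
    patches = list(initial_patches)
    model = train(model, patches, 30)        # initial training (30 epochs)
    while True:
        predictions = segment(model)         # segment all training WSIs
        corrections = correct(predictions)   # annotators fix mislabeled regions
        if not corrections:                  # no corrections needed: model is finalized
            return model
        patches += corrections               # add corrected (and deformed) patches
        model = train(model, patches, 10)    # finetune with a reduced learning rate
```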

2.2.1 Initial Training

We need an initially trained model to annotate mislabeled regions with challenging features. WSIs are too large to be processed at once, so the labeled regions are extracted into patches, and a patch is used only when more than 1% of its pixels are annotated. To balance the number of pixels between classes, patches containing rare classes are deformed to produce additional patches by elastic deformation [18, 9]. Here, we define a class as rare if its number of pixels is less than 70% of the maximum number of pixels among classes. After patch extraction and deformation are done, some cases are set aside for validating the CNN model, such that approximately 20% of the pixels in each class are held out. We use a Deep Multi-Magnification Network (DMMN) [12] for multi-class tissue segmentation, where the model looks at patches at multiple magnifications for accurate predictions. Specifically, the DMMN is composed of three half-channeled U-Nets, one per magnification, whose input patches have the same pixel size and are centered at the same location but are taken at progressively lower magnifications. Intermediate feature maps in the decoders of the two lower-magnification U-Nets are center-cropped and concatenated to the decoder of the highest-magnification U-Net to enrich its feature maps, and the final prediction patch of the DMMN is generated at the highest magnification. To train our model, initialized by [11], we use weighted cross entropy as our loss function, where the weight for class c is defined as w_c = 1 - N_c / ∑_{j=1}^{C} N_j, with C the total number of classes and N_j the number of pixels in class j. Note that unlabeled regions do not contribute to the training process. During training, random rotation, vertical and horizontal flips, and color jittering are used as data augmentation. A stochastic gradient descent (SGD) optimizer with a momentum of 0.99 and weight decay is used for 30 epochs. After each epoch, the model is validated by mean Intersection-Over-Union (mIOU), and the model with the highest mIOU is selected as the output model.
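
The class-balancing and loss-weighting logic above can be sketched as follows; the pixel counts are purely illustrative, the unlabeled sentinel is the assumption introduced earlier, and the learning rate and weight decay values are placeholders since the exact values are not reproduced here.

```python
import numpy as np
import torch
import torch.nn as nn

# Illustrative per-class pixel counts N_c; real values come from the annotations.
pixel_counts = np.array([9.1e8, 2.3e8, 1.8e8, 6.5e8, 7.0e8, 1.2e8, 8.8e8])

# A class is "rare" if it has fewer pixels than 70% of the largest class;
# patches containing rare classes receive extra elastically deformed copies.
rare_classes = np.where(pixel_counts < 0.7 * pixel_counts.max())[0]

# Weighted cross entropy with w_c = 1 - N_c / sum_j N_j; unlabeled pixels are ignored.
weights = 1.0 - pixel_counts / pixel_counts.sum()
criterion = nn.CrossEntropyLoss(
    weight=torch.tensor(weights, dtype=torch.float32),
    ignore_index=255,  # assumed sentinel for unlabeled pixels
)

# SGD with momentum 0.99 for 30 epochs, validated by mIOU after each epoch.
# The learning rate and weight decay below are placeholders, not the paper's values:
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.99, weight_decay=1e-4)
```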

2.2.2 Segmentation

After training is done, all training WSIs are segmented so that the unlabeled regions can be evaluated. A set of patches at the input magnifications, centered at the same location, is processed by the DMMN. Note that zero-padding is done on the boundary of WSIs. Patch-wise segmentation is repeated in the x- and y-directions with a stride of 256 pixels until the entire WSI is processed.
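
A single-magnification sketch of this patch-wise inference is given below; the actual DMMN consumes co-centered patches at several magnifications, and the model is assumed to return per-pixel class scores for each patch.

```python
import torch
import torch.nn.functional as F

def segment_wsi(model: torch.nn.Module, wsi: torch.Tensor,
                patch: int = 256, stride: int = 256) -> torch.Tensor:
    """Slide a window over a WSI tensor of shape (C, H, W) with zero-padding at the
    boundary, returning a per-pixel class-index map of shape (H, W).
    Single-magnification simplification of the multi-magnification DMMN input."""
    _, h, w = wsi.shape
    pad_h = (-h) % stride                                # zero-pad to a multiple of the stride
    pad_w = (-w) % stride
    padded = F.pad(wsi, (0, pad_w, 0, pad_h))
    out = torch.zeros(padded.shape[1], padded.shape[2], dtype=torch.long)
    model.eval()
    with torch.no_grad():
        for y in range(0, padded.shape[1], stride):      # repeat in the y-direction
            for x in range(0, padded.shape[2], stride):  # repeat in the x-direction
                tile = padded[:, y:y + patch, x:x + patch].unsqueeze(0)
                scores = model(tile)                     # (1, num_classes, patch, patch)
                out[y:y + patch, x:x + patch] = scores.argmax(dim=1)[0]
    return out[:h, :w]                                   # crop back to the original size
```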

2.2.3 Correction

Characteristic features are annotated during the initial annotation, but challenging or rare features may not be included. During the correction step, these challenging features that the model could not predict correctly are annotated and added to the training set to improve the model. In this step, the annotators look at the segmentation predictions and correct any mislabeled regions. If the predictions are satisfactory throughout the training images, the model is finalized.

2.2.4 Finetuning

Assuming the previous CNN model has already learned most features of the classes, we finetune the previous model to improve segmentation performance. Corrected regions are extracted into patches and added to the training set. Additional patches are generated by deforming the extracted correction patches, giving a higher weight to challenging or rare features so that they are emphasized during finetuning. The SGD optimizer and weighted cross entropy with the updated weights are used during training, and we reduce the learning rate and the number of epochs (to 10) so as not to perturb the parameters of the CNN model too much from the previous model. Note that validation cases can be re-selected so that the majority of corrected cases is used for optimization.
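
A sketch of the single- versus double-weighted correction sets described above; elastic_deform is a hypothetical stand-in for the elastic deformation of [18, 9].

```python
def add_corrections(training_patches, correction_patches, elastic_deform, double_weight=True):
    """Append corrected patches to the training set; with double weighting,
    also append one elastically deformed copy of each correction patch."""
    for patch in correction_patches:
        training_patches.append(patch)
        if double_weight:
            training_patches.append(elastic_deform(patch))
    return training_patches
```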

2.3 Treatment Response Assessment

The final CNN model segments viable tumor and necrotic tumor on testing WSIs, where necrotic tumor is the combination of necrosis with bone and necrosis without bone. The ratio of necrotic tumor to overall tumor at the case level estimated by the deep learning model, R_DL, is defined as

R_DL = N_nec / (N_via + N_nec),     (1)

where N_via and N_nec are the numbers of pixels of viable tumor and necrotic tumor in a case, respectively.
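
Eq. (1) can be computed directly from the predicted label maps; the sketch below uses the hypothetical class indices defined earlier (0 for viable tumor, 1 and 2 for necrosis with and without bone).

```python
import numpy as np

def necrosis_ratio(label_maps) -> float:
    """Eq. (1): ratio of necrotic tumor pixels to all tumor pixels in one case,
    given a list of per-WSI label maps (2-D arrays of predicted class indices)."""
    n_viable = sum(int(np.sum(m == 0)) for m in label_maps)                 # viable tumor
    n_necrotic = sum(int(np.sum((m == 1) | (m == 2))) for m in label_maps)  # necrosis with/without bone
    return n_necrotic / (n_viable + n_necrotic)
```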

3 Experimental Results

Our hematoxylin and eosin (H&E) stained osteosarcoma dataset was digitized by two Aperio AT2 scanners at Memorial Sloan Kettering Cancer Center, with microns per pixel (MPP) of 0.5025 for one scanner and 0.5031 for the other. The osteosarcoma dataset contains 55 cases with 1578 whole slide images (WSIs), where the number of WSIs per case ranges from 1 to 109 with a mean of 28.7 and a median of 22, and the average width and height of the WSIs are 61022 pixels and 41518 pixels, respectively. We used 13 cases for training and the other 42 cases for testing. Note that 8 testing cases do not include the necrosis ratio in their pathology reports, so they were excluded from evaluation. Two annotators (N.P.A. and M.R.H.) selected 49 WSIs from the 13 training cases and annotated them independently, without case-level overlap. The pixel-wise annotation was performed on an in-house WSI viewer that allowed us to measure the time taken for annotation. The annotators performed three labeling iterations using Deep Interactive Learning (DIaL): initial annotation, first correction, and second correction. They annotated 49 WSIs in 4 hours, 37 WSIs in 3 hours, and 13 WSIs in 1 hour during the initial annotation, the first correction, and the second correction, respectively. The annotators also exhaustively labeled one entire WSI, which took approximately 1.5 hours. An example of exhaustive annotation and annotation with DIaL is shown in Figure 3. In the same amount of time, the annotators would have been able to exhaustively annotate only 5 WSIs without DIaL; with DIaL, they can annotate more diverse cases. The numbers of annotated and deformed pixels are shown in Figure 4(a). The implementation was done using PyTorch [17], and an Nvidia Tesla V100 GPU was used for training and segmentation. Initial training and finetuning took approximately 5 days and 2 days, respectively. Segmentation of one WSI took on the order of minutes.

Figure 3: An example of Deep Interactive Learning (DIaL). (a) An original training whole slide image, (b) an exhaustive annotation, (c) an initial annotation, (d) the first prediction from a CNN trained on the initial annotation, (e) the first correction, where more regions of necrosis with bone, normal tissue, and blank are labeled to correct the first prediction, and (f) the second prediction from a CNN finetuned from the initial model with the double-weighted first correction, which satisfied the annotators. The annotators spent approximately 1.5 hours exhaustively labeling one whole slide image. With DIaL, the annotators are able to efficiently label characteristic and challenging features on more diverse cases in the same amount of time. In this experiment, two annotators initially annotated 49 images in 4 hours and corrected 37 images in 3 hours. Viable tumor, necrosis with bone, necrosis without bone, normal bone, normal tissue, cartilage, and blank are labeled in red, blue, yellow, green, orange, brown, and gray, respectively. White regions in (b), (c), and (e) are unlabeled.

For evaluating our segmentation model, 1044 WSIs from 34 cases were segmented to estimate the necrosis ratio. Note that all WSIs were segmented, analogous to pathologists reviewing all glass slides under the microscope to assess the necrosis ratio. To numerically evaluate the estimated necrosis ratio, we compared it with the ratio from pathology reports written by experts. Here, the error rate, E, is defined as

E = (1/T) ∑_{i=1}^{T} |R_P^i - R_DL^i|,     (2)

where R_P^i is the ratio from the pathology report and R_DL^i is the ratio estimated by the deep learning model for the i-th case, and T is the number of testing cases. Figure 4(b) shows the error rates for our models. Model1, Model2a, Model2b, and Model3 denote an initially-trained model, a model finetuned from Model1 with the single-weighted first correction, a model finetuned from Model1 with the double-weighted first correction, and a model finetuned from Model2b with the double-weighted second correction, respectively. Note that we tried both single-weighted correction, including only the extracted correction patches, and double-weighted correction, including both the extracted correction patches and their corresponding deformed patches, during the finetuning step. We observed that the error rate decreases after the first correction, especially with a higher weight on correction patches emphasizing challenging features. We selected Model2b as our final model because the error rate stopped decreasing after the second correction. Our final model, trained with only 7 hours of annotation done by DIaL, achieved an error rate of 20%. An error rate of 20% is generally acceptable for non-standardized tasks in surgical pathology; for example, the percentage of tumor cells has been overestimated by pathologists by up to 20% in certain instances [23]. While this cannot be directly transferred to necrosis estimation, we use these data to show that the model achieves an error rate within the expected inter-observer variation.
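
A sketch of Eq. (2) as reconstructed above (a mean absolute difference between the reported and estimated ratios; the exact form of the elided equation is our reading of the surrounding definitions).

```python
def error_rate(reported, estimated) -> float:
    """Eq. (2): mean absolute difference between the pathology-report ratio and
    the model-estimated ratio over the evaluated testing cases."""
    assert len(reported) == len(estimated) and len(reported) > 0
    return sum(abs(r - e) for r, e in zip(reported, estimated)) / len(reported)
```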

The task of manually quantifying the necrosis ratio is challenging for pathologists because an estimate must be made across multiple glass slides that may differ substantially in their ratios of necrosis. We are convinced that our objective and reproducible deep learning model, which estimates the necrosis ratio within the expected inter-observer variation, can be superior to manual interpretation.

Figure 4: (a) The number of pixels in the training set for each class. During initial annotation, elastic deformation [18, 9] is used on patches containing necrosis with bone, necrosis without bone, and cartilage to balance the number of pixels between classes. Elastic deformation is used on all correction patches to give them a higher weight. (b) Error rates of Model1, trained on the initial annotation alone; Model2a, finetuned from Model1 with the single-weighted first correction; Model2b, finetuned from Model1 with the double-weighted first correction; and Model3, finetuned from Model2b with the double-weighted second correction. Our final model, Model2b, achieves an error rate of 20%, which is within the expected inter-observer variation [23].

4 Conclusion

We presented Deep Interactive Learning (DIaL) as an efficient annotation approach for training a segmentation CNN. With 7 hours of labeling, we trained a CNN that segments viable tumor and necrotic tumor on osteosarcoma whole slide images. Our experiments showed that the CNN model can successfully estimate the necrosis ratio, a known prognostic factor for patients' survival in osteosarcoma, in an objective and reproducible way. In the future, we plan to stratify patients based on survival data using our deep learning model.

5 Acknowledgments/Disclosures

This work was supported by the Warren Alpert Foundation Center for Digital and Computational Pathology at Memorial Sloan Kettering Cancer Center and the NIH/NCI Cancer Center Support Grant P30 CA008748. T.J.F. is the Chief Scientific Officer, co-founder and equity holder of Paige.AI. P.J.S. is a lead machine learning scientist, co-founder and equity holder of Paige.AI. C.M.V. is a consultant for Paige.AI. D.J.H. and T.J.F. have intellectual property interests relevant to the work that is the subject of this paper. MSK has financial interests in Paige.AI. and intellectual property interests relevant to the work that is the subject of this paper.

References

  • [1] Arunachalam, H. B., et al.: Viable and necrotic tumor assessment from whole slide images of osteosarcoma using machine-learning and deep-learning models. PLoS ONE 14(4), e0210706 (2019)
  • [2] Bandi, P., et al.: From Detection of Individual Metastases to Classification of Lymph Node Status at the Patient Level: The CAMELYON17 Challenge. IEEE Transactions on Medical Imaging 38(2), 550–560 (2019)
  • [3] Bankhead, P., et al.: QuPath: Open source software for digital pathology image analysis. Scientific Reports 7, 16878 (2017)
  • [4] Berg, S., et al.: ilastik: interactive machine learning for (bio)image analysis. Nature Methods 16, 1226–1232 (2019)
  • [5] Bokhorst, J. M., et al.: Learning from sparsely annotated data for semantic segmentation in histopathology images. In: Proceedings of the International Conference on Medical Imaging with Deep Learning, pp. 84–91 (2019)
  • [6] Breiman, L.: Random Forests. Machine Learning 45(1), 5–32 (2001)
  • [7] Ehteshami Bejnordi, B., et al.: Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer. JAMA 318(22), 2199–2210 (2017)
  • [8] Fails, J. A., Olsen, D. R.: Interactive machine learning. In: Proceedings of the International Conference on Intelligent User Interfaces, pp. 39–45 (2003)
  • [9] Fu, C., et al.: Nuclei segmentation of fluorescence microscopy images using convolutional neural networks. In: Proceedings of the IEEE International Symposium on Biomedical Imaging, pp. 704–708 (2017)
  • [10] Fuchs, T. J., Buhmann, J. M.: Computational pathology: Challenges and promises for tissue analysis. Computerized Medical Imaging and Graphics 35(7), 515–530 (2011)
  • [11] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the International Conference on Artificial Intelligence and Statistics, pp. 249–256 (2010)
  • [12] Ho, D. J., et al.: Deep Multi-Magnification Networks for Multi-Class Breast Cancer Image Segmentation. arXiv preprint, arXiv:1910.13042 (2019)
  • [13] Huvos, A. G., Rosen, G., Marcove, R. C.: Primary osteogenic sarcoma: pathologic aspects in 20 patients after treatment with chemotherapy en bloc resection, and prosthetic bone replacement. Archives of Pathology & Laboratory Medicine 101(1), 14–18 (1977)
  • [14] Lee, B., Paeng, K.: A Robust and Effective Approach Towards Accurate Metastasis Detection and pN-stage Classification in Breast Cancer. In: Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 841–850 (2018)
  • [15] Long, J., Shelhamer, E., Darrell, T.: Fully Convolutional Networks for Semantic Segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
  • [16] Ottaviani, G., Jaffe, N.: The Epidemiology of Osteosarcoma. Pediatric and Adolescent Osteosarcoma, 3–13 (2009)
  • [17] Paszke, A., et al.: PyTorch: An Imperative Style, High-Performance Deep Learning Library. In: Proceedings of the Neural Information Processing Systems, pp. 8024–8035 (2019)
  • [18] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241 (2015)
  • [19] Rosen, G., et al.: Preoperative chemotherapy for osteogenic sarcoma: Selection of postoperative adjuvant chemotherapy based on the response of the primary tumor to preoperative chemotherapy. Cancer 49(6), 1221–1230 (1982)
  • [20] Schüffler, P. J., Fuchs, T. J., Ong, C. S., Wild, P., Buhmann, J. M.: TMARKER: A Free Software Toolkit for Histopathological Cell Counting and Staining Estimation. Journal of Pathology Informatics 4(2) (2013)
  • [21] Sommer, C., Straehle, C., Koethe, U., Hamprecht, F. A.: ilastik: Interactive learning and segmentation toolkit. In: Proceedings of the IEEE International Symposium on Biomedical Imaging, pp. 230–233 (2011)
  • [22] Srinidhi, C. L., Ciga, O., Martel, A. L.: Deep neural network models for computational histopathology: A survey. arXiv preprint, arXiv:1912.12378 (2019)
  • [23] Viray, H., et al.: A Prospective, Multi-Institutional Diagnostic Trial to Determine Pathologist Accuracy in Estimation of Percentage of Malignant Cells. Archives of Pathology & Laboratory Medicine 137(11), 1545–1549 (2013)
  • [24] Wang, D., Khosla, A., Gargeya, R., Irshad, H., Beck, A. H.: Deep Learning for Identifying Metastatic Breast Cancer. arXiv preprint, arXiv:1606.05718 (2016)