With advancements in imaging and hardware, machine learning techniques have been widely adopted in the medical domain for the analysis of highly complex data. However, one of the remaining bottlenecks is the task of gathering large volumes of annotated training data, largely due to the cost of expert annotators and the time required to comb through large datasets. In medicine specifically, expert annotators require several years of training and are already inundated with the large volume of healthcare data generated on a routine basis. Furthermore, due to intra- and inter-rater variability, training instances may need to be annotated by multiple observers, after which there is uncertainty surrounding the use of multiple “ground truth” labels. With the growing success of deep neural networks, there is now an even higher demand for large annotated datasets in the medical domain to prevent overfitting. To overcome this problem, we propose a method for using weak labels in a deep learning framework. The goal of our approach is two-fold: 1) identify class labels at the instance-level given labels provided at a much coarser level, and 2) learn features in a deep network with limited expert input.
Here, we tackle the problem of identifying tumor metastases in microscopic images in the publicly available Camelyon challenge dataset Camelyon2016 . In our setup, training instances are patches extracted from images with extremely high dimensions, 200,000 x 100,000 pixels, and weak labels are assigned at the image-level. In reported experiments, this equates to over 80,000 training patch-instances per label, with only a small fraction of these corresponding to true positive (i.e. tumor) patch-instances. As such, the class imbalance is extremely high, and this trend extends to other modalities such as magnetic resonance imaging Dubey2014 and experimental methods for cancer diagnosis and prognosis such as miRNA Kothandan2015 .
In this paper, we take inspiration from multiple instance learning (MIL) Dietterich1997 to outline an approach for training a convolutional neural network (CNN) with image-level annotations. In the first phase of our pipeline, features are learned in an unsupervised manner via a variational autoencoder, which is subsequently used to identify clusters of patch-instances in feature space. In the case of microscopic images, such features may correlate to stroma, fatty tissue or lumen. We then identify “cluster-classes” in this latent representation to distinguish between patch-instances without labels, and adjust the training loss appropriately. We show that when weak labels are adopted in a deep learning framework, our proposed method, which we refer to as cluster-class CCE (CC-CCE), can achieve performance comparable to a fully supervised approach and, furthermore, maintains performance when distracting training samples are introduced during training.
In the traditional MIL framework, ground truth consists of labels provided at the bag-level, where each bag B contains a set of n training instances {x_1, …, x_n}. Bags are labeled as either positive, in which case at least one instance within the bag must be positive, or negative, in which case all instances in the bag are negative. Here, we use a similar framework where large histology images (hereafter referred to as digital slides) denote bags, and patches extracted from each slide denote patch-instances. Digital slides which contain metastases are labeled as positive, and healthy digital slides as negative. The proposed framework is not limited to binary classification and can also be used in a multi-class schema.
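The bag-labeling rule described above can be sketched in a few lines. The helper below is illustrative (not from the paper's code): a bag is positive if it contains at least one positive instance, and negative only if every instance is negative.

```python
# Hypothetical helper illustrating the MIL bag-labeling rule:
# a bag is positive iff at least one of its instances is positive.
def bag_label(instance_labels):
    """Return 1 if any instance in the bag is positive, else 0."""
    return int(any(label == 1 for label in instance_labels))

# A slide with a single tumor patch forms a positive bag:
positive_bag = bag_label([0, 0, 1, 0])   # -> 1
# A healthy slide (all negative patches) forms a negative bag:
negative_bag = bag_label([0, 0, 0])      # -> 0
```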
In a typical MIL pipeline, training instances are represented as feature vectors of the original data source which are cross-examined during training. The objective is to estimate a classification function that provides the likelihood that an instance in a positive bag has a true positive label Amores2013 . However, one of the key limitations of using MIL in a deep learning framework is that all training instances within each bag cannot be made available during training. Due to memory constraints, when training a deep model, instances are made available in batches, and therefore only a subset of each “bag” is present in a single batch, eliminating the possibility of performing cross-correlations within a bag as is done in previous work Sun2016 . We combat this by using unsupervised learned features to inform the model of patch-instances which share similarities based on appearance and texture, regardless of the grade or progression of tumor present in the image. The model therefore has two types of input when computing the loss of training instances: 1) predictions generated from the model in its current state, and 2) a latent representation of each patch-instance.
A common loss function used in classification tasks is the categorical cross-entropy (CCE), which combines predictions from the output of a network (p), typically a softmax layer, with ground truth labels (y):

CCE(y, p) = −Σ_i y_i log(p_i)
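As a concrete reference, a minimal NumPy sketch of the categorical cross-entropy over a batch, assuming `p` holds softmax outputs and `y` one-hot ground-truth labels:

```python
import numpy as np

def categorical_cross_entropy(y, p, eps=1e-12):
    """Mean categorical cross-entropy over a batch of one-hot labels y
    and softmax predictions p."""
    p = np.clip(p, eps, 1.0)  # guard against log(0)
    return -np.mean(np.sum(y * np.log(p), axis=1))

y = np.array([[0.0, 1.0], [1.0, 0.0]])  # one-hot labels
p = np.array([[0.1, 0.9], [0.8, 0.2]])  # softmax predictions
loss = categorical_cross_entropy(y, p)  # ~0.164
```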
However, when labels are noisy, CCE alone is uninformative and inhibits learning. Here we propose learning an estimated ground truth from an unsupervised representation, which we term a cluster-class (c). We opt to train a variational autoencoder (VAE), in which a Gaussian latent representation is learned from the image data alone; however, any other unsupervised learning technique can be substituted.
To estimate cluster-classes we perform k-means clustering in the latent space of our trained VAE. From the weak bag-labels in each cluster, we perform a majority vote to compute a cluster-class label, denoted as c_k. During training, the cluster label of each patch-instance is taken from the nearest cluster in feature space, giving us an estimated cluster-class, ĉ = argmin_k ||z − μ_k||, where z is the feature encoding of a given patch and μ_k are the cluster centers. Our final loss is then a weighted combination of the traditional CCE and the CCE between predicted outcomes and cluster-classes:

L = CCE(y, p) + λ · CCE(ĉ, p)

where λ is a weighting term fixed in reported experiments.
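The cluster-class procedure above can be sketched as follows. This is a minimal illustration under stated assumptions: `z` stands in for VAE encodings, `centers` for k-means centroids, and the function names are ours, not the paper's code.

```python
import numpy as np
from collections import Counter

def fit_cluster_classes(z, bag_labels, centers):
    """Majority-vote a cluster-class label for each cluster from the
    weak bag-labels of the instances assigned to it."""
    # Assign each encoding to its nearest cluster center.
    dists = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    nearest = np.argmin(dists, axis=1)
    cluster_class = {}
    for k in range(len(centers)):
        votes = [bag_labels[i] for i in np.where(nearest == k)[0]]
        # Empty clusters fall back to the negative class.
        cluster_class[k] = Counter(votes).most_common(1)[0][0] if votes else 0
    return cluster_class

def combined_loss(cce_true, cce_cluster, lambda_weight):
    """Weighted sum of the standard CCE term and the cluster-class CCE term."""
    return cce_true + lambda_weight * cce_cluster
```

In practice the centroids would come from a k-means fit on the VAE latent space, and the two CCE terms from the network's softmax output against the weak labels and the estimated cluster-classes respectively.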
3 Experiments and Results
As a toy example, we created an MNIST-BAG dataset (Figure 1) containing bags of MNIST digits, where at least half the instances within each bag reflect the bag's label and the remainder of the bag contains distracting MNIST digits, i.e. digits not equal to the bag label. MNIST-BAG also demonstrates how our method can be applied to a multi-class problem set.
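An illustrative sketch of how such a bag can be assembled (the helper name is ours, not the paper's code): at least half the instance labels carry the bag's digit, and the rest are distractors drawn from the other nine classes.

```python
import random

def make_bag(bag_label, bag_size, rng=random.Random(0)):
    """Build an MNIST-BAG-style label list: >= half match bag_label,
    the remainder are distracting digits."""
    n_true = bag_size // 2 + bag_size % 2  # at least half are the true digit
    labels = [bag_label] * n_true
    distractors = [d for d in range(10) if d != bag_label]
    labels += [rng.choice(distractors) for _ in range(bag_size - n_true)]
    rng.shuffle(labels)
    return labels

bag = make_bag(bag_label=3, bag_size=8)
```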
Our experimental setup is shown in Table 1. We compared traditional categorical cross-entropy (CCE) loss to our proposed cluster-class loss (CC-CCE) and also varied the number of clusters (k) learned from the latent representation (Table 2). As expected, as the number of instances in each bag increased, accuracy rates using CCE fell, with a significant drop at the largest bag sizes. CC-CCE was able to maintain performance regardless of the number of instances in each bag.
[Table 2: test accuracy for each method as the number of instances per bag (n) is varied]
Also, as the number of clusters increased, so did test accuracy in MNIST-BAG. Even with only a small number of clusters, CC-CCE surpassed traditional CCE at larger bag sizes. Given a sufficient number of clusters, test accuracy rates were higher than CCE regardless of the size of each bag, suggesting we can use CC-CCE without sacrificing quality if enough clusters are extracted from the latent representation. When classification was performed based on VAE features alone, accuracy was slightly lower than with CCE alone, suggesting that information from both the bag-level label and the VAE latent space is most beneficial.
We used the Camelyon 2016 challenge dataset Camelyon2016 to evaluate our proposed method, which is composed of 400 whole slide images (WSIs) of sentinel lymph nodes acquired from two different sites. 270 of these slides (160 normal and 110 containing metastases) make up a fully annotated training set; however, here we only used the image-level labels to train our network. Due to the pyramidal data format of histology slides, we use OpenSlide Goode2013 to read digital slides at x10 objective. To validate tumor localization we used the WSIs in the independent test set which contained metastases.
Experimental Setup: We created bags by extracting 256x256 RGB patches from WSIs and labeled them with their corresponding WSI label: 1 for WSIs which contained metastases and 0 otherwise. Patches were randomly extracted from each WSI within regions containing tissue. The background was eliminated by thresholding the hue color channel and eliminating spurious regions. This method was derived from Valkonen2017 and variations on it have been widely adopted in digital pathology. Patches were randomly selected in each epoch to sufficiently sample from each WSI over the duration of training. During testing, only patches within tissue regions were evaluated. Each bag of training instances was also balanced to contain roughly equal numbers of cancerous and healthy structures. Due to the small size of the dataset and the small ratio of tumor to healthy patch-instances, this step was necessary to ensure cancerous structures were captured by the CNN. We experimented with several values of the number of clusters (k) and bag size (n) in reported experiments.
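Background removal of this kind can be sketched as a simple per-pixel color threshold. The paper thresholds the hue channel; for a self-contained example we threshold HSV saturation instead, which separates white background (near-zero saturation) from stained tissue in the same spirit. The threshold value and function names are illustrative.

```python
import numpy as np

def rgb_to_saturation(rgb):
    """Per-pixel HSV saturation for an H x W x 3 array of RGB values in [0, 1]."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    return np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)

def tissue_mask(rgb, sat_thresh=0.1):
    """True where a pixel looks like stained tissue rather than background."""
    return rgb_to_saturation(rgb) > sat_thresh

patch = np.ones((4, 4, 3))      # pure white background
patch[0, 0] = [0.8, 0.3, 0.6]   # one stained "tissue" pixel
mask = tissue_mask(patch)       # True only at (0, 0)
```

In a full pipeline, the mask would additionally be cleaned with morphological operations to eliminate spurious regions, and patches would be sampled only where the mask is True.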
The VAE outlined in Table 1 was trained in an unsupervised manner using 135,000 patches extracted from the training set, in a similar manner to the CNN patch extraction method described above. We used Adam to optimize the VAE, and stochastic gradient descent with an exponential decay function for InceptionNet Szegedy2016 . Both optimizers used the same initial learning rate. All models were trained for a fixed number of epochs, which took, on average, 2 days on two Nvidia Titan Xp GPUs.
A visualization of the latent representation in our trained VAE as a t-SNE plot is shown in Figure 2 (left). Training patch-instances containing metastases are projected into this space and shown in orange, and healthy patch-instances in blue. The t-SNE plot shows some clear clusters differentiating between the two classes, indicating the VAE was able to capture features specific to each class without the need for expert annotations.
ROC curves comparing a fully supervised approach (with accurate labels for all training instances), categorical cross-entropy (CCE) with weak labels, and the proposed method, CC-CCE, are shown in Figure 2 (right). Whilst CCE performed slightly better at lower sensitivities, our method was superior overall, reaching accuracy rates matching a fully supervised approach at higher sensitivities. This shows great promise for using weak labels to analyse large histology images.
In this paper, we described a novel weakly-supervised technique inspired by MIL for learning and leveraging a deep latent representation during training of a CNN. We used cluster-classes in a novel loss function to delineate between patches using weakly labeled bags. Our results show that our adapted loss function, CC-CCE, can overcome the shortcomings of traditional CCE, particularly when large weakly labeled bags are used. We also showed that this method can be used in digital pathology to analyse extremely high-resolution pathology images, and could potentially be used to automatically generate annotations in medicine, where the cost of expert annotators is extremely high.
This research is funded by the Canadian Cancer Society and the National Cancer Institute of the National Institutes of Health (grant U24CA199374-01), and was enabled in part by support provided by Compute Canada.
-  Camelyon16: ISBI challenge on cancer metastases detection in lymph nodes. https://camelyon16.grand-challenge.org/, 2016. [Online; accessed 16-June-2018].
-  Jaume Amores. Multiple instance classification: Review, taxonomy and comparative study. Artificial Intelligence, 201:81–105, 2013.
-  T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89:31–71, 1997.
-  R. Dubey, J. Zhou, Y. Wang, P.M. Thompson, and J. Ye. Analysis of sampling techniques for imbalanced data: An n = 648 ADNI study. NeuroImage, 87:220–241, 2014.
-  A. Goode, B. Gilbert, J. Harkes, D. Jukic, and M. Satyanarayanan. OpenSlide: A vendor-neutral software foundation for digital pathology. Journal of Pathology Informatics, 4, 2013.
-  R. Kothandan. Handling class imbalance problem in miRNA dataset associated with cancer. Bioinformation, 11(1):6–10, 2015.
-  Miao Sun, Tony X. Han, Ming-Chang Liu, and Ahmad Khodayari-Rostamabad. Multiple instance learning convolutional neural networks for object recognition. In 23rd International Conference on Pattern Recognition (ICPR), 2016.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826, 2016.
-  M. Valkonen, K. Kartasalo, K. Liimatainen, M. Nykter, L. Latonen, and P. Ruusuvuori. Metastasis detection from whole slide images using local features and random forests. Cytometry A, 91A:555–565, 2017.