Technological advancements allow for both longer life expectancy and higher quality of life. Both increase the demand on medical personnel, who are increasingly expected to perform personalized, patient-specific procedures, such as surgical planning via morphological approaches Fürnstahl et al. (2016) or functional simulation Péan et al. (2017). Even when target anatomical structures are visible in an imaging modality such as MRI, CT, or ultrasound, automatically identifying and delineating (segmenting) them often remains the bottleneck. Due to limited resources for manual annotation, patient-specific procedures are still not common practice for most clinical applications.
In recent years, deep learning (DL) has shown encouraging performance for segmentation when a sufficient amount of annotated data for the anatomical structure of interest is available. Annotating a sufficiently large dataset by medical experts is a time- and hence cost-intensive undertaking. The idea of active learning is to identify the samples that, once annotated, will bring the most value, which can be defined, e.g., as the gain in segmentation performance of the learned model. In an iterative process, the developed framework selects a new set of samples – also referred to as batch-mode active learning – to be manually annotated at each active learning iteration. This is inherently feasible in the clinical environment, where medical experts anyhow annotate small batches of images at different intervals based on their availability between daily clinical responsibilities. In a clinical setting, at each annotation session, image data to be annotated is typically loaded from a picture archiving and communication system (PACS). A pool-based active learning system can thus intervene at that stage, in order to intelligently determine which volumes or which image slices to display and request the user to annotate.
Active learning with DL remains a challenging problem, since DL solutions do not typically generalize well to unseen samples.
Hence, there have been a wide range of approaches in the literature to improve sample selection in active learning.
Most of these works can be grouped under uncertainty and representation based sampling methodologies.
Uncertainty Sampling. In Gal et al. (2017); Gal and Ghahramani (2016), it was shown that dropout layers can be used at inference time to sample from the approximate posterior, so-called Monte Carlo (MC) Dropout. This gives the flexibility to sample as many posteriors as desired at virtually zero added training cost; i.e., training a single model costs roughly the same as before, as opposed to an ensemble. The disagreement among posteriors, e.g., their variance, can then be used to quantify uncertainty. In Matthias et al. (2018), a classification approach was proposed where “pseudo-labels” are assigned to non-annotated samples using a network trained on a small annotated sample set. The objective at an active learning iteration is then to keep prediction accuracy as high as possible on the annotated sample set while using MC Dropout to query the most uncertain non-annotated samples for annotation. In Konyushkova et al. (2019), the proposed method queries patches from 3D volumes using a combination of geometric smoothness priors and novel entropy-based uncertainty measures.
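The MC Dropout idea above can be sketched in a few lines: keep dropout active at inference time, run several stochastic forward passes, and use the disagreement (variance) of the predictions as an uncertainty signal. The toy two-layer network, its weights, and all names below are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, W2, p=0.5, T=20):
    """Toy 2-layer net with inverted dropout kept active at inference.
    Aggregates T stochastic forward passes (MC Dropout) and returns
    the mean prediction and its variance (disagreement) per class."""
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)                # ReLU hidden layer
        mask = rng.random(h.shape) < (1.0 - p)     # Bernoulli dropout mask
        h = h * mask / (1.0 - p)                   # inverted dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())          # stable softmax
        preds.append(e / e.sum())
    preds = np.stack(preds)                         # shape (T, n_classes)
    return preds.mean(axis=0), preds.var(axis=0)

x = rng.normal(size=4)
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))
mean_p, var_p = mc_dropout_predict(x, W1, W2)
uncertainty = var_p.mean()                          # scalar uncertainty score
```

Each pass costs only one extra inference, which is what makes this attractive compared to training an explicit ensemble.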
Representation Sampling. Uncertainty quantification with DL models can lead to out-of-distribution samples being ignored in the active learning process Sener and Savarese (2017b). Consequently, population coverage for active learning is widely investigated. In Sener and Savarese (2017a), the authors propose a greedy sample selection algorithm using the last fully connected layer of a Convolutional Neural Network (CNN) to solve maximum set-cover Feige (1998) between the pool of all images and the union of the currently annotated samples and the next sample to be queried. In Yang et al. (2017), a similar representation sampling method is coupled with uncertainty sampling to tackle active learning for semantic segmentation. The authors compute an uncertainty measure as the variance of predictions from multiple CNNs, where each CNN is trained with a bootstrap of the available dataset. Next, a representative subset of the most uncertain samples is sought by computing the angle between image descriptor vectors, defined as the spatially averaged activation tensor from the CNN layer where the spatial resolution is the coarsest. Distance metric approaches in high-dimensional spaces suffer from the so-called distance concentration François (2008), which is a limitation of both works Sener and Savarese (2017a) and Yang et al. (2017) above.
Note that with the methods described above, the not-yet-annotated dataset is only weakly integrated at any stage prior to quantifying a fitness metric of samples from that dataset. In other words, a posterior estimated from the relatively small annotated dataset is taken to be a good predictor of the complete dataset distribution. This is a strong assumption, especially at early active learning iterations when the annotated set is still small. Powerful tools have been proposed for unsupervised DL, such as Autoencoders (AEs) Bengio et al. (2007), which learn to map (encode) the high-dimensional input space onto a manifold of substantially lower dimension, such that a second mapping function (decoder) can reconstruct the high-dimensional input. Variational autoencoders (VAEs) Kingma and Welling (2013) build on AEs, with additional regularization enforced in the latent space. This regularization constraint penalizes the encoder part of the network such that the training dataset is mapped onto a known prior distribution of some random variables, often modelled as a standard normal distribution. Intuitively, this regularization promotes a continuous latent space of the observed samples. In ideal conditions, this means that by traversing the manifold from the latent vector of one image to another, one can generate realistic samples that gradually change from the former image to the latter. In active learning, having such an embedding space explaining the non-annotated data is a formidable source of information that can readily be exploited to ensure that key samples from the given population are queried for annotation early on. In Zhao et al. (2017), the authors show that the latent space of a VAE can be suboptimal; they hence propose infoVAE, which uses Maximum-Mean Discrepancy (MMD) Gretton et al. (2006); Li et al. (2015) instead of the KL-divergence measure, as MMD learns a more continuous and informative latent space representation. Recently, the authors of Sinha et al. (2019) presented an active learning framework where they train a VAE on all available images in an adversarial fashion, with a discriminator classifying between annotated and non-annotated samples. Sample selection can then be done with the discriminator using the latent space of the trained VAE, potentially solving the distance concentration problem in the high-dimensional spaces of earlier works.
In the medical field, UNet Ronneberger et al. (2015) and DCAN Chen et al. (2016) are among the most popular neural network architectures for segmentation. Most works in the field of medical image analysis have adopted the UNet approach, thanks to its intuitive structure and consistently high performance in pixel-level tasks, e.g., Zeng et al. (2017); Milletari et al. (2016); Ozdemir et al. (2018a); Salehi et al. (2017). On the other hand, DCAN won the 2015 MICCAI Gland Segmentation Challenge Sirinukunwattana et al. (2017). Thanks to its deeply supervised Lee et al. (2015); Guo et al. (2019) architecture, DCAN can be trained faster, making it particularly attractive for active learning Yang et al. (2017).
In earlier work Ozdemir et al. (2018b), we achieved state-of-the-art results in active learning for the segmentation of a shoulder MR dataset. Inspired by Yang et al. (2017), we proposed metrics quantifying both uncertainty and representativeness for selecting the next batch of samples. In contrast to Yang et al. (2017), we used the variance of MC Dropout samples Gal and Ghahramani (2016) as an uncertainty metric, experimented with different representativeness metrics, explored different means of combining uncertainty and representativeness measures, and proposed a latent space regularization term that promotes maximizing its information content during training of the segmentation network. Although maximizing entropy in the latent space can be counter-intuitive for segmentation, our results in Ozdemir et al. (2018b) showed that it can help generate a discriminative representation of the image dataset.
In this work, we approach the representativeness measure from a probabilistic point of view, where we optimize the MMD Gretton et al. (2006) divergence using VAEs to learn meaningful latent features that follow a Gaussian distribution. This is herein studied for a segmentation task using a Bayesian approach for an efficient coverage of the entire set of images with the significantly smaller set of annotated images. Our representation sampling is agnostic to the current and future tasks, i.e., independent of the task. Similarly to Sinha et al. (2019), we herein adopt the idea of VAEs for a low-dimensional representation for sampling. Additionally, we incorporate an uncertainty-based sampling criterion to further promote relevant sample selection. We utilize VAEs particularly with MMD, which was shown in Zhao et al. (2017) to improve latent space representations. Note that for our purposes, in contrast to earlier works, no additional training for representation sampling is needed at new active learning iterations; this is an important advantage, since pools of medical image datasets can be vast, making regular retraining prohibitive in the clinical setting.
Below we define the notations used in this manuscript.
Dataset. Let the pool of all images consist of images and their annotations , the latter of which in an active learning iteration would be partially inaccessible for images not yet annotated. At a given active learning iteration , there would then be a readily annotated dataset . The not-yet-annotated dataset is referred to as , which in practice have only images available. For brevity, we will omit the active learning iteration representation for descriptions within an iteration and use this only for formulations that affect multiple iterations. Note that typically , since active learning would be redundant if the set sizes were of similar cardinality. We will treat these sets as random variables, hence observations from the annotated and the pool image sets then become and , respectively. At an active learning iteration, i.e., prior to each manual annotation session, a method should select a set of samples to be annotated, where . Once annotated by the user, these samples will be appended to the annotated dataset along with their manual annotations, yielding for the next iteration of active learning.
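The iterative query-annotate-append process described above can be summarized as a short loop. This is a minimal sketch of pool-based batch-mode active learning under the notation above, not the paper's implementation; `annotate_fn` and `select_fn` are hypothetical placeholders for the oracle (manual annotator) and the sample-selection strategy.

```python
import numpy as np

def active_learning_loop(pool_images, annotate_fn, select_fn, n_s=8, iterations=3):
    """Pool-based batch-mode active learning: at each iteration, query
    n_s not-yet-annotated samples chosen by select_fn, have them
    annotated, and append them to the annotated set."""
    n = len(pool_images)
    annotated = set()
    labels = {}
    for _ in range(iterations):
        candidates = [i for i in range(n) if i not in annotated]
        # select_fn ranks candidates given the pool and the annotated indices
        queries = list(select_fn(pool_images, sorted(annotated), candidates))[:n_s]
        for i in queries:                      # one manual annotation session
            labels[i] = annotate_fn(pool_images[i])
            annotated.add(i)
    return sorted(annotated), labels

# Random querying as the simplest selection strategy (a naive baseline).
rng = np.random.default_rng(0)
random_select = lambda pool, ann, cand: rng.permutation(cand)
idx, lab = active_learning_loop(np.zeros((100, 8, 8)),
                                lambda im: im.sum(), random_select)
```

The selection strategy is the only interchangeable component; the methods compared later in this paper differ precisely in how `select_fn` is defined.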
Architecture. The architecture of our fully convolutional networks (FCNs) for segmentation follows a DCAN-like structure Yang et al. (2017), where the receptive field of the convolutional kernels increases through max-pooling operations, creating spatially coarser feature maps while increasing the number of feature channels being learned. We call the spatially coarsest level of the network the abstraction layer Ozdemir et al. (2018b), which is relevant for the baseline method we will be comparing against. Segmentation models are trained using pairs of images and annotations. For all VAE-based methods, the learned embedding space is defined by latent variables. VAE models are trained using only images. Without loss of generality, different network architectures can also be envisioned for our proposed active learning approach. What is essential is to accommodate the necessary modules in the segmentation model to be able to quantify uncertainty, and to estimate a latent space that can represent the image population in the form of a normal distribution for representativeness quantification.
2.2 Quantifying Uncertainty
Model uncertainty expected from segmenting a non-annotated image is undoubtedly one of the most important cues to aim for in active learning. However, uncertainty is not inherently quantified in most CNNs. Consider a conventional supervised segmentation task using dataset . For an observation , the task can be formulated as computing the maximum a posteriori , where is the set of learned model parameters using and . This can be formulated as
where the maximum a posteriori for is instead learned, due to the impracticality of integrating over the high-dimensional . This leads to deterministic predictions for . In order to approximate , MC Dropout was proposed in Gal et al. (2017) to sample from the model parameters, aggregating any desired number of posterior predictions at the cost of only additional inference operations.
In order to leverage the benefits of MC Dropout, we modify the DCAN architecture Yang et al. (2017) with additional spatial dropout layers Tompson et al. (2015), similarly to Ozdemir et al. (2018b). First, we infer a tensor of segmentation predictions for label given each draw of model parameters depending on the random dropouts, where is the number of MC Dropout samples, and is the number of the input image pixels. Next, we compute the uncertainty map for label as the variance of each pixel prediction over inferences. Finally, we compute a scalar uncertainty measure as the spatial average of this uncertainty map, yielding
where is the vector of predictions at pixel . In the multi-class setting where each anatomy is similarly important, we estimate the model uncertainty for the segmentation task as the mean of scalar uncertainty measures for each segmentation label.
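The aggregation just described — per-pixel variance over MC Dropout inferences, spatial averaging per label, then averaging over labels — can be sketched directly in numpy. This assumes the T softmax maps from the stochastic forward passes are already stacked into one array; shapes and names are illustrative.

```python
import numpy as np

def segmentation_uncertainty(mc_probs):
    """mc_probs: array of shape (T, H, W, L) holding softmax maps from
    T MC Dropout forward passes over L labels.
    Returns (scalar uncertainty, per-pixel uncertainty maps)."""
    var_map = mc_probs.var(axis=0)          # (H, W, L): variance over the T passes
    per_label = var_map.mean(axis=(0, 1))   # spatial average -> one scalar per label
    return per_label.mean(), var_map        # mean over labels (equal importance)

rng = np.random.default_rng(0)
probs = rng.random((20, 32, 32, 4))         # 20 passes, 32x32 image, 4 labels
u, umap = segmentation_uncertainty(probs)
```

Averaging over labels reflects the multi-class setting in which each anatomical structure is treated as equally important.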
2.3 Maximum Likelihood Sampling in Latent Space
Note that the above quantification of model uncertainty for an observation is conditioned on the annotated dataset, but not on the complete pool of images. The latter is ideally needed for a good sample prediction for the image population. Below, we describe an approach to take into account the potential domain shift from the already-annotated set to the entire dataset using unsupervised learning.
The goal is to populate an image set such that it provides a sufficiently good representative summary of the pool. For this purpose, consider a mapping function where each observation from the pool is mapped onto a continuously defined latent space with a desired probability distribution, e.g., a multivariate normal.
Intuitively, a batch of new queries for manual annotation from after an active learning iteration should represent the distribution statistics of with an emphasis on the space that is unlikely for the distribution of . In other words, queried samples should not be redundant due to readily existing samples in . Provided that the mode of the latent space will encode the most frequent attributes of , the ideal sample can be queried as
Over iterations, samples queried based on will align the posteriors and , making representations of observations from cover both breadth and mode of , hence achieving the desired objective. To compute Eq. (3), we utilize Bayesian inference as
The right hand side contains the equivalent of the posterior , allowing for a simpler representation as,
In order to approximate , we train an infoVAE Zhao et al. (2017) with the complete pool of images using MMD for latent space regularization as , where
is the prior, is the posterior inference in the latent space via the encoder, and is the distance in a kernel space. We choose to have a standard normal distribution and use a Gaussian as the kernel mapping , where . Thereon, the true posterior inference is approximated with , where is the learned parameter set of the infoVAE encoder. Hence, we can approximate Eq. (3) as
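The MMD regularizer with a Gaussian kernel can be estimated from two sample sets, as in the following numpy sketch. This is only an illustration of the MMD statistic itself (the infoVAE encoder and its training loop are omitted); the sample sizes, bandwidth `sigma`, and variable names are assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian kernel matrix k(a_i, b_j) between two sample sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(z_q, z_p, sigma=1.0):
    """Biased MMD^2 estimate between encoder samples z_q ~ q(z)
    and prior samples z_p ~ p(z): E[k(q,q)] + E[k(p,p)] - 2 E[k(q,p)]."""
    return (gaussian_kernel(z_q, z_q, sigma).mean()
            + gaussian_kernel(z_p, z_p, sigma).mean()
            - 2.0 * gaussian_kernel(z_q, z_p, sigma).mean())

rng = np.random.default_rng(0)
z_prior = rng.standard_normal((256, 8))           # p(z) = standard normal
z_enc_good = rng.standard_normal((256, 8))        # matches the prior -> small MMD
z_enc_bad = rng.standard_normal((256, 8)) + 3.0   # shifted encoder -> large MMD
assert mmd2(z_enc_good, z_prior) < mmd2(z_enc_bad, z_prior)
```

Minimizing this quantity during training pushes the encoder's latent distribution toward the standard normal prior, which is what later allows the fitted-Gaussian likelihood computations below.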
and Eq. (5) as . Accordingly, we project samples from (i) and (ii) onto the latent space of the infoVAE. Next, to compute , we fit a multivariate diagonal Gaussian to both projections, separately. Finally, we estimate the likelihoods and using the error function , as follows:
are the parameters of the fitted Gaussians. In other words, we use the first half of the cumulative distribution function of the fitted Gaussian since it is symmetric around its expected value.
Fig. 1 illustrates this for a toy example, where would be selected based on maximizing the likelihood of being sampled from the non-normalized distribution shown with the dashed red Gaussian. Additional experiments corroborating our intuition are provided in the Appendix.
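The likelihood-ratio query described above can be sketched as follows: fit a diagonal Gaussian to the latent projections of the pool and of the annotated set, evaluate each per-dimension likelihood via the folded (symmetric) half of the Gaussian CDF using the error function, and query the sample maximizing the pool-to-annotated likelihood ratio. This is a simplified numpy sketch under those assumptions, not the exact implementation.

```python
import numpy as np
from math import erf

def half_cdf_likelihood(z, mu, sigma):
    """Per-dimension likelihood from the lower half of the Gaussian CDF,
    folded around the mean (exploiting its symmetry), multiplied over
    the latent dimensions. Maximal when z is at the fitted mean."""
    u = -np.abs(z - mu) / (sigma * np.sqrt(2.0))
    cdf = 0.5 * (1.0 + np.vectorize(erf)(u))    # values in (0, 0.5]
    return np.prod(cdf)

def query_score(z, stats_pool, stats_ann):
    """Ratio of pool likelihood to annotated-set likelihood: favors
    samples typical for the pool but atypical for the annotated set."""
    lp = half_cdf_likelihood(z, *stats_pool)
    la = half_cdf_likelihood(z, *stats_ann)
    return lp / max(la, 1e-30)                   # guard against underflow

rng = np.random.default_rng(0)
z_pool = rng.standard_normal((500, 4))           # latent codes of the pool
z_ann = rng.standard_normal((20, 4)) + 1.5       # annotated set: shifted subset
stats_p = (z_pool.mean(0), z_pool.std(0))        # fitted diagonal Gaussians
stats_a = (z_ann.mean(0), z_ann.std(0))
scores = [query_score(z, stats_p, stats_a) for z in z_pool]
best = int(np.argmax(scores))                    # next sample to query
```

Over iterations, repeatedly querying the maximizer of this ratio pulls the annotated set's latent distribution toward that of the pool.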
2.4 Comparative Evaluation
We define 5 methods for analysis and comparison:
; a simplistic baseline approach of randomly selecting the samples to annotate; i.e., random querying of samples.
; the most uncertain samples based on Sec. 2.2 are queried in each active learning iteration.
; a baseline similar to Yang et al. (2017), with the main difference being additional spatial dropout layers in the architecture, and using the uncertainty metric described in Sec. 2.2 (instead of training 3 FCNs with different bootstrapped subsets of the available and using variance across FCNs as in Ozdemir et al. (2018b)). Consequently, the computational cost is reduced to a third and the entire is observed by the trained model. To be precise, first a set of the most uncertain elements from the non-annotated dataset is selected. Next, the image descriptor of each sample in is computed as the global average pooling applied at the coarsest layer activations, where is the number of feature channels of the corresponding layer. The representativeness metric can then be computed using the following similarity measure
between the two vectors for any two images and . In an iterative manner, we populate a representative sample set by adding the currently most representative sample via Yang et al. (2017)
This greedily maximizes the maximum set-cover Feige (1998) objective on , based on the above similarity metric.
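The greedy max set-cover selection over image descriptors can be sketched as below. Descriptors here stand in for the spatially averaged coarsest-layer activations; the similarity is cosine (i.e., the angle between descriptor vectors), and all sizes and names are illustrative.

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def greedy_representative(descriptors, candidates, n_pick):
    """Greedy max set-cover: repeatedly add the candidate that most
    increases the summed best-similarity coverage of the whole pool
    (in the spirit of Yang et al., 2017)."""
    chosen = []
    best_cover = np.full(len(descriptors), -np.inf)  # best similarity so far per image
    for _ in range(n_pick):
        gains = []
        for c in candidates:
            sims = np.array([cosine_sim(descriptors[c], d) for d in descriptors])
            gains.append(np.maximum(best_cover, sims).sum())
        pick = candidates[int(np.argmax(gains))]
        sims = np.array([cosine_sim(descriptors[pick], d) for d in descriptors])
        best_cover = np.maximum(best_cover, sims)    # update coverage
        chosen.append(pick)
        candidates = [c for c in candidates if c != pick]
    return chosen

rng = np.random.default_rng(0)
desc = rng.standard_normal((40, 16))   # one descriptor per pool image
picked = greedy_representative(desc, list(range(40)), n_pick=4)
```

In the baseline, `candidates` would be restricted to the most uncertain samples from the current uncertainty ranking, so that representativeness refines rather than replaces the uncertainty criterion.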
; Bayesian sample querying, our proposed method, selects samples to be annotated based on the intersection of the most uncertain (Sec. 2.2) and representative samples following Eq. (7). Specifically, we first select the most uncertain samples from . Then, we form following Eq. (7) with samples to be queried for annotation for the next active learning iteration.
; the upper bound using as a reference in our quantitative analysis. The upper bound uses the same segmentation architecture as the above compared methods, but is trained on the complete in a supervised setting; i.e., assuming we already know all annotations at each sample query iteration.
For all compared methods, we used a modified DCAN architecture Ozdemir et al. (2018b) trained on 2D image input for the segmentation network, using an inverse-frequency-weighted cross-entropy loss. Horizontal-flip data augmentation was randomly applied to images with 0.5 uniform probability during training. The Adam optimizer was used with a learning rate of and a mini-batch size of 8 images. For both training and inference, the dropout rate was set to 0.5, with MC samples. At each active learning iteration, including the initial training, models were trained for 8000 steps. We trained an infoVAE with 5 convolutional blocks in both the encoder and decoder on downsampled images of size , and set the dimensionality of the latent space to . For infoVAE training, the Adam optimizer with a learning rate of and a mini-batch size of 32 images was used. Image-wise normalization was applied as preprocessing for the infoVAE training. We used the -norm for the reconstruction loss. The methods were implemented and tested with the TensorFlow library on a cluster of NVIDIA Titan X GPUs.
| Setting | #volumes | vox res. [mm] | digital res. [px] |
|---|---|---|---|
| #1 | 20 | 0.91 x 0.91 x 3.0 | 192 x 192 x 64 |
| #2 | 16 | 0.83 x 0.83 x 3.0 | 144 x 144 x 56 |
| Total | 36 | 0.91 x 0.91 x 3.0 | 192 x 192 x |
Dataset. We conducted experiments on a magnetic resonance imaging (MRI) dataset of 36 shoulders acquired with a Dixon sequence under two slightly varying acquisition settings, resulting in the specifications shown in Table 1. For a more uniform dataset, images of the higher-resolution setting #2 were bilinearly interpolated to match the voxel resolution of the coarser dataset, and then zero-padded to match the digital resolution of the images of setting #1. The data has expert annotations of two bones (humerus & scapula) and two muscle groups (supraspinatus & infraspinatus + teres minor). A cross-sectional view of two subjects along with the superimposed expert annotations is shown in Fig. 2. Ground-truth annotations of setting #2 were resized to match setting #1 using nearest-neighbor interpolation. All experiments were conducted on the Total dataset listed in Table 1.
Evaluation Metrics. For quantitative results, we evaluated the Dice coefficient and mean surface distance (MSD), two commonly used metrics in medical image segmentation. The Dice score is , where is the binary predicted segmentation mask and the ground-truth mask for label . MSD is computed symmetrically between the contours of the segmentation prediction () and the ground truth () for each label as,
where is the closest Euclidean distance from point to surface . To compute the contour for a binary mask, we subtract its morphologically eroded version from itself using an erosion kernel of . Average Dice and MSD scores over the four given anatomical structures of interest are reported herein.
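Both metrics can be computed with plain numpy as below, including the erosion-based contour extraction described above. The 3x3 erosion kernel, mask sizes, and names are illustrative assumptions; a real pipeline would use image-processing routines on the full-resolution label maps.

```python
import numpy as np

def dice(pred, gt):
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def contour(mask):
    """Boundary pixels: mask minus its morphological erosion (3x3 box kernel)."""
    padded = np.pad(mask, 1)
    eroded = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            eroded &= padded[1 + dy:1 + dy + mask.shape[0],
                             1 + dx:1 + dx + mask.shape[1]].astype(bool)
    return mask.astype(bool) & ~eroded

def msd(pred, gt):
    """Symmetric mean surface distance between the two contours."""
    cp = np.argwhere(contour(pred))
    cg = np.argwhere(contour(gt))
    d = np.linalg.norm(cp[:, None, :] - cg[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.zeros((20, 20), bool); a[5:15, 5:15] = True   # 10x10 square
b = np.zeros((20, 20), bool); b[6:16, 6:16] = True   # shifted by (1, 1)
d_ab = dice(a, b)   # overlap of two shifted squares
```

Reported results average these per-label scores over the four anatomical structures of interest.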
Experimental Setup. Typically, expert annotations on MR volumes are conducted for all image slices of a volume at once when this is fetched manually from the PACS. However, this may lead to suboptimal use of limited annotation resources due to redundancy of annotating potentially similar images in a volume. A PACS compatible software can indeed fetch only the desired slices (2D images) from various volumes for annotation. Therefore, we conducted experiments for both slice-based and volume-based active learning. The former assumes the feasibility of random slice access and annotation query within whereas the latter treats each subject volume as an indivisible entity.
In an effort to efficiently utilize the available dataset, we generated 5 holdout sets using a pseudo-random number generator, where dataset splits were performed to roughly respect a training/validation/test ratio of , with each subject being strictly in a single set. This yields the following numbers of subjects: 25/2/9. Then, slices of roughly one volume (i.e., 64 slices for slice-based and all slices of a single subject for volume-based experiments) were randomly picked for each holdout set to define the initial training set; this initial set was kept constant across tests of different methods to ensure comparability.
2D Image Slices. All slice-based experiments were initially trained on 64 slices. For every active learning iteration, and is used. In Fig. 3, we show the Dice score and MSD of different methods over active learning iterations evaluated using the test set over 11 iterations, representing annotations from up to of the complete set .
One can see that all compared methods achieve higher segmentation performance than randomly querying samples (). While the holdout set averages of Dice and MSD of and sometimes intersect, our proposed () clearly outperforms all compared methods, shown as the purple curve in Fig. 3. To highlight the improvement that our proposed method brings over the baseline, we also present in Fig. 4 the Dice score difference of the two methods with the highest quantitative performance, i.e., and .
Dice score difference for each corresponding holdout set, represented as box plots of their quartiles, between the top two competing methods and . Red lines show the median value, the blue boxes range from the 25th to the 75th percentiles, and purple stars show the mean values. Overall positive values show the superiority of over , especially at earlier iterations.
In Table 2, we list the mean and standard deviation of Dice score differences from the upper bound at different active learning iterations for the top two performing methods, and . Therein, one can see the percentage of the dataset that was annotated for these two methods in order to reach a segmentation performance within different tolerance limits from the upper bound.
| 19.8 (4.5) | 12.2 (2.7) | 8.8 (3.2) | 5.9 (2.8) | 4.8 (2.3) | 3.6 (1.8) | 3.1 (1.7) | 2.6 (1.6) | 1.9 (1.5) | 1.5 (1.6) | 2.7 (1.3) | 1.1 (1.6) |
| 18.5 (5.3) | 10.6 (2.6) | 7.4 (2.0) | 5.3 (1.7) | 3.7 (1.4) | 2.8 (1.3) | 2.5 (1.3) | 2.0 (1.2) | 1.5 (1.2) | 1.2 (1.1) | 1.2 (1.1) | 0.7 (1.3) |
3D Image Volumes. In these experiments, the networks were initially trained on the slices of a single random subject (), and an active learning iteration consists of evaluating the respective scores of each method as an aggregation over a complete subject volume. For and , the set sizes of and are fixed to volumes and volume.
Dice score and MSD of the compared methods for volume-based experiments are shown in Fig. 5. Segmentation performance is evaluated at every active learning iteration, for a total of 11 iterations, using the same test set as in the slice-based experiments. In the volume-based results, where annotations of entire volumes are added at each active learning iteration, the advantage of the compared methods appears more subtle, due to the larger range of Dice scores; e.g., Dice scores ranging approximately from 0.3 to 0.9, as opposed to 0.65 to 0.9 in Fig. 3. The Dice score improvement of over can be seen in Fig. 3(b), where we show their Dice score differences for each holdout set as boxplots. One can see that improves the average Dice score over at every evaluation point (cf. Fig. 3(b), purple stars). To precisely quantify the Dice score gap of the two competing methods from the upper bound, we present the mean and standard deviation of the Dice score differences of and in Table 3.
| 55.9 (6.0) | 37.4 (4.1) | 27.4 (5.5) | 18.7 (8.7) | 14.5 (7.3) | 12.3 (5.1) | 10.5 (5.3) | 8.7 (2.8) | 6.3 (2.1) | 5.2 (1.8) | 5.6 (1.4) | 4.0 (1.4) |
| 52.4 (5.0) | 34.5 (2.1) | 25.2 (6.4) | 17.2 (4.1) | 12.6 (3.7) | 9.5 (4.1) | 7.5 (2.4) | 5.4 (1.9) | 4.9 (1.8) | 3.7 (1.6) | 3.2 (1.5) | 2.8 (0.9) |
Our preliminary experiments with a standard VAE, compared to infoVAE, corroborate the claims in Zhao et al. (2017) that the variance of the latent space is overestimated. Furthermore, active learning of segmentation through Bayesian sample querying using the above-mentioned VAE network trained on showed lower performance compared to . Since we are strictly interested in the representational power of the latent variables for a given image, the poorer performance on active learning evaluations indirectly supports the claim of “learning un-informative latent variables” when using the KL divergence Zhao et al. (2017).
The advantage of over only becomes evident after a sufficiently large is achieved ( in Fig. 3). One can draw a similar conclusion from Fig. 3(a), where the superiority of our proposed method is most prominent in early iterations of active learning and decreases almost monotonically over time. This is in line with our previous findings in Ozdemir et al. (2018b) that an image-descriptor-based representativeness metric for may be redundant, if not adverse, until an adequate portion of the complete set is annotated.
All methods in the 3D volume experiment approach the upper bound at a slower rate compared to the slice-based experiment (cf. Tables 2 & 3). This may be due to having fewer options to select from (i.e., a total of 24 volumes at the first active learning iteration) compared to the slice-based setting. This hypothesis is in line with the reasonable expectation that certain slices carry significant importance for the segmentation task while others (e.g., at the borders of the field-of-view) are less important in a given volume, whereas when a volume is taken as a whole, the utilizable information therein is more uniform. Another point of interest is that after 6 volumes, achieves performance closer to . This may be because our dataset consists of 2 different settings (cf. Table 1): may have queried key sample volumes from both settings early on to represent the Total dataset, while may have only seen key samples after roughly 5 iterations of active learning. Another explanation can come from the design choice of assigning volumes and volume, heavily restricting the sequential representativeness metric to pick one of the two options.
Upon comparison of the slice-based versus volume-based experimental setups, one can see the importance of querying slices as opposed to full volumes (e.g., Dice score gap from the upper bound in early iterations of active learning on Tables 2 & 3), i.e., achieving better outcomes with less effort from experts. Furthermore, a Dice score gap of approximately from the upper bound is achieved with after merely 8 volumes ( of ) in volume-based experiments, whereas a similar score is reached as early as for slice-based experiments. In slice-based experiments, this gap drops to when less than a fifth () of the images are annotated; which equals to only 2D image annotations, yielding a sufficiently high performance, compared to annotations necessitated for the upper bound scenario.
In this work, we have proposed a novel method to quantify the representativeness of a sample from a large unsupervised dataset using Bayesian inference in the latent space of MMD VAEs. We have shown that, by using a learned mapping function onto a simple latent space and selecting samples to align probability distributions in this space, the representational power of a subset of samples approaches that of the complete set, for the complex case of MR imaging.
Our results support the proposed approach as a suitable candidate for sample querying in active learning for segmentation. Although our experimental dataset already harbors domain variation from two different acquisition settings, additional diversity is common in the clinical setting. Consequently, the advantage of our proposed sample-picking approach is expected to be even more pronounced there, achieving a good coverage of the complete pool of images with only a few active learning iterations and annotations. The main hypothesis herein is the representability of a dataset in the latent space as a continuous Gaussian distribution. Future work shall investigate other means of dataset representation.
- Bengio et al. (2007) Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H., 2007. Greedy layer-wise training of deep networks, in: Int Conf on Neural Information Processing Systems (NeurIPS), pp. 153–160.
- Bromiley (2003) Bromiley, P., 2003. Products and convolutions of gaussian probability density functions.
- Chen et al. (2016) Chen, H., Qi, X., Yu, L., Dou, Q., Qin, J., Heng, P.A., 2016. DCAN: Deep contour-aware networks for object instance segmentation from histology images. Medical Image Analysis 36, 135–146.
- Cohen et al. (2017) Cohen, G., Afshar, S., Tapson, J., van Schaik, A., 2017. Emnist: Extending mnist to handwritten letters, in: IEEE Int Joint Conf on Neural Networks (IJCNN), pp. 2921–2926.
- Feige (1998) Feige, U., 1998. A threshold of ln n for approximating set cover. J. ACM 45, 634–652.
- François (2008) François, D., 2008. High-dimensional data analysis, in: From Optimal Metric to Feature Selection. VDM Verlag Saarbrucken, Germany, pp. 54–55.
- Fürnstahl et al. (2016) Fürnstahl, P., Schweizer, A., Graf, M., Vlachopoulos, L., Fucentese, S., Wirth, S., Nagy, L., Szekely, G., Goksel, O., 2016. Surgical treatment of long-bone deformities: 3D preoperative planning and patient-specific instrumentation, in: Computational radiology for orthopaedic interventions. Springer, pp. 123–149.
- Gal and Ghahramani (2016) Gal, Y., Ghahramani, Z., 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning, in: Int Conf on Machine Learning (ICML), pp. 1050–1059.
- Gal et al. (2017) Gal, Y., Islam, R., Ghahramani, Z., 2017. Deep bayesian active learning with image data, in: Int Conference on Machine Learning, pp. 1183–1192.
- Gretton et al. (2006) Gretton, A., Borgwardt, K.M., Rasch, M., Schölkopf, B., Smola, A.J., 2006. A kernel method for the two-sample-problem, in: Int Conf on Neural Information Processing Systems (NeurIPS), pp. 513–520.
- Guo et al. (2019) Guo, S., Wang, K., Kang, H., Zhang, Y., Gao, Y., Li, T., 2019. BTS-DSN: Deeply supervised neural network with short connections for retinal vessel segmentation. International Journal of Medical Informatics 126, 105–113.
- Kingma and Welling (2013) Kingma, D.P., Welling, M., 2013. Auto-encoding variational bayes. arXiv preprint:1312.6114 .
- Konyushkova et al. (2019) Konyushkova, K., Sznitman, R., Fua, P., 2019. Geometry in active learning for binary and multi-class image segmentation. Computer Vision and Image Understanding 182, 1–16.
- LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al., 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 2278–2324.
- Lee et al. (2015) Lee, C.Y., Xie, S., Gallagher, P., Zhang, Z., Tu, Z., 2015. Deeply-supervised nets, in: Artificial Intelligence and Statistics (AISTATS), pp. 562–570.
- Li et al. (2015) Li, Y., Swersky, K., Zemel, R., 2015. Generative moment matching networks, in: Int Conf on Machine Learning (ICML), pp. 1718–1727.
- Matthias et al. (2018) Matthias, R., Karsten, K., Hanno, G., 2018. Deep bayesian active semi-supervised learning, in: IEEE Int Conf on Machine Learning and Applications (ICMLA), pp. 158–164.
- Milletari et al. (2016) Milletari, F., Navab, N., Ahmadi, S.A., 2016. V-Net: Fully convolutional neural networks for volumetric medical image segmentation, in: IEEE Int Conf on 3D Vision (3DV), pp. 565–571.
- Ozdemir et al. (2018a) Ozdemir, F., Fuernstahl, P., Goksel, O., 2018a. Learn the new, keep the old: Extending pretrained models with new anatomy and images, in: Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 361–369.
- Ozdemir et al. (2018b) Ozdemir, F., Peng, Z., Tanner, C., Fuernstahl, P., Goksel, O., 2018b. Active learning for segmentation by optimizing content information for maximal entropy, in: MICCAI Workshop on Deep Learning in Medical Image Analysis. Springer, pp. 183–191.
- Péan et al. (2017) Péan, F., Carrillo, F., Fürnstahl, P., Goksel, O., 2017. Physical simulation of the interosseous ligaments during forearm rotation. EPiC Series in Health Sciences 1, 181–188.
- Ronneberger et al. (2015) Ronneberger, O., Fischer, P., Brox, T., 2015. U-net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 234–241.
- Salehi et al. (2017) Salehi, M., Prevost, R., Moctezuma, J.L., Navab, N., Wein, W., 2017. Precise ultrasound bone registration with learning-based segmentation and speed of sound calibration, in: Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 682–690.
- Sener and Savarese (2017a) Sener, O., Savarese, S., 2017a. Active learning for convolutional neural networks: A core-set approach. arXiv preprint:1708.00489 .
- Sener and Savarese (2017b) Sener, O., Savarese, S., 2017b. A geometric approach to active learning for convolutional neural networks. arXiv preprint:1708.00489v1 .
- Sinha et al. (2019) Sinha, S., Ebrahimi, S., Darrell, T., 2019. Variational adversarial active learning, in: IEEE Int Conf on Computer Vision (ICCV).
- Sirinukunwattana et al. (2017) Sirinukunwattana, K., Pluim, J.P., Chen, H., Qi, X., Heng, P.A., 2017. Gland segmentation in colon histology images: The glas challenge contest. Medical Image Analysis 35, 489 – 502.
- Tompson et al. (2015) Tompson, J., Goroshin, R., Jain, A., LeCun, Y., Bregler, C., 2015. Efficient object localization using convolutional networks, in: IEEE Conf on Computer Vision and Pattern Recognition (CVPR), pp. 648–656.
- Yang et al. (2017) Yang, L., Zhang, Y., Chen, J., Zhang, S., Chen, D.Z., 2017. Suggestive annotation: A deep active learning framework for biomedical image segmentation, in: Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 399–407.
- Zeng et al. (2017) Zeng, G., Yang, X., Li, J., Yu, L., Heng, P.A., Zheng, G., 2017. 3D U-Net with multi-level deep supervision: Fully automatic segmentation of proximal femur in 3D MR images, in: Machine Learning in Medical Imaging (MLMI), pp. 274–282.
- Zhao et al. (2017) Zhao, S., Song, J., Ermon, S., 2017. Infovae: Information maximizing variational autoencoders. arXiv preprint:1706.02262 .
Appendix A: Further Analysis of Bayesian Sample Querying
In this section, we conduct additional experiments to both visualize and quantify the representativeness of our Bayesian Sample Querying (BSQ) approach. Although this work has investigated active learning for segmentation, experiments on simpler image-level classification tasks can convey the merits of BSQ more clearly. For instance, there are grayscale-image datasets of handwritten digits (e.g., MNIST LeCun et al. (1998) with 10 classes) and of additional upper- and lower-case letters (e.g., EMNIST Cohen et al. (2017) with 62 classes). Under the assumption that each character has different representative attributes, one can observe the change in entropy over the proportions of class labels for the queried samples. Ideally, the distribution over class-label proportions within the annotated dataset should become uniform over time, causing the entropy to increase. However, the categorically defined class labels can only serve as an auxiliary measure, since visual attributes, e.g., the number of strokes, are not equally distant between classes. Furthermore, the degree of variation among samples of different digits and letters varies heavily. Consequently, one can attempt to observe and interpret the learned latent space under different constraints, such as initially class-imbalanced annotation sets.
A.1 Experiment Setup & Results
Assuming that a latent space can capture all necessary degrees of variation using a few dimensions, we train an infoVAE for two setups: (1) the MNIST dataset (, k samples) using a dimensional latent space and (2) the combined MNIST and EMNIST datasets (, k samples) using a dimensional latent space, with a simple, mostly convolutional architecture similar to Zhao et al. (2017). Next, we conduct 5 experiments, each simulating a different imbalanced draw of the initial set of annotated data. Accordingly, samples are drawn randomly from the dataset with a probability depending on their class, where the priors of 3 randomly selected classes are reduced by an order of magnitude in each experiment. This is followed by iterations of representative sample queries, where the queried sample indices are determined using Eq. (7). Note that no uncertainty measure is involved, since we are not assessing classifier performance. Code will be made publicly available at https://github.com/firatozdemir/AL-BSQ.
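The class-imbalanced initial draw described above can be sketched as follows. This is a minimal illustration, not the authors' released code: the function name, the exact suppression factor, and the pool size are illustrative assumptions; only the idea of reducing the sampling priors of a few classes by an order of magnitude comes from the text.

```python
import numpy as np

def draw_initial_set(labels, n_draw, reduced_classes, factor=0.1, seed=0):
    """Draw an initial annotated set with reduced priors for some classes.

    labels: (N,) int array of class labels for the pool.
    reduced_classes: classes whose sampling prior is scaled by `factor`
    (an order of magnitude by default, as in the described experiments).
    """
    rng = np.random.default_rng(seed)
    # Start from uniform per-sample weights, then down-weight samples
    # belonging to the reduced classes.
    w = np.ones(len(labels), dtype=float)
    w[np.isin(labels, reduced_classes)] *= factor
    p = w / w.sum()
    return rng.choice(len(labels), size=n_draw, replace=False, p=p)

# Example: a 10-class pool where classes 2, 5, and 7 are suppressed.
labels = np.random.default_rng(1).integers(0, 10, size=10000)
idx = draw_initial_set(labels, n_draw=100, reduced_classes=[2, 5, 7])
```

Samples of the suppressed classes then appear in the initial annotated set roughly ten times less often than their pool proportion, which is the imbalance that the subsequent representative queries must counteract.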
Class Entropy. As aforementioned, one can compute the entropy across classes, i.e., H = −∑_c p_c log(p_c), where p_c is the proportion of samples with class label c in the annotated dataset, at each sample query iteration. The resulting entropy values as new representative samples are queried for setups (1) and (2), respectively, are shown in Fig. 6. It can be observed that for each experiment in both dataset setups, the entropy value, as expected, increases steadily over iterations as new samples are queried.
Latent Space Coverage. Alternatively, one can observe the evolution of the fitted normal distributions in the latent space. Let the learned encoder of the corresponding experimental setup be the mapping function onto the latent space. For each query iteration, we map all samples from the annotated set to the respective learned latent space and calculate the mean and standard deviation in each dimension to fit a multivariate diagonal Gaussian. In order to qualitatively present the parameters of the fitted Gaussians in 1D, we treat each dimension of the multivariate Gaussian as a univariate Gaussian and compute the parameters of the product of these univariate Gaussians following Bromiley (2003). In Fig. 7, the evolution of the fitted Gaussians is shown for all five experiments throughout 30 query iterations for both setups (1) and (2), where the y-axis is shifted to the mean value of the entire pool of images. It can be seen that the first few iterations counter the shifted mean of the imbalanced initial annotated set, while the following iterations evolve around the mean of the pool. Eq. (7) promotes selecting samples away from already annotated ones, which encourages covering the same range of representative attributes as the pool with substantially fewer samples. Consequently, the annotated set has a higher standard deviation in the representative space, which is also observed in Fig. 7. These empirical results corroborate the hypothesis that Eq. (7) promotes selecting samples that cover the mode and breadth of the distribution of the image pool in the representational space.
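The product-of-Gaussians reduction from Bromiley (2003) used above has a closed form: precisions add, and the product mean is the precision-weighted average of the individual means. A minimal sketch (function name illustrative, not from the authors' code):

```python
import numpy as np

def product_of_gaussians(mus, sigmas):
    """Parameters of the (renormalized) product of univariate Gaussians.

    Following Bromiley (2003):
        1/sigma_prod^2 = sum_i 1/sigma_i^2
        mu_prod = sigma_prod^2 * sum_i (mu_i / sigma_i^2)
    """
    mus = np.asarray(mus, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    precisions = 1.0 / sigmas**2       # precisions add under products
    var = 1.0 / precisions.sum()
    mu = var * (precisions * mus).sum()
    return mu, np.sqrt(var)

# Two identical unit Gaussians: same mean, std shrinks by 1/sqrt(2).
mu, sigma = product_of_gaussians([0.0, 0.0], [1.0, 1.0])
```

Applying this reduction across the latent dimensions collapses each fitted diagonal Gaussian to a single 1D Gaussian, which is what allows the evolution over query iterations to be plotted as in Fig. 7.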