While deep learning has achieved great success in computer vision tasks ranging from image classification to detection and segmentation, these successes continue to require large amounts of labeled training data. This is a significant challenge as practitioners increasingly apply deep learning models to new problems in diverse domains ranging from medicine to sustainability, placing a suboptimal and sometimes prohibitive burden on domain experts to label large amounts of training data. Active learning, where training examples are incrementally selected for labeling to yield high classification accuracy at low labeling budgets, has therefore emerged as an exciting paradigm with significant potential for democratizing the use of deep learning.
Many of the most successful approaches for active learning thus far have been based on pool-based active learning, where small subsets of examples from a large unlabeled pool are iteratively selected for labeling based on an acquisition function that assesses how informative the subset is expected to be for the training process. As the selected subsets are labeled by an oracle (i.e., a human annotator), they are added to a labeled dataset that is used to update the classifier being trained. Much work in active learning has focused on developing effective acquisition functions, including those that select examples which produce high classifier uncertainty, that are expected to lead to a high improvement in a Bayesian framework, or that are not well represented in the labeled set. However, at small labeling budgets, all of these approaches still suffer from a significant gap in performance when compared with training on large labeled datasets.
In this work, we introduce an active learning model based on the key observation that pool-based active learning approaches, which have access to a large unlabeled pool of data, can more effectively use this unlabeled data in the incremental training of the classifier itself. Particularly at small labeling budgets, this unlabeled data, when directly integrated with the training of the classifier, can provide valuable information about the underlying structure of the data, which is difficult to obtain from very few training examples. We propose a model that utilizes a state-of-the-art variational adversarial acquisition function (which aims to select examples not well represented in the training set), but within a framework for efficiently training the classifier using both labeled and unlabeled data. Importantly, we perform the semi-supervised training by embedding the classifier within a GAN model that models the underlying structure of the unlabeled data to infer class labels for generated images, which can then be used to train the classifier. We share the encoder and decoder of the variational adversarial acquisition function as the encoder and generator of the conditional (bidirectional) GAN, and co-train the acquisition function and conditional GAN jointly. This allows, for example, the classifier in the GAN to improve the shared generator/decoder and hence the acquisition function. Vice versa, the improved acquisition function can then better select training examples that most improve the classifier in the GAN.
We evaluate our proposed model both on standard image classification benchmarks for active learning (MNIST, SVHN, and CIFAR-10) and on more complex datasets (CelebA and ImageNet). Our model significantly outperforms prior active learning approaches on all these datasets, providing strong substantiation that active learning greatly benefits from more effective utilization of the unlabeled pool of data.
2 Related Work
Active Learning. Active learning has been widely studied, and most of the early work is covered in the classical survey of . Current approaches can be categorized as query-acquiring (pool-based) and query-synthesizing algorithms. Pool-based methods use various acquisition functions to select the most informative examples, while query-synthesizing algorithms use generative models to generate informative examples.
These methods have been proposed in both Bayesian and non-Bayesian frameworks. In non-Bayesian classical active learning approaches, uncertainty heuristics such as distance from the decision boundary , highest entropy , and expected risk minimization  have been widely investigated. Bayesian active learning methods use probabilistic models such as Gaussian processes  or Bayesian neural networks 
to estimate uncertainty.  proposed the Bayesian active learning by disagreement (BALD) method, in which the acquisition function is the mutual information of the training examples with respect to the model parameters.  showed the relationship between uncertainty and dropout for estimating prediction uncertainty in neural networks and applied it to active learning.  proposed using Core-sets for selecting the subset of unlabeled images, minimizing the Euclidean distance between the sampled points and the points that were not sampled in the feature space of the trained model. More recently, , , and  proposed batch active learning, where they optimize for diversity as well as uncertainty.
Instead of querying the most informative instances from an unlabeled pool,  introduced a generative adversarial active learning (GAAL) model to produce new synthetic examples that are informative for the current model. Since they mainly rely on the generated samples for training the classifier, their performance depends heavily on the quality and diversity of the generated images. Also, the GAN model in  is not fine-tuned as training progresses; therefore, the generator and discriminator do not co-evolve.
A few recent works also use generative models for active learning.  propose a Bayesian generative active deep learning approach, combining active learning and data augmentation. However, they only use the labeled dataset to train their model, and they use the BALD  acquisition function for selecting new data points.
Also,  suggests variational adversarial active learning, where a variational auto-encoder (VAE) and a discriminator are trained to learn a latent representation on both labeled and unlabeled data. The output of the discriminator (between labeled and unlabeled images) is then used as a measure of uncertainty for selecting from the unlabeled data. In contrast to our work, they only use labeled images to train the classifier, and they use the VAE only for acquiring unlabeled images. Contemporaneously with our own work, 
also suggest using unlabeled data at model training; however, they do not show a significant improvement from using different sampling strategies, as they perform active learning and semi-supervised learning separately. A key strength of our model is that it learns the classifier and the acquisition function jointly, using both labeled and unlabeled images.
Generative Adversarial Networks. Recent applications of GANs have shown that they can produce excellent samples.  introduced BigGAN, which uses class-conditional generators trained on ImageNet to generate high-fidelity natural images.  later proposed S3GAN, which uses self- and semi-supervised learning methods to achieve state-of-the-art sample quality in generating high-resolution and diverse natural images; their model matches BigGAN in image generation quality using only  of the labels.  and  proposed an extension to the GAN model called bidirectional GAN (BiGAN), which augments the standard GAN with an encoder module mapping real data to the latent space, with the inverse of this mapping learned by the generator. They showed that this model forms a good representation learner and can capture complex data distributions. Recently,  combined BiGAN and BigGAN to introduce BigBiGAN, which achieves state-of-the-art representation learning on ImageNet. We leverage these ideas for learning a common representation for both labeled and unlabeled images; however, the main focus of our method is training the classifier and selecting the most informative examples.
Our method addresses the standard active learning problem for image classification, where we have image space X, label space Y, and K classes. Let (x_L, y_L) denote a sample (image, label) pair belonging to the labeled dataset D_L, and x_U denote a sample image belonging to the pool of unlabeled data D_U. Our goal is to train the most label-efficient classifier, i.e., the one with the highest classification accuracy for any given labeled dataset size. The active learner is allowed to iteratively select a fixed sampling budget of b samples from the unlabeled pool, to be annotated by an oracle and added to the labeled dataset for the next iteration, using a sample acquisition function.
In a nutshell, our model introduces an approach for utilizing the unlabeled pool of data, in addition to the active-labeled dataset, for training the target classifier. The model uses a variational adversarial acquisition function as the sampling function. Our key contribution is the integration of this sampling function within a semi-supervised framework for training the classifier, which allows incorporation of the unlabeled data pool. Specifically, we use a semi-supervised conditional GAN, where the encoder and generator can be shared with the acquisition function and co-trained such that the acquisition function and the conditional GAN mutually benefit from each other's improvement. In the following, we first describe the variational adversarial acquisition function in Sec. 3.1. We then present our semi-supervised framework based on a conditional GAN for incorporating unlabeled data in Sec. 3.2. Finally, in Sec. 3.3, we describe the co-training of our full model.
3.1 Variational adversarial acquisition function
We use the variational adversarial active learning (VAAL) acquisition function 
in our active learning method. This approach selects for labeling those data examples that are not already well represented in the labeled training set, using a variational autoencoder with both reconstruction and adversarial losses. Specifically, the core of VAAL is a variational autoencoder consisting of an encoder, which maps images to a latent representation z, and a decoder (i.e., generator), which reconstructs images from the latent representation z. A β-VAE reconstruction loss is used to perform transductive learning of a representation space from both the labeled data and the unlabeled data:
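Written as a loss to be minimized (the negative of the usual evidence lower bound), the transductive β-VAE objective can be sketched as follows; the symbols q_φ (encoder), p_θ (generator), prior p(z), and Lagrangian parameter β are notational assumptions following the VAAL formulation:

```latex
\mathcal{L}_{\mathrm{VAE}}^{\mathrm{trd}} =
  -\,\mathbb{E}_{q_\phi(z_L \mid x_L)}\!\left[\log p_\theta(x_L \mid z_L)\right]
  + \beta\, \mathrm{D}_{\mathrm{KL}}\!\left(q_\phi(z_L \mid x_L)\,\|\,p(z)\right)
  -\,\mathbb{E}_{q_\phi(z_U \mid x_U)}\!\left[\log p_\theta(x_U \mid z_U)\right]
  + \beta\, \mathrm{D}_{\mathrm{KL}}\!\left(q_\phi(z_U \mid x_U)\,\|\,p(z)\right)
```

The labeled and unlabeled terms are symmetric, which is what makes the representation learning transductive over the full pool.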
where q_φ and p_θ are the encoder and generator, parameterized by φ and θ, respectively, and β is the Lagrangian parameter for the optimization problem.
In addition to the VAE (i.e., the encoder-decoder), VAAL uses a discriminator which takes a latent representation z as input and attempts to estimate the probability that the corresponding data point comes from the labeled data. Once trained, this serves as the sampling function; we therefore denote the discriminator as S(z). If S(z) is low, then the discriminator is confident that the data point is unlabeled, so it is likely unrepresentative of the labeled set and a good candidate for labeling. The discriminator is trained together with the VAE in an adversarial manner: the VAE encoder tries to map the labeled and unlabeled data into the same latent space with a similar probability distribution, while the discriminator tries to distinguish labeled from unlabeled data. The encoder loss therefore has an additional adversarial term,
such that the total encoder and generator loss is
while the discriminator (sampler) loss is
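Following the VAAL formulation, sketches of the adversarial term, the combined encoder/generator loss, and the sampler loss read as follows; S (the sampler), q_φ (the encoder posterior), and the weighting coefficient λ are notational assumptions:

```latex
\mathcal{L}_{\mathrm{adv}} =
  -\,\mathbb{E}\!\left[\log S\big(q_\phi(z_L \mid x_L)\big)\right]
  -\,\mathbb{E}\!\left[\log S\big(q_\phi(z_U \mid x_U)\big)\right],
\qquad
\mathcal{L}_{\mathrm{E,G}} = \mathcal{L}_{\mathrm{VAE}}^{\mathrm{trd}} + \lambda\,\mathcal{L}_{\mathrm{adv}}

\mathcal{L}_{S} =
  -\,\mathbb{E}\!\left[\log S\big(q_\phi(z_L \mid x_L)\big)\right]
  -\,\mathbb{E}\!\left[\log\Big(1 - S\big(q_\phi(z_U \mid x_U)\big)\Big)\right]
```

Note the opposing signs on the unlabeled term: the encoder tries to make unlabeled latents look labeled, while the sampler tries to tell them apart.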
3.2 Semi-supervised framework for incorporating unlabeled data
Our framework for incorporating unlabeled data is based on the observation that the decoder in the VAAL acquisition function can be repurposed to additionally provide information about the unlabeled data and its underlying structure directly to a target classifier that we wish to train, by adapting it to simultaneously serve as the generator in a semi-supervised, class-conditional GAN. The use of a class-conditional GAN is key for several reasons. First, a class-conditional GAN (as opposed to an unconditional GAN) contains a classifier component in the discriminator that can naturally serve as the target classifier for active learning. Second, while the unconditional decoder in VAAL can only generate images from the unlabeled data distribution without any knowledge of classes, by adapting the decoder to simultaneously be the generator of a conditional GAN, it can generate class-conditional images (leveraging its exposure to the large unlabeled dataset) to improve training of the target classifier in the conditional GAN's discriminator. Therefore, the discriminator is decomposed into a learned discriminator representation, which is fed into a linear classifier for predicting real/fake and a linear classifier for predicting the class label. Using a similar approach as , we also take into account the encoded latent variable and the real or inferred class label in the real/fake linear classifier, for better prediction of real/fake images. We denote the real/fake prediction head as D_rf and the class label prediction head as D_cls. All the modules in our model (Generator, Discriminator, Encoder, and Sampler) are depicted in Fig. 2.
To adapt the VAAL acquisition function for our semi-supervised framework, we make the encoder and generator (decoder) class-conditional. The encoder, generator, and sampler (discriminator) losses for the acquisition function that we use (Eq. 3 and Eq. 3.1) therefore become:
Next, we describe the generator and discriminator of our semi-supervised, conditional GAN framework. We also add an encoder as in BiGAN , which has been shown to improve classification performance for more complex data  and can also be shared with the acquisition function.
Generator. The objective of the generator in the conditional GAN framework is to generate class-conditional images that can fool the discriminator into predicting them as real images. Since it learns to generate by playing a min-max game with the discriminator (and therefore learns from both labeled and unlabeled data), the generated images convey information about the structure in the unlabeled data that augments the labeled data when training the target classifier in the discriminator (described in the next section). The generator loss is the standard loss for conditional GANs:
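Under conditional-GAN conventions, this loss can be sketched as follows; writing D_rf for the real/fake head and G(z, y) for a class-conditional sample is a notational assumption:

```latex
\mathcal{L}_{G} =
  -\,\mathbb{E}_{z \sim p(z),\, y \sim p(y)}
  \left[\log D_{\mathrm{rf}}\big(G(z, y),\, y,\, z\big)\right]
```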
Discriminator. The discriminator (Fig. 2) structure also follows that of standard conditional GANs, containing both a real/fake discriminator network and a classification network, however, we use it in the semi-supervised setting. Importantly, the classification network serves as the target classifier for which we will try to maximize accuracy within the active learning problem. The loss for the real/fake discriminator network is:
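A sketch of this three-term loss, under assumed notation (D_rf the real/fake head, E the encoder, and \hat{y}_U the class label inferred by the classifier head for an unlabeled image):

```latex
\mathcal{L}_{D_{\mathrm{rf}}} =
  -\,\mathbb{E}_{(x_L, y_L)}\!\left[\log D_{\mathrm{rf}}\big(x_L, y_L, E(x_L)\big)\right]
  -\,\mathbb{E}_{x_U}\!\left[\log D_{\mathrm{rf}}\big(x_U, \hat{y}_U, E(x_U)\big)\right]
  -\,\mathbb{E}_{z,\, y}\!\left[\log\Big(1 - D_{\mathrm{rf}}\big(G(z, y), y, z\big)\Big)\right]
```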
where the first term corresponds to the standard discriminator loss for labeled data and the third term corresponds to the standard loss for generated data. The second term corresponds to the discriminator loss for unlabeled data, where the labels for these examples are inferred through the classifier in the discriminator, described next. Note that the real/fake discriminator is a function of the data (labeled, unlabeled, or generated), of the class (ground-truth, inferred, or sampled) due to being a conditional GAN, and of the corresponding latent representation due to the BiGAN structure with a concurrently learned encoder.
The loss for the classification network in the discriminator is:
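With D_cls denoting the classification head and CE the cross-entropy (both notational assumptions), a sketch consistent with the two terms described next is:

```latex
\mathcal{L}_{D_{\mathrm{cls}}} =
  \mathbb{E}_{(x_L, y_L)}\!\left[\mathrm{CE}\big(y_L,\, D_{\mathrm{cls}}(x_L)\big)\right]
  + \mathbb{E}_{z,\, y}\!\left[\mathrm{CE}\big(y,\, D_{\mathrm{cls}}(G(z, y))\big)\right]
```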
Here the first term is the cross-entropy loss for real images that have ground truth labels, and the second term is the cross-entropy loss for generated images that have corresponding labels that were used for the generation. This second term allows the classifier to benefit from the unlabeled data used to train the generator. Since the classification network and real/fake discriminator network have a shared trunk, the real/fake supervision additionally enables learning a stronger shared feature representation that can further improve classification performance.
Encoder. In addition to the generator and discriminator, we add an encoder (shown in Fig. 2) in our conditional GAN following BiGAN . This has been shown to improve classification performance for more complex data  and is a natural choice since our acquisition function already has an encoder that can be shared. Our encoder loss following BiGAN is:
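In BiGAN, the encoder plays against the discriminator on (data, latent) pairs; a sketch of the encoder loss consistent with the two terms described next, under the same assumed notation as above, is:

```latex
\mathcal{L}_{E} =
  \mathbb{E}_{(x_L, y_L)}\!\left[\log D_{\mathrm{rf}}\big(x_L, y_L, E(x_L)\big)\right]
  + \mathbb{E}_{x_U}\!\left[\log D_{\mathrm{rf}}\big(x_U, \hat{y}_U, E(x_U)\big)\right]
```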
where the first term corresponds to the discriminator loss for labeled images and the second term corresponds to unlabeled images.
3.3 Co-training of full model
To perform active learning, the acquisition function and the conditional GAN presented above are jointly co-trained, where the losses for the full model are:
Note that the generator and encoder are shared between the acquisition function and the conditional GAN, while the discriminator and the sampler are not. After every selection and labeling of new samples, all components of the model are updated using the new labeled dataset and unlabeled pool . The full algorithm is presented in Alg. 1.
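The select-label-retrain cycle above can be sketched as a minimal Python skeleton. All names here are hypothetical, and the joint training of the encoder, generator, discriminator, and sampler (the substance of Alg. 1) is stubbed out; only the querying logic, which picks the examples the sampler scores as least "labeled-like", is concrete:

```python
import random

def active_learning_loop(pool, budget, iterations, seed=0):
    """Skeleton of the co-training active learning loop.

    `pool` is a list of unlabeled example ids. `sampler_score` stands in
    for the VAAL sampler S(E(x)): a low score means the sampler is
    confident the point is unlabeled, so it is a good query candidate.
    """
    rng = random.Random(seed)
    labeled, unlabeled = [], list(pool)

    def sampler_score(x):
        # Placeholder for S(E(x)); a real implementation would run the
        # co-trained sampler network on the encoded image.
        return rng.random()

    for _ in range(iterations):
        # 1) Jointly train encoder/generator/discriminator/sampler on
        #    `labeled` + `unlabeled` (omitted in this sketch).
        # 2) Score the pool and query the least-represented examples.
        scores = {x: sampler_score(x) for x in unlabeled}
        query = sorted(unlabeled, key=lambda x: scores[x])[:budget]
        # 3) The oracle labels the queried examples; move them over.
        labeled.extend(query)
        unlabeled = [x for x in unlabeled if x not in query]
    return labeled, unlabeled
```

Because every module is retrained in step 1 after each query round, the sampler scores in step 2 reflect the newly enlarged labeled set rather than a stale model.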
We evaluate the performance of our model on a wide range of datasets and compare it to prior state-of-the-art active learning methods. We assess performance by measuring the accuracy of the classifier trained during the active learning procedure versus the number of labeled images used in the training.
Baselines: We compare our results for actively training our classifier against the following baselines: 1) Uncertainty-based methods: In these approaches, unlabeled images are selected for labeling based on the uncertainty of the classifier's predictions for them. Among the methods in this class, we perform active learning using Max-Entropy , Variation Ratios , and Mean STD . We observed that these achieve similar performance; thus, we only compare against the Max-Entropy method. 2) Bayesian methods: In Bayesian frameworks, probabilistic models such as Gaussian processes and Bayesian neural networks are used to estimate the expected improvement from each query. 
used dropout as an approximation to Bayesian inference and used Bayesian Active Learning by Disagreement (BALD) for Bayesian active learning on image data. We report the performance of this method in our experiments. Another recent approach in this class is Bayesian Generative Active Deep Learning (BGADL) . As the authors reported, their model does not converge for very small numbers of labels (which is the setting in our experiments), so we do not report results for this method. 3) Variational adversarial active learning: We also compare our model with the recent state-of-the-art method Variational Adversarial Active Learning (VAAL) . We use a sampling strategy similar to theirs in our method; however, VAAL trains the classifier separately, only on the selected labeled images. 4) Random: We show results using random sampling, in which samples are uniformly sampled from the unlabeled pool and the classifier is then trained on the labeled data. 5) Full training of our model: As another baseline, we compare the performance of the model during the learning iterations with the fully trained model (when we use all the labels). This serves as an upper bound for our performance and shows how fast our method converges to the best accuracy.
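As an illustration of the Max-Entropy baseline, here is a minimal, pure-Python sketch (function names are hypothetical) that queries the examples whose predictive distributions have the highest Shannon entropy:

```python
import math

def entropy(probs):
    """Shannon entropy of a single predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def max_entropy_acquire(predictions, budget):
    """Select the `budget` unlabeled indices whose predictive
    distributions have the highest entropy (most uncertain)."""
    scored = sorted(range(len(predictions)),
                    key=lambda i: entropy(predictions[i]),
                    reverse=True)
    return scored[:budget]

# Example: three unlabeled points; the uniform prediction is most uncertain.
preds = [
    [0.98, 0.01, 0.01],   # confident
    [1/3, 1/3, 1/3],      # maximally uncertain
    [0.70, 0.20, 0.10],   # moderately uncertain
]
print(max_entropy_acquire(preds, 2))  # -> [1, 2]
```

In a real pipeline, `predictions` would be the softmax outputs of the current classifier over the unlabeled pool.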
In the following, we first evaluate our model versus the baselines on classic active learning benchmarks (MNIST, SVHN, CIFAR-10) in Sec. 4.1, and then on more complex datasets (CelebA, ImageNet) in Sec. 4.2. Finally, in Sec. 4.3, we perform ablation studies on our model.
4.1 Performance on classic active learning benchmarks
Datasets: The MNIST  dataset contains images of handwritten digits of 10 classes (with training samples and test samples). The SVHN  and CIFAR-10  datasets have images of size of 10 classes (with and training samples, and and test samples, respectively).
Architecture and hyper-parameters: For the MNIST dataset, we used a 3-layer MLP for both the encoder and generator, and a 5-layer MLP with added Gaussian noise (with a standard deviation of ) between the layers for the discriminator. Our sampler module for this dataset is also a 5-layer MLP. Adam  with a learning rate of  is chosen as the optimizer for the encoder and generator modules, and with a learning rate of  for the discriminator and sampler modules. For the SVHN and CIFAR-10 datasets, we use a CNN with 3 hidden layers for both the encoder and generator, and 3 convolutional blocks, each with 3 layers and dropout (with a rate of ) between the blocks, for the discriminator. Our sampler module for these datasets is the same 5-layer MLP. Adam with a learning rate of  is chosen as the optimizer for the encoder and generator modules, and with a learning rate of  for the discriminator and sampler modules. We also use  and
as our hyperparameters. For these datasets we use a latent space with  dimensions. We also use the feature matching technique proposed in  for matching the features in the discriminator between generated and unlabeled images. In this way, generated images are better representatives of the unlabeled part of the dataset.
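The feature matching objective (from the GAN training techniques of Salimans et al.) penalizes the distance between the mean discriminator features of real/unlabeled and generated batches. A minimal, pure-Python sketch with hypothetical names, where features are represented as lists of floats:

```python
def feature_matching_loss(real_feats, fake_feats):
    """Squared L2 distance between the mean discriminator feature
    vector of a real/unlabeled batch and that of a generated batch.
    Each argument is a non-empty list of equal-length feature vectors."""
    def mean_vec(feats):
        n = len(feats)
        return [sum(f[d] for f in feats) / n for d in range(len(feats[0]))]
    mu_real, mu_fake = mean_vec(real_feats), mean_vec(fake_feats)
    return sum((a - b) ** 2 for a, b in zip(mu_real, mu_fake))

real = [[1.0, 2.0], [3.0, 4.0]]   # batch mean = [2.0, 3.0]
fake = [[2.0, 2.0], [2.0, 2.0]]   # batch mean = [2.0, 2.0]
print(feature_matching_loss(real, fake))  # -> 1.0
```

In practice the feature vectors would come from an intermediate layer of the discriminator, and the loss would be minimized with respect to the generator parameters only.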
Implementation details: We begin our experiments with an initial labeled pool with only labels for the MNIST dataset, labels for SVHN, and labels for CIFAR-10. We add the same number of labels at each iteration of active learning and report results for the first iterations. We also report the accuracy after full training of our model.
Our model performance: Fig. 3 shows the performance of our model compared to the baselines on the MNIST, SVHN, and CIFAR-10 datasets. Our model significantly outperforms all the baselines, as it uses the unlabeled images as well as generated images in the training procedure. The performance gap is especially clear for very small numbers of labels, when unlabeled and generated images are most useful in training. The performance of our model saturates quickly on the classic benchmarks, as it achieves accuracy close to the full-training accuracy with a labeled dataset on the order of  labels for MNIST,  for SVHN, and  for CIFAR-10. Looking at the number of labels required to reach a specific accuracy, for instance on the CIFAR-10 dataset, our model only needs  images to be labeled, while this number is approximately  for VAAL. This shows the importance of using unlabeled data when training the model in an active manner, which ultimately results in more label-efficient learning.
4.2 Performance on more complex datasets
Datasets: The large-scale CelebFaces Attributes (CelebA)  dataset is a more challenging dataset with more than  celebrity images, each annotated with  attributes. We split the dataset into  images for training and  images for testing. We resize all the images to  pixels and use max normalization in the preprocessing phase. Our task is to classify the images into two classes (male/female) based on the gender attribute annotation in the dataset. ImageNet  is a large dataset with more than  images of  classes. The validation set for this dataset contains  images. We augment our dataset by horizontally flipping the images, then resize all the images to  pixels and normalize them before feeding them into the model.
Architecture and hyper-parameters: For the CelebA dataset, we used 4 residual blocks, each with two convolutional layers, for the encoder and generator, and another 4 residual blocks with an added dropout layer (with a rate of ) for the discriminator. Adam with a learning rate of  is chosen as the optimizer for all parts. For ImageNet, we use the same architecture and hyper-parameters as S3GAN , which is the state of the art in image generation on the ImageNet dataset. The rest of the hyper-parameters are the same as in the previous experiments, except for the latent space, which has  dimensions.
Implementation details: For the CelebA dataset, we begin our experiments with an initial labeled pool of only  labels. We add the same number of labels for  iterations. For the experiments on ImageNet, due to limited computational resources, we trained a relatively smaller (compared to BigGAN ) generator-encoder module as well as a sampler, using a similar approach as , to select the most informative examples in each iteration by mapping both labeled and unlabeled data into the latent space. We then added high-quality fake images generated by a pretrained model (with  pixels resolution)  on  of the labels to augment our labeled data, and finally trained the classifier network () using the labeled and generated images. Therefore, our experiments here utilize a variant of our model in which we train the encoder and sampler modules (for selecting the most informative examples) separately from the generator and discriminator (for generating high-quality fake images and training the classifier on labeled and generated images).
Our model performance: As can be observed from Fig. 4, our model significantly outperforms all the baselines on these datasets as well. Similar to what we observed on the previous datasets, on CelebA there is only a small gap between many active learning methods and the Random baseline. However, the performance gap between our model and the other baselines is significant and steady over the entire training procedure. This shows that our generative model approach can mitigate the lack of real labeled images. Fig. 4 also shows the performance of our model on the ImageNet dataset compared to other methods. As we are not co-training all the parts together, the performance gap from the model trained on all of the labels is bigger than on the previous datasets; however, our model still significantly outperforms all the baselines.
4.3 Ablation study
We perform an ablation study on the CIFAR-10 dataset. To show the effectiveness of each part of our model, we consider the following ablation variants and compare their performance: 1) No active learning: we remove the sampler and the adversarial loss for the encoder and use random sampling at each iteration of training the model. 2) No encoder: we only have the generator and discriminator modules, and we use BALD as our sampling strategy. 3) No co-training: similar to our experiment on ImageNet, we again use VAAL as our acquisition function; however, instead of utilizing the unlabeled data via co-training the encoder-generator pair with the discriminator, we add generated images from a pretrained model to train the classifier. 4) Random: samples are uniformly sampled from the unlabeled pool and the classifier is trained on the labeled data.
As shown in Fig. 5, each module contributes to the final performance of the model. The first variant uses random sampling as its sampling strategy; it can therefore be seen as measuring the benefit of using unlabeled images in addition to labeled images in our adversarial learning method, without any active learning algorithm. Although our purpose in this work is not training semi-supervised generative models, we also outperform related semi-supervised learning approaches with generative models, such as , which achieves  accuracy using  labels. Our second variant captures the effect of the encoder in training our model. In this setting, the model utilizes the generator and discriminator modules, but it does not perform the representation learning with labeled and unlabeled images (and therefore uses BALD instead of VAAL as the acquisition function), and it cannot perform as well as our model. We also conduct an experiment similar to the one performed for the ImageNet dataset, to better understand the effect of co-training all parts of the model together. Although we use the same sampling strategy (VAAL) here and add the class-conditional generated images in order to utilize the unlabeled data, its performance is still significantly below that of our model, in which all parts are co-trained with each other. Finally, we have the Random baseline, which is the fundamental baseline in the active learning literature.
In this work, we proposed a new active learning method using deep generative models that takes advantage of both labeled and unlabeled images, for learning a representation that is used not only for selecting the most informative examples but also for utilizing unlabeled data in training the classifier. We demonstrated that our model significantly outperforms the previous state-of-the-art on a wide range of datasets (MNIST, CIFAR-10, SVHN, CelebA, ImageNet).
-  Jordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. arXiv preprint arXiv:1906.03671, 2019.
-  Klaus Brinker. Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th international conference on machine learning (ICML-03), pages 59–66, 2003.
-  Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
-  Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
-  Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
-  Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. arXiv preprint arXiv:1907.02544, 2019.
-  Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
-  Sayna Ebrahimi, Anna Rohrbach, and Trevor Darrell. Gradient-free policy architecture search and adaptation. arXiv preprint arXiv:1710.05958, 2017.
-  Linton C Freeman. Elementary applied statistics: for students in behavioral science. John Wiley & Sons, 1965.
-  Yoav Freund, H Sebastian Seung, Eli Shamir, and Naftali Tishby. Selective sampling using the query by committee algorithm. Machine learning, 28(2-3):133–168, 1997.
-  Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1183–1192. JMLR. org, 2017.
-  Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011.
-  Neal Jean, Marshall Burke, Michael Xie, W Matthew Davis, David B Lobell, and Stefano Ermon. Combining satellite imagery and machine learning to predict poverty. Science, 353(6301):790–794, 2016.
-  Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. Multi-class active learning for image classification. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2372–2379. IEEE, 2009.
-  Ashish Kapoor, Kristen Grauman, Raquel Urtasun, and Trevor Darrell. Active learning with gaussian processes for object categorization. In 2007 IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007.
-  Alex Kendall, Vijay Badrinarayanan, and Roberto Cipolla. Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680, 2015.
-  Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning, 2019.
-  Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Large-scale celebfaces attributes (celeba) dataset. Retrieved August, 15:2018, 2018.
-  Mario Lucic, Michael Tschannen, Marvin Ritter, Xiaohua Zhai, Olivier Bachem, and Sylvain Gelly. High-fidelity image generation with fewer labels. arXiv preprint arXiv:1903.02271, 2019.
-  David JC MacKay. Information-based objective functions for active data selection. Neural computation, 4(4):590–604, 1992.
-  Andrew Kachites McCallum and Kamal Nigam. Employing em and pool-based active learning for text classification. In Proc. International Conference on Machine Learning (ICML), pages 359–367. Citeseer, 1998.
-  Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
-  Robert Pinsler, Jonathan Gordon, Eric Nalisnick, and José Miguel Hernández-Lobato. Bayesian batch active learning as sparse subset approximation. arXiv preprint arXiv:1908.02144, 2019.
-  Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in neural information processing systems, pages 2234–2242, 2016.
-  Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489, 2017.
-  Burr Settles. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
-  Claude Elwood Shannon. A mathematical theory of communication. Bell system technical journal, 27(3):379–423, 1948.
-  Oriane Siméoni, Mateusz Budnik, Yannis Avrithis, and Guillaume Gravier. Rethinking deep active learning: Using unlabeled data at model training. arXiv preprint arXiv:1911.08177, 2019.
-  Samarth Sinha, Sayna Ebrahimi, and Trevor Darrell. Variational adversarial active learning. arXiv preprint arXiv:1904.00370, 2019.
-  Simon Tong and Daphne Koller. Support vector machine active learning with applications to text classification. Journal of machine learning research, 2(Nov):45–66, 2001.
-  Toan Tran, Thanh-Toan Do, Ian Reid, and Gustavo Carneiro. Bayesian generative active deep learning. arXiv preprint arXiv:1904.11643, 2019.
-  Keze Wang, Dongyu Zhang, Ya Li, Ruimao Zhang, and Liang Lin. Cost-effective active learning for deep image classification. IEEE Transactions on Circuits and Systems for Video Technology, 27(12):2591–2600, 2016.
-  Serena Yeung, Francesca Rinaldo, Jeffrey Jopling, Bingbin Liu, Rishab Mehra, N Lance Downing, Michelle Guo, Gabriel M Bianconi, Alexandre Alahi, Julia Lee, et al. A computer vision system for deep learning-based detection of patient mobilization activities in the icu. NPJ digital medicine, 2(1):1–5, 2019.
-  Jia-Jie Zhu and José Bento. Generative adversarial active learning. arXiv preprint arXiv:1702.07956, 2017.