Visual Transformer for Task-aware Active Learning

by Razvan Caramalau et al.
Imperial College London

Pool-based sampling in active learning (AL) represents a key framework for annotating informative data when dealing with deep learning models. In this paper, we present a novel pipeline for pool-based Active Learning. Unlike most previous works, our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples. Another contribution of this paper is to adapt the Visual Transformer as a sampler in the AL pipeline. The Visual Transformer models non-local visual concept dependency between labelled and unlabelled examples, which is crucial to identifying the influencing unlabelled examples. Moreover, compared to existing methods where the learner and the sampler are trained in a multi-stage manner, we propose to train them jointly in a task-aware manner, which enables transforming the latent space into two separate tasks: one that classifies the labelled examples and one that distinguishes the labelling direction. We evaluated our work on four challenging classification benchmarks, CIFAR-10, CIFAR-100, FashionMNIST, and RaFD, and on the Pascal VOC 2007 detection benchmark. Our extensive empirical and qualitative evaluations demonstrate the superiority of our method compared to the existing methods. Code available:





1 Introduction

In the recent success stories of deep learning in image classification Krizhevsky et al. (2012); He et al. (2016); Dosovitskiy et al. (2020) and object detection Liu et al. (2016); Zhang et al. (2018); Ghiasi et al. (2020), large-scale labelled data sets have been crucial. Data annotation is time-consuming, requires experts, and is expensive. Active Learning Kontorovich et al. (2016); Pinsler et al. (2019); Pu et al. (2016); Caramalau et al. (2021b, a) has become popular for incrementally selecting a subset of discriminative examples with which to train the model for downstream tasks. In a typical AL framework, a learner, a sampler, and an annotator complete a loop and repeat the cycle. In brief, the learner minimizes the objective of the downstream task, the sampler selects representative unlabelled examples given a fixed annotation budget, and the annotator queries the labels of the unlabelled data recommended by the sampler. Based on the category of the employed sampler, the whole paradigm of AL can be broadly dissected into uncertainty-based Houlsby et al. (2011); Gal et al. (2017); Kirsch et al. (2019), geometric-based Wolf (2011); Sener and Savarese (2018), and model-based Yoo and Kweon (2019); Sinha et al. (2019); Agarwal et al. (2020); Gao et al. (2020) approaches, among others.

In this paper, we focus on model-based Sinha et al. (2019); Caramalau et al. (2021b); Yoo and Kweon (2019) active learning pipelines for pool-based sampling. The importance of this type of pipeline is growing and becoming more relevant than ever before due to the increasing use of deep learning algorithms. In this scenario, given a large volume of unlabelled data, an initial model is trained on a small subset of randomly annotated examples. In the later stages, samples are annotated under the guidance of the model trained in the previous stage. The fate of this method category is determined by the performance of the initial model, an issue commonly known as the cold-start problem Gao et al. (2020). To tackle this problem and improve the performance of the model in its early stage, training the model in a semi-supervised fashion is slowly gaining attention Gao et al. (2020). Exploiting unlabelled data along with the labelled data in a joint or multi-task setup improves the generalization of the model Caruana (1997). However, previous work minimizes loss functions such as a consistency loss Gao et al. (2020) that are indirect to the downstream task, i.e. the class category loss. Also, these methods rely only on exploiting unlabelled data to improve the performance of the model. Unlike these methods, our approach tackles this problem from both aspects, i.e. making use of unlabelled data as well as engineering the model architecture. To this end, we adapted the Visual Transformer Wu et al. (2020) in the pipeline with the learner and exploited the unlabelled data for better generalisation. Finally, we train the model in a joint-learning framework by minimizing a labelled vs unlabelled discriminator (sampler) loss as well as the downstream task-aware loss.

Visual Transformers (VT) Wu et al. (2020) are attaining state-of-the-art results on various tasks such as image classification Dosovitskiy et al. (2020); Wu et al. (2020) and detection Carion et al. (2020). To the best of our knowledge, this is the first work adapting VT for an active learning framework. Figure 1 demonstrates the pipeline of the proposed method. A batch of both unlabelled and labelled examples is passed sequentially through a few convolutional layers, and the output batch from these layers is fed into the VT layers at once. CNN layers are myopic in nature, extracting the statistics of local information as image features. Uncertainty on such a feature space helps us to select images with variations in blurriness, contrast, texture, etc. However, a non-local interaction between the unlabelled and labelled examples is essential to identify the complementary examples whose labels should be queried. To this end, we propose to integrate the VT between the CNN layers and the output layer. Previous works on VT for computer vision Dosovitskiy et al. (2020) divided the images by a regular grid to extract local patches and fed them to the VT to extract the non-local interactions between parts of an image. As our task is to identify the most discriminative images, we instead consider each representation of both labelled and unlabelled examples as an input channel to this module. Thus, the VT extracts non-local interactions between the labelled and unlabelled examples, while uncertainty on such a feature space allows us to select images that are sufficiently different on a visual concept. The output of the VT is fed to the output layer, which is bifurcated into a labelled vs unlabelled discriminator and a task-specific auxiliary classifier. Uncertainty on the feature space while minimizing these two losses helps us to find unlabelled examples that are sufficiently different from the labelled examples and relevant to the downstream task. This addresses the problem of earlier methods Sener and Savarese (2018) that select examples from high-density regions irrespective of the decision boundary.

We summarise the contributions of this paper in the following bullet points:

  • We propose a novel task-aware joint-learning framework for active learning.

  • We adapted the Visual Transformer for the first time in the pipeline of active learning.

  • We evaluated our method for sub-sampling real and synthetic examples on four image classification benchmarks and one object detection benchmark.

  • We outperformed existing methods by a large margin and attained a new state-of-the-art performance.

2 Related Works

The current taxonomy of active learning is founded on the extensive survey of Settles Settles (2009). This gathers all the classical approaches together with the three scenarios of active learning. Most deep learning works, including ours, rely on the pool-based scenario. Depending on the mechanisms used for sampling or deriving heuristics from the unlabelled data, we can categorise methods as uncertainty-aware, geometric representation, and model-based. The first category was initially applied with the Monte Carlo (MC) Dropout approximation for the deep Bayesian models of Gal et al. Gal and Ghahramani (2016). Thus, the selection in the active learning study Gal et al. (2017) is inspired by classical approaches such as maximum entropy Shannon (1948) or BALD Houlsby et al. (2011). Another approach to gathering uncertainty from models is by querying a committee machine Settles (2009). A recent work that expanded this classic principle to deep learning was presented by Beluch et al. Beluch Bcai et al. (2018). This method outperformed the works centred on the MC Dropout mechanism. However, with the increasing complexity of current deep learning models, both iterative approaches have proven hard to apply.

Regarding this concern, the second category tackles the issue by geometrically evaluating the representations of the downstream task. We acknowledge Sener and Savarese Sener and Savarese (2018) as the most representative work with the CoreSet algorithm. Their methodology evaluates a global fixed radius to cover the feature space by selecting a subset of unlabelled examples. Related formulations appear in Kontorovich et al. (2016); Tsang et al. (2005); Har-Peled and Kushal (2005). We include this competitive baseline for comparison in our experiment section.

The third category, and the most recent one, deploys dedicated learning models, also referred to as samplers, for querying new data. The first proposed module, Learning Loss Yoo and Kweon (2019), tracks uncertainty by training end-to-end and estimating the predictive loss on the unlabelled data. The modular aspect of this category permits deploying the samplers to diverse applications; our method inherits this advantage. Sinha et al. Sinha et al. (2019) defined the sampler training framework VAAL separately, where a variational auto-encoder (VAE) maps all the available data into a latent space. The selection principle is based on an adversarially trained discriminator between labelled and unlabelled data. The drawback of this method, its lack of task-awareness, has been addressed in Zhang et al. (2020); Agarwal et al. (2020); Caramalau et al. (2021a). Hence, Agarwal et al. Agarwal et al. (2020) propose CDAL, which combines the sampler with contextual diversity while enlarging the receptive feature domain. Following similar trends, Caramalau et al. Caramalau et al. (2021a) deploy Graph Convolutional Networks (GCNs) for feature propagation between labelled and unlabelled images. These works are close to ours and are evaluated in the experiment section.

Because our proposed AL framework combines the semi-supervised learning (SSL) strategy with a visual transformer, we further investigate the related literature. SSL and AL have recently been considered together for deep learning in a few works Gao et al. (2020); Sener and Savarese (2018); Drugman et al. (2016); Li et al. (2019). The first attempt is noted in CoreSet Sener and Savarese (2018) during AL cycles. A more elaborate method, CSAL Gao et al. (2020), analyses the consistency loss of unlabelled data when trained end-to-end, and thus achieves state-of-the-art results on the image classification datasets. Furthermore, we outperform this baseline in our analysis under their experiment settings. On the other hand, the visual transformer was initially designed for vision applications by Dosovitskiy et al. Dosovitskiy et al. (2020). They applied the natural language processing BERT Devlin et al. (2019) approach to patches of images to learn non-local representations. Following this work, transformers have successfully replaced convolutional layers in Wu et al. (2020) while boosting accuracy under a similar number of parameters. Our methodology is supported by the insights from this research.

3 Method

Figure 1: This diagram depicts the proposed pipeline. We pass both the labelled and unlabelled examples through the same CNN and extract the visual features of each image encoded within the batch. These representations are fed to the Visual Transformer. Its outputs are passed to the bifurcated branches to minimize both the learner's (class cross-entropy) and the sampler's (binary cross-entropy) objectives.

In this section, we start with the formal definition of pool-based Active Learning in general, followed by our contributions on introducing the VT into the pipeline and the task-aware joint-learning objective. Given a large pool of unlabelled data D_U, the pipeline begins with a cold-start training of the model by randomly selecting a small subset and labelling an initial set D_L^0. The performance of this initial model is crucial for the end outcome of the framework Gao et al. (2020) in model-based active learning. Our contribution lies in addressing this problem, which is commonly known as the cold-start problem. To this end, we take two approaches: jointly learning the parameters of the learner and the sampler by utilising the accessible unlabelled examples, and adapting the Visual Transformer as a bottleneck of the pipeline. If b is the budget of unlabelled examples sampled over multiple selection stages, the main objective of the pool-based AL scenario is to obtain fast generalisation of the learner with the least number of labelled subsets n, where D_L^i represents the examples annotated at each selection stage i.

Figure 1 outlines the proposed pipeline. From the figure, we can see that both the labelled and unlabelled examples are fed to the image feature extractor. For most vision applications, the backbone of the learner is a feature extractor commonly formed of CNNs such as ResNet He et al. (2016) and VGG architectures Simonyan and Zisserman (2015). We infer both the initial labelled set and the unlabelled examples through the feature extractor. In our case, we take an equal number of labelled and unlabelled examples to balance the number of training examples. In each batch, we choose both unlabelled and labelled examples and feed them into the feature extractor, which maps input images of height h, width w, and c channels to feature maps of spatial size h' × w' with c' channels, where c' is the total number of filters in the last convolutional layer of the feature extractor. Earlier methods Caramalau et al. (2021b); Yoo and Kweon (2019) feed the output of the convolutional layers to the output layers to minimize the loss. Instead, we feed the output of these layers to the Visual Transformer before passing it to the output layers. CNN layers handle only local dependencies, but non-local dependencies between the labelled and unlabelled examples are essential for selecting complementary unlabelled examples. The Visual Transformer has been quite successful in modelling non-local dependencies. UncertainGCN Caramalau et al. (2021b) uses a GCN to handle long-range dependencies in Active Learning, but a comparative study of GCNs and self-attention Wu et al. (2020) has shown that the self-attention aggregation function retains diverse concepts better than the GCN.
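The balanced batch construction described above can be sketched as follows (a minimal illustration with hypothetical names; the binary targets follow the sampler branch's convention of labelled = 1, unlabelled = 0):

```python
import random

def balanced_batch(labelled_idx, unlabelled_idx, batch_size, seed=0):
    """Draw a training batch with equal numbers of labelled and unlabelled
    examples, returning the chosen indices and the discriminator targets
    (labelled -> 1, unlabelled -> 0)."""
    rng = random.Random(seed)
    half = batch_size // 2
    lab = rng.sample(labelled_idx, half)    # indices into the labelled set
    unl = rng.sample(unlabelled_idx, half)  # indices into the unlabelled pool
    return lab + unl, [1] * half + [0] * half

idx, tgt = balanced_batch(list(range(100)), list(range(100, 1000)), 8)
```

Both halves are then forwarded through the same CNN backbone, so the transformer sees labelled and unlabelled features within one batch.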

Visual Transformer. Dosovitskiy et al. (2020) is the first work to apply the Visual Transformer successfully to image classification. In that work, each image is divided by a regular grid into patches, and each patch is considered an input token to the transformer. A follow-up work Wu et al. (2020) compressed the features of CNN backbones into visual tokens while the transformer acted as the final convolutional layer. Inspired by these works, we plug a visual transformer in as a neck between the feature extractor and the output layer. In our case, the inputs to the transformer are the batches of feature maps obtained from the feature extractor.

Different from Dosovitskiy et al. (2020); Wu et al. (2020), we intend the AL framework to be minimally intrusive to the learner's architecture. This allows the methodology to be plugged into various designs and applications. We therefore do not further post-process the output of the CNN feature extractor. Similarly to GCNs Kipf and Welling (2017), we want to explore the relationships between the nodes of a graph. However, given our input feature maps from the feature extractor, we propose to deploy the transformer blocks within the batch. Consequently, all the channels of the feature maps from each batch are considered as tokens for the VT. Although in the standard architecture Vaswani et al. (2017) the inputs to the transformer are positionally encoded, this does not apply in our scenario, where the order of the images is irrelevant. As mentioned in the Introduction, our objective is to extract non-local relationships between the images, not within an image. To simplify the transformer's architecture further, we exclude the decoder part, as a target domain is absent in our case.

From the seminal work on self-attention Vaswani et al. (2017), the main building blocks of the transformer's encoder are a batch self-attention block and a point-wise feed-forward network with residuals and layer normalisation. For the self-attention, we transpose the batch and concatenate the feature maps into a matrix F_B with one row of features per image. The key, query, and value matrices (K, Q, V), obtained as linear projections of F_B, are packed together to model the interactions between the features. This also favours inner-domain relationships, as batch normalisation is commonly used in CNNs for regularisation. The operations of the batch self-attention are summarised as follows:

F_att = softmax( Q K^T / sqrt(d) ) V,

where F_att and softmax are the output of the batch self-attention layer and its activation function, and d is the hidden dimension. While keeping the dynamics within the batch, we pass F_att through the point-wise feed-forward network. We define the output of this block as F_VT, with Θ_VT denoting all VT parameters. The following equation underlines its processes:

F_VT = σ(F_att W_1) W_2 + F_att,

where W_1 and W_2 are the weights of the feed-forward network and σ represents the sigmoid activation function.
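As a rough illustration of this batch-level attention, the following numpy sketch treats each image's flattened feature map as one token, so the softmax affinity matrix is B × B across images and no positional encoding is used. The random projection weights and all names are purely illustrative, not the paper's trained parameters:

```python
import numpy as np

def batch_self_attention(feats, d_hidden=16, rng=np.random.default_rng(0)):
    """Single-head self-attention where each image in the batch is a token,
    so attention mixes information across labelled and unlabelled images
    rather than across patches.  feats: (B, C, H, W) CNN feature maps."""
    B = feats.shape[0]
    tokens = feats.reshape(B, -1)             # (B, d_in): one token per image
    d_in = tokens.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d_in, d_hidden)) * 0.02 for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d_hidden)      # (B, B) image-to-image affinities
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)   # row-wise softmax
    return attn @ V                           # (B, d_hidden) attended features

out = batch_self_attention(np.random.default_rng(1).standard_normal((8, 4, 3, 3)))
```

Because the tokens are whole images, omitting the positional encoding matches the pipeline's permutation-invariant treatment of the batch.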

Task-Aware Joint-Learning. Recently, the state-of-the-art in deep active learning has been attained by model-based methods like Learning Loss Yoo and Kweon (2019), VAAL Sinha et al. (2019), CDAL Agarwal et al. (2020), and UncertainGCN Caramalau et al. (2021b). These fundamentally comprise dedicated trainable models to sample unlabelled data. Apart from Learning Loss, the drawback of these methods is their sub-optimal multi-stage training processes. Also, in limited-budget scenarios, a risk of over-fitting arises from the initial cold-start sampling. Unlike these methods, we propose to optimise the parameters jointly and to customise the objective depending on the downstream task. In our case, we have considered image classification and object detection, but our method can easily be extended to other tasks.

The representations of the labelled and unlabelled examples from the transformer are fed into the bifurcated branch of the network. One branch minimizes the binary cross-entropy loss to distinguish labelled examples from unlabelled examples. The other branch minimizes the task-specific loss: for the classification task, the class categorical loss; for object detection, the combination of confidence and localisation losses as stated in Liu et al. (2016). Thus, the overall objective of the network takes the form

L = λ_task L_task + λ_bce L_bce,

where L_task is the downstream task loss computed on the labelled examples (the class cross-entropy for classification, or the SSD confidence and localisation losses for detection), L_bce is the labelled-vs-unlabelled binary cross-entropy, and λ_task and λ_bce are the weighting factors corresponding to each term. The main-task branch and the sampler branch each have their own learnable parameters.
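A minimal numpy sketch of such a joint objective, assuming the task branch outputs class logits for the labelled half and the discriminator outputs one logit per example (function names and the exact reduction are illustrative):

```python
import numpy as np

def joint_objective(task_logits, class_targets, disc_logits, lab_targets,
                    w_task=1.0, w_disc=0.5):
    """Joint loss: task cross-entropy on the labelled examples plus a
    labelled-vs-unlabelled binary cross-entropy over the whole batch.
    The 1.0 / 0.5 weighting mirrors the ratio reported in the experiments."""
    # Softmax cross-entropy over the labelled examples only.
    z = task_logits - task_logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(class_targets)), class_targets].mean()
    # Binary cross-entropy of the discriminator (labelled=1, unlabelled=0).
    p = 1.0 / (1.0 + np.exp(-disc_logits))
    bce = -(lab_targets * np.log(p) + (1 - lab_targets) * np.log(1 - p)).mean()
    return w_task * ce + w_disc * bce
```

For detection, the cross-entropy term would be swapped for the SSD confidence and localisation losses, leaving the discriminator term unchanged.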

To learn these parameters, we apply gradient back-propagation. We alternate the gradient between the sampler and the task branch for every batch of data, as shown by the backward arrows in Figure 1. This adds an inductive bias while avoiding over-fitting the data or the random noise. An elaborate justification is presented by Goyal and Bengio Goyal and Bengio (2020).


Sampling the unlabelled data. To recap, the proposed AL framework trains the learner and the sampler jointly, and we add a visual transformer as a bottleneck between the feature extractor and the two task branches. Combining the unlabelled and labelled data helps preserve the most meaningful features during the learning stage. Moreover, the transformer is also exposed to the unlabelled data dependencies. This happens because the task of the sampler is to classify the labelled against the unlabelled, the two being categorised as 1 or 0, respectively. If the unlabelled examples are easily differentiated by the sampler, we want to target the most uncertain ones for selection.

A similar selection has been done in an adversarial manner by VAAL Sinha et al. (2019), although their AL framework is not linked with the main task. We therefore derive our selection principle from UncertainGCN Caramalau et al. (2021b). Given a budget of b points, we infer the entire unlabelled pool and select the samples with the lowest posterior confidence of the discriminator branch. We denote by f the confidence score of the posterior, i.e. the predicted probability of being labelled. Considering the first selection stage, we can evaluate the new labelled set with:

D_L^1 = D_L^0 ∪ argmin_{x ∈ D_U}^{b} (1 − f(x)).

We compute 1 − f because the highest confidence score for the unlabelled is when f is closest to 0. This selection process, along with re-training, is repeated until the targeted performance of the downstream task is reached.
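The selection step then reduces to sorting the discriminator scores over the unlabelled pool; a small sketch with illustrative names, assuming the discriminator outputs P(labelled):

```python
import numpy as np

def select_batch(p_labelled, budget):
    """p_labelled[i] is the discriminator's posterior P(labelled) for
    unlabelled point i.  A confidently unlabelled point has p near 0, so
    1 - p is its confidence of being unlabelled; we query the `budget`
    points where that confidence is lowest (the sampler's hardest cases)."""
    unlabelled_conf = 1.0 - np.asarray(p_labelled)
    return np.argsort(unlabelled_conf)[:budget]  # indices sent to the annotator

picked = select_batch([0.05, 0.9, 0.4, 0.6, 0.1], budget=2)
```

In this toy call, the points with P(labelled) of 0.9 and 0.6 are selected, since the sampler is least sure they are unlabelled.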

For convenience, we denote the proposed AL framework as TJLS (Transformer with Joint-Learning Sampler). In the next part, we thoroughly quantify the stated method and motivations. Furthermore, to observe the impact of the transformer bottleneck, we also investigate the pipeline without it, denoted JLS.

4 Experiments

Here we present both quantitative (including ablation studies) and qualitative evaluations in a detailed manner. We employed our method TJLS (visual transformer in the joint-learning sampler) for two different tasks: image classification and object detection.


We choose state-of-the-art methods from different categories: uncertainty-based (MC Dropout Gal and Ghahramani (2016), DBAL Gal et al. (2017)), geometric (CoreSet Sener and Savarese (2018)), and the most recent model-based ones (Learning Loss Yoo and Kweon (2019), VAAL Sinha et al. (2019), CDAL Agarwal et al. (2020), CSAL Gao et al. (2020)).

The standard baseline acquires data through random sampling from a uniform distribution, which requires no active learning mechanism for selection. The first methods to explore the uncertainties in deep learning models were MC Dropout and its extension, DBAL; both approximate the learner in a Bayesian fashion. The two uncertainty-based methods differ in their selection criteria: MC Dropout relies on maximum entropy, while DBAL maximises information with BALD Houlsby et al. (2011). From a geometric perspective, the most successful work, CoreSet Sener and Savarese (2018), estimates risk minimisation between the labelled set and a core set of unlabelled points. Fundamentally, a k-centre Greedy Wolf (2011) algorithm measures the distances between the labelled and unlabelled learner's features.
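For concreteness, the k-centre Greedy routine behind CoreSet can be sketched as follows (an illustrative numpy implementation, not the authors' code):

```python
import numpy as np

def k_center_greedy(features, labelled_idx, budget):
    """Greedy k-centre sketch of CoreSet selection: repeatedly pick the
    unlabelled point farthest from its nearest already-chosen centre."""
    features = np.asarray(features, dtype=float)
    centres = list(labelled_idx)
    # Distance from every point to its nearest current centre.
    dist = np.min(
        np.linalg.norm(features[:, None, :] - features[None, centres, :], axis=-1),
        axis=1)
    picked = []
    for _ in range(budget):
        i = int(np.argmax(dist))        # farthest point becomes a new centre
        picked.append(i)
        d_new = np.linalg.norm(features - features[i], axis=-1)
        dist = np.minimum(dist, d_new)  # update nearest-centre distances
    return picked
```

Each iteration promotes the point farthest from its nearest centre, which yields the radius-covering behaviour of the CoreSet objective described above.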

As our method falls into the third, model-based category, we assess four recent state-of-the-art methods: Learning Loss Yoo and Kweon (2019), VAAL Sinha et al. (2019), CDAL Agarwal et al. (2020), and CSAL Gao et al. (2020). The first work to introduce learnable parameters for sampling is Learning Loss. Similarly to our work, they train the learner end-to-end with the sampler, but the extra module predicts the downstream task loss. A more complex sampler has been proposed by VAAL, where a variational auto-encoder is trained in an adversarial manner with both labelled and unlabelled data. Its selection principle is close to ours in picking the samples that are hard to discriminate; however, the sampler is not task-aware with respect to the main objective. On the other hand, CDAL provides a reinforcement learning module for a Bi-LSTM Hochreiter and Schmidhuber (1997) that captures the contextual diversity of the learner. Its selection criterion keeps the unlabelled batches with the highest reward. The last baseline that we tackle in our experiments is CSAL. This method also optimizes the task model with unlabelled samples, as in JLS. The main difference from ours consists in using augmented unlabelled data to compute a consistency loss that does not overlap with the end task.

4.1 Image Classification

Datasets and Implementation Details.

For image classification, we took three well-known data sets: CIFAR-10, CIFAR-100 Krizhevsky (2012), and FashionMNIST Xiao et al. (2017). The training set of each dataset forms the initial unlabelled pool. CIFAR-10 and CIFAR-100 consist of RGB images, whereas FashionMNIST is grey-scale.

From an architectural perspective, we swap the main deep CNN model between VGG-16 Simonyan and Zisserman (2015) and ResNet-18 He et al. (2016). These learners are combined with the visual transformer and the sampler of our method TJLS. We set the hidden dimension of the transformer block for both the self-attention and feed-forward modules to 128. The label discriminator part of the sampler is composed of two layers, each with 512 hidden units. We fixed these values for our pipeline throughout these experiments; part of the hyper-parameter tuning behind this configuration is discussed later in this section. In the joint objective, we weight the impact of the two losses. From cross-validation, we notice that setting the weight of the sampler loss to 50% of the auxiliary task loss brings more stability to joint-learning; the task weight is set to 1 when labelled images are inferred through the downstream task. During training, we fix the batch size to 128 for all the methods. We optimize the joint-learning with SGD, setting a learning rate of 0.01, a weight decay of 5e-4, and 200 epochs. To measure the performance of the AL framework, we evaluate the mean accuracy over 5 trials for 7 selection stages.

Quantitative analysis. We deploy VGG-16 for the CIFAR-10 and CIFAR-100 experiments. We start with an initial labelled set comprising 10% of the original training set. For each selection phase, we allocate a budget b of 5%, identical to VAAL Sinha et al. (2019) and CDAL Agarwal et al. (2020). Our TJLS and JLS algorithms require a subset of unlabelled examples to train the sampler branch. In these benchmarks, we keep this set equal in size to the labelled set, so the sampler is not biased towards either group.

Figure 2: Quantitative evaluation on CIFAR-10 (left) and CIFAR-100 (middle) with VGG-16, and on FashionMNIST (right) with ResNet-18 [Zoom in for better view]

Figure 2 quantitatively demonstrates the top performance that our two versions (JLS and TJLS) achieve on CIFAR-10 (left) and CIFAR-100 (middle). The important thing to notice is that the proposed AL outperforms the baselines by a large margin from the very early stages, and the performance gain over the baselines is sustained even in the later stages. This highlights the need for and importance of addressing the cold-start problem in model-based AL frameworks. Among the two variants of our model, TJLS is more effective than JLS, which highlights the key role played by the transformer in non-local dependency modelling. We observe a similar trend on FashionMNIST (see Figure 2, right), another popular grey-scale image classification benchmark.

Comparison with SSL 25% 30% 35% 40%
CSAL [28] 67.93 68.97 69.8 70.51
JLS 70.2 71.3 72.18 71.56
TJLS 70.22 71.62 72.45 72.14
Table 1: Comparison with the semi-supervised CSAL method on CIFAR-100 with a Wide ResNet-28 learner

In addition to the previous figures, we compare against the contemporary state-of-the-art SSL method CSAL Gao et al. (2020). We explicitly present in Table 1 the quantitative results under the configuration of their work, where a Wide ResNet-28 Zagoruyko and Komodakis (2016) backbone is plugged in. Beginning the AL selection at 25% of the CIFAR-100 data, our frameworks outperform CSAL by at least 2% in testing accuracy over four stages without adding any augmented data. The results have been averaged over 3 trials.

Intrinsic discussions. For a better understanding of our AL framework, we analyse the behaviour and visualise the latent features of the learner at the first selection stage through t-SNE van der Maaten and Hinton (2008) distributions. We therefore run a fixed-labelled-set experiment on CIFAR-10 to represent both labelled and unlabelled examples after the second cycle of active learning. We deploy the ResNet-18 backbone for both JLS and TJLS. We also include the latent space evaluation for the learner without joint-learning; in this case, we apply CoreSet during the AL selection so that we can qualitatively compare it with our approach.

Figure 3 displays the t-SNE latent spaces of the three specified models: ResNet-18, JLS ResNet-18, and TJLS ResNet-18. All the images come from the available unlabelled pool; however, we already assign the 10 labels to better visualise the clusters. On this note, we also sub-sample both the selected (marked with crosses) and unlabelled sets. The green hexagons added to the figure mark the cluttered areas where some clusters are adjacent, and the dotted lines delimit presumed boundaries between classes. With these highlights, we can observe that JLS and TJLS provide more robust representations than the naive baseline. Moreover, in the cluttered areas, the sampler's uncertainty principle draws more samples than CoreSet, which will further boost the accuracy in the next training cycle.

Figure 3: Intrinsic analysis of the latent space and the active learning selection [Zoom in for view]
Selection criteria for TJLS 10% 15% 20% 25% 30% 35% 40%
TJLS + Random sampling 71.1 70.44 68.1 66.47 62.41 62.36 62.15
TJLS + CoreSet [18] - 72.35 73.55 74.3 74.5 74.6 75.3
TJLS Uncertainty sampling - 73.72 74.3 74.8 75.3 75.6 75.67
Table 2: Evaluation of different selection functions for TJLS on CIFAR-100 with ResNet-18 backbone

Ablation studies. Although the main methodology, TJLS, relies on the visual transformer, throughout the paper we also test the variant without it, JLS. The combined analysis supports the motivation behind the methodology. Furthermore, we re-iterate the comparison between the two and explore the possibility of replacing the transformer with a GCN. In Table 3, we present the results of the three joint-learning architectures. Compared to the Figure 2 results, we follow the same settings and CNN backbone but increase the number of unlabelled examples used for training by 50%. The GCN replaces the transformer bottleneck in the second row of Table 3; its design is inspired by Kipf and Welling (2017), where the nodes of the graph change with the input batch. Similarly to the transformer bottleneck, we want to model the higher-order representations that the CNN lacks. The results in Table 3 confirm the TJLS proposal, which achieves the best accuracy with every labelled subset. The uncertainty sampling selection criterion has been kept for all three variants.

Ablation study 10% 15% 20% 25% 30% 35% 40%
JLS 43 50 55.7 59.1 63 65 67.1
JLS + GCN 45.6 54.23 59.1 61.9 64.51 67.4 69.6
JLS + Transformer (TJLS) 48.97 56.67 61.89 65.54 67.77 69.9 71.7
Table 3: Ablation study - CIFAR-100 testing performance of the joint-learning sampling scheme (JLS), with GCN bottleneck and with TJLS (Learner VGG-16)
(Left)
batch (B)   10%     20%     30%     40%
16          57.5    66.2    69.68   72.1
32          64.4    71.42   73.8    75.3
64          70.12   74.3    75.4    75.74
128         71.1    74.3    75.3    75.67

(Right)
depth  heads  units   10%     20%     30%     40%
1      1      512     43.05   53.9    62.06   65.59
1      2      128     44.41   56.05   61.83   66.19
2      1      128     42.46   57.04   63.45   66.36

Table 4: Hyper-parameter study. (Left) ResNet-18 backbone, batch size variation. (Right) VGG-16 backbone, Transformer architectural configuration. Dataset: CIFAR-100, [mean of 3 trials], % of labelled data

Sampler hyper-parameter study. The sampler of our TJLS pipeline consists of two building blocks, the visual transformer and a fully-connected discriminator. We empirically evaluated both by grid-searching for the optimal architectures. In the discriminator's case, however, we maintain a structure similar to the downstream-task branch so that features fall under the same domain; the parameter tuning therefore focuses mainly on the transformer block. Table 4 presents the most meaningful results, varying the batch size (left) and the depth, number of heads and hidden units (right). Table 4 (left) shows that increasing the input batch size lets the model better explore the relationships between features, which justifies our pre-defined settings. On the right side, increasing the hidden units to 512 reduces TJLS's gains in the first selection stages. Nevertheless, by increasing the depth or the number of heads Vaswani et al. (2017), our pipeline can achieve robust performance with different amounts of data.
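The grid in Table 4 (right) maps naturally onto PyTorch's stock encoder: depth is the number of encoder layers, heads the attention heads, and units the feed-forward width. The sketch below uses `nn.TransformerEncoder` with an illustrative feature dimension; the paper's exact bottleneck implementation may differ.

```python
# Hedged sketch of the transformer bottleneck configurations being
# grid-searched in Table 4 (right).
import torch
import torch.nn as nn

def make_bottleneck(feat_dim: int, depth: int, heads: int, units: int):
    layer = nn.TransformerEncoderLayer(
        d_model=feat_dim, nhead=heads, dim_feedforward=units,
        batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=depth)

# Self-attention over the batch: treat the B examples as one sequence so
# labelled and unlabelled features can attend to each other.
feats = torch.randn(1, 128, 256)         # (1, batch=B, feat_dim)
for depth, heads, units in [(1, 1, 512), (1, 2, 128), (2, 1, 128)]:
    out = make_bottleneck(256, depth, heads, units)(feats)
    print(depth, heads, units, tuple(out.shape))
```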

4.2 Object Detection

Our method is generic and can be extended to other tasks by simply customising the task-specific auxiliary loss in the pipeline. Hence, we replace the categorical cross-entropy loss with the SSD loss Liu et al. (2016) and apply the method to an object detection benchmark.
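The loss swap above can be sketched as a joint objective in which the sampler's binary labelled-vs-unlabelled term is fixed and only the task term changes. The function names and the weighting `lam` below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a joint objective with a swappable task-specific term.
import torch
import torch.nn.functional as F

def joint_loss(task_logits, task_targets, disc_logits, disc_targets,
               task="classification", lam=1.0):
    if task == "classification":
        task_loss = F.cross_entropy(task_logits, task_targets)
    else:
        # For detection, this term would instead be the SSD multibox
        # loss (localisation + confidence).
        raise NotImplementedError
    sampler_loss = F.binary_cross_entropy_with_logits(
        disc_logits, disc_targets)
    return task_loss + lam * sampler_loss

logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
d_logits = torch.randn(8)
d_targets = torch.randint(0, 2, (8,)).float()
loss = joint_loss(logits, targets, d_logits, d_targets)
print(loss.item())                       # a finite, positive scalar
```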
Dataset and Implementation Details. The first work to tackle active learning for object detection was Learning Loss Yoo and Kweon (2019), and we follow its dataset, learner and parameter settings. Briefly, the unlabelled pool consists of 16,551 images from PASCAL VOC 2007 and 2012 Everingham et al. (2010). As for image classification, we start with an initial randomly labelled set of 1,000 images and increase the budget at the same rate; however, we run the AL selection process for 10 stages. As in Yoo and Kweon (2019); Agarwal et al. (2020), the learner's architecture is SSD Liu et al. (2016) with a VGG-16 Simonyan and Zisserman (2015) backbone. The visual transformer bottleneck from our joint-learning selection is attached only to the confidence head of the SSD network. At every AL stage, we compute the mAP on the PASCAL VOC 2007 test set, averaged over 5 trials.

Figure 4: Quantitative evaluation on Pascal VOC 0712 dataset with SSD (left) and StarGAN synthetic data set (right) [Zoom in for better view]

Quantitative analysis. Figure 4 (left) compares the proposed baselines on the object detection experiment. JLS and TJLS behave similarly to the image classification case. They show a high level of generalisation from a cold start at 62.5% mAP, with a 10% performance gain over the other baselines. The uncertainty-based selection fits the learner's representation through all 10 selection stages; consequently, both JLS and TJLS saturate at top performance from the 7th cycle, at over 76% mAP. Our method outperforms the previous state of the art, Learning Loss and CDAL. Between the two proposed variants, TJLS provides non-local interactions within the batch, an advantage that is reflected quantitatively against JLS in this task as well.

4.3 Subsampling synthetic data

Sub-sampling synthetic data to augment real data is an active research area Caramalau et al. (2021b); Bhattarai et al. (2020). Following the experimental setup of Caramalau et al. (2021b) for sub-setting synthetic data, we employed our pipeline to select face-expression synthetic data generated by StarGAN Choi et al. (2018). The selected synthetic data were combined with the real RaFD training data to train a model for the face expression classification task. Figure 4 (right) shows the performance comparison: our method surpasses existing methods by a large margin.

5 Conclusions and Limitations

In this paper, we presented a novel model-based active learning method. Our contributions are the adaptation of the Visual Transformer to address non-local dependencies between all the examples, and the exploitation of unlabelled data by jointly minimising the task-aware objective. Our extensive empirical and qualitative analysis on multiple benchmarks demonstrates the efficacy of the proposed method compared to existing methods. The main limitation to address is scalability: introducing the visual transformer in TJLS increases the number of target model parameters. We nevertheless expect current and future hardware advances, which have already shown continuous growth, to mitigate this. Another caveat is the restriction imposed by the batch self-attention block: some architectures might require smaller batch sizes, where the benefits of TJLS can be reduced. Recent works Wu et al. (2020); Dosovitskiy et al. (2020) have shown that visual transformers demand a large corpus of data; by including the unlabelled examples in TJLS training, we satisfy this requirement. In future work, we would like to extend our method to efficiently handle high-resolution data.

Broader impact: Active learning is a dynamic and important research topic, and our contribution lies in its methodology. We believe the research proposed in this paper can be applied wherever large-scale data and annotation present an issue, such as medical imaging, robotics and many other fields. Moreover, integrating TJLS as a sampling framework would yield greater performance in limited-labelled-data scenarios. Sustained by state-of-the-art results, our method opens a new direction in active learning research.


  • S. Agarwal, H. Arora, S. Anand, and C. Arora (2020) Contextual diversity for active learning. In ECCV, Cited by: §1, §2, §3, §4, §4, §4.1, §4.2.
  • W. H. Beluch Bcai, A. Nürnberger, and J. M. K. Bcai (2018) The power of ensembles for active learning in image classification. In CVPR, Cited by: §2.
  • B. Bhattarai, S. Baek, R. Bodur, and T. Kim (2020) Sampling strategies for gan synthetic data. In ICASSP, Cited by: §4.3.
  • R. Caramalau, B. Bhattarai, and T. Kim (2021a) Active learning for bayesian 3d hand pose estimation. In WACV, Cited by: §1, §2.
  • R. Caramalau, B. Bhattarai, and T. Kim (2021b) Sequential graph convolutional network for active learning. In CVPR, Cited by: §1, §1, §3, §3, §3, §4.3.
  • N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko (2020) End-to-end object detection with transformers. In ECCV, Cited by: §1.
  • R. Caruana (1997) Multitask learning. Machine learning 28 (1), pp. 41–75. Cited by: §1.
  • Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo (2018) Stargan: unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, Cited by: §4.3.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Cited by: §2.
  • A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby (2020) An image is worth 16x16 words: transformers for image recognition at scale. External Links: 2010.11929 Cited by: §1, §1, §2, §3, §3, §5.
  • T. Drugman, J. Pylkkönen, and R. Kneser (2016) Active and semi-supervised learning in asr: benefits on the acoustic and language models. In INTERSPEECH, Cited by: §2.
  • M. Everingham, L. Gool, C. K. Williams, J. Winn, and A. Zisserman (2010) The pascal visual object classes (voc) challenge. International Journal of Computer Vision. Cited by: §4.2.
  • Y. Gal and Z. Ghahramani (2016) Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In ICML, Cited by: §2, §4.
  • Y. Gal, R. Islam, and Z. Ghahramani (2017) Deep Bayesian Active Learning with Image Data. In ICML, Cited by: §1, §2, §4.
  • M. Gao, Z. Zhang, G. Yu, S. Arık, L. Davis, and T. Pfister (2020) Consistency-based semi-supervised active learning: towards minimizing labeling cost. In ECCV, pp. 510–526. Cited by: §1, §1, §2, §3, §4, §4, §4.1.
  • G. Ghiasi, Y. Cui, A. Srinivas, R. Qian, T. Lin, E. D. Cubuk, Q. V. Le, and B. Zoph (2020) Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation. arXiv e-prints. Cited by: §1.
  • A. Goyal and Y. Bengio (2020) Inductive Biases for Deep Learning of Higher-Level Cognition. arXiv e-prints. Cited by: §3.
  • S. Har-Peled and A. Kushal (2005) Smaller coresets for k-median and k-means clustering. In SCG, Cited by: §2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1, §3, §4.1.
  • S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Comput.. Cited by: §4.
  • N. Houlsby, F. Huszár, Z. Ghahramani, and M. Lengyel (2011) Bayesian Active Learning for Classification and Preference Learning. Note: 1112.5745v1 Cited by: §1, §2, §4.
  • T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. In ICLR, Cited by: §3, §4.1.
  • A. Kirsch, J. Van Amersfoort, and Y. Gal (2019) BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning. In NeurIPS, Cited by: §1.
  • A. Kontorovich, S. Sabato, and R. Urner (2016) Active nearest-neighbor learning in metric spaces. In NeurIPS, Cited by: §1, §2.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In NeurIPS, Cited by: §1.
  • A. Krizhevsky (2012) Learning multiple layers of features from tiny images. University of Toronto, pp. . Cited by: §4.1.
  • C. Li, X. Wang, W. Dong, J. Yan, Q. Liu, and H. Zha (2019) Joint active learning with feature selection via cur matrix decomposition. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §2.
  • W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. Berg (2016) SSD: single shot multibox detector. In ECCV, Cited by: §1, §3, §4.2.
  • R. Pinsler, J. Gordon, E. Nalisnick, and J. Miguel Hernandez-Lobato (2019) Bayesian Batch Active Learning as Sparse Subset Approximation. In NeurIPS, Cited by: §1.
  • Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin (2016) Variational autoencoder for deep learning of images, labels and captions. In NeurIPS, Cited by: §1.
  • O. Sener and S. Savarese (2018) Active Learning for Convolutional Neural Networks: A Core-set approach. In ICLR, Cited by: §1, §1, §2, §2, §4, §4.
  • B. Settles (2009) Active learning literature survey. Computer Sciences Technical Report Technical Report 1648, University of Wisconsin–Madison. Cited by: §2.
  • C. E. Shannon (1948) A mathematical theory of communication. The Bell System Technical Journal. Cited by: §2.
  • K. Simonyan and A. Zisserman (2015) Very Deep Convolutional Network for Large-scale image recognition. In ICLR, Cited by: §3, §4.1, §4.2.
  • S. Sinha, S. Ebrahimi, and T. Darrell (2019) Variational Adversarial Active Learning. In ICCV, Cited by: §1, §1, §2, §3, §3, §4, §4, §4.1.
  • I. W. Tsang, J. T. Kwok, and P. Cheung (2005) Core vector machines: fast svm training on very large data sets. JMLR. Cited by: §2.
  • L. van der Maaten and G. Hinton (2008) Visualizing data using t-sne. Note: JMLR Cited by: §4.1.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In NeurIPS, Cited by: §3, §3, §4.1.
  • G. Wolf (2011) Facility location: concepts, models, algorithms and case studies. In Contributions to Management Science, Cited by: §1, §4.
  • B. Wu, C. Xu, X. Dai, A. Wan, P. Zhang, Z. Yan, M. Tomizuka, J. Gonzalez, K. Keutzer, and P. Vajda (2020) Visual transformers: token-based image representation and processing for computer vision. External Links: 2006.03677 Cited by: §1, §1, §2, §3, §3, §3, §5.
  • H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. Note: 1708.07747v2 Cited by: §4.1.
  • D. Yoo and I. S. Kweon (2019) Learning Loss for Active Learning. In CVPR, Cited by: §1, §1, §2, §3, §3, §4, §4, §4.2.
  • S. Zagoruyko and N. Komodakis (2016) Wide residual networks. In BMVC, Cited by: §4.1.
  • B. Zhang, L. Li, S. Yang, S. Wang, Z. Zha, and Q. Huang (2020) State-Relabeling Adversarial Active Learning. In CVPR, Cited by: §2.
  • S. Zhang, L. Wen, X. Bian, Z. Lei, and S. Li (2018) Single-shot refinement neural network for object detection. In CVPR, Cited by: §1.

Appendix A Supplementary Material

A.1 Standard deviations in the quantitative evaluations

For better clarity, in our figures for image classification (Figure 2) and object detection (Figure 4, left) we excluded the standard deviation representation, keeping only the mean values to avoid overlapping curves. We present these values in Table A.1.


Selection cycle 1 2 3 4 5 6 7
[CIFAR-10] JLS .21 .09 .3 .11 .2 .19 .33
[CIFAR-10] TJLS .16 .2 .23 .12 .1 .09 .15
[CIFAR-100] JLS .04 .01 .22 .18 .22 .19 .18
[CIFAR-100] TJLS .1 .16 .33 .17 .31 .15 .31
[FashionMNIST] JLS .44 .64 .55 .13 .47 .25 .75
[FashionMNIST] TJLS .95 .75 .46 .4 .19 .58 .7
[PASCAL VOC] JLS .06 .04 .03 .06 .02 .01 .05
[PASCAL VOC] TJLS .1 .02 .04 .08 .04 .02 .07
Table A.1: Standard deviation of the JLS/TJLS qualitative results on CIFAR-10/100, FashionMNIST and Pascal VOC

We can observe that the deviations in most experiments are relatively low. This robustness is due to the high degree of generalisation achieved while training with our proposed pipeline. The values are in fractions of a percentage point of testing accuracy/mAP.

A.2 Experiment compute resources

We conduct all our experiments in Python 3 with the PyTorch deep learning library. To speed up training, we run the models on Graphical Processing Units (GPUs). For image classification, any of the presented architectures fits on a single NVIDIA 1080Ti GPU with 11 GB of memory; the object detection models are larger, so we parallelised those runs over two GPUs. Details on the parameter increase of our JLS and TJLS frameworks are listed in Table A.2.


Model       Baseline     JLS          TJLS
VGG-16      14,765,988   15,029,157   15,620,005
ResNet-18   11,220,132   11,483,301   12,074,149
SSD         26,285,486   26,468,859   29,982,859
Table A.2: Number of parameters of JLS and TJLS samplers added to the VGG-16, ResNet-18, SSD backbones
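Parameter counts like those in Table A.2 can be reproduced with a one-liner over any PyTorch module. The model below is a small illustrative stand-in, not the actual VGG-16/ResNet-18/SSD pipelines.

```python
# Counting trainable parameters of a PyTorch model, as used for the
# Table A.2 comparison (shown on a toy model).
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))
print(count_params(model))   # 512*128 + 128 + 128*2 + 2 = 65922
```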