TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification

03/14/2020 ∙ by Moshe Lichtenstein, et al.

The field of Few-Shot Learning (FSL), or learning from very few (typically 1 or 5) examples per novel class (unseen during training), has received a lot of attention and seen significant performance advances in the recent literature. While a number of techniques have been proposed for FSL, several factors have emerged as most important for FSL performance, awarding SOTA even to the simplest of techniques. These are: the backbone architecture (bigger is better), the type of pre-training on the base classes (meta-training vs. regular multi-class; currently regular wins), the quantity and diversity of the base classes set (the more the merrier, resulting in richer and better adaptive features), and the use of self-supervised tasks during pre-training (serving as a proxy for increasing the diversity of the base set). In this paper we propose yet another simple technique that is important for few-shot learning performance: a search for a compact feature sub-space that is discriminative for a given few-shot test task. We show that Task-Adaptive Feature Sub-Space Learning (TAFSSL) can significantly boost performance in FSL scenarios where some additional unlabeled data accompanies the novel few-shot task, be it the set of unlabeled queries (transductive FSL) or an additional set of unlabeled data samples (semi-supervised FSL). Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state of the art in both transductive and semi-supervised FSL settings by more than 5%, while increasing the benefit of using unlabeled data in FSL to above 10% performance gain.




1 Introduction

Figure 1: (a) TAFSSL overview: the red and blue pathways are for semi-supervised and transductive FSL respectively. T - few-shot task; S - support set; Q - query set; U - optional set of additional unlabeled examples (semi-supervised FSL); F - original feature space; A - task-adapted feature sub-space. (b) Improved SNR in A: the normalized (by min entropy) Mutual Information (MI) between either train or test classes and the features in F (of dimension D) or in A (of dimension d) provides the motivation for using A over F. Computed on miniImageNet.

The great success of Deep Learning (DL) methods in solving complex computer vision problems can be attributed in part to the emergence of large labeled datasets [28, 41] and strong parallel hardware. Yet in many practical situations, the amount of data and/or labels available for training or adapting the DL model to a new target task is prohibitively small. In extreme cases, we might be interested in learning from as little as one example per novel class. This is the typical scenario of Few-Shot Learning (FSL), a very active and exciting research topic with many concurrent works [23, 47, 51, 52]. While many great techniques have been proposed to improve FSL performance, recent studies [4, 16, 52] have shown that there exist a number of important factors that improve FSL performance largely regardless of the model and the learning algorithm used. These include: (i) significant performance gains observed when increasing the size and the number of parameters of the backbone generating the feature representations of the images [4, 52]; (ii) gains when pre-training the FSL model on the base classes dataset as a regular multi-class classifier (to all base classes at once) [52], as opposed to the popular meta-training by generating a lot of synthetic few-shot tasks from random small groups of base classes [23, 51]; (iii) gains when pre-training on more (diverse) base classes (e.g. the higher empirical FSL performance on the seemingly more difficult tieredImageNet benchmark than on the supposedly simpler miniImageNet benchmark [23, 52]); (iv) gains when artificially increasing the diversity and complexity of the base classes dataset by introducing additional self-supervised tasks during pre-training [16]. Correctly using these factors allows the simple Nearest Neighbor classifier to attain state-of-the-art FSL performance [52], improving upon more sophisticated FSL techniques.

All the aforementioned factors and gains concern the base-classes pre-training stage of the FSL methods' backbones. Much less attention has been given to adapting the feature spaces resulting from these backbones to the novel classes' few-shot tasks at test time. It has been shown that some moderate gains can be obtained by using the few training examples (the support set) of the novel tasks to fine-tune the backbones (changing the feature spaces slightly), with the best gains obtained in the higher-resolution and higher-'shots' (support examples per class) regimes [32]. Fine-tuning was also shown to be effective in the semi-supervised setting [25], where additional unlabeled examples accompany each novel few-shot task. It has also been shown that label propagation and clustering operating in the pre-trained backbone's original feature space provide some gains for transductive FSL (allowing unlabeled queries to be used jointly to predict their labels in a bulk) and semi-supervised FSL [29, 39]. Finally, meta-learned backbone architecture adaptation mechanisms were proposed in [11], allowing for slight backbone architecture transformations adaptive to the few-shot test task.

However, slight adaptation of the backbone's feature space to a given task, using a few iterations of fine-tuning on the support set or other techniques, might not be sufficient to bridge the generalization gap introduced by the FSL backbone observing completely novel classes unseen during training (as confirmed by the relatively moderate performance gains obtained from these techniques). Intuitively, we can attribute this in part to many of the feature space dimensions (feature vector entries) becoming 'useless' for a given set of novel classes in the test few-shot task. Indeed, every feature vector entry can be seen as a certain 'pattern detector' that fires strongly when a certain visual pattern is observed in the input image. The SGD (or other) backbone training makes sure all of these patterns are discriminative for the classes used for pre-training. But, due to likely over-fitting, many of these patterns are base-classes specific and do not fire for the novel test classes. Hence, their corresponding feature vector entries will mainly produce 'noise values' corresponding to 'pattern not observed'. In other words, the ratio of feature vector entries that can be used for recognition of novel classes to ones that mainly output 'noise' significantly decreases for a test few-shot task (Figure 1b). And it is unlikely that small modifications to the feature space recover a significant portion of the 'noise producing' feature entries. The high level of noise in the feature vectors intuitively has significant adverse implications for the performance of the FSL classifier operating on this vector; the popular distance-based classifiers, like nearest-neighbor [52] and Prototypical Networks (PN) [47], are especially affected. In light of this intuition, we conjecture that for a significant performance boost we need to concentrate our efforts on the so-called Task-Adaptive Feature Sub-Space Learning (TAFSSL): seeking sub-spaces of the backbone's feature space that are discriminative for the novel classes of the test few-shot task and that are 'as noise free as possible', that is, most of the sub-space dimensions indeed 'find' the patterns they represent in the images of the novel categories belonging to the task.
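To make the 'noise dimensions' intuition concrete, the toy simulation below compares 1-shot nearest-neighbor accuracy with and without appended noise-only feature dimensions. All parameter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_accuracy(dim_signal, dim_noise, n_classes=5, n_queries=200, sigma=1.0):
    """1-shot NN accuracy when only `dim_signal` feature dimensions carry
    class information and `dim_noise` dimensions emit zero-mean noise."""
    # class prototypes: well separated in the signal dims, noise elsewhere
    protos = np.concatenate(
        [rng.normal(0, 3.0, (n_classes, dim_signal)),
         rng.normal(0, sigma, (n_classes, dim_noise))], axis=1)
    correct = 0
    for _ in range(n_queries):
        c = rng.integers(n_classes)
        noise = rng.normal(0, sigma, dim_signal + dim_noise)
        # the query shares only the signal part of its class prototype
        q = np.concatenate([protos[c, :dim_signal], np.zeros(dim_noise)]) + noise
        correct += int(np.argmin(((protos - q) ** 2).sum(1)) == c)
    return correct / n_queries

acc_clean = nn_accuracy(dim_signal=10, dim_noise=0)
acc_noisy = nn_accuracy(dim_signal=10, dim_noise=500)
print(acc_clean, acc_noisy)  # noise-only dimensions degrade NN accuracy
```

The noise dimensions add a random, class-independent term to every distance, drowning out the vote of the few informative dimensions, which is exactly the effect TAFSSL aims to remove.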

In this paper we set out to explore TAFSSL under the transductive and the semi-supervised few-shot settings. In many practical applications of FSL, alongside the few labeled training examples (the support set) of the few-shot task, additional unlabeled examples containing instances of the target novel classes are available. Such is the situation in transductive FSL, which assumes that the query samples arrive in a 'bulk' and not one-by-one, and hence we can answer all the queries 'at once' while using the query set as unlabeled data. A similar situation exists in semi-supervised FSL, where a set of unlabeled images simply accompanies the few-shot task. As can be observed from our experiments, TAFSSL, and especially TAFSSL accompanied by the specific (proposed) forms of clustering-based approaches, provides a very significant boost to FSL performance under the transductive and semi-supervised settings. Specifically, we obtain large absolute improvements on the popular miniImageNet [51] and tieredImageNet [39] 1-shot and 5-shot benchmarks, both in the transductive FSL setting and in the semi-supervised FSL setting (over the corresponding state of the art, while using their respective evaluation protocols). Figure 1a illustrates an overview of the proposed approach.

To summarize, we offer the following contributions: (i) we highlight Task-Adaptive Feature Sub-Space Learning (TAFSSL) as an important factor for Few-Shot Learning (FSL); we explore several TAFSSL methods and demonstrate significant performance gains obtained using TAFSSL for transductive and semi-supervised FSL; (ii) we propose two variants of clustering that can be used in conjunction with TAFSSL to obtain even greater performance improvements; (iii) we obtain new state-of-the-art transductive and semi-supervised FSL results on two popular benchmarks: miniImageNet and tieredImageNet; (iv) we offer an extensive ablation study of the various aspects of our approach, including sub-space dimension, unlabeled data quantity, effects of out-of-distribution noise in unlabeled data, backbone architectures, and finally the effect of class imbalance (skew) in unlabeled data (so far unexplored in all previous works).

2 Related work

In this section we briefly review modern Few-Shot Learning (FSL) approaches, focusing in more detail on the transductive and semi-supervised FSL methods that leverage unlabeled data. The meta-learning methods [51, 47, 50, 24, 57] learn from few-shot tasks (or episodes) rather than from individual labeled samples. Each such task is a small dataset, with a few labeled training examples (a.k.a. support) and a few test examples (a.k.a. query). The goal is to learn a model that can adapt to new tasks with novel categories unseen during training. The gradient-based meta-learners [13, 26, 58, 37, 31, 42, 4, 23, 33] search for models that are good initializations for transfer to novel few-shot tasks. Typically, in these methods higher-order derivatives are used for meta-training, optimizing the loss the model would have after applying one or several gradient steps. At test time, the model is fine-tuned to the novel few-shot tasks. In [12] ensemble methods for few-shot learning are proposed. The few-shot learning by metric-learning methods [54, 47, 40, 15, 44, 49, 33, 21, 55, 8] learn a non-linear embedding into a metric space in which nearest neighbor (or a similar classifier) is used to classify instances of new categories according to their proximity to the few support examples embedded in the same space. In [8, 2] the distance to a class prototype is replaced by the distance to a class sub-space. As opposed to [8] and [2], which optimize a sub-space for each class (according to that class's support examples), in TAFSSL we seek a single sub-space optimally adapted to the entire data of the few-shot task, labeled and unlabeled. Notably, in [52] regular (non-meta-learning) pre-training was used in combination with 'large' backbones (e.g. DenseNet [20]) and a nearest-neighbor classifier to achieve state-of-the-art results, highlighting the importance of diverse pre-training and backbone size for FSL performance.
The generative and augmentation-based few-shot approaches [34, 10, 48, 17, 7, 27, 38, 3, 18, 46, 53, 5, 56, 45, 1] are methods that (learn to) generate more samples from the one or a few examples available for training in a given few-shot learning task.

Transductive and semi-supervised FSL: In many practical applications, in addition to the labeled support set, we have additional unlabeled data accompanying the few-shot task. In transductive FSL [9, 29, 22, 36] we assume the set of task queries arrives in a bulk, and we can simply use it as a source of unlabeled data, allowing query samples to 'learn' from each other. In [9] the query samples are used in fine-tuning, in conjunction with an entropy-minimization loss, in order to maximize the certainty of their predictions. In semi-supervised FSL [25, 39, 2, 29] the unlabeled data comes in addition to the support set and is assumed to have a distribution similar to that of the target classes (although some unrelated-sample noise is also allowed). In LST [25], self-labeling and soft attention are used on the unlabeled samples, intermittently with fine-tuning on the labeled and self-labeled data. Similarly to LST, [39] updates the class prototypes using k-means-like iterations initialized from the PN prototypes. Their method also includes down-weighting the potential distractor samples (likely not belonging to the target classes) in the unlabeled data. In [2] unlabeled examples are used through soft-label propagation. In [43] semi-supervised few-shot domain adaptation is considered. In [15, 29, 22] graph neural networks are used for sharing information between labeled and unlabeled examples in the semi-supervised [15, 29] and transductive [22] FSL settings. Notably, in [29] a Graph Construction network is used to predict the task-specific graph for propagating labels between the samples of a semi-supervised FSL task.

3 Method

In this section we derive the formal definition of TAFSSL and examine several approaches to it. In addition, we propose several ways to combine TAFSSL with clustering followed by Bayesian inference, which, as shown in the Results section, is very beneficial to performance.



3.1 Task-adaptive feature sub-space learning

Let B be a CNN backbone (e.g. ResNet [19] or DenseNet [20]) pre-trained for FSL on a (large) dataset with a set of base (training) classes. Here, for simplicity, we refer equally to all the different forms of pre-training proposed for FSL in the literature, be it meta-training [51] or 'regular' training of a multi-class classifier on all the base classes at once [52]. Denote by f = B(X) the feature vector corresponding to an input image X, represented in the feature space F (of dimension D) by the backbone B. Under this notation, we define the goal of linear Feature Sub-Space Learning (FSSL) as finding an 'optimal' (for a certain task) linear sub-space A of F, given by a linear mapping W of size d × D (typically with d ≪ D), such that

z = W · B(X)

is the new representation of an input image X as a vector in the feature sub-space A (spanned by the rows of W).

Now, consider an N-way, K-shot few-shot test task T with a query set Q = {q_j} and a support set S = {(s_i, y_i)}, where y_i ∈ {1, …, N} is the class label of image s_i, so in S we have K training samples (shots) for each of the N classes in the task T. Using the PN [47] paradigm, we assume K = 1 (otherwise the support examples of the same class are averaged into a single class prototype) and that each q ∈ Q is classified using Nearest Neighbor (NN) in F:

ŷ(q) = y_{i*}, where i* = argmin_i ‖B(q) − B(s_i)‖.

Then, in the context of this given task T, we can define linear Task-Adaptive FSSL (TAFSSL) as a search for a linear sub-space of the feature space F defined by a T-specific projection matrix W_T, such that the probability

P(ŷ(q) = y_i | q) = exp(−‖W_T B(q) − W_T B(s_i)‖² / τ) / Σ_k exp(−‖W_T B(q) − W_T B(s_k)‖² / τ)     (3)

of predicting q to belong to the same class as the 'correct' support example s_i is maximized, while of course the true label is unknown at test time (here τ in Eq. 3 is a temperature parameter).
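A soft nearest-prototype classifier of this form can be sketched in a few lines; the toy prototypes and temperature below are illustrative, not values from the paper:

```python
import numpy as np

def prototype_probs(query, prototypes, T=1.0):
    """Soft nearest-prototype class probabilities:
    p(y = i | q) is proportional to exp(-||q - p_i||^2 / T)."""
    d2 = ((prototypes - query) ** 2).sum(axis=1)  # squared distances to prototypes
    logits = -d2 / T
    logits -= logits.max()                        # numerical stability
    p = np.exp(logits)
    return p / p.sum()

protos = np.array([[0., 0.], [4., 0.], [0., 4.]])  # one prototype per class
q = np.array([0.5, 0.2])
p = prototype_probs(q, protos, T=2.0)
print(p.argmax())  # the query is assigned to its nearest prototype (class 0)
```

The temperature T controls how sharply the probability mass concentrates on the nearest prototype.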

3.1.1 Discussion.

Using the 'pattern detectors' intuition from section 1, let us consider the activations of each dimension v_i of the feature space F as a random variable with a Mixture of (two) Gaussians (MoG) distribution:

p(v_i) = π_n · N(v_i; μ_n, σ_n²) + π_s · N(v_i; μ_s, σ_s²),

where (μ_n, σ_n²) and (μ_s, σ_s²) are the expectation and variance of v_i's distribution of activations when v_i does not detect (noise) or detects (signal) its pattern, respectively. The π_n and π_s are the noise and the signal prior probabilities, respectively (π_n + π_s = 1). For brevity, we drop the index i from the distribution parameters. Naturally, for the training classes, for most dimensions π_s is substantial, implying that the dimension is 'useful' and does not produce only noise (Figure 1b, top). However, for the new (unseen during training) classes of a test task T this is no longer the case, and it is likely that π_s ≈ 0 for the majority of dimensions (Figure 1b, middle). Assuming (for the time being) that the v_i are conditionally independent, the squared Euclidean distance can be seen as an aggregation of votes for the 'still useful' (for the classes of T) patterns, plus a sum of squares of i.i.d. (zero-mean) Gaussian samples for the patterns that are 'noise only' on the classes of T. The latter 'noise dimensions' randomly increase the distance by an expected order of σ_n² · D_n, where D_n is the number of noise features of the feature space for the classes of task T. Using this intuition, if we could find a TAFSSL sub-space adapted to the task T in which D_n is reduced (Figure 1b, bottom), we would improve the performance of the NN classifier on T. With only a few labeled samples in the support set S, we cannot expect to effectively learn the projection W_T using SGD on S. Yet, when unlabeled data accompanies the task (the query set Q in transductive FSL, or an additional set U of unlabeled samples in semi-supervised FSL), we can use this data to find W_T such that: (a) the dimensions of the sub-space are 'disentangled', meaning their pairwise independence is maximized; (b) after the 'disentanglement', we choose the dimensions that are expected to 'exhibit the least noise', or, in our previous MoG notation, to have the largest π_s values.

Luckily, simple classical methods can be used for TAFSSL, approximating requirements (a) and (b). Both Principal Component Analysis (PCA) [35] and Independent Component Analysis (ICA) [6], applied in F to the set of samples S ∪ Q (transductive FSL) or S ∪ U (semi-supervised FSL), can approximate (a): PCA under the approximate joint Gaussianity assumption on the v_i, and ICA under the approximate non-Gaussianity assumption. In addition, if after the PCA rotation we subtract the mean, the variance of the (zero-mean) MoG mixture for each transformed (independent) dimension is:

Var(v_i) = π_n σ_n² + π_s σ_s² + π_n π_s (μ_s − μ_n)².

Then, assuming σ_n and μ_n are roughly the same for all dimensions (which is reasonable due to the heavy use of Batch Normalization (BN) in modern backbones), choosing the dimensions with the higher variance in PCA would lead to larger (μ_s − μ_n), σ_s, and π_s, all of which are likely to increase the signal-to-noise ratio of the NN classifier. A larger (μ_s − μ_n) leads to patterns with stronger 'votes', a larger σ_s means a wider range of values that may better discriminate multiple classes, and a larger π_s means patterns that are more frequent for the classes of T. Similarly, the dimensions with bigger π_s exhibit a stronger departure from Gaussianity and hence would be chosen by ICA.
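The mixture-variance identity used in this discussion can be checked numerically; the MoG parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical MoG parameters for one feature dimension
pi_s, mu_n, mu_s, sd_n, sd_s = 0.3, 0.0, 2.0, 0.5, 1.0
pi_n = 1.0 - pi_s

# closed form: Var = pi_n*sd_n^2 + pi_s*sd_s^2 + pi_n*pi_s*(mu_s - mu_n)^2
var_formula = pi_n * sd_n**2 + pi_s * sd_s**2 + pi_n * pi_s * (mu_s - mu_n)**2

# Monte-Carlo estimate of the same variance from mixture samples
n = 200_000
from_signal = rng.random(n) < pi_s
samples = np.where(from_signal,
                   rng.normal(mu_s, sd_s, n),
                   rng.normal(mu_n, sd_n, n))
var_mc = samples.var()
print(var_formula, var_mc)  # the two estimates agree closely
```

The cross term π_n π_s (μ_s − μ_n)² is what ties a dimension's overall variance to the separation between its noise and signal modes, which is why variance-based selection (PCA) is a reasonable proxy for 'signal-carrying' dimensions.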

3.1.2 TAFSSL summary.

To summarize, following the discussion above, both PCA and ICA are good simple approximations for TAFSSL using unlabeled data and therefore we simply use them to perform the ’unsupervised low-dimensional projection’ in the first step of our proposed approach (Figure 1a). As we show in the Results section 4, even on their own (when directly followed by an NN classifier) they lead to significant FSL performance boosts (Tables 1 and 2).
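This first step is easy to sketch with sklearn's PCA/FastICA fitted on the pooled task samples. The 640-D features, the pool sizes, and the choice d = 10 below are illustrative assumptions, not the paper's validated values:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def tafssl_project(support, extra_unlabeled, d=10, method="pca", seed=0):
    """Fit a task-adaptive sub-space on all available task samples
    (support + queries in transductive FSL, or support + unlabeled pool
    in semi-supervised FSL) and return the projection function."""
    X = np.vstack([support, extra_unlabeled])
    if method == "pca":
        model = PCA(n_components=d).fit(X)
    else:  # ICA variant; the paper's best results pair it with clustering
        model = FastICA(n_components=d, random_state=seed, max_iter=1000).fit(X)
    return model.transform

# toy usage: 640-D features, 5-way 1-shot task with 75 unlabeled samples
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 640))
unlabeled = rng.normal(size=(75, 640))
project = tafssl_project(support, unlabeled, d=10)
z = project(support)
print(z.shape)  # (5, 10)
```

After projection, the NN (or prototype) classifier simply operates on the d-dimensional vectors instead of the original D-dimensional ones.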

3.2 Clustering

It has been shown that clustering is a useful tool for transductive and semi-supervised FSL [39]. There, it was assumed that modes of the task data distribution (including both labeled and unlabeled image samples) correspond to classes. However, in the presence of feature 'noise' in F, as discussed in section 3.1, the 'class' modes may become mixed with the noise distribution modes, which may blur the class mode boundaries or swallow the class modes altogether. Indeed, the performance gains in [39] were not very high.

In contrast, after applying PCA- or ICA-based TAFSSL, the feature noise levels are usually significantly reduced (Figure 1b), making the task-adapted feature sub-space of the original feature space much more effective for clustering. We propose two clustering-based algorithms: Bayesian K-Means (BKM) and Mean-Shift Propagation (MSP). In the Results section 4 we show that, following PCA- or ICA-based TAFSSL, these clustering techniques add a further noticeable gain to the performance. They are used to perform the 'unsupervised clustering' + 'Bayesian inference' steps of our approach (Figure 1a).

BKM is a soft k-means [30] variant accompanied by Bayesian inference for computing class probabilities for the queries. In BKM, each k-means cluster, obtained for the entire set of (labeled + unlabeled) task data, is treated as a Gaussian mixture distribution with a mode for each class. BKM directly computes the class probability for each query q by averaging the posterior of q in each of the mixtures, with weights corresponding to q's association probability with each cluster. The details of BKM are summarized in the Algorithm 3.2 box.

algorithm Bayesian K-Means (BKM)

Cluster the samples of task T (S ∪ Q or S ∪ U in transductive or semi-supervised FSL respectively) into clusters, associating each sample to c_j, the centroid of cluster j.
for each query q, cluster j, and class i do
     compute the association probability p(j | q) of q to cluster j
     compute the within-cluster class posterior p(i | q, j)
return the class probabilities p(i | q) = Σ_j p(j | q) · p(i | q, j)
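A simplified sketch of BKM along these lines follows. The exact forms of the cluster-association weights and of the in-cluster class posteriors are our assumptions (simple softmaxes over squared distances), and all names and toy values are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def bkm_predict(support, sup_labels, queries, unlabeled=None, n_clusters=5, T=2.0, seed=0):
    """Cluster all task samples, then average per-cluster class posteriors
    weighted by each query's soft association to the clusters."""
    pool = [support, queries] + ([unlabeled] if unlabeled is not None else [])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(np.vstack(pool))
    centroids = km.cluster_centers_
    classes = np.unique(sup_labels)
    protos = np.stack([support[sup_labels == c].mean(0) for c in classes])
    # soft association of each query to each cluster: p(j | q)
    w = softmax(-((queries[:, None] - centroids[None]) ** 2).sum(-1) / T)
    # how strongly each class mode is represented in each cluster
    pi = softmax(-((protos[:, None] - centroids[None]) ** 2).sum(-1) / T, axis=0)
    # class posterior of each query w.r.t. the class modes: p(i | q, j)
    post = softmax(-((queries[:, None] - protos[None]) ** 2).sum(-1) / T)
    probs = (w[:, None, :] * pi[None] * post[:, :, None]).sum(-1)
    return probs / probs.sum(1, keepdims=True)

support = np.array([[0., 0.], [10., 0.], [0., 10.]])
labels = np.array([0, 1, 2])
queries = np.array([[0.5, 0.], [9.5, 0.], [0., 9.5]])
probs = bkm_predict(support, labels, queries, n_clusters=3)
print(probs.argmax(1))  # [0 1 2]
```

In this toy 3-way 1-shot episode, each query lands in the cluster of its class prototype and is assigned the correct label.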

MSP is a mean-shift [14] based approach used to update the prototype of each class. In MSP we perform a number of mean-shift-like iterations on the prototypes [47] of the classes, taken within the distribution of all the (labeled and unlabeled) samples of T. In each iteration, for each prototype (of class i), we compute a set of most confident samples within a certain confidence radius and use the mean of this set as the next prototype (of class i). The size of this set is itself balanced among the classes. The details of MSP are summarized in the Algorithm 3.2 box. Following MSP, the updated prototypes are used in standard NN-classifier fashion to obtain the class probabilities.

algorithm Mean-Shift Propagation (MSP)

Compute the prototypes P_i as the per-class means of the support samples (over the k shots in task T)
for N times do
     Compute the distances of all the samples of S ∪ Q (or S ∪ U) to the prototypes P_i
     Compute the class predictions (confidences) for all these samples
     Keep the samples whose confidence exceeds a threshold parameter
     K = the per-class budget of kept samples, balanced among the classes
     Compute the new prototypes: P_i = the mean of the top K samples, sorted in decreasing order of confidence for class i
return labels predicted using NN to the final prototypes
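The update loop can be sketched as follows; the particular confidence measure (softmax over negative squared distances) and the fixed per-class top_k used here are our simplifying assumptions:

```python
import numpy as np

def msp_prototypes(support, sup_labels, pool, n_iters=5, top_k=10, T=5.0):
    """Mean-Shift-Propagation-style update: repeatedly move each class
    prototype to the mean of its top_k most confident samples from the
    (labeled + unlabeled) task pool; top_k is equal (balanced) across classes."""
    classes = np.unique(sup_labels)
    # initial prototypes: per-class means of the support samples
    protos = np.stack([support[sup_labels == c].mean(0) for c in classes])
    for _ in range(n_iters):
        d2 = ((pool[:, None] - protos[None]) ** 2).sum(-1)   # (n, n_classes)
        logits = -d2 / T
        conf = np.exp(logits - logits.max(1, keepdims=True))
        conf /= conf.sum(1, keepdims=True)                   # per-sample class confidences
        new_protos = []
        for i in range(len(classes)):
            top = np.argsort(-conf[:, i])[:top_k]            # most confident for class i
            new_protos.append(pool[top].mean(0))
        protos = np.stack(new_protos)
    return protos

# toy usage: two classes around (0,0) and (8,0), support slightly off-center
rng = np.random.default_rng(1)
support = np.array([[1.0, 0.0], [7.0, 0.0]])
pool = np.vstack([rng.normal([0, 0], 0.5, (20, 2)),
                  rng.normal([8, 0], 0.5, (20, 2))])
protos = msp_prototypes(support, np.array([0, 1]), pool)
```

In the toy example, the biased initial prototypes drift toward the true cluster centers of the unlabeled pool within a few iterations.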

3.3 Implementation details

All the proposed TAFSSL approaches were implemented in PyTorch. Our code will be released upon acceptance. We used the PyTorch native version of SVD for PCA, and FastICA from sklearn for ICA. The k-means from sklearn was used for BKM. The sub-space dimensions for the PCA-based and the ICA-based TAFSSL were set using validation (section 4.4.3). The hyper-parameters of MSP and of BKM were likewise set using validation. We used the backbone implementations from [52]. Unless otherwise specified, the DenseNet backbone was used (for a backbones ablation, please see section 4.4.4). Using the most time-consuming of the proposed TAFSSL approaches (ICA + BKM), the per-episode running time was measured to be on the order of seconds (CPU) for a typical episode.

4 Results

We evaluated our approach on the popular few-shot classification benchmarks miniImageNet [51] and tieredImageNet [39], used in all transductive and semi-supervised FSL works [9, 29, 22, 36, 25, 39, 2]. On these benchmarks, we used the standard evaluation protocols, exactly as in the corresponding (compared) works. The results of the transductive and semi-supervised FSL evaluations, together with a comparison to previous methods, are summarized in Tables 1 and 2 respectively, and are detailed and discussed in the following sections. All the performance numbers are given in accuracy (%), and confidence intervals are reported. The tests are performed on random episodes, with 1 or 5 shots (support examples per class) and a fixed number of queries per episode (unless otherwise specified). For each dataset, the standard train / validation / test splits were used. For each dataset, the training subset was used to pre-train the backbone (from scratch) as a regular multi-class classifier over all the train classes, same as in [52]; the validation data was used to select the best model along the training epochs and to choose the hyper-parameters; and episodes generated from the test data (with test categories unseen during training and validation) were used for meta-testing to obtain the final performance. In all experiments not involving BKM, the class probabilities were computed using the NN classifier to the class prototypes.

4.1 FSL benchmarks used in our experiments

The miniImageNet benchmark (Mini) [51] is a standard benchmark for few-shot image classification, consisting of 100 randomly chosen classes from ILSVRC-2012 [41]. These are randomly split into disjoint subsets of 64 meta-training, 16 meta-validation, and 20 meta-testing classes. Each class has 600 images of size 84×84. We use the same splits as [23] and prior works.

The tieredImageNet benchmark (Tiered) [39] is a larger subset of ILSVRC-2012 [41], consisting of 608 classes grouped into 34 high-level classes. These are divided into 20 disjoint meta-training high-level classes, 6 meta-validation classes, and 8 meta-testing classes, corresponding to 351, 97, and 160 classes for meta-training, meta-validation, and meta-testing respectively. Splitting at the higher level effectively minimizes the semantic similarity between classes belonging to different splits. All images are of size 84×84.

4.2 Transductive FSL setting

In these experiments we consider the transductive FSL setting, where the set of queries is used as the source of the unlabeled data. This setting is typical of cases where an FSL classifier receives a bulk of query data for offline evaluation. In Table 1 we report the performance of our proposed TAFSSL (PCA, ICA), clustering (BKM, MSP), and TAFSSL + clustering (PCA/ICA + BKM/MSP) approaches and compare them to a set of baselines and state-of-the-art (SOTA) transductive FSL methods from the literature: TPN [29] and Transductive Fine-Tuning [9]. We also compare to the SOTA regular FSL result of [52] in order to highlight the effect of using the unlabeled queries for prediction. As baselines, we maximally adapt the method of [52] to the transductive FSL setting. These are the so-called "trans-mean-sub", which on each test episode subtracts the mean of all the samples (S ∪ Q) from all the samples, followed by L2 normalization (in order to reduce the episode bias); and "trans-mean-sub(*)", which does the same but computes and subtracts the means of the S and Q sample sets separately (in order to better align their distributions). As can be seen from Table 1, on both the Mini and the Tiered transductive FSL benchmarks, the top performing of our proposed TAFSSL-based approaches (ICA+MSP) consistently outperforms all the previous (transductive and non-transductive) SOTA and the baselines, by more than 10% in the more challenging 1-shot setting and by more than 2% in the 5-shot setting, underlining the benefits of the transductive setting and the importance of TAFSSL for it. In the following sections, we only evaluate the ICA-based TAFSSL variants, as they were found to consistently outperform the PCA-based variants under all settings.
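The "trans-mean-sub" baselines are straightforward to reproduce; a minimal sketch (the function and argument names are ours):

```python
import numpy as np

def trans_mean_sub(support, queries, separate=False):
    """Episode-bias reduction baseline: subtract the episode mean and
    L2-normalize the features. With separate=True (the 'trans-mean-sub(*)'
    variant), the support and query means are computed and subtracted
    separately to better align the two distributions."""
    if separate:
        s = support - support.mean(axis=0)
        q = queries - queries.mean(axis=0)
    else:
        m = np.vstack([support, queries]).mean(axis=0)
        s, q = support - m, queries - m
    s = s / np.linalg.norm(s, axis=1, keepdims=True)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    return s, q

rng = np.random.default_rng(0)
s, q = trans_mean_sub(rng.normal(size=(5, 640)), rng.normal(size=(75, 640)))
```

The normalized features are then fed to the same NN classifier as in the non-transductive case.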

Method | Mini 1-shot | Mini 5-shot | Tiered 1-shot | Tiered 5-shot
SimpleShot [52] | 64.30 ± 0.20 | 81.48 ± 0.14 | 71.26 ± 0.21 | 86.59 ± 0.15
TPN [29] | 55.51 ± 0.86 | 69.86 ± 0.65 | 59.91 ± 0.94 | 73.30 ± 0.75
TEAM [36] | 60.07 (N.A.) | 75.90 (N.A.) | - | -
EGNN + trans. [22] | - | 76.37 (N.A.) | - | 80.15 (N.A.)
Trans. Fine-Tuning [9] | 65.73 ± 0.68 | 78.40 ± 0.52 | 73.34 ± 0.71 | 85.50 ± 0.50
Trans-mean-sub | 65.58 ± 0.20 | 81.45 ± 0.14 | 73.49 ± 0.21 | 86.56 ± 0.15
Trans-mean-sub(*) | 65.88 ± 0.20 | 82.20 ± 0.14 | 73.75 ± 0.21 | 87.16 ± 0.15
PCA | 70.53 ± 0.25 | 80.71 ± 0.16 | 80.07 ± 0.25 | 86.42 ± 0.17
ICA | 72.10 ± 0.25 | 81.85 ± 0.16 | 80.82 ± 0.25 | 86.97 ± 0.17
BKM | 72.05 ± 0.24 | 80.34 ± 0.17 | 79.82 ± 0.25 | 85.67 ± 0.18
PCA + BKM | 75.11 ± 0.26 | 82.24 ± 0.17 | 83.19 ± 0.25 | 87.83 ± 0.17
ICA + BKM | 75.79 ± 0.26 | 82.83 ± 0.16 | 83.39 ± 0.25 | 88.00 ± 0.17
MSP | 71.39 ± 0.27 | 82.67 ± 0.15 | 76.01 ± 0.27 | 87.13 ± 0.15
PCA + MSP | 76.31 ± 0.26 | 84.54 ± 0.14 | 84.06 ± 0.25 | 89.13 ± 0.15
ICA + MSP | 77.06 ± 0.26 | 84.99 ± 0.14 | 84.29 ± 0.25 | 89.31 ± 0.15
Table 1: Transductive setting (accuracy %)

4.3 Semi-supervised FSL setting

In this section we evaluate our proposed approaches in the semi-supervised FSL setting. In this setting, an additional set U of unlabeled samples accompanies the test task T. In U we usually expect to have additional samples from the target class distribution of T, possibly mixed with unrelated samples from some number of distracting classes (please see section 4.4.2 for an ablation on this). In Table 2 we summarize the performance of our proposed TAFSSL-based approaches and compare them to the SOTA semi-supervised FSL methods of [39, 29, 25, 2]. In addition, we also present results for a varying number of additional unlabeled samples in U (where available). As can be seen from Table 2, in the semi-supervised setting the TAFSSL-based approaches outperform all competing methods by large margins of accuracy gain on both the Mini and the Tiered benchmarks, in both the 1-shot and the 5-shot settings. Interestingly, just as for transductive FSL, for semi-supervised FSL the ICA+MSP approach is the best performing.

Method | # Unlabeled | Mini 1-shot | Mini 5-shot | Tiered 1-shot | Tiered 5-shot
TPN [29] | 360 | 52.78 ± 0.27 | 66.42 ± 0.21 | - | -
PSN [2] | 100 | - | 68.12 ± 0.67 | - | 71.15 ± 0.67
TPN [29] | 1170 | - | - | 55.74 ± 0.29 | 71.01 ± 0.23
LST [25] | 30 | 65.00 ± 1.90 | - | 75.40 ± 1.60 | -
SKM [39] | 100 | 62.10 (N.A.) | 73.60 (N.A.) | 68.60 (N.A.) | 81.00 (N.A.)
TPN [29] | 100 | 62.70 (N.A.) | 74.20 (N.A.) | 72.10 (N.A.) | 83.30 (N.A.)
LST [25] | 50 | - | 77.80 ± 0.80 | - | 83.40 ± 0.80
LST [25] | 100 | 70.10 ± 1.90 | 78.70 ± 0.80 | 77.70 ± 1.60 | 85.20 ± 0.80
ICA | 30 | 72.00 ± 0.24 | 81.31 ± 0.16 | 80.24 ± 0.24 | 86.57 ± 0.17
ICA | 50 | 72.66 ± 0.24 | 81.96 ± 0.16 | 80.86 ± 0.24 | 87.03 ± 0.17
ICA | 100 | 72.80 ± 0.24 | 82.27 ± 0.16 | 80.91 ± 0.25 | 87.14 ± 0.17
ICA + BKM | 30 | 75.70 ± 0.22 | 83.59 ± 0.14 | 82.97 ± 0.23 | 88.34 ± 0.15
ICA + BKM | 50 | 76.46 ± 0.22 | 84.36 ± 0.14 | 83.51 ± 0.22 | 88.81 ± 0.15
ICA + BKM | 100 | 76.83 ± 0.22 | 84.83 ± 0.14 | 83.73 ± 0.22 | 88.95 ± 0.15
ICA + MSP | 30 | 78.55 ± 0.25 | 84.84 ± 0.14 | 85.04 ± 0.24 | 88.94 ± 0.15
ICA + MSP | 50 | 79.58 ± 0.25 | 85.41 ± 0.13 | 85.75 ± 0.24 | 89.32 ± 0.15
ICA + MSP | 100 | 80.11 ± 0.25 | 85.78 ± 0.13 | 86.00 ± 0.23 | 89.39 ± 0.15
Table 2: Semi-supervised setting (accuracy %). For clarity, results are sorted in increasing order of 5-shot "Mini" performance where available, and of 1-shot "Mini" otherwise

4.4 Ablation study

Here we describe the ablation experiments analyzing the different design choices and parameters of the proposed approaches, and of the problem setting itself.

4.4.1 Number of queries in transductive FSL.

Since the unlabeled data in transductive FSL is composed entirely of the query samples, the size of the query set in the meta-testing episodes affects the performance. To test this, we evaluated the proposed ICA-based TAFSSL methods, as well as two baselines, namely SimpleShot [52] and its adaptation to the transductive setting, "trans-mean-sub*" (sub). All the methods were tested while varying the number of queries. The results of this ablation on both the Tiered and Mini benchmarks are shown in Figure 2. As can be seen from the figure, already for a small number of queries a substantial gap can be observed (on both benchmarks) between the proposed best-performing ICA+MSP technique and the best of the baselines.

Figure 2: Number of queries in transductive FSL setting: (a) miniImageNet (Mini); (b) tieredImageNet (Tiered)

4.4.2 Out of distribution noise (distraction classes) in unlabeled data.

In many applications, the unlabeled data may become contaminated with samples "unrelated" to the few-shot task's target class distribution. This situation is most likely to arise in the semi-supervised FSL setting, as in transductive FSL the unlabeled samples are the queries, and unless we are interested in an open-set FSL mode (to the best of our knowledge not yet explored), these are commonly expected to belong only to the target class distribution. In the semi-supervised FSL literature [39, 29, 25], this type of noise is evaluated by adding random samples from random "distracting" classes to the unlabeled set. In Figure 3 we compare our proposed ICA-based TAFSSL approaches to the SOTA semi-supervised FSL methods [39, 29, 25]. Varying the number of distracting classes, we see that a sizable accuracy gap is maintained between the top TAFSSL method and the top baseline across all the tested noise levels.

Figure 3: Noise: the figure shows the effect of unlabeled data noise on the performance. Plots for LST [25], TPN [29], and SKM [39] are extrapolated from their original publications. (a) miniImageNet (Mini); (b) tieredImageNet (Tiered)

4.4.3 The number of TAFSSL sub-space dimensions.

An important parameter of TAFSSL is the number of dimensions of the sub-space it selects. In Figure 4 we explore the effect of the number of chosen dimensions in ICA-based TAFSSL on both the Mini and the Tiered benchmarks. As can be seen from the figure, the optimal number of dimensions for the ICA-based TAFSSL approaches is , and this number is consistent between the test and validation sets as well as between the two benchmarks. Similarly, using validation, the optimal dimension for PCA-based TAFSSL was found to be (also consistent across the two benchmarks).
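The role of the dimension parameter can be illustrated with a minimal PCA-based TAFSSL sketch: the sub-space is fit on the union of the task's support and query features, and both sets are projected into it. A plain NumPy SVD stands in here for the actual implementation, and `n_dims` would be chosen on the validation set as described above:

```python
import numpy as np

def tafssl_pca_project(support, queries, n_dims=10):
    """Project support and query features onto the task's top n_dims
    principal directions (fit on support + query jointly)."""
    x = np.concatenate([support, queries], axis=0)
    mean = x.mean(axis=0)
    # principal directions = leading right-singular vectors of centered data
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    basis = vt[:n_dims].T                        # (feat_dim, n_dims)
    return (support - mean) @ basis, (queries - mean) @ basis

support = np.random.randn(5, 640)    # 5-way 1-shot support features
queries = np.random.randn(75, 640)   # 15 queries per class
s_low, q_low = tafssl_pca_project(support, queries, n_dims=10)
```

Because the basis is re-fit per task, the selected dimensions adapt to each episode rather than being fixed once for the whole benchmark.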

4.4.4 Backbone architectures.

The choice of backbone has turned out to be an important factor for the performance of FSL methods [4, 52]. In Table 3 we evaluate the performance of one of the proposed TAFSSL approaches, namely PCA+MSP, using different backbones pre-trained on the training set to compute the base feature space . We used the -shot transductive FSL setting on both the Mini and Tiered benchmarks for this evaluation. As can be seen from the table, larger backbones produce better performance for the TAFSSL approach. In addition, we list the reported performance of the competing SOTA transductive FSL methods in the same table for a direct comparison using the same backbones. As can be seen, an accuracy advantage of above is maintained by our proposed approach over the top previous method using the corresponding WRN architecture.

Method                         Backbone    Mini 1-shot     Tiered 1-shot
TPN [29]                       Conv-4      55.51 ± 0.86    59.91 ± 0.94
TPN [29]                       ResNet-12   59.46 (N.A.)    -
Transductive Fine-Tuning [9]   WRN         65.73 ± 0.68    73.34 ± 0.71
PCA + MSP                      Conv-4      56.63 ± 0.27    60.27 ± 0.29
PCA + MSP                      ResNet-10   70.93 ± 0.28    76.27 ± 0.28
PCA + MSP                      ResNet-18   73.73 ± 0.27    80.60 ± 0.27
PCA + MSP                      WRN         73.72 ± 0.27    81.61 ± 0.26
PCA + MSP                      DenseNet    76.31 ± 0.26    84.06 ± 0.25
Table 3: Backbone comparison. The -shot transductive FSL setting for miniImageNet (Mini) and tieredImageNet (Tiered) was used for this comparison.
Figure 4: ICA dimension vs accuracy: (a) miniImageNet (Mini) (b) tieredImageNet (Tiered)

4.4.5 Unbalanced (long-tail) test classes distribution in unlabeled data.

In all previous transductive FSL works, the test tasks (episodes) were balanced in terms of the number of queries corresponding to each of the test classes. While this is fine for experimental evaluation purposes, in practical applications there is no guarantee that the bulk of queries sent for offline evaluation will be balanced in terms of classes; in fact, it is more likely to exhibit some skew. To test the effect of query set skew (lack of balance) in the number of query samples per class, we evaluated the proposed ICA-based TAFSSL approaches, as well as the SimpleShot [52] and its transductive adaptation "trans-mean-sub*" (sub) baselines, under varying levels of query set skew. The level of skew was controlled through the so-called "unbalanced factor" parameter : in each test episode, query samples were randomly chosen for each class (here refers to a uniform distribution). Figure 5 shows the effect of varying from to ; at the extreme setting (), above a factor of skew is possible between the classes in terms of the number of associated queries. Nevertheless, as can be seen from the figure, the effect of the lack of balance on the performance of the TAFSSL-based approaches is minimal, leading to at most performance loss at . Since no prior work offered a similar evaluation design, we believe the proposed protocol may become an additional important tool for evaluating transductive FSL methods under lack of query set balance in the future.
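Under one plausible reading of this protocol (the exact distribution bounds are elided above), the unbalanced per-class query counts can be sketched as follows; the parameter name and range here are illustrative assumptions:

```python
import numpy as np

def unbalanced_query_counts(n_way=5, unbalanced_factor=10, rng=None):
    """Draw each class's query count uniformly from
    {1, ..., unbalanced_factor}: at large factors one class may receive
    many times more queries than another within the same episode."""
    rng = rng or np.random.default_rng(0)
    return rng.integers(1, unbalanced_factor + 1, size=n_way)

counts = unbalanced_query_counts(n_way=5, unbalanced_factor=10)
```

The query set of an episode is then assembled with `counts[i]` queries for class `i`, instead of the usual fixed per-class count.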

Figure 5: Unbalanced query sets: (a) miniImageNet (Mini); (b) tieredImageNet (Tiered)

5 Summary and Conclusions

In this paper we have highlighted an additional factor important for FSL classification performance: Feature Sub-Space Learning (FSSL), and specifically its task-adaptive variant (TAFSSL). We have explored different methods, and combinations thereof, for benefiting from TAFSSL in few-shot classification, and have shown great promise for this kind of technique by achieving large-margin improvements over the transductive and semi-supervised FSL state-of-the-art, as well as over the more classical FSL that does not use additional unlabeled data, thus highlighting the benefit of the latter. Potential future work directions include: incorporating TAFSSL into the meta-training (pre-training) process, e.g. by propagating training-episode gradients through PyTorch PCA/ICA implementations and the proposed BKM/MSP clustering techniques; exploring non-linear TAFSSL variants, e.g. kernel TAFSSL or a small DNN; further exploring the effect of TAFSSL in any-shot learning and the significance of the "way" parameter of the task; and exploring the benefits of TAFSSL in cross-domain few-shot learning, where the FSL backbone pre-training occurs in a visual domain different from the one the test classes are sampled from.
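As a concrete illustration of the first direction, PyTorch's torch.pca_lowrank is differentiable, so an episode loss computed in the sub-space could in principle back-propagate through the projection step. This is only a sketch of the idea, not an implementation of meta-trained TAFSSL:

```python
import torch

def differentiable_pca_project(feats, n_dims=10):
    """Project episode features onto their top n_dims principal directions
    while keeping the operation on the autograd graph."""
    centered = feats - feats.mean(dim=0, keepdim=True)
    # torch.pca_lowrank is built from differentiable SVD primitives
    _, _, v = torch.pca_lowrank(centered, q=n_dims, center=False)
    return centered @ v  # (n_samples, n_dims)

x = torch.randn(80, 640, requires_grad=True)  # backbone features of one episode
z = differentiable_pca_project(x)
z.pow(2).sum().backward()                     # gradients reach the features
```

A meta-training loop would replace the dummy loss with the episode's classification loss computed in the projected sub-space.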


  • [1] A. Alfassy, L. Karlinsky, A. Aides, J. Shtok, S. Harary, R. Feris, R. Giryes, and A. M. Bronstein (2019) LaSO: Label-Set Operations networks for multi-label few-shot learning. In CVPR, Cited by: §2.
  • [2] A. Anonymous Projective Sub-Space Networks For Few-Shot Learning. In ICLR 2019 OpenReview, External Links: Link Cited by: §2, §2, §4.3, Table 2, §4.
  • [3] A. Antoniou, A. Storkey, and H. Edwards (2017) Data Augmentation Generative Adversarial Networks. arXiv:1711.04340. External Links: Link Cited by: §2.
  • [4] W. Chen, Y. Liu, Z. Kira, Y. Wang, and J. Huang (2019) A Closer Look At Few-Shot Classification. In ICLR, Cited by: §1, §2, §4.4.4.
  • [5] Z. Chen, Y. Fu, Y. Zhang, Y. Jiang, X. Xue, and L. Sigal (2019) Multi-Level Semantic Feature Augmentation for One-Shot Learning. IEEE Transactions on Image Processing 28 (9), pp. 4594–4605. External Links: Document, ISSN 1057-7149 Cited by: §2.
  • [6] P. Comon (1994) Independent component analysis, A new concept?. Technical report Vol. 36. Cited by: §3.1.1.
  • [7] E. D. Cubuk, B. Zoph, D. Mané, V. Vasudevan, and Q. V. Le AutoAugment: Learning Augmentation Policies from Data. External Links: Link Cited by: §2.
  • [8] A. Devos and M. Grossglauser (2019) Subspace Networks for Few-shot Classification. Technical report Cited by: §2.
  • [9] G. S. Dhillon, P. Chaudhari, A. Ravichandran, and S. Soatto (2019) A Baseline For Few-Shot Image Classification. Technical report Cited by: §2, §4.2, Table 1, Table 3, §4.
  • [10] A. Dosovitskiy, J. T. Springenberg, M. Tatarchenko, and T. Brox (2017) Learning to Generate Chairs, Tables and Cars with Convolutional Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (4), pp. 692–705. External Links: ISBN 9781467369640, Document, ISSN 01628828 Cited by: §2.
  • [11] S. Doveh, E. Schwartz, C. Xue, R. Feris, A. Bronstein, R. Giryes, and L. Karlinsky (2019) MetAdapt: Meta-Learned Task-Adaptive Architecture for Few-Shot Classification. Technical report Cited by: §1.
  • [12] N. Dvornik, C. Schmid, and J. Mairal (2019) Diversity with Cooperation: Ensemble Methods for Few-Shot Classification. The IEEE International Conference on Computer Vision (ICCV). External Links: Link Cited by: §2.
  • [13] C. Finn, P. Abbeel, and S. Levine (2017) Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. arXiv:1703.03400. External Links: Link, ISSN 1938-7228 Cited by: §2.
  • [14] K. Fukunaga and L. D. Hostetler (1975) The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition. IEEE Transactions on Information Theory 21 (1), pp. 32–40. External Links: Document, ISSN 15579654 Cited by: §3.2.
  • [15] V. Garcia and J. Bruna (2017) Few-Shot Learning with Graph Neural Networks. arXiv:1711.04043, pp. 1–13. External Links: Link Cited by: §2, §2.
  • [16] S. Gidaris, A. Bursuc, N. Komodakis, P. Pérez, and M. Cord (2019-06) Boosting Few-Shot Visual Learning with Self-Supervision. External Links: Link Cited by: §1.
  • [17] K. Guu, T. B. Hashimoto, Y. Oren, and P. Liang (2017) Generating Sentences by Editing Prototypes. Arxiv:1709.08878. External Links: Link Cited by: §2.
  • [18] B. Hariharan and R. Girshick (2017) Low-shot Visual Recognition by Shrinking and Hallucinating Features. IEEE International Conference on Computer Vision (ICCV). External Links: Link Cited by: §2.
  • [19] K. He, X. Zhang, S. Ren, and J. Sun (2015) Deep Residual Learning for Image Recognition. arXiv:1512.03385. External Links: Link Cited by: §3.1.
  • [20] G. Huang, Z. Liu, L. v. d. Maaten, and K. Q. Weinberger (2017) Densely Connected Convolutional Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269. External Links: Link, ISBN 978-1-5386-0457-1, Document, ISSN 1063-6919 Cited by: §2, §3.1.
  • [21] X. Jiang, M. Havaei, F. Varno, and G. Chartrand (2019) Learning To Learn With Conditional Class Dependencies. pp. 1–11. Cited by: §2.
  • [22] J. Kim, T. Kim, S. Kim, and C. D. Yoo Edge-Labeling Graph Neural Network for Few-shot Learning. Technical report Cited by: §2, Table 1, §4.
  • [23] K. Lee, S. Maji, A. Ravichandran, S. Soatto, W. Services, U. C. San Diego, and U. Amherst (2019) Meta-Learning with Differentiable Convex Optimization. In CVPR, External Links: Link Cited by: §1, §2, §4.1.
  • [24] H. Li, D. Eigen, S. Dodge, M. Zeiler, and X. Wang (2019) Finding Task-Relevant Features for Few-Shot Learning by Category Traversal. 1. External Links: Link Cited by: §2.
  • [25] X. Li, Q. Sun, Y. Liu, S. Zheng, Q. Zhou, T. Chua, and B. Schiele (2019-06) Learning to Self-Train for Semi-Supervised Few-Shot Classification. External Links: Link Cited by: §1, §2, Figure 3, §4.3, §4.4.2, Table 2, §4.
  • [26] Z. Li, F. Zhou, F. Chen, and H. Li (2017) Meta-SGD: Learning to Learn Quickly for Few-Shot Learning. arXiv:1707.09835. External Links: Link Cited by: §2.
  • [27] S. Lim, I. Kim, T. Kim, C. Kim, K. Brain, and S. Kim (2019) Fast AutoAugment. Technical report Cited by: §2.
  • [28] T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: Common objects in context. In Lecture Notes in Computer Science, Vol. 8693 LNCS, pp. 740–755. External Links: ISBN 978-3-319-10601-4, Document, ISSN 16113349 Cited by: §1.
  • [29] Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. J. Hwang, and Y. Yang (2019) Learning To Propagate Labels: Transductive Propagation Network For Few-Shot Learning. External Links: ISBN 1805.10002v5 Cited by: §1, §2, Figure 3, §4.2, §4.3, §4.4.2, Table 1, Table 2, Table 3, §4.
  • [30] S. P. Lloyd (1982) Least squares quantization in PCM. IEEE Transactions on Information Theory 28, pp. 129–137. External Links: Link Cited by: §3.2.
  • [31] T. Munkhdalai and H. Yu (2017) Meta Networks. arXiv:1703.00837. External Links: Link, Document, ISSN 1938-7228 Cited by: §2.
  • [32] A. Nakamura and T. Harada Revisiting Fine-Tuning for Few-Shot Learning. Technical report Cited by: §1.
  • [33] B. N. Oreshkin, P. Rodriguez, and A. Lacoste (2018-05) TADAM: Task dependent adaptive metric for improved few-shot learning. NeurIPS. External Links: Link Cited by: §2.
  • [34] D. Park and D. Ramanan (2015) Articulated pose estimation with tiny synthetic videos. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 58–66. External Links: ISBN 9781467367592, Document, ISSN 21607516 Cited by: §2.
  • [35] K. Pearson (1901-11) On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2 (11), pp. 559–572. External Links: Document, ISSN 1941-5982 Cited by: §3.1.1.
  • [36] L. Qiao, Y. Shi, J. Li, Y. Wang, T. Huang, and Y. Tian (2019) Transductive Episodic-Wise Adaptive Metric for Few-Shot Learning. External Links: Link Cited by: §2, Table 1, §4.
  • [37] S. Ravi and H. Larochelle (2017) Optimization As a Model for Few-Shot Learning. ICLR, pp. 1–11. External Links: Link Cited by: §2.
  • [38] S. Reed, Y. Chen, T. Paine, A. van den Oord, S. M. A. Eslami, D. Rezende, O. Vinyals, and N. de Freitas (2018) Few-shot autoregressive density estimation: towards learning to learn distributions. arXiv:1710.10304 (2016), pp. 1–11. Cited by: §2.
  • [39] M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel (2018-03) Meta-Learning for Semi-Supervised Few-Shot Classification. ICLR. External Links: Link Cited by: §1, §1, §2, §3.2, Figure 3, §4.1, §4.3, §4.4.2, Table 2, §4.
  • [40] O. Rippel, M. Paluri, P. Dollar, and L. Bourdev (2015) Metric Learning with Adaptive Density Discrimination. arXiv:1511.05939, pp. 1–15. External Links: Link Cited by: §2.
  • [41] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015-09) ImageNet Large Scale Visual Recognition Challenge. IJCV. External Links: Link Cited by: §1, §4.1, §4.1.
  • [42] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell (2018-07) Meta-Learning with Latent Embedding Optimization. In ICLR, External Links: Link Cited by: §2.
  • [43] K. Saito, D. Kim, S. Sclaroff, T. Darrell, and K. Saenko (2019-04) Semi-supervised Domain Adaptation via Minimax Entropy. In ICCV, External Links: Link Cited by: §2.
  • [44] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap (2016) Meta-Learning with Memory-Augmented Neural Networks. Journal of Machine Learning Research 48 (Proceedings of The 33rd International Conference on Machine Learning), pp. 1842–1850. External Links: ISBN 9781617796029, Document, ISSN 19449224 Cited by: §2.
  • [45] E. Schwartz, L. Karlinsky, R. Feris, R. Giryes, and A. M. Bronstein (2019) Baby steps towards few-shot learning with multiple semantics. pp. 1–11. External Links: Link Cited by: §2.
  • [46] E. Schwartz, L. Karlinsky, J. Shtok, S. Harary, M. Marder, A. Kumar, R. Feris, R. Giryes, and A. M. Bronstein (2018) Delta-Encoder: an Effective Sample Synthesis Method for Few-Shot Object Recognition. Neural Information Processing Systems (NIPS). External Links: Link Cited by: §2.
  • [47] J. Snell, K. Swersky, and R. Zemel (2017) Prototypical Networks for Few-shot Learning. In NIPS, External Links: Link Cited by: §1, §1, §2, §3.1, §3.2.
  • [48] H. Su, C. R. Qi, Y. Li, and L. J. Guibas (2015) Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views. IEEE International Conference on Computer Vision (ICCV), pp. 2686–2694. Cited by: §2.
  • [49] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales Learning to Compare: Relation Network for Few-Shot Learning. External Links: Link Cited by: §2.
  • [50] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales (2017-11) Learning to Compare: Relation Network for Few-Shot Learning. arXiv:1711.06025. External Links: Link Cited by: §2.
  • [51] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra (2016) Matching Networks for One Shot Learning. NIPS. External Links: Link, ISBN 9781467369640, Document, ISSN 10636919 Cited by: §1, §1, §2, §3.1, §4.1, §4.
  • [52] Y. Wang, W. Chao, K. Q. Weinberger, and L. van der Maaten (2019-11) SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning. External Links: Link Cited by: §1, §1, §2, §3.1, §3.3, §4.2, §4.4.1, §4.4.4, §4.4.5, Table 1, §4.
  • [53] Y. Wang, R. Girshick, M. Hebert, and B. Hariharan (2018) Low-Shot Learning from Imaginary Data. arXiv:1801.05401. External Links: Link Cited by: §2.
  • [54] K. Q. Weinberger and L. K. Saul (2009) Distance Metric Learning for Large Margin Nearest Neighbor Classification. The Journal of Machine Learning Research 10, pp. 207–244. External Links: ISBN 1532-4435, Document, ISSN 1532-4435 Cited by: §2.
  • [55] C. Xing, N. Rostamzadeh, B. N. Oreshkin, and P. O. Pinheiro Adaptive Cross-Modal Few-Shot Learning. Technical report External Links: Link Cited by: §2.
  • [56] A. Yu and K. Grauman (2017) Semantic Jitter: Dense Supervision for Visual Comparisons via Synthetic Images. Proceedings of the IEEE International Conference on Computer Vision 2017-Octob, pp. 5571–5580. External Links: ISBN 9781538610329, Document, ISSN 15505499 Cited by: §2.
  • [57] J. Zhang, C. Zhao, B. Ni, M. Xu, and X. Yang (2019) Variational Few-Shot Learning. In IEEE International Conference on Computer Vision (ICCV), Cited by: §2.
  • [58] F. Zhou, B. Wu, and Z. Li (2018-02) Deep Meta-Learning: Learning to Learn in the Concept Space. Technical report External Links: Link Cited by: §2.