How to Train Your MAML to Excel in Few-Shot Classification

06/30/2021 · by Han-Jia Ye, et al. · Nanjing University and The Ohio State University

Model-agnostic meta-learning (MAML) is arguably the most popular meta-learning algorithm nowadays, given its flexibility to incorporate various model architectures and to be applied to different problems. Nevertheless, its performance on few-shot classification is far behind many recent algorithms dedicated to the problem. In this paper, we point out several key facets of how to train MAML to excel in few-shot classification. First, we find that a large number of gradient steps are needed for the inner loop update, which contradicts the common usage of MAML for few-shot classification. Second, we find that MAML is sensitive to the permutation of class assignments in meta-testing: for a few-shot task of N classes, there are exponentially many ways to assign the learned initialization of the N-way classifier to the N classes, leading to an unavoidably huge variance. Third, we investigate several ways for permutation invariance and find that learning a shared classifier initialization for all the classes performs the best. On benchmark datasets such as MiniImageNet and TieredImageNet, our approach, which we name UNICORN-MAML, performs on a par with or even outperforms state-of-the-art algorithms, while keeping the simplicity of MAML without adding any extra sub-networks.


1 Introduction

Meta-learning, or learning to learn, is a sub-field of machine learning that attempts to search for the best learning strategy as learning experience increases Vilalta and Drissi (2002); Thrun and Pratt (2012); Lemke et al. (2015); Vanschoren (2018); Finn (2018). Recent years have witnessed an abundance of new approaches to meta-learning Finn and Levine (2018); Finn et al. (2017a); Vinyals et al. (2016); Rusu et al. (2019); Triantafillou et al. (2020); Rajeswaran et al. (2019); Lee et al. (2019); Mishra et al. (2018); Munkhdalai and Yu (2017); Ritter et al. (2018); Santoro et al. (2016); Nichol et al. (2018), developed in various areas including few-shot learning Ravi and Larochelle (2017); Snell et al. (2017); Wang and Hebert (2016); Ye et al. (2020a, b); Sung et al. (2018); Zhang et al. (2018a); Wang et al. (2018b), optimization Andrychowicz et al. (2016); Wichrowska et al. (2017); Li and Malik (2017); Bello et al. (2017), reinforcement and imitation learning Stadie et al. (2018); Frans et al. (2018); Wang et al. (2017a); Duan et al. (2016, 2017); Finn et al. (2017b); Yu et al. (2018), unsupervised learning Garg and Kalai (2018); Metz et al. (2019); Edwards and Storkey (2017); Reed et al. (2018), continual learning Riemer et al. (2019); Kaiser et al. (2017); Al-Shedivat et al. (2018); Clavera et al. (2019), transfer and multi-task learning Motiian et al. (2017); Balaji et al. (2018); Ying et al. (2018); Zhang et al. (2018b); Li et al. (2019, 2018), and active learning Ravi and Larochelle (2018); Sharma et al. (2018); Bachman et al. (2017); Pang et al. (2018), among others. Specifically, meta-learning has demonstrated the capability to generalize learned knowledge to novel tasks, which greatly reduces the need for training data and the time to optimize.

Model-agnostic meta-learning (MAML) Finn et al. (2017a) is one of the most widely studied and applied meta-learning algorithms, thanks to its “model-agnostic” nature and its elegant formulation. Concretely, MAML aims to learn a good model initialization (through the outer loop optimization), which can then be quickly adapted to novel tasks using few data and few gradient updates (through the inner loop optimization). However, in few-shot classification Vinyals et al. (2016); Snell et al. (2017), to which many meta-learning algorithms are dedicated, MAML’s performance has been shown to fall far behind Wang et al. (2019); Chen et al. (2019); Triantafillou et al. (2020), even when paired with a stronger backbone, e.g., a ResNet He et al. (2016) pre-trained on the meta-training set Ye et al. (2020a); Chao et al. (2020); Rusu et al. (2019); Qiao et al. (2018). (The original MAML Finn and Levine (2018) uses a simple 4-layer convolutional network (ConvNet) without pre-training.)

In this paper, we take a closer look at MAML on the few-shot classification problem. The standard setup of few-shot classification using meta-learning involves a meta-training and a meta-testing phase. For example, MAML learns the initialization during meta-training and applies it during meta-testing. In both phases, a meta-learning algorithm receives multiple N-way K-shot tasks. Each task is an N-class classification problem provided with K labeled support images per class. After the (temporary) inner loop optimization using the labeled support images, the resulting model is then evaluated on the query images of the same classes. In the meta-training phase, the loss calculated on the query images is the driving force to optimize the meta-parameters (e.g., the initialization in MAML). We note that the classes seen in the meta-training and meta-testing phases are disjoint.

For an N-way problem, what MAML learns is the initialization of an N-class classifier. Without loss of generality, we denote it by {f_φ, w_1, …, w_N}, where f_φ(x) is the feature extractor applied to image x and w_1, …, w_N are the linear classifiers (one per class). We use θ to represent the collection of meta-parameters {φ, w_1, …, w_N}. For simplicity and fair comparisons, we use (a) a pre-trained ResNet-12 backbone Lee et al. (2019) released by an existing algorithm Ye et al. (2020a), (b) the first-order approximation in calculating outer loop gradients, and (c) the same number of inner loop steps in meta-training and meta-testing.

Figure 1: The unicorn-MAML algorithm. We learn only a single initialization w of the linear classifier, which is then duplicated into N classifiers w_1, …, w_N and updated together with φ in the inner loop according to the support set of the few-shot task. We note that, while every class likely has a semantic meaning, it is unavoidably assigned an arbitrary index during meta-training and meta-testing. unicorn-MAML directly forces the model to be permutation-invariant to such an arbitrary class index assignment.

Our first observation is that MAML needs a large number of inner loop gradient steps. For example, on the benchmark MiniImageNet Vinyals et al. (2016) and TieredImageNet Ren et al. (2018a) datasets, MAML’s performance improves as the number of inner loop steps increases, and it achieves the highest accuracy with many more steps than the conventional usage of MAML Antoniou et al. (2019).

Our second observation is that MAML is inherently sensitive to the permutation of label assignments of each N-way K-shot task. Concretely, when a new task arrives, MAML pairs the learned initialization of each w_c with the corresponding class label c of the task. The issue, however, resides in the “meaning” of a class label c in a task. In the standard setup, each N-way task is created by drawing N classes from a bigger pool of semantic classes (e.g., “dog”, “cat”, “bird”, etc.), followed by an arbitrary label re-assignment into {1, …, N}. In other words, the same set of semantic classes can be re-labeled totally differently and thus be paired with w_1, …, w_N differently. Taking a five-way task for example, there are 120 (i.e., 5!) ways (permutations) to pair the same semantic classes with the linear classifiers. In some of them, a class “dog” may be assigned to w_1; in some others, it will be assigned to w_5. In our experiment, we find that different permutations indeed lead to different meta-testing accuracy. Specifically, if we cherry-pick the best permutation for each meta-testing task, the resulting accuracy over five-way one-shot tasks can be substantially higher on both datasets.

Building upon this observation, we investigate permutation-invariant treatments during meta-testing. First, we search for the best permutation for each task; we explore using the loss or accuracy on the support set as the signal to determine a permutation. Second, we explore ensembles Breiman (1996); Zhou (2012); Dietterich (2000) over (a subset of) all possible permutations, which inevitably needs more computation. Third, we employ forced permutation invariance — during meta-testing, we initialize each w_c by the average of the learned classifiers, (1/N) Σ_{c'} w_{c'}. Overall, we found that (a) it is challenging to find the best permutation per task: the strategies we explore can hardly improve; (b) ensembles help, even if we just pick a subset of permutations; (c) using the averaged initialization does not hurt but can sometimes improve.

We further investigate permutation-invariant treatments during meta-training. (The corresponding treatments are applied in meta-testing as well.) First, we again explore using the loss or accuracy on the support set to decide a permutation. Second, we investigate a fixed order in assigning the semantic classes to {1, …, N}. Third, we employ forced permutation invariance; i.e., we meta-train only a single w. We duplicate w into w_1, …, w_N at the beginning of the inner loop optimization and aggregate the outer loop gradients to optimize w (see Figure 1). We found that the first two treatments hurt MAML, suggesting that the permutations in meta-training may be beneficial, e.g., to prevent the initialization from over-fitting meta-training tasks. To our surprise, the third treatment — learning and testing with a single w — consistently improves MAML. On few-shot tasks in both benchmarks, this approach, which we name unicorn-MAML, is on a par with or outperforms state-of-the-art algorithms, without any extra network modules or learning strategies. We provide further analysis of unicorn-MAML and show that, even with a strong backbone, it is still crucial to perform inner loop updates to the feature extractor in meta-testing, which matches the claim by Arnold and Sha (2021): “Embedding adaptation is still needed for few-shot learning.”

2 Related Work

Training a model under data budgets is important in machine learning, computer vision, and many other application fields, since the costs of collecting data Li and Zhou (2015) and labeling them Huang et al. (2014) are by no means negligible. This is especially the case for deep learning models in visual recognition He et al. (2016); Dosovitskiy et al. (2021); Simonyan and Zisserman (2015); Szegedy et al. (2015); Krizhevsky et al. (2012); Huang et al. (2017), which usually need thousands, millions, or even more images to train Russakovsky et al. (2015); Deng et al. (2009); Guo et al. (2016); Zhou et al. (2017); Thomee et al. (2016); Mahajan et al. (2018); Joulin et al. (2016) in a conventional supervised manner. Different from training a model to predict at the instance level, meta-learning attempts to learn the inductive bias across training tasks Baxter (2000); Vilalta and Drissi (2002). A “meta-model” summarizes the common characteristics of tasks and generalizes them to novel but related tasks Maurer (2009); Maurer et al. (2016); Denevi et al. (2018). Meta-learning has been applied in various fields, including imbalanced learning Wang et al. (2017c); Ren et al. (2018b), data compression Wang et al. (2018a), architecture search Elsken et al. (2019), recommendation systems Vartak et al. (2017), data augmentation Ratner et al. (2017), teaching Fan et al. (2018), and hyper-parameter tuning Franceschi et al. (2017); Probst et al. (2019).

In few-shot learning (FSL), meta-learning is applied to learn the ability of “how to build a classifier using limited data” so that it generalizes across tasks. Such an inductive bias is first learned over few-shot tasks composed of “base” classes, and then evaluated on tasks composed of “novel” classes. For example, few-shot classification can be implemented in a non-parametric way with soft nearest neighbor Vinyals et al. (2016) or nearest center classifiers Snell et al. (2017), so that the feature extractor is learned and acts at the task level. The learned features pull similar instances together and push dissimilar ones far away, such that a test instance can be classified even with a few labeled training examples Koch et al. (2015). Considering the complexity of a hypothesis class, the model training configurations (i.e., hyper-parameters) also serve as a type of inductive bias. Andrychowicz et al. (2016); Ravi and Larochelle (2017) meta-learn the optimization strategy for each task, including the learning rate and update directions. Other kinds of inductive biases are also explored: Hariharan and Girshick (2017); Wang et al. (2018b) learn a data generation prior to augment examples given few images; Dai et al. (2017) extract logical derivations from related tasks; Wang et al. (2017b); Shyam et al. (2017) learn a prior for attending to images.

Model-agnostic meta-learning (MAML) Finn et al. (2017a) proposes another inductive bias, i.e., the model initialization. After the model initialization shared among tasks is meta-trained, the classifier of a new few-shot task can be fine-tuned with several steps of gradient descent from that initial point. The universality of MAML-type updates is proved in Finn and Levine (2018). MAML has been applied in various scenarios, such as uncertainty estimation Finn et al. (2018), robotics control Yu et al. (2018); Clavera et al. (2018), neural machine translation Gu et al. (2018), language generation Huang et al. (2018), etc. Despite the success, there are still problems with MAML. Nichol et al. (2018) handle the computational burden by presenting a family of approaches using first-order approximations; Antoniou et al. (2019) provide a bunch of tricks to train and stabilize the MAML framework; Bernacchia (2021) points out that negative learning rates help in some scenarios.

Since MAML applies a uniform initialization to all the tasks (i.e., the same φ and {w_c}), recent methods explore ways to better incorporate task characteristics. Lee et al. (2019); Bertinetto et al. (2019); Rajeswaran et al. (2019) optimize the linear classifiers (not the feature extractor φ) till convergence in the inner loop; Triantafillou et al. (2020) construct the linear classifiers from class prototypes (i.e., aggregated features per class) so that they are task-aware and need no inner loop updates. Another direction is to enable task-specific initialization Requeima et al. (2019); Vuorio et al. (2019); Yao et al. (2019); Ye et al. (2020b), which often needs additional sub-networks.

Our work is complementary to the above improvements of MAML: we find an inherent permutation issue and conduct a detailed analysis. We then build upon it to improve MAML. We note that, some of the above methods can be invariant to the permutations. For example, LEO Rusu et al. (2019) and ProtoMAML Triantafillou et al. (2020) compute class prototypes to represent each semantic class. However, they need to either introduce additional sub-networks or modify the training objective.

3 MAML for Few-Shot Classification

3.1 Problem definition

Following the literature Vinyals et al. (2016), we define an N-way K-shot task as an N-class classification problem with K labeled support examples per class. The value of K is small, e.g., K = 1 or K = 5. The goal of few-shot learning (FSL) is to construct a classifier using the support set S = {(x_i, y_i)}_{i=1}^{N×K} of N×K examples. Each (x_i, y_i) is a pair of instance and label, where y_i ∈ {1, …, N}. To evaluate the quality of the resulting classifier, each task is usually associated with a query set Q, which is composed of examples of the same N classes. The challenge of FSL is the potential over-fitting or poor generalization problem.

The core idea of meta-learning for FSL is to sample few-shot tasks from a set of “base” classes, of which we instead have ample data per class, and learn how to build a classifier using limited data from these tasks. After this so-called “meta-training” phase, we then proceed to the “meta-testing” phase to tackle the true few-shot tasks composed of “novel” classes that are disjoint from the “base” classes. It is worth noting that the number of total “base” (and “novel”) classes is usually larger than N (see subsection 3.3). Thus, to construct an N-way K-shot task in both phases, one usually first samples N classes randomly from the corresponding set of classes, and re-labels each sampled class by an index c ∈ {1, …, N}. Throughout the paper, we will use base and meta-training classes interchangeably, as well as novel and meta-testing classes.
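
To make the episode construction concrete, the sketch below shows one way to sample an N-way K-shot task with the arbitrary class-index re-assignment described above. It is a minimal illustration in Python; the `images_by_class` dictionary and the default shot/query counts are hypothetical stand-ins, not the authors' actual data pipeline.

```python
# Minimal sketch of N-way K-shot episode sampling with arbitrary re-labeling.
# `images_by_class` maps a semantic class name to a list of images (hypothetical).
import random

def sample_episode(images_by_class, n_way=5, k_shot=1, n_query=15):
    # Draw N distinct semantic classes from the class pool (base or novel).
    sampled = random.sample(sorted(images_by_class), n_way)
    # Arbitrary permutation: each sampled class receives an index c in {0, ..., N-1}.
    random.shuffle(sampled)
    support, query = [], []
    for c, cls in enumerate(sampled):
        imgs = random.sample(images_by_class[cls], k_shot + n_query)
        support += [(x, c) for x in imgs[:k_shot]]
        query += [(x, c) for x in imgs[k_shot:]]
    return support, query  # labels are the arbitrary indices, not the semantic classes
```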

3.2 Model-Agnostic Meta-Learning (MAML)

As mentioned in section 1 and section 2, MAML aims to learn an initialization of an N-way classifier, such that when provided with the support set S of an N-way K-shot task, the classifier can be quickly updated to perform well for the task (i.e., classify the query set Q well). Let us denote the classifier initialization by {f_φ, w_1, …, w_N}, where f_φ is the feature extractor applied to x, w_1, …, w_N are the linear classifiers, and θ = {φ, w_1, …, w_N} represents the collection of both. MAML evaluates θ on S and uses the gradient to update θ into a classifier that is ready for Q. This procedure is called the inner loop optimization, which usually takes M gradient steps:

θ^(m) ← θ^(m−1) − α ∇_{θ^(m−1)} L(S, θ^(m−1)),   m = 1, …, M,   with θ^(0) = θ,   (1)

where L(S, θ) is the loss computed on the instances of S and α is the learning rate (or step size). The cross-entropy loss is commonly used for L. As suggested in the original MAML paper Finn et al. (2017a) and Antoniou et al. (2019), M is considered a small integer (e.g., no more than five). For ease of notation, let us denote the output after M gradient steps by θ_S^(M).
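
As a concrete illustration of Equation 1, the sketch below performs the M-step inner loop on the support set with plain gradient descent. It assumes a functional forward pass `forward_fn(params, x)` over a dictionary of parameter tensors; the function name, the parameter layout, and the default values of `steps` and `lr` are illustrative, not the paper's actual implementation.

```python
# Minimal sketch of the inner loop in Equation 1 (first-order style: gradients
# are detached after each step, so no higher-order graph is kept).
import torch
import torch.nn.functional as F

def inner_loop(params, support_x, support_y, forward_fn, steps=15, lr=0.05):
    # Start from the meta-learned initialization theta.
    adapted = {k: v.detach().clone().requires_grad_(True) for k, v in params.items()}
    for _ in range(steps):
        logits = forward_fn(adapted, support_x)            # model output on S
        loss = F.cross_entropy(logits, support_y)          # L(S, theta^(m-1))
        grads = torch.autograd.grad(loss, list(adapted.values()))
        adapted = {k: (v - lr * g).detach().requires_grad_(True)
                   for (k, v), g in zip(adapted.items(), grads)}
    return adapted                                         # theta_S^(M)
```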

MAML learns such an initialization using the few-shot tasks sampled from the base classes. Let us denote by p(T) the distribution of tasks from the base classes, where each task is a pair of support and query sets (S, Q). MAML aims to minimize the following meta-learning objective w.r.t. θ:

min_θ   E_{(S, Q) ∼ p(T)} [ L(Q, θ_S^(M)) ].   (2)

That is, MAML aims to find a θ shared among tasks, which, after the M-step inner loop optimization using S, can lead to a small classification loss on the query set Q. (We add the subscript S to θ_S^(M) to indicate that it depends on S.) To optimize Equation 2, MAML applies stochastic gradient descent (SGD), but at the task level; i.e., at every iteration it samples a task (S, Q) and computes the gradient w.r.t. θ:

θ ← θ − β ∇_θ L(Q, θ_S^(M)),   (3)

where β is the outer loop learning rate. In practice, one may sample a mini-batch of tasks and compute the mini-batch task gradient w.r.t. θ to update it. This SGD over θ is known as the outer loop optimization of MAML. It is worth noting that calculating the gradient in Equation 3 can be computation- and memory-heavy since it involves a gradient through a gradient (along the inner loop but in a backward order) Finn et al. (2017a). Thus, in practice, it is common to apply the first-order approximation Finn et al. (2017a); Nichol et al. (2018), i.e., ∇_θ L(Q, θ_S^(M)) ≈ ∇_{θ_S^(M)} L(Q, θ_S^(M)).
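
The sketch below illustrates the first-order outer loop update of Equation 3: adapt on the support set, compute the query loss, and apply the query gradient taken at the adapted parameters directly to the initialization θ. `inner_loop` and `forward_fn` are the hypothetical helpers from the previous sketch, and the learning rates are placeholders.

```python
# Minimal sketch of a first-order outer loop step (Equation 3).
import torch
import torch.nn.functional as F

def outer_step(params, task, forward_fn, inner_loop, outer_lr=1e-3, **inner_kwargs):
    (support_x, support_y), (query_x, query_y) = task
    adapted = inner_loop(params, support_x, support_y, forward_fn, **inner_kwargs)
    query_loss = F.cross_entropy(forward_fn(adapted, query_x), query_y)  # L(Q, theta_S^(M))
    # First-order approximation: use the gradient at theta_S^(M) as if it were
    # the gradient at the initialization theta.
    grads = torch.autograd.grad(query_loss, list(adapted.values()))
    with torch.no_grad():
        for (k, v), g in zip(params.items(), grads):
            v -= outer_lr * g
    return query_loss.item()
```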

3.3 Experimental setup

As our paper is heavily driven by empirical observations, we first introduce the two main datasets we experiment with, the neural network architecture we use, and the implementation details.

Datasets. We evaluate on MiniImageNet Vinyals et al. (2016) and TieredImageNet Ren et al. (2018a). MiniImageNet contains 100 semantic classes, each with 600 images. Following Ravi and Larochelle (2017), the classes are split into meta-training/validation/testing sets with 64/16/20 (non-overlapping) classes, respectively. That is, there are 64 base classes and 20 novel classes; the other 16 classes are used for hyper-parameter tuning. In TieredImageNet Ren et al. (2018a), there are in total 608 classes, which are split into meta-training/validation/testing sets with 351/97/160 (non-overlapping) classes, respectively. On average, each class has more than 1,000 images. All images are resized to 84 × 84, following Lee et al. (2019); Ye et al. (2020a).

Training and evaluation. During meta-training, meta-testing, and meta-validation, we sample N-way K-shot tasks from the corresponding classes and images. We follow the literature Snell et al. (2017); Vinyals et al. (2016) to study five-way one-shot and five-way five-shot tasks. As mentioned in subsection 3.1, every time we sample five distinct classes, we randomly assign each of them an index c ∈ {1, …, 5}. During meta-testing, we follow the evaluation protocol in Zhang et al. (2020); Rusu et al. (2019); Ye et al. (2020a) and sample 10,000 tasks. In each task, the query set contains 15 images per class. We report the mean accuracy (in %) and the 95% confidence interval.

Model architecture. We follow Lee et al. (2019) and use a ResNet-12 architecture He et al. (2016) for f_φ (cf. subsection 3.2), which uses wider layers and the DropBlock module Ghiasi et al. (2018) to avoid over-fitting. More specifically, we initialize φ with the weights released by Ye et al. (2020a), which are pre-trained on the entire meta-training set, following recent practice Ye et al. (2020a); Chao et al. (2020); Rusu et al. (2019); Qiao et al. (2018). We randomly initialize w_1, …, w_N.

Implementation details. MAML has several hyper-parameters, and we select them on the meta-validation set. For the outer loop, we group the sampled meta-training tasks into epochs and apply SGD with momentum and weight decay. We use separate outer loop learning rates for φ and {w_c}, both decayed periodically during meta-training. For the inner loop, we have to set the number of gradient steps M and the learning rate α (cf. Equation 1). We provide more discussion in the next section.

4 MAML Needs a Large Number of Inner Loop Gradient Steps

Figure 2: The heat map of MAML’s meta-testing accuracy on MiniImageNet and TieredImageNet (five-way one-shot) w.r.t. the inner loop learning rate α (x-axis) and the number of inner loop gradient updates M (y-axis). We use a black bounding box to denote the best performance for each dataset.

While hyper-parameter tuning is common practice in machine learning, we find that for MAML’s inner loop, the number of gradient updates M (cf. Equation 1) is usually searched in a small range of only a few steps Antoniou et al. (2019). This makes sense according to the motivation of MAML Finn et al. (2017a) — with a small number of gradient steps, the resulting model will have good generalization performance.

In our experiment, we however find that it is crucial to explore a larger M. (For simplicity, we apply the same M in meta-training and meta-testing.) Specifically, we consider a wide range of values of M along with several inner loop learning rates α. We plot the meta-testing accuracy of five-way one-shot tasks on both datasets in Figure 2. (We tune hyper-parameters on the meta-validation set and find that the accuracy there reflects the meta-testing accuracy well; we show the meta-testing accuracy here simply for a direct comparison to results in the literature.) We find that MAML achieves higher and much more stable results (w.r.t. the learning rate) when M is large. On both MiniImageNet and TieredImageNet, the highest accuracy is obtained with a much larger M than the conventional usage. As will be seen in Table 4 and Table 5, these results with a larger M are comparable with many existing algorithms.
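
For completeness, a hedged sketch of the kind of (M, α) grid search behind Figure 2 is given below. The candidate values are illustrative placeholders (the paper's exact grids are not reproduced here), and `meta_train_and_eval` is a hypothetical helper that returns the meta-validation accuracy of a configuration.

```python
# Minimal sketch of an (M, alpha) grid search, visualized later as a heat map.
def grid_search(meta_train_and_eval,
                steps_grid=(1, 5, 10, 15, 20),        # illustrative values of M
                lr_grid=(0.01, 0.05, 0.1, 0.5)):      # illustrative values of alpha
    results = {}
    for m in steps_grid:
        for lr in lr_grid:
            results[(m, lr)] = meta_train_and_eval(inner_steps=m, inner_lr=lr)
    best_cfg = max(results, key=results.get)
    return results, best_cfg
```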

Figure 3: We plot the change of the meta-testing accuracy (over 10,000 tasks) along with the process of inner loop updates based on learned MAML models.

To analyze how such a large M helps MAML, we plot the change of meta-testing accuracy along with the inner loop updates in Figure 3. Specifically, we meta-train with the best-performing M found above, but during meta-testing we show the accuracy from zero up to an even larger number of inner loop updates. In general, the more inner loop updates we perform in meta-testing (even more than the number used in meta-training), the higher the accuracy. This observation matches the few-shot regression study in Finn et al. (2017a).

Also from Figure 3, we find that before any inner loop update, the learned initialization has an accuracy of around 20%, i.e., the accuracy of random classification for five-way tasks. While this may explain why a large number of inner loop updates are needed, the 20% accuracy is a bit surprising and hard to explain at first glance. MAML does learn a set of linear classifier initializations w_1, …, w_N. How could they perform like random guessing?

5 MAML is Sensitive to the Few-Shot Task Label Assignment

Figure 4: The change of the meta-testing accuracy (on MiniImageNet) along with the inner loop updates, based on the learned MAML models and a variant with randomized w_1, …, w_N.

To understand the above observation, we revisit how a few-shot task is generated. According to subsection 3.1 and subsection 3.3, each class index c in an N-way task can be paired with any of the base classes in meta-training or any of the novel classes in meta-testing. For a few-shot task of a specific set of semantic classes (e.g., “dog”, “cat”, …, “bird”), such an arbitrary nature can indeed turn it into a totally different task from MAML’s perspective. That is, the class “dog” may be assigned to index c and paired with w_c in the current task, but to a different index c′ and paired with w_{c′} when it is sampled again. We note that, for a standard five-way task, the same set of five semantic classes can be assigned to {1, …, 5} in 120 (i.e., 5!) different ways.

This permutation in class label assignments explains why we obtain around 20% accuracy using the learned initialization of MAML directly, without inner loop updates. On the one hand, we may sample the same set of semantic classes but in different permutations, so the initialization’s accuracies average out to chance level. On the other hand, since the permutation occurs also in meta-training, each w_c is discouraged from learning knowledge specific to any semantic class. Indeed, we find that the learned initialization also has an accuracy of around 20% on the meta-training set (please see the supplementary material).

This observation leads to two further questions:

  • Are the learned w_1, …, w_N useless, and can they be replaced by random vectors?

  • Do different permutations lead to different meta-testing results after inner loop updates?

Figure 5: The histogram of the average meta-testing accuracy (averaged over the sampled meta-testing tasks) for (a) MiniImageNet 1-shot, (b) MiniImageNet 5-shot, (c) TieredImageNet 1-shot, and (d) TieredImageNet 5-shot. The x-axis corresponds to accuracy; the y-axis corresponds to counts.
Select the permutation by    MiniImageNet                   TieredImageNet
                             1-Shot         5-Shot          1-Shot         5-Shot
None                         64.42 ± 0.20   83.44 ± 0.13    65.72 ± 0.20   84.37 ± 0.16
Initial Support Acc          64.42 ± 0.20   83.95 ± 0.13    65.06 ± 0.20   84.32 ± 0.16
Initial Support Loss         64.42 ± 0.20   83.91 ± 0.13    65.42 ± 0.20   84.23 ± 0.16
Updated Support Acc          64.42 ± 0.20   83.95 ± 0.13    65.01 ± 0.20   84.37 ± 0.16
Updated Support Loss         64.67 ± 0.20   84.05 ± 0.13    65.43 ± 0.20   84.22 ± 0.16
Table 1: The meta-testing accuracy over 2,000 tasks given different permutation selection strategies.

We answer the first question in Figure 4: the learned initializations w_1, …, w_N result in higher accuracy than randomized ones. For the second question, we conduct a detailed experiment as follows.

  • We evaluate a learned MAML over sampled meta-testing tasks.

  • In each task, we consider all 120 permutations of class label assignments, each followed by M inner loop gradient steps. We then apply the updated models to their corresponding permutations, obtain 120 meta-testing accuracies for each task, and sort them in descending order.

  • We summarize the tasks by taking the average over each task’s accuracy at the same rank. Namely, we obtain 120 averaged accuracies, each corresponding to a specific rank within each task. (A sketch of this procedure is given below.)
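
A minimal sketch of this per-rank analysis is given below, assuming a hypothetical helper `evaluate_permutation(task, perm)` that runs the inner loop under one class-index permutation and returns the query accuracy.

```python
# Minimal sketch of the rank-wise averaging over all 120 permutations.
from itertools import permutations
import numpy as np

def per_rank_accuracies(tasks, evaluate_permutation, n_way=5):
    per_task_sorted = []
    for task in tasks:
        accs = [evaluate_permutation(task, perm)
                for perm in permutations(range(n_way))]      # 5! = 120 permutations
        per_task_sorted.append(sorted(accs, reverse=True))   # best to worst
    # One averaged accuracy per rank (best, second best, ..., worst) across tasks.
    return np.mean(np.array(per_task_sorted), axis=0)
```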

We show the histogram of the average meta-testing accuracy in Figure 5. There exists a huge variance. Specifically, the best permutation can be substantially higher than the worst in both one-shot and five-shot tasks. The best permutation is also much higher than vanilla MAML’s results (cf. section 4), which are 64.42%/83.44%/65.72%/84.37%, corresponding to the four sub-figures of Figure 5 from left to right. What is more, the best permutation can easily achieve state-of-the-art accuracy (see Table 4).

Of course, so far we find the best permutation through cherry-picking — by looking at the meta-testing accuracy — so it is like an upper bound. However, if we can (a) develop a strategy to find the best permutation without looking at the query sets’ labels, (b) leverage the variance among permutations, or (c) make MAML permutation-invariant, we may be able to practically improve MAML.

           MiniImageNet                   TieredImageNet
           1-Shot         5-Shot          1-Shot         5-Shot
Vanilla    64.42 ± 0.20   83.44 ± 0.13    65.72 ± 0.24   84.37 ± 0.16
Full       65.50 ± 0.21   84.43 ± 0.13    66.68 ± 0.24   84.83 ± 0.16
Rotated    65.37 ± 0.21   84.40 ± 0.13    66.63 ± 0.24   84.81 ± 0.16
Table 2: The meta-testing accuracy by ensemble over models started with different permutations.
          MiniImageNet    TieredImageNet
1-Shot    64.40 ± 0.21    66.24 ± 0.24
5-Shot    84.24 ± 0.13    84.52 ± 0.16
Table 3: We average the top-layer classifiers w_1, …, w_N and expand the average to N-way during meta-testing.

6 Making MAML Permutation-Invariant in the Meta-Testing Phase

In this section, we investigate ways to make MAML permutation-invariant during meta-testing. That is, we take the same learned MAML as in section 5 without changing the meta-training phase.

The first direction we investigate is to search for the best permutation for each task. As we cannot access query sets’ labels in advance, we explore using the support sets’ data as a proxy. Specifically, we consider choosing the best permutation under which the learned initialization (before inner loop updates) leads to (a) the largest support set accuracy or (b) the smallest support set loss. We further consider two less practical ways, which are to perform inner loop updates for each permutation and re-evaluate (a) and (b). Table 1 summarizes the results: none of the above leads to consistent gains.
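
As an illustration of this first direction, the sketch below picks the permutation whose un-adapted initialization attains the smallest support set loss (the "Initial Support Loss" row of Table 1). `support_loss(task, perm)` is a hypothetical helper that evaluates the initialization under a given assignment of the learned classifiers to the task's class indices.

```python
# Minimal sketch of permutation selection by initial support set loss.
from itertools import permutations

def pick_permutation_by_support_loss(task, support_loss, n_way=5):
    best_perm, best_loss = None, float("inf")
    for perm in permutations(range(n_way)):
        loss = support_loss(task, perm)   # loss of the initialization, no inner loop
        if loss < best_loss:
            best_perm, best_loss = perm, loss
    return best_perm
```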

The second direction we investigate is to perform an ensemble Breiman (1996); Zhou (2012); Dietterich (2000) over the updated models of the permutations. For this direction, instead of permuting the class label assignment of a task, we permute w_1, …, w_N, which is equivalent to the former but easier for aggregating the models. We study two versions: (a) full permutation (i.e., 120 of them in five-way tasks), which is intractable for a larger N; (b) rotated permutation, which cyclically shifts w_1, …, w_N (that is, re-assigning w_c to class ((c + n − 1) mod N) + 1 for each shift n ∈ {0, …, N − 1}), leading to N permutations. Table 2 summarizes the results — the ensemble can consistently improve vanilla MAML. Importantly, even with the rotated version, which has exponentially fewer base models than the full version, the improvements are comparable. We note that this ensemble is quite different from the common practice of augmenting the test data or learning multiple meta-learners.
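
A minimal sketch of the rotated ensemble is given below: the model is adapted once per cyclic shift of the classifier initializations (N models instead of N!), and the query predictions are averaged. `adapt_and_predict` is a hypothetical helper that runs the inner loop for a given ordering and returns query class probabilities.

```python
# Minimal sketch of the rotated-permutation ensemble at meta-test time.
import numpy as np

def rotated_ensemble(task, classifier_inits, adapt_and_predict):
    n = len(classifier_inits)
    probs = []
    for shift in range(n):
        rotated = classifier_inits[shift:] + classifier_inits[:shift]  # cyclic shift
        probs.append(adapt_and_predict(task, rotated))
    return np.mean(probs, axis=0)   # averaged query prediction over the N rotations
```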

The third direction is forced permutation invariance. That is, we initialize each w_c by the average of the learned classifiers, (1/N) Σ_{c'} w_{c'}. By doing so, no matter which permutation we use, the resulting inner loop update is the same. At first glance, this approach makes less sense, as the resulting initialization simply takes chance-level guesses in classification. However, note that according to Figure 3, even the original initialization has roughly chance-level accuracy on average. This approach is further inspired by viewing the permutation in meta-training as a special form of dropout Srivastava et al. (2014). That is, in meta-training, we receive a task with an arbitrary permutation, which can be understood as drawing a permutation at random for the task. In meta-testing, we then take the expectation over this distribution, which essentially leads to the averaged classifier. Table 3 shows the results, which improve vanilla MAML (see Table 2) in three of the four cases.
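
A minimal sketch of the averaged initialization is shown below; `classifier_inits` is a list of the N learned vectors w_1, …, w_N (the variable name is illustrative).

```python
# Minimal sketch of forced permutation invariance at meta-testing: every head
# starts from the mean of the learned classifier initializations.
import torch

def average_classifier_inits(classifier_inits):
    w_bar = torch.stack(classifier_inits, dim=0).mean(dim=0)   # (1/N) sum_c w_c
    return [w_bar.clone() for _ in classifier_inits]           # all N heads identical
```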

ResNet-12                               1-Shot         5-Shot
ProtoMAML Triantafillou et al. (2020)   62.62 ± 0.20   79.24 ± 0.20
MetaOptNet Lee et al. (2019)            62.64 ± 0.35   78.63 ± 0.68
MTL+E3BM Sun et al. (2019)              63.80 ± 0.40   80.10 ± 0.30
RFS-Distill Tian et al. (2020)          64.82 ± 0.60   82.14 ± 0.43
DeepEMD Zhang et al. (2020)             65.91 ± 0.82   82.41 ± 0.56
MATE+MetaOptNet Chen et al. (2020)      62.08 ± 0.64   78.64 ± 0.46
TRAML+AM3 Li et al. (2020a)             67.10 ± 0.54   79.54 ± 0.60
DSN-MR Simon et al. (2020)              64.60 ± 0.72   79.51 ± 0.50
FEAT Ye et al. (2020b)                  66.78 ± 0.20   82.05 ± 0.14
MAML (Our reimplementation)             64.42 ± 0.20   83.44 ± 0.13
MAML-FO                                 63.03 ± 0.20   83.27 ± 0.13
MAML-PM                                 59.65 ± 0.20   79.64 ± 0.13
unicorn-MAML                            65.17 ± 0.20   84.30 ± 0.13
Table 4: 5-way 1-shot and 5-shot classification accuracy and 95% confidence interval on MiniImageNet over 10,000 tasks with a ResNet-12 backbone. “MAML-PM”: we choose the permutation by the minimum initial support set loss in both the meta-training and meta-testing phases. “MAML-FO”: we make the class label assignment deterministic.
ResNet-12                               1-Shot         5-Shot
ProtoNet Snell et al. (2017)            68.23 ± 0.23   84.03 ± 0.16
ProtoMAML Triantafillou et al. (2020)   67.10 ± 0.23   81.18 ± 0.16
MetaOptNet Lee et al. (2019)            65.99 ± 0.72   81.56 ± 0.53
MTL+E3BM Sun et al. (2019)              71.20 ± 0.40   85.30 ± 0.30
RFS-Distill Tian et al. (2020)          69.74 ± 0.72   84.41 ± 0.55
DeepEMD Zhang et al. (2020)             71.52 ± 0.69   86.03 ± 0.49
MATE+MetaOptNet Chen et al. (2020)      71.16 ± 0.87   86.03 ± 0.58
DSN-MR Simon et al. (2020)              67.39 ± 0.82   82.85 ± 0.56
FEAT Ye et al. (2020b)                  70.80 ± 0.23   84.79 ± 0.16
MAML (Our reimplementation)             65.72 ± 0.20   84.37 ± 0.16
MAML-FO                                 64.68 ± 0.20   83.92 ± 0.16
MAML-PM                                 64.35 ± 0.20   82.13 ± 0.16
unicorn-MAML                            69.24 ± 0.20   86.06 ± 0.16
Table 5: 5-way 1-shot and 5-shot classification accuracy and 95% confidence interval on TieredImageNet over 10,000 tasks with a ResNet-12 backbone.
Figure 6: The change of meta-testing accuracy (over 10,000 tasks) along with the process of inner loop updates based on unicorn-MAML, for (a) MiniImageNet 1-shot, (b) MiniImageNet 5-shot, (c) TieredImageNet 1-shot, and (d) TieredImageNet 5-shot. We investigate updating/freezing the feature extractor f_φ.

7 Making MAML Permutation-Invariant in the Meta-Training Phase

We further investigate making both the meta-training and meta-testing phases permutation-invariant. The first direction is again to search for the best permutation for each task. Specifically, we select the order with the minimum initial support set loss in both phases. The second direction we investigate is to make the label assignment deterministic. Concretely, we give each meta-training/validation/testing class an integer index. Whenever we sample N classes, we sort them by their indices and then label them 1 to N in that order.

Our third direction is to apply forced permutation invariance, but this time during meta-training as well. That is, we explore meta-training a single w rather than w_1, …, w_N (we name this method unicorn-MAML). We use this learned initialization of w to initialize each w_c at the beginning of the inner loop gradient updates. In meta-training, we then aggregate the gradients w.r.t. w_1, …, w_N to update w.
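
The sketch below illustrates unicorn-MAML's treatment of the classifier head during one meta-training step: the single vector w is duplicated into N heads before the inner loop, and the query-set gradients of the N heads are summed to update w. `inner_loop_fn` and `query_grads_fn` are hypothetical helpers, and the backbone parameters are represented as a single tensor `phi` for brevity (in practice they are a collection of tensors).

```python
# Minimal sketch of one unicorn-MAML outer step (first-order).
import torch

def unicorn_outer_step(phi, w, task, inner_loop_fn, query_grads_fn,
                       outer_lr=1e-3, n_way=5):
    heads = [w.detach().clone() for _ in range(n_way)]       # duplicate w into N heads
    adapted_phi, adapted_heads = inner_loop_fn(phi, heads, task["support"])
    g_phi, g_heads = query_grads_fn(adapted_phi, adapted_heads, task["query"])
    with torch.no_grad():
        phi -= outer_lr * g_phi                              # update the backbone initialization
        w -= outer_lr * torch.stack(g_heads).sum(dim=0)      # aggregate the N head gradients
    return phi, w
```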

Table 4 and Table 5 summarize the results, together with those of existing few-shot learning algorithms. The first two methods (i.e., MAML-PM and MAML-FO) hurt MAML, suggesting that the permutations in meta-training may be beneficial, e.g., to prevent the initialization from over-fitting meta-training tasks. To our surprise, the third method — learning and testing with a single w — consistently improves MAML. Specifically, on MiniImageNet, unicorn-MAML gains on both one-shot and five-shot tasks, and the latter already achieves state-of-the-art accuracy. On TieredImageNet, unicorn-MAML has even larger improvements on both one-shot and five-shot tasks; the latter, again, already achieves state-of-the-art accuracy. In particular, unicorn-MAML notably outperforms ProtoMAML and MetaOptNet, both of which are permutation-invariant. We note that, similar to the permutations in meta-training, learning a single w prevents any w_c from over-fitting a specific semantic class.

Embedding adaptation is needed. We analyze unicorn-MAML in terms of its inner loop updates during meta-testing, similar to Figure 3. This time, we also investigate updating or freezing the feature extractor f_φ. Figure 6 shows the results on five-way one-shot and five-shot tasks on both datasets. unicorn-MAML’s accuracy again begins at around 20% but rapidly increases along with the inner loop updates. In three out of four cases, adapting the feature extractor is necessary for attaining higher accuracy, even though the backbone has been well pre-trained, which matches the recent claim by Arnold and Sha (2021).

We further evaluate unicorn-MAML on CUB Wah et al. (2011), where unicorn-MAML also achieves promising improvements. (See the supplementary material.)

8 Conclusion

We provide a series of analyses and observations of MAML for few-shot classification, in terms of hyper-parameter tuning and its sensitivity to the permutations inherent in few-shot task generation. With a large number of inner loop gradient steps (in both meta-training and meta-testing), MAML can achieve results comparable to many existing algorithms. By further incorporating a forced permutation-invariant treatment, we present unicorn-MAML, which arrives at state-of-the-art accuracy on five-shot tasks without introducing any extra sub-networks. We hope that unicorn-MAML can serve as a strong baseline for future work in few-shot classification.

Acknowledgment

This research is supported by National Key R&D Program of China (2020AAA0109401), NSFC (61773198,61921006,62006112), NSFC-NRF Joint Research Project under Grant 61861146001, Collaborative Innovation Center of Novel Software Technology and Industrialization, NSF of Jiangsu Province (BK20200313), and the OSU GI Development funds. We are thankful for the generous support of computational resources by Ohio Supercomputer Center and AWS Cloud Credits for Research. We thank Sébastien M.R. Arnold (USC) for helpful discussions.

References

  • A. Afrasiyabi, J. Lalonde, and C. Gagné (2020) Associative alignment for few-shot image classification. In ECCV, pp. 18–35. Cited by: Table F.
  • M. Al-Shedivat, T. Bansal, Y. Burda, I. Sutskever, I. Mordatch, and P. Abbeel (2018) Continuous adaptation via meta-learning in nonstationary and competitive environments. In ICLR, Cited by: §1.
  • M. Andrychowicz, M. Denil, S. G. Colmenarejo, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas (2016) Learning to learn by gradient descent by gradient descent. In NIPS, pp. 3981–3989. Cited by: §1, §2.
  • A. Antoniou, H. Edwards, and A. J. Storkey (2019) How to train your MAML. In ICLR, Cited by: §1, §2, §3.2, §4.
  • S. M. R. Arnold and F. Sha (2021) Embedding adaptation is still needed for few-shot learning. CoRR abs/2104.07255. Cited by: §1, §7.
  • P. Bachman, A. Sordoni, and A. Trischler (2017) Learning algorithms for active learning. In ICML, Cited by: §1.
  • Y. Balaji, S. Sankaranarayanan, and R. Chellappa (2018) MetaReg: towards domain generalization using meta-regularization. In NeurIPS, Cited by: §1.
  • J. Baxter (2000) A model of inductive bias learning. JAIR 12, pp. 149–198. Cited by: §2.
  • I. Bello, B. Zoph, V. Vasudevan, and Q. V. Le (2017) Neural optimizer search with reinforcement learning. In ICML, Cited by: §1.
  • A. Bernacchia (2021) Meta-learning with negative learning rates. In ICLR, Cited by: §2.
  • L. Bertinetto, J. F. Henriques, P. H. S. Torr, and A. Vedaldi (2019) Meta-learning with differentiable closed-form solvers. In ICLR, Cited by: §2.
  • L. Breiman (1996) Bagging predictors. Machine learning. Cited by: §1, §6.
  • W. Chao, H. Ye, D. Zhan, M. Campbell, and K. Q. Weinberger (2020) Revisiting meta-learning as supervised learning. CoRR abs/2002.00573. Cited by: §1, §3.3.
  • W. Chen, Y. Liu, Z. Kira, Y. F. Wang, and J. Huang (2019) A closer look at few-shot classification. In ICLR, Cited by: Table F, §1.
  • X. Chen, Z. Wang, S. Tang, and K. Muandet (2020) MATE: plugging in model awareness to task embedding for meta learning. In NeurIPS, Cited by: Table 4, Table 5.
  • I. Clavera, A. Nagabandi, R. S. Fearing, P. Abbeel, S. Levine, and C. Finn (2018) Learning to adapt: meta-learning for model-based control. CoRR abs/1803.11347. Cited by: §2.
  • I. Clavera, A. Nagabandi, S. Liu, R. S. Fearing, P. Abbeel, S. Levine, and C. Finn (2019) Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. In ICLR, Cited by: §1.
  • W. Dai, S. Muggleton, J. Wen, A. Tamaddoni-Nezhad, and Z. Zhou (2017) Logical vision: one-shot meta-interpretive learning from real images. In ILP, pp. 46–62. Cited by: §2.
  • G. Denevi, C. Ciliberto, D. Stamos, and M. Pontil (2018) Learning to learn around A common mean. In NeurIPS, pp. 10190–10200. Cited by: §2.
  • J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In CVPR, Cited by: §2.
  • T. G. Dietterich (2000) Ensemble methods in machine learning. In International workshop on multiple classifier systems, Cited by: §1, §6.
  • A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. (2021) An image is worth 16x16 words: transformers for image recognition at scale. In ICLR, Cited by: §2.
  • Y. Duan, M. Andrychowicz, B. Stadie, O. J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba (2017) One-shot imitation learning. In NIPS, Cited by: §1.
  • Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel (2016) RL²: fast reinforcement learning via slow reinforcement learning. CoRR abs/1611.02779. Cited by: §1.
  • H. Edwards and A. Storkey (2017) Towards a neural statistician. In ICLR, Cited by: §1.
  • T. Elsken, J. H. Metzen, and F. Hutter (2019) Neural architecture search: A survey. JMLR 20, pp. 55:1–55:21. Cited by: §2.
  • Y. Fan, F. Tian, T. Qin, X. Li, and T. Liu (2018) Learning to teach. In ICLR, Cited by: §2.
  • C. Finn, P. Abbeel, and S. Levine (2017a) Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, pp. 1126–1135. Cited by: §1, §1, §2, §3.2, §3.2, §4, §4.
  • C. Finn and S. Levine (2018) Meta-learning and universality: deep representations and gradient descent can approximate any learning algorithm. In ICLR, Cited by: §1, §2, footnote 1.
  • C. Finn, K. Xu, and S. Levine (2018) Probabilistic model-agnostic meta-learning. In NeurIPS, pp. 9537–9548. Cited by: §2.
  • C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine (2017b) One-shot visual imitation learning via meta-learning. In CoRL, Cited by: §1.
  • C. Finn (2018) Learning to learn with gradients. Ph.D. Thesis, UC Berkeley. Cited by: §1.
  • L. Franceschi, M. Donini, P. Frasconi, and M. Pontil (2017) A bridge between hyperparameter optimization and learning-to-learn. CoRR abs/1712.06283. Cited by: §2.
  • K. Frans, J. Ho, X. Chen, P. Abbeel, and J. Schulman (2018) Meta learning shared hierarchies. In ICLR, Cited by: §1.
  • V. Garg and A. Kalai (2018) Supervising unsupervised learning. In NeurIPS, pp. 4996–5006. Cited by: §1.
  • G. Ghiasi, T. Lin, and Q. V. Le (2018) DropBlock: A regularization method for convolutional networks. In NeurIPS, pp. 10750–10760. Cited by: §3.3.
  • J. Gu, Y. Wang, Y. Chen, V. O. K. Li, and K. Cho (2018) Meta-learning for low-resource neural machine translation. In EMNLP, pp. 3622–3631. Cited by: §2.
  • Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao (2016) MS-Celeb-1M: a dataset and benchmark for large-scale face recognition. In ECCV, pp. 87–102. Cited by: §2.
  • B. Hariharan and R. B. Girshick (2017) Low-shot visual recognition by shrinking and hallucinating features. In ICCV, pp. 3037–3046. Cited by: §2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778. Cited by: §1, §2, §3.3.
  • G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In CVPR, pp. 2261–2269. Cited by: §2.
  • P. Huang, C. Wang, R. Singh, W. Yih, and X. He (2018) Natural language to structured query generation via meta-learning. In ACL, pp. 732–738. Cited by: §2.
  • S. Huang, R. Jin, and Z. Zhou (2014) Active learning by querying informative and representative examples. TPAMI 36 (10), pp. 1936–1949. Cited by: §2.
  • A. Joulin, L. Van Der Maaten, A. Jabri, and N. Vasilache (2016) Learning visual features from large weakly supervised data. In ECCV, pp. 67–84. Cited by: §2.
  • Ł. Kaiser, O. Nachum, A. Roy, and S. Bengio (2017) Learning to remember rare events. In ICLR, Cited by: §1.
  • G. Koch, R. Zemel, and R. Salakhutdinov (2015) Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, Vol. 2. Cited by: §2.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1106–1114. Cited by: §2.
  • K. Lee, S. Maji, A. Ravichandran, and S. Soatto (2019) Meta-learning with differentiable convex optimization. In CVPR, pp. 10657–10665. Cited by: Appendix E, §1, §1, §2, §3.3, §3.3, Table 4, Table 5.
  • C. Lemke, M. Budka, and B. Gabrys (2015) Metalearning: a survey of trends and technologies. Artificial intelligence review 44 (1), pp. 117–130. Cited by: §1.
  • A. Li, W. Huang, X. Lan, J. Feng, Z. Li, and L. Wang (2020a) Boosting few-shot learning with adaptive margin loss. In CVPR, pp. 12573–12581. Cited by: Table 4.
  • D. Li, Y. Yang, Y. Song, and T. M. Hospedales (2018) Learning to generalize: meta-learning for domain generalization. In AAAI, pp. 3490–3497. Cited by: §1.
  • K. Li, Y. Zhang, K. Li, and Y. Fu (2020b) Adversarial feature hallucination networks for few-shot learning. In CVPR, pp. 13470–13479. Cited by: Table F.
  • K. Li and J. Malik (2017) Learning to optimize. In ICLR, Cited by: §1.
  • Y. Li, Y. Yang, W. Zhou, and T. M. Hospedales (2019) Feature-critic networks for heterogeneous domain generalization. In ICML, pp. 3915–3924. Cited by: §1.
  • Y. Li and Z. Zhou (2015) Towards making unlabeled data never hurt. TPAMI 37 (1), pp. 175–188. Cited by: §2.
  • B. Liu, Y. Cao, Y. Lin, Q. Li, Z. Zhang, M. Long, and H. Hu (2020) Negative margin matters: understanding margin in few-shot classification. In ECCV, pp. 438–455. Cited by: Table F.
  • D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. Van Der Maaten (2018) Exploring the limits of weakly supervised pretraining. In ECCV, pp. 181–196. Cited by: §2.
  • A. Maurer, M. Pontil, and B. Romera-Paredes (2016) The benefit of multitask representation learning. JMLR 17, pp. 81:1–81:32. Cited by: §2.
  • A. Maurer (2009) Transfer bounds for linear feature learning. Machine Learning 75 (3), pp. 327–350. Cited by: §2.
  • L. Metz, N. Maheswaranathan, B. Cheung, and J. Sohl-Dickstein (2019) Meta-learning update rules for unsupervised representation learning. In ICLR, Cited by: §1.
  • N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel (2018) A simple neural attentive meta-learner. In ICLR, Cited by: §1.
  • S. Motiian, Q. Jones, S. M. Iranmanesh, and G. Doretto (2017) Few-shot adversarial domain adaptation. In NIPS, pp. 6673–6683. Cited by: §1.
  • T. Munkhdalai and H. Yu (2017) Meta networks. In ICML, Cited by: §1.
  • A. Nichol, J. Achiam, and J. Schulman (2018) On first-order meta-learning algorithms. CoRR abs/1803.02999. Cited by: §1, §2, §3.2.
  • K. Pang, M. Dong, Y. Wu, and T. Hospedales (2018) Meta-learning transferable active learning policies by deep reinforcement learning. CoRR abs/1806.04798. Cited by: §1.
  • P. Probst, A. Boulesteix, and B. Bischl (2019) Tunability: importance of hyperparameters of machine learning algorithms. JMLR 20, pp. 53:1–53:32. Cited by: §2.
  • S. Qiao, C. Liu, W. Shen, and A. L. Yuille (2018) Few-shot image recognition by predicting parameters from activations. In CVPR, pp. 7229–7238. Cited by: §1, §3.3.
  • A. Rajeswaran, C. Finn, S. M. Kakade, and S. Levine (2019) Meta-learning with implicit gradients. In NeurIPS, pp. 113–124. Cited by: §1, §2.
  • A. J. Ratner, H. Ehrenberg, Z. Hussain, J. Dunnmon, and C. Ré (2017) Learning to compose domain-specific transformations for data augmentation. In NIPS, pp. 3236–3246. Cited by: §2.
  • S. Ravi and H. Larochelle (2017) Optimization as a model for few-shot learning. In ICLR, Cited by: §1, §2, §3.3.
  • S. Ravi and H. Larochelle (2018) Meta-learning for batch mode active learning. In ICLR Workshop, Cited by: §1.
  • S. E. Reed, Y. Chen, T. Paine, A. van den Oord, S. M. A. Eslami, D. J. Rezende, O. Vinyals, and N. de Freitas (2018) Few-shot autoregressive density estimation: towards learning to learn distributions. In ICLR, Cited by: §1.
  • M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel (2018a) Meta-learning for semi-supervised few-shot classification. In ICLR, Cited by: §1, §3.3.
  • M. Ren, W. Zeng, B. Yang, and R. Urtasun (2018b) Learning to reweight examples for robust deep learning. In ICML, pp. 4331–4340. Cited by: §2.
  • J. Requeima, J. Gordon, J. Bronskill, S. Nowozin, and R. E. Turner (2019) Fast and flexible multi-task classification using conditional neural adaptive processes. In NeurIPS, pp. 7957–7968. Cited by: Appendix E, §2.
  • M. Riemer, I. Cases, R. Ajemian, M. Liu, I. Rish, Y. Tu, and G. Tesauro (2019) Learning to learn without forgetting by maximizing transfer and minimizing interference. In ICLR, Cited by: §1.
  • S. Ritter, J. X. Wang, Z. Kurth-Nelson, S. M. Jayakumar, C. Blundell, R. Pascanu, and M. Botvinick (2018) Been there, done that: meta-learning with episodic recall. In ICML, pp. 4351–4360. Cited by: §1.
  • O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. Li (2015) ImageNet large scale visual recognition challenge. IJCV 115 (3), pp. 211–252. Cited by: §2.
  • A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell (2019) Meta-learning with latent embedding optimization. In ICLR, Cited by: §1, §1, §2, §3.3, §3.3.
  • A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap (2016) Meta-learning with memory-augmented neural networks. In ICML, pp. 1842–1850. Cited by: §1.
  • S. Sharma, A. Jha, P. Hegde, and B. Ravindran (2018) Learning to multi-task by active sampling. In ICLR, Cited by: §1.
  • P. Shyam, S. Gupta, and A. Dukkipati (2017) Attentive recurrent comparators. In ICML, pp. 3173–3181. Cited by: §2.
  • C. Simon, P. Koniusz, R. Nock, and M. Harandi (2020) Adaptive subspaces for few-shot learning. In CVPR, pp. 4135–4144. Cited by: Table 4, Table 5.
  • K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR, Cited by: §2.
  • J. Snell, K. Swersky, and R. S. Zemel (2017) Prototypical networks for few-shot learning. In NIPS, pp. 4080–4090. Cited by: Table F, §1, §1, §2, §3.3, Table 5.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. JMLR 15 (1), pp. 1929–1958. Cited by: Appendix D, §6.
  • B. Stadie, G. Yang, R. Houthooft, P. Chen, Y. Duan, Y. Wu, P. Abbeel, and I. Sutskever (2018) The importance of sampling in meta-reinforcement learning. In NeurIPS, pp. 9300–9310. Cited by: §1.
  • Q. Sun, Y. Liu, Z. Chen, T. Chua, and B. Schiele (2019) Meta-transfer learning through hard tasks. CoRR abs/1910.03648. Cited by: Table 4, Table 5.
  • F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales (2018) Learning to compare: relation network for few-shot learning. In CVPR, pp. 1199–1208. Cited by: §1.
  • C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In CVPR, pp. 1–9. Cited by: §2.
  • B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L. Li (2016) YFCC100M: the new data in multimedia research. Communications of ACM 59 (2), pp. 64–73. Cited by: §2.
  • S. Thrun and L. Pratt (2012) Learning to learn. Springer Science & Business Media. Cited by: §1.
  • Y. Tian, Y. Wang, D. Krishnan, J. B. Tenenbaum, and P. Isola (2020) Rethinking few-shot image classification: A good embedding is all you need?. In ECCV, pp. 266–282. Cited by: Table 4, Table 5.
  • E. Triantafillou, T. Zhu, V. Dumoulin, P. Lamblin, K. Xu, R. Goroshin, C. Gelada, K. Swersky, P. Manzagol, and H. Larochelle (2020) Meta-dataset: A dataset of datasets for learning to learn from few examples. In ICLR, Cited by: Appendix E, §1, §1, §2, §2, Table 4, Table 5.
  • J. Vanschoren (2018) Meta-learning: a survey. CoRR abs/1810.03548. Cited by: §1.
  • M. Vartak, A. Thiagarajan, C. Miranda, J. Bratman, and H. Larochelle (2017) A meta-learning perspective on cold-start recommendations for items. In NIPS, pp. 6907–6917. Cited by: §2.
  • R. Vilalta and Y. Drissi (2002) A perspective view and survey of meta-learning. Artificial Intelligence Review 18 (2), pp. 77–95. Cited by: §1, §2.
  • O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra (2016) Matching networks for one shot learning. In NIPS, pp. 3630–3638. Cited by: Table F, §1, §1, §1, §2, §3.1, §3.3, §3.3.
  • R. Vuorio, S. Sun, H. Hu, and J. J. Lim (2019) Multimodal model-agnostic meta-learning via task-aware modulation. In NeurIPS, pp. 1–12. Cited by: Appendix E, §2.
  • C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie (2011) The Caltech-UCSD Birds-200-2011 Dataset. Technical report Technical Report CNS-TR-2011-001, California Institute of Technology. Cited by: 3rd item, Appendix C, §7.
  • J. X. Wang, Z. Kurth-Nelson, D. Tirumala, H. Soyer, J. Z. Leibo, R. Munos, C. Blundell, D. Kumaran, and M. Botvinick (2017a) Learning to reinforcement learn. In CogSci, Cited by: §1.
  • P. Wang, L. Liu, C. Shen, Z. Huang, A. van den Hengel, and H. T. Shen (2017b) Multi-attention network for one shot learning. In CVPR, pp. 6212–6220. Cited by: §2.
  • T. Wang, J. Zhu, A. Torralba, and A. A. Efros (2018a) Dataset distillation. CoRR abs/1811.10959. Cited by: §2.
  • Y. Wang, W. Chao, K. Q. Weinberger, and L. van der Maaten (2019) Simpleshot: revisiting nearest-neighbor classification for few-shot learning. CoRR abs/1911.04623. Cited by: §1.
  • Y. Wang, R. B. Girshick, M. Hebert, and B. Hariharan (2018b) Low-shot learning from imaginary data. In CVPR, pp. 7278–7286. Cited by: §1, §2.
  • Y. Wang and M. Hebert (2016) Learning to learn: model regression networks for easy small sample learning. In ECCV, pp. 616–634. Cited by: §1.
  • Y. Wang, D. Ramanan, and M. Hebert (2017c) Learning to model the tail. In NIPS, pp. 7032–7042. Cited by: §2.
  • O. Wichrowska, N. Maheswaranathan, M. W. Hoffman, S. G. Colmenarejo, M. Denil, N. de Freitas, and J. Sohl-Dickstein (2017) Learned optimizers that scale and generalize. In ICML, pp. 3751–3760. Cited by: §1.
  • H. Yao, Y. Wei, J. Huang, and Z. Li (2019) Hierarchically structured meta-learning. In ICML, pp. 7045–7054. Cited by: Appendix E, §2.
  • H. Ye, H. Hu, D. Zhan, and F. Sha (2020a) Few-shot learning via embedding adaptation with set-to-set functions. In CVPR, pp. 8805–8814. Cited by: Appendix C, §1, §1, §1, §3.3, §3.3, §3.3.
  • H. Ye, X. Sheng, and D. Zhan (2020b) Few-shot learning with adaptively initialized task optimizer: a practical meta-learning approach. Machine Learning 109 (3), pp. 643–664. Cited by: Appendix E, §1, §2, Table 4, Table 5.
  • W. Ying, Y. Zhang, J. Huang, and Q. Yang (2018) Transfer learning via learning to transfer. In ICML, pp. 5072–5081. Cited by: §1.
  • T. Yu, C. Finn, S. Dasari, A. Xie, T. Zhang, P. Abbeel, and S. Levine (2018) One-shot imitation from observing humans via domain-adaptive meta-learning. In Robotics: Science and Systems, Cited by: §1, §2.
  • C. Zhang, Y. Cai, G. Lin, and C. Shen (2020) DeepEMD: few-shot image classification with differentiable earth mover’s distance and structured classifiers. In CVPR, pp. 12200–12210. Cited by: Table F, §3.3, Table 4, Table 5.
  • R. Zhang, T. Che, Z. Ghahramani, Y. Bengio, and Y. Song (2018a) MetaGAN: an adversarial approach to few-shot learning. In NeurIPS, pp. 2371–2380. Cited by: §1.
  • Y. Zhang, Y. Wei, and Q. Yang (2018b) Learning to multitask. In NeurIPS, pp. 5776–5787. Cited by: §1.
  • B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba (2017) Places: a 10 million image database for scene recognition. TPAMI 40 (6), pp. 1452–1464. Cited by: §2.
  • Z. Zhou (2012) Ensemble methods: foundations and algorithms. Chapman and Hall/CRC. Cited by: §1, §6.

Appendix A Permutations of Class Label Assignments

We provide more details and discussions on the permutation issue in the class label assignment. As illustrated in Figure G, few-shot tasks of the same set of semantic classes (e.g., “unicorn”, “bee”, etc.) can be associated with different label assignments (i.e., different mappings of the classes to the indices 1, …, N) and are thus paired with the learned initializations w_1, …, w_N of MAML differently.

In section 5 and Figure 5, we study how the permutations affect the meta-testing accuracy (after inner loop optimization on the support set S). (For five-way tasks, there are 120 permutations.) We see a high variance among the permutations. We note that the inner loop optimization updates not only the linear classifiers w_1, …, w_N, but also the parameters φ of the feature extractor. Different permutations can therefore lead to different feature extractors.

Here, we further sample a five-way one-shot meta-testing task and study the change of accuracy along with the inner loop updates (using a MAML trained with a fixed number of inner loop updates). Specifically, we plot both the support set and query set accuracy for each permutation. As shown in Figure H, there exists a high variance of query set accuracy among permutations after inner loop optimization. This is, however, not the case for the support set. (The reason that only three curves appear for the support set is that there are only five examples, and all the permutations reach 100% support set accuracy within five inner loop steps.) Interestingly, for all the permutations, their initialized accuracy (i.e., before inner loop optimization) is 20%. After an investigation, we find that the meta-learned initialization is dominated by one of the w_c; i.e., all the support or query examples are classified into one class. While this may not always be the case for other few-shot tasks or if we re-train MAML, for the task we sampled, it explains why we obtain 20% for all permutations. We note that, even with an initial accuracy of 20%, the learned initialization can quickly be updated to attain high classification accuracy.

We further compare how the support and query set accuracy change along with the inner loop optimization. We find that, while both accuracies increase, the support set accuracy converges quickly and has a much smaller variance among permutations, so it is difficult to use it to determine which permutation leads to the highest query set accuracy. This makes sense: the support set is few-shot, so its accuracy cannot robustly reflect the query set accuracy. This explains why the methods studied in Table 1 cannot determine the best permutation for the query set.

We further study whether the poor initialization accuracy with the learned initialization also occurs on meta-training tasks, which are sampled from the base classes seen during the meta-training phase. Figure I shows the results — even on meta-training tasks, the initialization gives an almost random accuracy. We provide a simple mathematical explanation as follows. Let us assume we have a five-way one-shot task with five semantic classes, that the best permutation follows the ascending order (i.e., the c-th classifier initialization is assigned to the c-th class), and that this permutation gives a 100% initialized support set accuracy. Since there are in total 5! = 120 possible permutations, there will be 10 of them with 60% accuracy (i.e., by switching two class indices), 20 of them with 40% accuracy (i.e., by shuffling the indices of three classes such that none of them takes its original index), 45 of them with 20% accuracy, and 44 of them with 0% accuracy. Taking an average over these 120 permutations gives a 20% accuracy. In other words, even if one of the permutations performs well, on average the accuracy will be close to random.
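
The counting above can be verified with a few lines of Python (our own check, not part of the paper's experiments): enumerate all permutations of five class indices, treat the fraction of indices that stay in place as the idealized initial support set accuracy, and average.

```python
from itertools import permutations
from collections import Counter

# count, for every permutation of five class indices, how many indices stay in place
fixed_point_counts = Counter(
    sum(i == p for i, p in enumerate(perm)) for perm in permutations(range(5))
)
print(sorted(fixed_point_counts.items()))
# [(0, 44), (1, 45), (2, 20), (3, 10), (5, 1)]

# each correctly assigned class contributes 20% initial support set accuracy (one-shot)
average_accuracy = sum(k / 5 * n for k, n in fixed_point_counts.items()) / 120
print(average_accuracy)  # 0.2, i.e., 20% on average -- close to random for five-way tasks
```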

Figure G: Illustration of the permutation in class label assignment. A vanilla MAML learns one classifier initialization per class; each of them is paired with the corresponding class of a few-shot task. A few-shot task, however, may consist of the same set of semantic classes under different permutations of the label assignment.
Figure H: The support (left) and query (right) set accuracy on a randomly sampled five-way one-shot meta-testing task from MiniImageNet. We plot the accuracy of each permutation (120 in total) along with the process of inner loop optimization (the same permutation is shown in the same color in the left and right plots).
Figure I: The query set accuracy on meta-training (i.e., base) classes. We sample 10,000 five-way one-shot and five-shot meta-training tasks from MiniImageNet and plot the averaged accuracy along with the inner loop optimization (the MAML is trained with M = 15 and M = 20 inner loop updates for one-shot and five-shot, respectively). We again see that the accuracy of the initialization is around 20%.

Appendix B unicorn-MAML

We provide some further details of unicorn-MAML. The meta-parameters learned by unicorn-MAML are the parameters of the feature extractor and a single linear classifier vector. In the inner loop optimization, this single classifier is first duplicated into N per-class classifiers, which then undergo the same inner loop optimization process as vanilla MAML; this yields the task-specific model. To perform the outer loop optimization during meta-training, we need to collect the gradients derived from the query set of a task, including the gradient w.r.t. the initialization of each duplicated classifier (please refer to subsection 3.2). Since the N per-class classifiers are duplicated from the single classifier, the gradient w.r.t. the single classifier is obtained by summing the gradients w.r.t. its N copies.
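
As a concrete illustration, the following first-order, PyTorch-style sketch (ours, not the authors' code) shows the two ingredients described above: duplicating the single classifier before the inner loop, and summing the per-copy gradients for the shared initialization. The names `feature_extractor`, `inner_lr`, and `num_inner_steps` are placeholders, and for brevity only the classifier is updated here, whereas the text above also updates the feature extractor in the inner loop.

```python
import torch
import torch.nn.functional as F

def unicorn_inner_loop(w, feature_extractor, support_x, support_y,
                       num_classes=5, inner_lr=0.01, num_inner_steps=15):
    """First-order sketch: adapt num_classes copies of the single classifier w."""
    # duplicate the single d-dimensional classifier into an N-way classifier of shape [N, d]
    W = w.detach().clone().repeat(num_classes, 1).requires_grad_(True)
    with torch.no_grad():
        feats = feature_extractor(support_x)          # [num_support, d]
    for _ in range(num_inner_steps):
        loss = F.cross_entropy(F.linear(feats, W), support_y)
        (grad_W,) = torch.autograd.grad(loss, W)
        W = (W - inner_lr * grad_W).detach().requires_grad_(True)
    return W                                          # task-specific N-way classifier

# Outer loop (conceptual): if g_1, ..., g_N are the query-set gradients w.r.t. the
# N duplicated rows, the gradient w.r.t. the shared initialization w is their sum
# g_1 + ... + g_N, because every row starts from the same w.
```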

Appendix C Experimental Results on the CUB dataset

We further evaluate unicorn-MAML on the CUB dataset Wah et al. (2011), following the split proposed by Ye et al. (2020a): there are 100/50/50 meta-training/validation/testing classes. All images are resized to a fixed input resolution. Table F shows the results: unicorn-MAML outperforms the existing methods.

ResNet-12                          1-Shot          5-Shot
MatchNet Vinyals et al. (2016)     66.09 ± 0.92    82.50 ± 0.58
ProtoNet Snell et al. (2017)       71.87 ± 0.85    85.08 ± 0.57
DeepEMD Zhang et al. (2020)        75.65 ± 0.83    88.69 ± 0.50
Baseline++ Chen et al. (2019)      67.02 ± 0.90    83.58 ± 0.54
AFHN Li et al. (2020b)             70.53 ± 1.01    83.95 ± 0.63
Neg-Cosine Liu et al. (2020)       72.66 ± 0.85    89.40 ± 0.43
Align Afrasiyabi et al. (2020)     74.22 ± 1.09    88.65 ± 0.55
MAML (our reimplementation)        77.67 ± 0.20    90.35 ± 0.16
unicorn-MAML                       78.07 ± 0.20    91.67 ± 0.16
Table F: 5-way 1/5-shot classification accuracy and 95% confidence intervals on CUB, evaluated over 10,000 tasks with a ResNet-12 backbone. Some of the compared methods use a ResNet-18 backbone instead. Our reimplementation of MAML uses carefully tuned numbers of inner loop steps.

Appendix D Additional Explanations of Our Studied Methods

We provide some more explanations of the ensemble and forced permutation invariance methods introduced in section 6. For the ensemble method, given a few-shot task, we can permute the learned classifier initializations to pair them differently with the task's classes. We can then perform a separate inner loop optimization for each permutation to obtain a set of five-class classifiers that we can ensemble. In the main text, we average the posterior probabilities of these five-class classifiers to make the final predictions.
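
A minimal sketch of this ensemble is given below (ours; the helpers `adapt` and `predict_proba` are hypothetical stand-ins for the inner loop optimization and the softmax prediction).

```python
from itertools import permutations
import torch

def ensemble_over_permutations(W_init, feature_extractor, support, query_x,
                               adapt, predict_proba):
    """Average query-set posteriors over all permutations of the classifier rows."""
    num_classes = W_init.shape[0]
    all_probs = []
    for perm in permutations(range(num_classes)):
        # pair class c with the initialization row perm[c]; the class labels stay fixed,
        # so column c of every adapted classifier still predicts class c
        W_adapted, feat_adapted = adapt(W_init[list(perm)], feature_extractor, support)
        all_probs.append(predict_proba(W_adapted, feat_adapted, query_x))  # [num_query, N]
    return torch.stack(all_probs).mean(dim=0)          # averaged posterior probabilities
```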

Since the permutation affects the meta-training phase as well, we can interpret the meta-training phase as follows. Every time we sample a few-shot task, we also sample a permutation to re-label its classes. (We note that this is implicitly done when few-shot tasks are sampled.) We then take the permuted classifier initializations to optimize in the inner loop. That is, in meta-training, the objective function in Equation 2 can indeed be re-written as

(D)

where the sampled permutation follows a uniform distribution over all possible permutations.

Equation D can be equivalently re-written as

(E)

where the permutation is applied to the initialization of the linear classifiers, and the inner loop optimization produces the correspondingly updated model. This additional sampling of permutations is reminiscent of dropout Srivastava et al. (2014), which randomly masks out a neural network's neurons or edges to prevent an over-parameterized network from over-fitting. During testing, dropout takes the expectation over the masks. We investigate a similar idea by taking the expectation (i.e., average) over the permutations of the linear classifier initializations, which results in a new initialization for the meta-testing phase: every class's initialization is replaced by the average of the learned per-class initializations.
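
Concretely, because each learned classifier vector is equally likely to land at each position under a uniform permutation, this expectation reduces to replacing every row of the initialization with the mean of all rows, as in the small sketch below (ours).

```python
import torch

def permutation_averaged_initialization(W_init):
    """Replace every per-class initialization with the mean over all classes."""
    mean_w = W_init.mean(dim=0, keepdim=True)          # [1, d]
    return mean_w.expand_as(W_init).clone()            # [N, d] with identical rows
```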

According to Table 4 and Table 5, we see that vanilla MAML outperforms MAML-FO and MAML-PM, both of which tend to pick a fixed permutation for each task during meta-training. This suggests that randomized permutations in meta-training are beneficial for vanilla MAML.

Appendix E Additional Comparison to Related Works

As mentioned in section 2, there are several follow-up works that improve MAML (though not specifically for the permutation issue). Requeima et al. (2019); Vuorio et al. (2019); Yao et al. (2019); Ye et al. (2020b) enable task-specific initialization with additional task embedding sub-networks. However, since they take an average of the feature embeddings (over classes) to represent a task, their methods cannot resolve the permutation issue. MetaOptNet Lee et al. (2019) performs the inner loop optimization only on the linear classifier (till convergence), making it a convex problem that is not sensitive to the initialization and hence to the permutations. This method, however, has a high computational burden and needs careful hyper-parameter tuning for the additionally introduced regularizers. Triantafillou et al. (2020); Ye et al. (2020b) match the classifier with the prototypes (i.e., the averaged feature embedding per class), which can be permutation-invariant, but cannot achieve accuracy as high as our unicorn-MAML (except for Ye et al. (2020b) on MiniImageNet one-shot tasks).