Few-shot classification refers to learning a classifier for new classes given only a few examples. While a plethora of models have emerged to tackle this recently, we find the current procedure and datasets that are used to systematically assess progress in this setting lacking. To address this, we propose Meta-Dataset: a new benchmark for training and evaluating few-shot classifiers that is large-scale, consists of multiple datasets, and presents more natural and realistic tasks. The aim is to measure the ability of state-of-the-art models to leverage diverse sources of data to achieve higher generalization, and to evaluate that generalization ability in a more challenging setting. We additionally measure the robustness of current methods to variations in the number of available examples and the number of classes. Finally, our extensive empirical evaluation leads us to identify weaknesses in Prototypical Networks and MAML, two popular few-shot classification methods, and to propose a new method, Proto-MAML, which achieves improved performance on our benchmark.
Few-shot learning refers to learning new concepts from few examples, an ability that humans naturally possess, but machines desperately lack. Improving on this aspect would lead to more efficient algorithms that can flexibly expand their knowledge as necessary without requiring large labeled datasets. We focus on few-shot classification: classifying unseen examples into one of N new ‘test’ classes, given only a few reference examples of each new class. Recent progress in this direction has been made by considering a meta-problem: though we are not interested in learning about any training class in particular, we can still exploit the training classes for the purpose of learning to learn new classes from few examples. The acquired learning procedure can then be directly applied to few-shot learning problems on new classes.
This intuition has inspired numerous models of increasing complexity for this problem (see the Related Work for many examples). However, we believe that the commonly used setup for measuring success in this direction is lacking. Notably, the typical approach is to train a model on a subset of classes from a given dataset and then subject it to classification tasks formed from the remaining set of classes from the same dataset. However, to be practically useful, meta-learners must generalize to truly different classes sampled from a different data distribution altogether. Furthermore, the evaluation tasks are artificially constrained to have training sets that are perfectly class-balanced, and any two classes are equally likely to co-appear in the same task. It is not clear therefore to what extent the performance on these tasks approximates the performance in the significantly more structured and imbalanced real world.
Meta-Dataset directly addresses the aforementioned limitations. In particular: 1) it is significantly larger-scale than previous benchmarks and is comprised of multiple datasets of different data distributions, 2) its task creation is informed by class structure for ImageNet and Omniglot, 3) it introduces realistic class imbalance, and 4) it varies the number of classes in each task and the size of the training set, enabling us to examine the robustness of models across a spectrum of tasks: from very-low-shot learning onwards.
The main contribution of this work is therefore to offer a more realistic and challenging environment for training and evaluating meta-learners for few-shot classification. By evaluating various baselines and meta-learners on Meta-Dataset, we are able to expose weaknesses of two popular meta-learners: Prototypical Networks and MAML. Finally, in light of these findings, we propose a novel hybrid of these two approaches which we demonstrate captures complementary desired aspects of both and achieves state-of-the-art in Meta-Dataset.
The end-goal of few-shot classification is to produce a model which, given a new learning episode with N classes and a few labeled examples (k_c per class, c = 1, …, N), is able to generalize to unseen examples for that episode. In other words, the model learns from a training (support) set S = {(x_i, y_i)} (with y_i ∈ {1, …, N}) and is evaluated on a held-out test (query) set Q = {(x*_j, y*_j)}. Each example (x, y) is formed of an input vector x and a class label y. Episodes with balanced training sets (i.e., k_c = k for all c) are usually described as ‘N-way, k-shot’ episodes.
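To make the episode structure concrete, the following is a minimal sketch of sampling a balanced N-way, k-shot episode from a labeled pool; the function and argument names are illustrative, not from the released Meta-Dataset code.

```python
import numpy as np

def make_episode(features, labels, n_way, k_shot, n_query, rng=np.random):
    """Sample a balanced N-way, k-shot episode from a labeled pool.

    `features` is a (num_examples, dim) array and `labels` an int array.
    Returns support and query sets with labels remapped to 0..n_way-1.
    """
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for new_label, c in enumerate(classes):
        # Shuffle this class's examples, then split into support/query.
        idx = rng.permutation(np.flatnonzero(labels == c))
        support_x.append(features[idx[:k_shot]])
        query_x.append(features[idx[k_shot:k_shot + n_query]])
        support_y += [new_label] * k_shot
        query_y += [new_label] * n_query
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))
```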
These evaluation episodes are constructed by sampling their classes from a larger set of classes and sampling the desired number of examples per class. A disjoint set of classes is used to train the model; note that this notion of training is distinct from the training that occurs within a few-shot learning episode.
Few-shot learning does not prescribe a specific training procedure, but a common approach involves matching the conditions in which the model is trained and evaluated (Vinyals et al., 2016). In other words, training often (but not always) proceeds in an episodic fashion. Some authors use training and testing to refer to what happens within any given episode, and use the terms meta-training and meta-testing to refer to using the training classes to turn the model into a learner capable of fast adaptation, and to evaluating its ability to learn from few shots, respectively. This nomenclature highlights the meta-learning perspective alluded to earlier, but to avoid confusion we will adopt another common nomenclature and refer to the training and test sets of an episode as the support and query sets, and to the process of learning from the training classes simply as training.
Two datasets have emerged as de facto benchmarks for few-shot learning. Omniglot (Lake et al., 2015) is a dataset of 1623 handwritten characters from 50 different alphabets and contains 20 examples per class (character). Most recent methods obtain very high accuracy on various meta-learning problems formulated on Omniglot (using various numbers of ways and shots), rendering the comparisons between new few-shot learning methods unreliable.
The second benchmark, miniImageNet (Vinyals et al., 2016), is formed out of 100 ImageNet (Russakovsky et al., 2015) classes (64/16/20 for train/validation/test) and contains 600 examples per class. miniImageNet, albeit harder than Omniglot, has the same property that most recent methods trained on it present similar accuracy when controlling for model capacity, and we believe the dataset is approaching its limit in terms of allowing us to discriminate between the merits of competing approaches. We hypothesize this can be due to an artificially constrained setup. In particular, current benchmarks:
Consider a fixed number of shots and ways. In contrast, real-life episodes are heterogeneous: they vary in terms of their number of classes and examples per class, and are unbalanced.
Measure only within-dataset generalization. However, realistic applications often involve generalization across datasets.
Ignore the relationships between classes when forming episodes. The coarse-grained classification of dogs and chairs may present different difficulties than the fine-grained classification of dog breeds, and current benchmarks do not establish a distinction between the two.
In this section we review common baseline and meta-learning models that we evaluate on our benchmark, and introduce a novel meta-learner that achieves the state-of-the-art on our benchmark.
Before diving into meta-learning, it is important to explore non-episodic solutions. Consider a deep neural network trained on a classification task. A natural non-episodic approach would exploit the meta-training data by simply using it to train a classifier over all of the meta-training classes. Consider the embedding function f(·), defined by all of the network’s layers up to and including the penultimate one. The hope of the non-episodic baselines resides in the possibility that this embedding function produces ‘meaningful’ representations even for examples of previously-unseen classes, thus enabling few-shot classification. It then remains to define an algorithm for using these representations for few-shot classification. We consider two choices for this algorithm, yielding the ‘k-NN’ and ‘Finetune’ variants of this baseline.
The ‘k-NN’ baseline classifies each query example as the class of its ‘closest’ support example. Closeness is measured by either Euclidean or cosine distance in the learned embedding space; we treat this choice of metric as a hyperparameter. The ‘Finetune’ baseline, on the other hand, exploits the support set of each new meta-test task to train a new ‘output layer’ on top of the embedding function f(·), for the purpose of classifying between the new classes of the given task.
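A minimal sketch of the nearest-neighbour baseline, assuming the embeddings come from a network trained non-episodically (all names here are illustrative):

```python
import numpy as np

def knn_predict(support_emb, support_y, query_emb, metric="euclidean"):
    """1-nearest-neighbour classification in a frozen embedding space."""
    if metric == "cosine":
        s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
        q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
        dist = 1.0 - q @ s.T                       # cosine distance
    else:
        diff = query_emb[:, None, :] - support_emb[None, :, :]
        dist = np.sum(diff ** 2, axis=-1)          # squared Euclidean
    # Each query inherits the label of its nearest support example.
    return support_y[np.argmin(dist, axis=1)]
```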
In the episodic setting, models are trained end-to-end for the purpose of learning to build classifiers from a few examples. We choose to experiment with Matching Networks (Vinyals et al., 2016), Prototypical Networks (Snell et al., 2017) and Model Agnostic Meta-Learning (MAML) (Finn et al., 2017) since we believe that these three cover a diverse set of approaches to few-shot learning. We also introduce a novel meta-learner which is inspired by the last two models.
In each training episode, episodic models compute, for each query example x*, the distribution p(y* | x*, S) for its label conditioned on the support set S, and allow training this differentiably-parameterized conditional distribution end-to-end via gradient descent. The different models are distinguished by the manner in which this conditioning on the support set is realized. In all cases, the performance on the query set drives the update of the meta-learner’s weights, which include (and sometimes consist only of) the embedding weights. We briefly describe each method below.
Prototypical Networks construct a prototype for each class and then classify each query example as the class whose prototype is ‘nearest’ to it under Euclidean distance. More concretely, the probability that a query example x* belongs to class c is defined as:

p(y* = c | x*, S) = exp(−||f(x*) − μ_c||²) / Σ_{c'} exp(−||f(x*) − μ_{c'}||²),

where μ_c is the prototype for class c, computed by averaging the embeddings of class c’s support examples.
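The prototype construction and softmax over negative squared distances can be sketched as follows (a numpy illustration on fixed embeddings, not the paper's training code):

```python
import numpy as np

def proto_probs(support_emb, support_y, query_emb):
    """Class probabilities under a Prototypical Network.

    Prototypes are class means of support embeddings; queries are scored
    by a softmax over negative squared Euclidean distances to prototypes.
    """
    classes = np.unique(support_y)
    protos = np.stack([support_emb[support_y == c].mean(axis=0)
                       for c in classes])
    d2 = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)        # (n_query, n_classes)
```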
Matching Networks (in their simplest form) label each query example as a weighted linear combination of the support labels, where a support label is weighted more heavily the ‘closer’ the corresponding support embedding is to the query in question. Specifically, the probability distribution over the label y* of the query x* is given by:

p(y* = c | x*, S) = Σ_i α(x*, x_i) 1[y_i = c],

where 1[b] equals 1 if b is true and 0 otherwise, and α(x*, x_i) is the softmax, over the support examples, of the cosine similarity between f(x*) and f(x_i).
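This simplest-form Matching Network classifier can be sketched as an attention-weighted average of one-hot support labels (an illustrative numpy version on fixed embeddings):

```python
import numpy as np

def matching_probs(support_emb, support_y, query_emb, n_classes):
    """Label distribution under a simplest-form Matching Network.

    Attention weights are a softmax over cosine similarities between the
    query embedding and each support embedding.
    """
    s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    sims = q @ s.T                                  # cosine similarities
    att = np.exp(sims - sims.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)           # softmax over support
    onehot = np.eye(n_classes)[support_y]
    return att @ onehot                             # (n_query, n_classes)
```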
Let us now expand the notation for the embedding function to f_θ(·), exposing its parameters θ. MAML for few-shot classification starts by assuming a linear classifier, parametrized by a bias vector b and a weight matrix W, applied in the embedding space. It then classifies a query example x* based on softmax(b′ + W′ f_{θ′}(x*)), where the support set is used to perform a small number of within-episode training steps for adjusting the parameters (θ, W, b), producing the fine-tuned parameters (θ′, W′, b′). Training in MAML is made possible by unrolling the within-episode gradient descent steps and optimizing the prediction of the query set labels with respect to the initial (θ, W, b) by backpropagation. This normally requires computing second-order gradients, which can be expensive to obtain (both in terms of time and memory). For this reason, an approximation is often used whereby gradients of the within-episode descent steps are ignored. This variant is referred to as first-order MAML (fo-MAML) and was used in our experiments. We did attempt to use the full-order version, but found it to be impractically expensive (e.g., it caused frequent out-of-memory problems).
Moreover, in our setting, since the number of ways varies between episodes, we do not learn W and b and instead set them to zero (i.e., W′ and b′ are the result of within-episode gradient descent initialized at 0), thus only training θ. In other words, MAML focuses on learning the within-episode initialization of the embedding network so that it can be rapidly adapted for a new task.
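The within-episode adaptation of the linear head described above can be sketched as follows. This is only the inner loop on frozen embeddings, with W and b initialized at zero as in our setting; the meta-update of θ (and the first-order approximation of its gradient) is omitted, and all names are illustrative.

```python
import numpy as np

def inner_adapt(emb_s, y_s, n_classes, steps=6, lr=0.01):
    """Within-episode gradient steps on the linear head (a sketch).

    W and b start at zero and are adapted by a few softmax cross-entropy
    gradient steps on the support set embeddings `emb_s`.
    """
    n = len(y_s)
    W = np.zeros((emb_s.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y_s]
    for _ in range(steps):
        logits = emb_s @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / n                 # d(mean CE loss)/d(logits)
        W -= lr * emb_s.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b
```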
We introduce a novel meta-learner which we argue captures the best of both Prototypical Networks and MAML. In particular, the former exploits a very simple inductive bias that was demonstrated to be effective for reasoning about new classes from very few examples. However, it lacks a mechanism for within-task adaptation. On the other hand, the latter adopts a simple procedure for task adaptation using only a few steps. We view Proto-MAML as the marriage of these two complementary strengths.
As explained in Snell et al. (2017), Prototypical Networks can be re-interpreted as a linear classifier applied to a learned representation f_θ(·). In particular, due to using the squared Euclidean distance metric on top of the learned embeddings, the probability of a query example belonging to the different classes of the episode under the formulation of the Prototypical Network can be viewed as the output of a linear layer with a particular parameterization. Specifically, let x* denote a query example, f_θ(·) the trainable embedding function, and μ_c the prototype for class c. Then, the ‘logit’ for x* belonging to class c is:

−||f_θ(x*) − μ_c||² = 2 μ_cᵀ f_θ(x*) − ||μ_c||² − ||f_θ(x*)||²,

where the scalar value −||f_θ(x*)||² does not relate to class c and will not affect the softmax probabilities. The c’th unit of the equivalent linear layer therefore has weights W_c = 2 μ_c and bias b_c = −||μ_c||². It is worth mentioning that since μ_c is a function of f_θ, W_c and b_c are thus differentiable with respect to θ.
We refer to Proto-MAML as the (fo-)MAML model where the task-specific linear layer of each episode is initialized from the Prototypical-Network-equivalent weights and bias defined above and subsequently optimized as usual on the given support set. When computing the meta update for θ, we allow gradients to flow through the Prototypical-Network-equivalent linear layer initialization. We show that this simple modification significantly helps the optimization of this model and outperforms vanilla fo-MAML by a large margin on Meta-Dataset.
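The Prototypical-Network-equivalent head initialization can be sketched directly from the identity above (weights 2μ_c, bias −||μ_c||²); the resulting logits differ from the negative squared distances only by a per-query constant, so the softmax probabilities agree. An illustrative numpy version:

```python
import numpy as np

def proto_head_init(support_emb, support_y, n_classes):
    """Prototypical-Network-equivalent init of the linear head (a sketch).

    The c-th output unit gets weights 2*mu_c and bias -||mu_c||^2, where
    mu_c is the class-c prototype. In Proto-MAML this head is then
    fine-tuned by the usual within-episode gradient steps.
    """
    protos = np.stack([support_emb[support_y == c].mean(axis=0)
                       for c in range(n_classes)])
    W = 2.0 * protos.T                 # (dim, n_classes)
    b = -np.sum(protos ** 2, axis=1)   # (n_classes,)
    return W, b
```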
Meta-Dataset aims to offer an environment for measuring progress in realistic few-shot classification tasks. In particular, we argue that it constitutes a more realistic environment for assessing performance on a more realistic version of the task. Our approach therefore is twofold: 1) changing the data and 2) changing the formulation of the task (i.e., how episodes are generated).
We accomplish the former by incorporating multiple diverse data sources, and the latter by introducing a new sampling procedure for episodes that more closely resembles realistic learning scenarios. The following sections describe these modifications in detail. The code is open source and publicly available at github.com/google-research/meta-dataset.
The data we propose to use is much larger in size than any previous benchmark, and is comprised of multiple different existing datasets. This invites research into how diverse sources of data can be exploited by a meta-learner, and allows us to evaluate a more challenging generalization problem, to new datasets altogether. Specifically, Meta-Dataset leverages data from the following 10 datasets: ILSVRC-2012 (ImageNet) (Russakovsky et al., 2015), Omniglot (Lake et al., 2015), Aircraft (Maji et al., 2013), CUB-200-2011 (Birds) (Wah et al., 2011), Describable Textures (Cimpoi et al., 2014), Quick Draw (Jongejan et al., 2016), Fungi (Schroeder & Cui, 2018), VGG Flower (Nilsback & Zisserman, 2008), Traffic Signs (Houben et al., 2013) and MSCOCO (Lin et al., 2014). These datasets were chosen because they are free and easy to obtain, span a variety of visual concepts (natural and human-made) and vary in how fine-grained the class definition is. More information about each of these datasets is provided in Appendix A.
However, to ensure that episodes correspond to realistic classification problems, all episodes generated in Meta-Dataset use classes from a single dataset at a time only. Moreover, two of these datasets, Traffic Signs and MSCOCO, are fully reserved for evaluation, meaning that no classes from them participate in the training set. The remaining ones contribute some classes to each of the training, validation and test splits of classes, roughly with 70% / 15% / 15% proportions. Two of these datasets, ImageNet and Omniglot, possess a class hierarchy that we exploit in Meta-Dataset. These are described below.
While ImageNet is commonly-used, we define a new class split for meta-learning on it and a novel procedure for sampling classes from it during episode creation. Both of these are informed by its class hierarchy, which we describe below.
ImageNet is a dataset comprised of 82,115 ‘synsets’, i.e., concepts of the WordNet ontology, and it provides ‘is-a’ relationships for its synsets, thus defining a DAG over them. In this benchmark, we only use the 1000 synsets that were chosen for the ILSVRC 2012 classification challenge as classes that can appear in our episodes. However, we leverage the ontology DAG for defining a sampling procedure that determines which of these 1000 classes should co-occur in each episode.
For this purpose, we consider a sub-graph of the overall DAG that consists of only the 1000 synsets of ILSVRC-2012 and their ancestors, so these 1000 synsets are all and only the leaves of the DAG. We then further ‘cut’ this sub-graph into three pieces, for the training, validation, and test splits, such that there is no overlap between the leaves of any of these pieces. We selected the synsets ‘carnivore’ and ‘device’ as the roots of the validation and test sub-graphs, respectively. The leaves that are reachable from ‘carnivore’ and ‘device’ form the sets of the validation and test classes, respectively. All of the remaining leaves constitute the training classes. This method of splitting ensures that the training classes (non-carnivore animals) are substantially semantically different from the test classes (inanimate devices such as various tools and instruments). We end up with 712 training, 202 validation and 188 test classes, roughly adhering to the standard 70 / 15 / 15 (%) splits.
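The split construction above amounts to collecting the leaves reachable from each chosen root. A minimal sketch (the `children` mapping and synset names are illustrative stand-ins for the actual WordNet sub-graph restricted to the 1000 ILSVRC-2012 leaves and their ancestors):

```python
def reachable_leaves(children, root):
    """Leaves of the sub-DAG reachable from `root`.

    `children` maps each synset to its list of children; nodes with no
    children are leaves, i.e., ILSVRC-2012 classes.
    """
    leaves, stack, seen = set(), [root], set()
    while stack:
        node = stack.pop()
        if node in seen:           # the DAG may reach a node twice
            continue
        seen.add(node)
        kids = children.get(node, [])
        if not kids:
            leaves.add(node)
        else:
            stack.extend(kids)
    return leaves
```

With the real graph, the validation classes would be `reachable_leaves(children, 'carnivore')`, the test classes `reachable_leaves(children, 'device')`, and the training classes all remaining leaves.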
This dataset is one of the established benchmarks for few-shot classification as mentioned earlier. The commonly used setup, however, flattens and ignores its two-level hierarchy of alphabets and characters. Instead, we take advantage of it to influence how we sample classes for creating episodes, yielding finer-grained within-alphabet classification problems. We also use the original splits proposed in (Lake et al., 2015): (all characters of) the ‘background’ and ‘evaluation’ alphabets are used for training and testing, respectively. However, we removed the 5 smallest alphabets from the ‘background’ set to reserve them for validation.
In this section we outline our proposed sampling algorithm for creating more realistic episodes.
Firstly, for ImageNet and Omniglot whose classes are hierarchically organized, we depart from the usual random uniform class selection process by incorporating this additional knowledge into the episode creation. Exploiting class structure can lead to more realistic tasks as, for example, it is arguably fairly unusual to classify frogs from laptops. A more natural task would instead be to classify desks from laptops, since these are more often observed simultaneously.
Further, we allow classes to have different ‘shots’, i.e., numbers of examples in the support set, therefore allowing for imbalanced episodes. Indeed, class imbalance is an inherent property of the world, so it is desirable to examine and improve upon the ability of few-shot learners to cope with it. Consider for example the classes of cats and of a very specific species of alligator. There are plausibly many more cats in the world than individuals of that specific alligator species, so there are more opportunities to learn about the former class than the latter. To obtain realistic imbalance ratios in episodes, we sample the number of examples of each class from a distribution derived from the relative class frequencies in the original dataset for these classes, as outlined later. Additionally, unlike current benchmarks, we allow the support sets to vary in size, both in their number of classes and in the total number of examples.
More concretely, our algorithm for sampling an episode from a given split of a dataset can be broken down into sampling two steps: sampling a set of classes from the given split and dataset, and sampling support and query sets of examples from those classes.
This procedure differs depending on which dataset is chosen. For datasets without a known class organization, we sample the ‘way’ uniformly from the range [5, MAX-CLASSES], where MAX-CLASSES is either 50 or as many classes as are available, whichever is smaller. Then we sample ‘way’ many classes uniformly at random from the requested class split (train, validation or test) of the given dataset. For ImageNet and Omniglot we employ a class-structure-aware procedure, outlined below.
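A sketch of this class-sampling step for datasets without a hierarchy, assuming a minimum way of 5 and a cap of 50 classes (names and bounds here are stated assumptions, not the released implementation):

```python
import numpy as np

def sample_classes(class_ids, rng, min_way=5, max_classes=50):
    """Uniformly sample the 'way', then that many distinct classes."""
    max_way = min(max_classes, len(class_ids))
    way = rng.randint(min_way, max_way + 1)     # inclusive upper bound
    return rng.choice(class_ids, size=way, replace=False)
```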
We adopt a hierarchy-aware sampling algorithm for ImageNet, as follows. First, we sample a node uniformly at random from the set of ‘eligible’ nodes of the DAG structure corresponding to the specified split (train, validation or test). An internal node is ‘eligible’ for this selection if it spans at least 5 leaves, but no more than 392 leaves. The number 392 was chosen because it is the smallest number so that, collectively, all eligible internal nodes span all leaves in the DAG.
Once an eligible node is selected, some of the leaves that it spans will constitute the classes of the episode. Specifically, if the number of those leaves is no greater than 50, we use all of them. Otherwise, we randomly choose 50 of them.
This procedure enables the creation of tasks of varying degrees of fine-grainedness. For instance, if the sampled internal node has a small height, the leaf classes that it spans will represent semantically-related concepts, thus posing a fine-grained classification task. As the height of the sampled node increases, we ‘zoom out’ to consider a broader scope from which we sample classes and the resulting episodes are more coarse-grained.
We sample classes from Omniglot by first sampling an alphabet uniformly at random from the chosen split of alphabets (train, validation or test). Then, the ‘way’ of the episode is sampled uniformly at random using the same restrictions as for the rest of the datasets, but taking care not to sample a larger number than the number of characters that belong to the chosen alphabet. Finally, the prescribed number of characters of that alphabet are randomly sampled. This ensures that each episode presents a within-alphabet fine-grained classification.
We first sample the query set size. The query set is class-balanced, reflecting the fact that we care equally about performing well on all classes of an episode. The number of query images per class is computed as:

q = min{10, ⌊0.5 · min_{c ∈ C} |I_c|⌋},

where C is the set of selected classes and I_c denotes the set of images belonging to class c. The min over classes ensures that each class has at least q images to add to the query set, thus allowing it to be class-balanced. The multiplier 0.5 ensures that enough images of each class will remain available to add to the support set, and the minimum with 10 prevents the query set from being too large.
Then, we compute the total support set size:

|S| = min{500, Σ_{c ∈ C} ⌈β · min{100, |I_c| − q}⌉},

where β is a scalar sampled uniformly from the interval (0, 1]. Intuitively, each class on average contributes either all of its remaining examples (after placing q of them in the query set), if there are fewer than 100, or 100 otherwise, to avoid having too large support sets. The multiplication with β enables the potential generation of smaller support sets even when many images are available, since we are also interested in examining the very-low-shot end of the spectrum. The ‘ceiling’ operation ensures that each selected class will have at least one image in the support set. Finally, we cap the total support set size at 500.
We are now ready to compute the ‘shot’ of each class. Specifically, the proportion of the support set that will be devoted to class c is computed as:

R_c = exp(α_c) · |I_c| / Σ_{c' ∈ C} exp(α_{c'}) · |I_{c'}|,

where α_c is sampled uniformly from the interval [log(0.5), log(2)). Intuitively, the un-normalized proportion of the support set that will be occupied by class c is a noisy version of the total number of images of that class in the dataset, |I_c|. This design choice is made in the hopes of obtaining realistic class ratios, under the hypothesis that the dataset class statistics are a reasonable approximation of the real-world statistics of appearances of the corresponding classes. The shot of class c is then set to:

k_c = min{⌊R_c · (|S| − |C|)⌋ + 1, |I_c| − q},

which ensures that at least one example is selected for each class, with additional examples selected proportionally to R_c, if enough are available.
After these steps, we complete the episode creation process by choosing the prescribed number of examples of each chosen class uniformly at random to populate the support and query sets.
In our experiments, we focused on the evaluation of three meta-learning methods: Matching Networks, Prototypical Networks and fo-MAML. Indeed, they are some of the first from the meta-learning family to have been proposed and are regularly used as the meta-learning baselines to beat in few-shot learning research. That said, in the past two years, several other methods have been proposed for few-shot learning. Some bear similarity with MAML and correspond to a meta-learner trained to quickly adapt its parameters to various novel tasks (Ravi & Larochelle, 2017; Munkhdalai & Yu, 2017; Rusu et al., 2019; Yoon et al., 2018). Others relate to Prototypical Networks by learning a data representation as well as a compact representation for a classifier of data under that representation (Bertinetto et al., 2019; Gidaris & Komodakis, 2018; Oreshkin et al., 2018). Methods similar to Matching Networks, in how they classify a novel example by performing comparisons with each individual support set example, were also proposed, based on graph neural networks (Satorras & Estrach, 2018), relational networks (Sung et al., 2018) and attentional networks (Mishra et al., 2018). Hence, we believe that the three meta-learning methods evaluated in this work are reasonably representative of current few-shot learning research. That said, there are other methods that relate less directly to the ones above, such as the work of Santoro et al. (2016) based on a memory-augmented recurrent network (one of the earliest meta-learning approaches to few-shot learning). We therefore also look forward to future work evaluating additional methods on Meta-Dataset.
Similarly to our work, Antoniou et al. (2019) have also proposed improvements to MAML. Coined MAML++, their method is a collection of adjustments, including the use of multiple meta-trained inner loop learning rates, derivative-order annealing, and more. In comparison, our Proto-MAML variant simply changes the expression for the initial output weights in the inner loop and could easily be combined with the recommendations made by MAML++.
Finally, Meta-Dataset bears similarity to the CVPR 2017 Visual Domain Decathlon Challenge, in which contestants were tasked to train a joint system that can perform well on 10 different datasets, many of which are included in our benchmark. At test time, the submitted system classifies examples from the same classes as those available for training. This is unlike Meta-Dataset, which is designed for the few-shot learning scenario where generalization must be achieved for examples of never-before-seen classes.
Meta-Dataset does not prescribe a procedure for learning from the training data. In fact, we believe that meta-learning from multiple heterogeneous sources of training classes is an open research problem. In these experiments though, keeping with the spirit of matching training and testing conditions, we trained each model through a series of training episodes that were sampled using the same algorithm as we used for Meta-Dataset’s evaluation episodes, described above. The choice of the dataset from which to sample the next episode was also random uniform. The non-episodic baseline is trained to solve the large classification problem that results from ‘concatenating’ the training classes of all datasets.
Further, we decided to perform validation on (the validation split of) ImageNet only, ignoring the validation sets of the other datasets. The rationale behind this choice is that the performance on ImageNet has been known to be a good proxy for the performance on different datasets. Notably, a common procedure for dealing with a new classification dataset is to finetune ImageNet-pre-trained weights on it, instead of learning new weights from scratch. However, we believe that the choice of the validation procedure used in this setup could benefit from additional research.
[Tables: for each test source, the accuracy (%) ± confidence (%) of each method; one table per training source.]
This common ImageNet-pretraining procedure also inspired us to train variants of each meta-learner in which the embedding function is initialized from the representation to which the baseline model trained on ImageNet converged. We find that this initialization helps meta-learners substantially.
We experiment with two architectures: a four-layer convolutional network that is commonly used for few-shot learning, and an 18-layer residual network. All models performed best with the latter. We also tried two different image resolutions: 84, which is the image resolution of the commonly-used ‘mini-ImageNet’ benchmark, and 126. All models performed better with the larger images, except for fo-MAML. Finally, we tuned the learning rate schedule, weight decay, and model-specific hyperparameters. We used ADAM to train all models.
Notably, for fo-MAML and Proto-MAML, we tuned the learning rate of the within-episode training, the number of within-episode training steps, and the number of additional such steps to perform in evaluation episodes (sampled from the validation or test splits). Our best-performing fo-MAML variant used 6 training steps, with a learning rate of 0.01 and no additional steps on evaluation episodes. Interestingly, Proto-MAML preferred the lower learning rate of 0.0001 but took 10 steps to adapt to each training task, and an additional 5 steps (totalling 15) in each validation or test episode. All other experimental details are included in the source code.
Tables 1 and 2 present the results of the evaluation on the test set of each of the 10 datasets. The difference between them is the training source, i.e., the data that the models were trained on, which is (the training classes of) ILSVRC-only, and all datasets, respectively. No classes from Traffic Signs and MSCOCO are used during training, since these datasets have no training split and are reserved for evaluation only. We propose to use the average (over the datasets) rank of each method as our metric for comparison, where smaller is better. A method receives rank 1 if it has the highest accuracy, rank 2 if it has the second highest, and so on. When two methods are ‘tied’ for a position, they both receive the corresponding rank. Both tables demonstrate the superiority of Proto-MAML over the remaining models in Meta-Dataset’s evaluation tasks. The Finetune Baseline notably presents a worthy opponent, while fo-MAML, to our surprise, performs quite poorly on Meta-Dataset.
We also recorded the performance of the different models for the various ways and shots that they encountered during their evaluation on test episodes of Meta-Dataset. This enables us to examine their robustness to these different settings. We show the evaluation results of the methods whose training source was (the training classes of) all datasets. Their ImageNet-only-trained counterparts exhibit the same trends, and we included them in the Appendix instead.
Figure (a) plots the accuracy as a function of the episode’s ‘way’. These results reflect what we expected: the more classes a task has, the harder it is. Perhaps more interestingly, Figure (b) illustrates the ability of the different models to benefit from larger shots. In particular, for every ‘shot’ of a class in a test episode, we plot the percentage of query examples of that class that are correctly classified (we refer to this as the ‘precision’ of the class). The general trend is not surprising: the more support examples a class has, the easier it is to correctly classify its query examples. However, this plot sheds light on some interesting trade-offs between the different models. In the very-low-shot end of the spectrum, Prototypical Networks and Proto-MAML outshine the other models. However, Prototypical Networks are evidently less capable of improving given more ‘shots’. On the other hand, the Finetune baseline, Matching Networks and fo-MAML improve at a faster rate given more data. Further, we argue that Proto-MAML indeed constitutes a step towards a ‘best of all worlds’ model, since it is the top-performer in the truly few-shot setting, and yet improves upon Prototypical Networks’ ability to benefit from more data. We think that taking additional steps in this direction is an interesting research problem.
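The per-class ‘precision’ curve can be computed by grouping query examples by the shot of their class, as in the sketch below; the `(shot, correct)` record format is an assumption of this sketch, not the benchmark's actual logging format.

```python
from collections import defaultdict

def precision_by_shot(records):
    """Per-shot 'precision': fraction of query examples correctly
    classified, grouped by the shot (support-set size) of each
    example's class.

    records: iterable of (shot, correct) pairs, one per query example
    (this record format is an assumption of the sketch).
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for shot, correct in records:
        totals[shot] += 1
        hits[shot] += int(correct)
    return {s: hits[s] / totals[s] for s in sorted(totals)}
```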
We had hypothesized that finer-grained tasks (e.g., between dog breeds) are more challenging than coarse-grained ones (e.g., frogs versus laptops). To investigate this, we created binary ImageNet episodes whose two classes are chosen uniformly at random from the DAG’s set of leaves. We then define the degree of coarse-grainedness of a task as the height of the lowest common ancestor of the two chosen leaves, where the height is defined as the length of the longest path from the lowest common ancestor to one of the selected leaves. Larger heights then correspond to coarser-grained tasks. Surprisingly, we did not detect any trend when performing this analysis on the test DAG. The results on the training DAG, though, do seem to indicate that our hypothesis holds to some extent. These results are shown in Figure (c). We conjecture that this may be due to the richer structure of the training DAG, but we encourage further investigation.
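The coarse-grainedness measure can be sketched as below. For simplicity, the sketch assumes each node has a single parent (a tree); in the actual ImageNet DAG a node may have several ancestors, in which case finding the true lowest common ancestor requires a DAG traversal. The function names and example hierarchy are ours.

```python
def path_to_root(node, parent):
    """Ancestors of `node` from itself up to the root, following `parent`."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def coarse_grainedness(leaf_a, leaf_b, parent):
    """Height of the lowest common ancestor (LCA) of two leaves.

    Height = length of the longest path from the LCA down to one of the
    two leaves; larger values correspond to coarser-grained binary tasks.
    """
    anc_a, anc_b = path_to_root(leaf_a, parent), path_to_root(leaf_b, parent)
    # The LCA is the first node on leaf_a's upward path also above leaf_b.
    lca = next(n for n in anc_a if n in set(anc_b))
    return max(anc_a.index(lca), anc_b.index(lca))

# Example: two dog breeds are finer-grained (height 1) than dog vs. frog.
parent = {'poodle': 'dog', 'beagle': 'dog', 'dog': 'animal', 'frog': 'animal'}
```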
We have introduced a new large-scale, diverse, and realistic environment for training and testing meta-learners for the task of few-shot classification. We believe that our exploration of various models on Meta-Dataset has uncovered certain weaknesses of the current state-of-the-art meta-learning methods which allow us to pinpoint interesting directions for future research.
In particular, we view our experiments as the first attempt to meta-learn from a diverse set of sources, and we feel there is plenty of room for improvement. Notably, we don’t always observe a generalization gain from training on all datasets over training on ImageNet only and in fact, in some cases the performance drops. This suggests that our method of consuming training data of different datasets can be improved. Further, through our analysis of the performance as a function of the shots, we discovered that different models perform well on different ends of this spectrum. We argue that our new Proto-MAML variant is a first step towards more robust meta-learners, but we believe that more work is needed on this front.
Generally, this benchmark opens the door to the use of multiple data sources for few-shot learning. Despite having only 10 datasets, developing this benchmark allowed us to explore and identify good practices for a codebase that supports this setting. Over the longer term, we thus view Meta-Dataset as only a first step towards the establishment of more challenging benchmarks for few-shot learning research, with increasingly many dataset sources. To move in this direction, future work will likely require considering other domains beyond natural or man-made images (e.g., from the medical domain or from computer graphics simulations). It will also rely on the ability and willingness of the community to continue to release new freely available image classification datasets.
Eleni, Hugo, and Kevin came up with the benchmark idea and requirements. Eleni developed the core of the project, and worked on the experiment design and management with Tyler and Kevin, as well as experiment analysis. Carles, Ross, Kelvin, Pascal, Vincent, and Tyler helped extend the benchmark by adding datasets. Eleni and Vincent contributed the Prototypical Networks and Matching Networks implementations, respectively. Tyler implemented baselines, MAML (with Kevin) and Proto-MAML models, and updated the backbones to support them. Writing was mostly led by Eleni, with contributions from Hugo, Vincent, and Kevin and help from Tyler and Pascal for visualizations. Pascal and Pierre-Antoine worked on code organization, efficiency, and open-sourcing, and Pascal and Vincent optimized the efficiency of the data input pipeline. Pierre-Antoine supervised the code development process and reviewed most of the changes, while Hugo and Kevin supervised the overall direction of the research.
The authors would like to thank Chelsea Finn for fruitful discussions and advice on tuning fo-MAML and ensuring the correctness of implementation, as well as Zack Nado and Dan Moldovan for the initial dataset code that was adapted.
Meta-Dataset is formed of data originating from 10 different image datasets. A complete list of the datasets we use is the following.
A dataset of natural images from 1000 categories (Figure (a)). We removed some images that were duplicates of images in another dataset in Meta-Dataset (43 images that were also part of Birds) or other standard datasets of interest (92 from Caltech-101 and 286 from Caltech-256). The complete list of duplicates is part of the source code release.
A dataset of images of 1623 handwritten characters from 50 different alphabets, with 20 examples per class (Figure (b)). While Vinyals et al. (2016) recently proposed a new split for this dataset, we instead use the original intended split (Lake et al., 2015), which is more challenging: it splits at the level of alphabets (30 training alphabets and 20 evaluation alphabets), not characters from those alphabets, posing a harder generalization problem. Out of the 30 training alphabets, we hold out the 5 smallest ones (i.e., those with the fewest character classes) to form our validation set, and use the remaining 25 for training.
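The validation-alphabet selection described above amounts to sorting the training alphabets by their number of character classes and holding out the smallest ones. The function name, argument names, and example alphabets below are illustrative.

```python
def split_train_val(alphabet_sizes, num_val=5):
    """Hold out the `num_val` alphabets with the fewest character classes.

    alphabet_sizes: {alphabet_name: number of character classes} for the
    30 original Omniglot training alphabets (names here are illustrative).
    """
    by_size = sorted(alphabet_sizes, key=lambda a: (alphabet_sizes[a], a))
    val = sorted(by_size[:num_val])
    train = [a for a in alphabet_sizes if a not in set(val)]
    return train, val

# Example with 7 hypothetical alphabets: the 5 smallest go to validation.
train, val = split_train_val({'A': 40, 'B': 10, 'C': 20, 'D': 15,
                              'E': 25, 'F': 30, 'G': 12})
```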
A dataset of images of aircraft spanning 102 model variants, with 100 images per class (Figure (c)).
A dataset for fine-grained classification of 200 different bird species (Figure (d)).
A texture database, consisting of 5640 images, organized according to a list of 47 terms (categories) inspired by human perception (Figure (e)).
A dataset of 50 million black-and-white drawings across 345 categories, contributed by players of the game Quick, Draw! (Figure (f)).
A large dataset of approximately 100K images of nearly 1,500 wild mushroom species (Figure (g)).
A dataset of natural images of 102 flower categories. The flowers were chosen to be ones commonly occurring in the United Kingdom. Each class consists of between 40 and 258 images (Figure (h)).
A dataset of 50,000 images of German road signs in 43 classes (Figure (i)).
A dataset of images collected from Flickr with 1.5 million object instances belonging to 80 classes, labelled and localized using bounding boxes. We choose the train2017 split and create image crops from the original images using each object instance’s ground-truth bounding box (Figure (j)).
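The instance-cropping step for MSCOCO can be sketched as below, assuming COCO-style annotations whose `bbox` field is `[x, y, width, height]` in pixels; the function name and example data are ours.

```python
import numpy as np

def crop_instances(image, annotations):
    """Yield one crop per labelled object instance.

    image: H x W x C array; annotations: COCO-style dicts whose 'bbox'
    field is [x, y, width, height] in pixels (an assumption of this
    sketch).
    """
    for ann in annotations:
        x, y, w, h = (int(round(v)) for v in ann['bbox'])
        yield image[y:y + h, x:x + w]

# Example: a 4x4 two-channel image and a single 2x2 box.
img = np.arange(32).reshape(4, 4, 2)
crops = list(crop_instances(img, [{'bbox': [1.0, 1.0, 2.0, 2.0]}]))
```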
For completeness, we show the results of the analysis of the robustness to different ways and shots for the variants of the models that were trained on ImageNet only. We observe the same trends as discussed in our Experiments section for the variants of the models that were trained on all datasets.
To observe more clearly whether training on all datasets leads to improved generalization over training on ImageNet only, Figure 5 displays the data of Tables 1 and 2, showing side by side the performance of each model trained on ILSVRC only vs. all datasets.
We also computed the ‘element-wise’ difference between the results in Table 2 and Table 1. These differences are shown in the following table, as well as in Figure 6. A positive entry indicates that the test performance on the corresponding datasets improved when using the variant of the corresponding model that was trained on all training sources.
| Test Source | Method: Accuracy (%) ± confidence (%) |
This table shows that we do not always observe a clear generalization advantage in training from a wider collection of image datasets. While some of the datasets that were added to the meta-training phase did see an improvement across all models, in particular for Omniglot and Quick Draw, this was not true across the board. In fact, in certain cases the performance dropped. We believe that more successfully leveraging diverse sources of data is an interesting open research problem.