The accuracy of visual recognition systems has grown dramatically. But modern recognition systems still need thousands of examples of each class to saturate performance. This is impractical when one lacks the resources to collect large training sets, or for rare visual concepts where such data simply does not exist. It is also unlike the human visual system, which can learn a novel visual concept from even a single example. This challenge of learning new concepts from very few labeled examples, often called low-shot or few-shot learning, is the focus of this work.
Many recently proposed approaches to this problem fall under the umbrella of meta-learning. Meta-learning methods train a learner, a parametrized function that maps labeled training sets to classifiers. Meta-learners are trained by sampling small training sets and test sets from a large universe of labeled examples, feeding the sampled training set to the learner to get a classifier, and then computing the loss of the classifier on the sampled test set. These methods directly frame low-shot learning as an optimization problem.
However, generic meta-learning methods treat images as black boxes, ignoring the structure of the visual world. In particular, many modes of variation (for example camera pose, translation, lighting changes, and even articulation) are shared across categories. As humans, our knowledge of these shared modes of variation may allow us to visualize what a novel object might look like in other poses or surroundings (Figure 1). If machine vision systems could do such “hallucination” or “imagination”, then the hallucinated examples could be used as additional training data to build better classifiers.
Unfortunately, building models that can perform such hallucination is hard, except for simple domains like handwritten characters. For general images, while considerable progress has been made recently in producing realistic samples, most current generative modeling approaches suffer from the problem of mode collapse: they capture only some modes of the data. This may be insufficient for low-shot learning, since one needs to capture many modes of variation to build good classifiers. Furthermore, the modes that are useful for classification may differ from those found by training an image generator. Prior work has tried to avoid this limitation by explicitly using pose annotations to generate samples in novel poses, or by using carefully designed, but brittle, heuristics to ensure diversity.
Our key insight is that the criterion that we should aim for when hallucinating additional examples is neither diversity nor realism. Instead, the aim should be to hallucinate examples that are useful for learning classifiers. Therefore, we propose a new method for low-shot learning that directly learns to hallucinate examples that are useful for classification by the end-to-end optimization of a classification objective that includes data hallucination in the model.
We achieve this goal by unifying meta-learning with hallucination. Our approach trains not just the meta-learner, but also a hallucinator: a model that maps real examples to hallucinated examples. The few-shot training set is first fed to the hallucinator; it produces an expanded training set, which is then used by the learner. Compared to plain meta-learning, our approach exploits the rich structure of shared modes of variation in the visual world. We show empirically that such hallucination adds a significant performance boost to two different meta-learning methods [35, 30], providing up to a 6 point improvement when only a single training example is available. Our method is agnostic to the choice of meta-learning method and provides significant gains irrespective of this choice. It is precisely the ability to leverage standard meta-learning approaches without any modifications that makes our model simple, general, and easy to reproduce. Compared to prior work on hallucinating examples, we use no extra annotation and significantly outperform hallucination based on brittle heuristics. We also present a novel meta-learning method and discover and fix flaws in previously proposed benchmarks.
2 Related Work
Low-shot learning is a classic problem. One class of approaches builds generative models that can share priors across categories [7, 25, 10]. Often, these generative models have to be hand-designed for the domain, such as strokes [17, 18] or parts for handwritten characters. For more unconstrained domains, while there has been significant recent progress [24, 11, 22], modern generative models still cannot capture the entirety of the distribution.
Different classes might not share parts or strokes, but may still share modes of variation, since these often correspond to camera pose, articulation, etc. If one has a probability density on transformations, then one can generate additional examples for a novel class by applying sampled transformations to the provided examples [20, 5, 13]. Learning such a density is easier for handwritten characters that only undergo 2D transformations, but much harder for generic image categories. Dixit et al. tackle this problem by leveraging an additional dataset of images labeled with pose and attributes; this allows them to learn how images transform when the pose or the attributes are altered. To avoid annotation, Hariharan and Girshick try to transfer transformations from a pair of examples of a known category to a "seed" example of a novel class. However, learning to do this transfer requires a carefully designed pipeline with many heuristic steps. Our approach follows this line of work, but learns to do such transformations in an end-to-end manner, avoiding both brittle heuristics and expensive annotations.
Another class of approaches to low-shot learning has focused on building feature representations that are invariant to intra-class variation. Some work tries to share features between seen and novel classes [1, 36] or incrementally learn them as new classes are encountered. Contrastive loss functions [12, 16] and variants of the triplet loss [31, 29, 8] have been used to learn feature representations suitable for low-shot learning; the idea is to push examples from the same class closer together, and farther from other classes. Hariharan and Girshick show that one can encourage classifiers trained on small datasets to match those trained on large datasets through a carefully designed loss function. These representation improvements are orthogonal to our approach, which works with any features.
More generally, a recent class of methods tries to frame low-shot learning itself as a "learning to learn" task, called meta-learning. The idea is to directly train a parametrized mapping from training sets to classifiers. Often, the learner embeds examples into a feature space. It might then accumulate statistics over the training set using recurrent neural networks (RNNs) [35, 23], memory-augmented networks, or multilayer perceptrons (MLPs), perform gradient descent steps to finetune the representation, and/or collapse each class into prototypes. An alternative is to directly predict the classifier weights that would be learned from a large dataset, either using a few novel class examples or from a small-dataset classifier [37, 38]. We present a unified view of meta-learning and show that our hallucination strategy can be adopted in any of these methods.
3 Meta-Learning

Let $\mathcal{X}$ be the space of inputs (e.g., images) and $\mathcal{Y}$ be a discrete label space. Let $\mathcal{D}$ be a distribution over $\mathcal{X} \times \mathcal{Y}$. Supervised machine learning typically aims to capture the conditional distribution $p(y|x)$ by applying a learning algorithm to a parameterized model and a training set $S_{\text{train}} = \{(x_i, y_i)\}$ sampled from $\mathcal{D}$. At inference time, the model is evaluated on test inputs $x$ to estimate $p(y|x)$. The composition of the inference and learning algorithms can be written as a function $h$ (a classification algorithm) that takes as input the training set $S_{\text{train}}$ and a test input $x$, and outputs an estimated probability distribution $\hat{p}(x)$ over the labels:

$$\hat{p}(x) = h(x, S_{\text{train}})$$
In low-shot learning, we want functions $h$ that have high classification accuracy even when $S_{\text{train}}$ is small. Meta-learning is an umbrella term that covers a number of recently proposed empirical risk minimization approaches to this problem [37, 35, 30, 9, 23]. Concretely, these approaches consider parametrized classification algorithms $h(x, S_{\text{train}}; w)$ and attempt to estimate a "good" parameter vector $w$, namely one that corresponds to a classification algorithm that can learn well from small datasets. Estimating this parameter vector can thus be construed as meta-learning.
Meta-learning algorithms have two stages. The first stage is meta-training, in which the parameter vector $w$ of the classification algorithm is estimated. During meta-training, the meta-learner has access to a large labeled dataset $S_{\text{meta}}$ that typically contains thousands of images for a large number of classes $C$. In each iteration of meta-training, the meta-learner samples a classification problem out of $S_{\text{meta}}$. That is, the meta-learner first samples a subset of classes from $C$. Then it samples a small "training" set $S_{\text{train}}$ and a small "test" set $S_{\text{test}}$. It then uses its current weight vector $w$ to compute conditional probabilities $h(x, S_{\text{train}}; w)$ for every point $(x, y)$ in the test set $S_{\text{test}}$. Note that in this process $h$ may perform internal computations that amount to "training" on $S_{\text{train}}$. Based on these predictions, $h$ incurs a loss $L(h(x, S_{\text{train}}; w), y)$ for each point in the current $S_{\text{test}}$. The meta-learner then back-propagates the gradient of the total loss $\sum_{(x,y) \in S_{\text{test}}} L(h(x, S_{\text{train}}; w), y)$. The number of classes in each iteration, $N$, and the maximum number of training examples per class, $n$, are hyperparameters.
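In code, one meta-training iteration's sampling step might look like the following sketch. The data structures and helper names here are illustrative assumptions, not the paper's implementation:

```python
import random

def sample_episode(dataset, n_classes, n_train, n_test):
    """Sample one meta-training "episode": N classes, then a small
    training set and test set drawn from those classes.  `dataset`
    maps each label to its pool of examples (a hypothetical layout)."""
    classes = random.sample(sorted(dataset), n_classes)
    train, test = [], []
    for c in classes:
        pool = random.sample(dataset[c], n_train + n_test)
        train += [(x, c) for x in pool[:n_train]]
        test += [(x, c) for x in pool[n_train:]]
    return train, test

# Toy universe: 5 classes with 20 scalar "examples" each.
data = {c: [100 * c + i for i in range(20)] for c in range(5)}
tr, te = sample_episode(data, n_classes=3, n_train=2, n_test=4)
```

Each episode's training set would be fed to $h$ and the loss computed on the episode's test set before back-propagating.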
The second stage is meta-testing in which the resulting classification algorithm is used to solve novel classification tasks: for each novel task, the labeled training set and unlabeled test examples are given to the classification algorithm and the algorithm outputs class probabilities.
Different meta-learning approaches differ in the form of $h$. The data hallucination method introduced in this paper is general and applies to any meta-learning algorithm of the form described above. Concretely, we will consider the following three meta-learning approaches:
Prototypical networks:
Snell et al. propose an architecture for $h$ that assigns class probabilities based on distances from class means $\mu_k$ in a learned feature space:

$$h_k(x, S_{\text{train}}; w_\phi) = \frac{e^{-d(\phi(x), \mu_k)}}{\sum_j e^{-d(\phi(x), \mu_j)}}, \qquad \mu_k = \frac{\sum_{(x_i, y_i) \in S_{\text{train}}} \phi(x_i)\,\mathbf{1}[y_i = k]}{\sum_{(x_i, y_i) \in S_{\text{train}}} \mathbf{1}[y_i = k]}$$

Here $h_k$ are the components of the probability vector $h$ and $d$ is a distance metric (Euclidean distance in their work). The only parameters to be learned are the parameters $w_\phi$ of the feature extractor $\phi$. The estimation of the class means can be seen as a simple form of "learning" from $S_{\text{train}}$ that takes place internal to $h$.
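The class-mean computation and distance-based softmax can be sketched in a few lines. This is a 1-D toy with squared Euclidean distance; `phi` stands in for the learned feature extractor:

```python
import math

def prototypical_probs(train, x, phi=lambda v: v):
    """Class probabilities from (squared Euclidean) distances to class
    means ("prototypes") in a feature space phi.  A minimal sketch of
    the idea in prototypical networks, not the full implementation."""
    feats = {}
    for xi, yi in train:
        feats.setdefault(yi, []).append(phi(xi))
    # Collapse each class to its mean: the "learning" internal to h.
    protos = {c: sum(f) / len(f) for c, f in feats.items()}
    scores = {c: math.exp(-(phi(x) - m) ** 2) for c, m in protos.items()}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

train = [(0.0, 'a'), (0.2, 'a'), (5.0, 'b'), (5.2, 'b')]
p = prototypical_probs(train, 0.1)   # query point near class 'a'
```

With a differentiable `phi`, the whole computation is differentiable, a property we rely on in Section 4.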
Matching networks:
Vinyals et al. argue that when faced with a classification problem and an associated training set, one wants to focus on the features that are useful for those particular class distinctions. Therefore, after embedding all training and test points independently using a feature extractor, they propose to create a contextual embedding of the training and test examples using bi-directional long short-term memory networks (LSTMs) and attention LSTMs, respectively. These contextual embeddings can be seen as emphasizing features that are relevant for the particular classes in question. The final class probabilities are computed using a soft nearest-neighbor mechanism. More specifically,

$$h_k(x, S_{\text{train}}; w) = \sum_{i} a_i\,\mathbf{1}[y_i = k], \qquad a_i = \frac{e^{-d(f(x), g(x_i))}}{\sum_j e^{-d(f(x), g(x_j))}}$$

Here $f$ and $g$ are the contextual embeddings of the test and training examples, and $d$ is again a distance metric; Vinyals et al. used the cosine distance. There are three sets of parameters to be learned: those of the feature extractor $\phi$ and of the two contextual embedding models $f$ and $g$.
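The soft nearest-neighbor mechanism, without the contextual LSTM embeddings, can be sketched as follows. The 2-D toy vectors and the use of exponentiated cosine similarity as attention are illustrative simplifications:

```python
import math

def matching_probs(train, x):
    """Soft nearest-neighbor classification in the spirit of matching
    networks, minus the contextual embeddings: attention weights from
    exponentiated cosine similarity, summed per class.  A sketch."""
    def cos(u, v):
        return sum(a * b for a, b in zip(u, v)) / (math.hypot(*u) * math.hypot(*v))
    ws = [math.exp(cos(xi, x)) for xi, _ in train]
    z = sum(ws)
    probs = {}
    for (_, yi), w in zip(train, ws):
        probs[yi] = probs.get(yi, 0.0) + w / z   # accumulate per class
    return probs

train = [((1.0, 0.1), 'a'), ((0.9, 0.0), 'a'), ((0.0, 1.0), 'b')]
p = matching_probs(train, (1.0, 0.0))   # query aligned with class 'a'
```

Note how, unlike prototypical networks, every training example contributes its own attention weight, which is why rare classes can get swamped by common ones.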
Prototype matching networks:
One issue with matching networks is that the attention LSTM might find it harder to “attend” to rare classes (they are swamped by examples of common classes), and therefore might introduce heavy bias against them. Prototypical networks do not have this problem since they collapse every class to a single class mean. We want to combine the benefits of the contextual embedding in matching networks with the resilience to class imbalance provided by prototypical networks.
To do so, we collapse every class to its class mean before creating the contextual embeddings of the test examples. Then, the final class probabilities are based on distances to the contextually embedded class means instead of individual examples:

$$h_k(x, S_{\text{train}}; w) = \frac{e^{-d(f(x), g(\mu_k))}}{\sum_j e^{-d(f(x), g(\mu_j))}}$$

where $\mu_k$ is the mean of the embedded examples of class $k$. The parameters to be learned are again those of $\phi$, $f$, and $g$. We call this novel modification to matching networks prototype matching networks.
4 Meta-Learning with Learned Hallucination
We now present our approach to low-shot learning by learning to hallucinate additional examples. Given an initial training set $S_{\text{train}}$, we want a way of sampling additional hallucinated examples. Following recent work on generative modeling [11, 15], we model this stochastic process with a deterministic function operating on a noise vector as input. Intuitively, we want our hallucinator to take a single example of an object category and produce other examples in different poses or different surroundings. We therefore write this hallucinator as a function $G(x, z; w_G)$ that takes a seed example $x$ and a noise vector $z$ as input, and produces a hallucinated example as output. The parameters of this hallucinator are $w_G$.
We first describe how this hallucinator is used in meta-testing, and then discuss how we train the hallucinator.
Hallucination during meta-testing:
During meta-testing, we are given an initial training set $S_{\text{train}}$. We then hallucinate new examples using the hallucinator. Each hallucinated example is obtained by sampling a real example $(x, y)$ from $S_{\text{train}}$, sampling a noise vector $z$, and passing $x$ and $z$ to $G$ to obtain a generated example $(x', y)$, where $x' = G(x, z; w_G)$. We take the set of generated examples $S^G_{\text{train}}$ and add it to the set of real examples to produce an augmented training set $S^{\text{aug}}_{\text{train}} = S_{\text{train}} \cup S^G_{\text{train}}$. We can now simply use this augmented training set to produce conditional probability estimates using $h$. Note that the hallucinator parameters are kept fixed here; any learning that happens, happens within the classification algorithm $h$.
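As a concrete sketch, the augmentation step might look like this in code. The toy hallucinator `toy_G` and all names are illustrative stand-ins, not the paper's trained network:

```python
import random

def augment(train, G, n_total, noise_dim=3):
    """Expand a few-shot training set with hallucinated examples:
    sample a seed (x, y), sample a noise vector z, and append
    (G(x, z), y) until n_total examples exist.  G is any function of a
    seed example and noise; the stand-in below is purely illustrative."""
    aug = list(train)
    while len(aug) < n_total:
        x, y = random.choice(train)
        z = [random.gauss(0.0, 1.0) for _ in range(noise_dim)]
        aug.append((G(x, z), y))          # hallucination keeps the label
    return aug

toy_G = lambda x, z: x + 0.1 * z[0]       # stand-in "hallucinator"
aug = augment([(1.0, 'a'), (5.0, 'b')], toy_G, n_total=8)
```

The augmented set `aug` is then handed to the classification algorithm exactly as a real training set would be.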
Meta-training the hallucinator:
The goal of the hallucinator is to produce examples that help the classification algorithm learn a better classifier. This goal differs from realism: realistic examples might still fail to capture the many modes of variation of visual concepts, while unrealistic hallucinations can still lead to a good decision boundary. We therefore propose to directly train the hallucinator to support the classification algorithm by using meta-learning.
As before, in each meta-training iteration, we sample $N$ classes from the set of all classes, and at most $n$ examples per class. Then, for each class, we use $G$ to generate additional examples until there are exactly $n_{\text{aug}}$ examples per class. Again, each hallucinated example is of the form $(x', y)$, where $x' = G(x, z; w_G)$, $(x, y)$ is a sampled example from $S_{\text{train}}$, and $z$ is a sampled noise vector. These additional examples are added to the training set to produce an augmented training set $S^{\text{aug}}_{\text{train}}$. This augmented training set is then fed to the classification algorithm $h$, producing the final loss $\sum_{(x,y) \in S_{\text{test}}} L(h(x, S^{\text{aug}}_{\text{train}}; w), y)$.
To train the hallucinator $G$, we require that the classification algorithm $h$ be differentiable with respect to the elements of $S^{\text{aug}}_{\text{train}}$. This is true for many meta-learning algorithms. For example, in prototypical networks, $h$ will pass every example in the training set through a feature extractor, compute the class means in this feature space, and use the distances between the test point and the class means to estimate class probabilities. If the feature extractor is differentiable, then the classification algorithm itself is differentiable with respect to the examples in the training set. This allows us to back-propagate the final loss and update not just the parameters $w$ of the classification algorithm, but also the parameters $w_G$ of the hallucinator. Figure 2 shows a schematic of the entire process.
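To make the differentiability argument concrete, here is a self-contained toy in which a one-parameter linear "hallucinator" feeds a 1-D prototype classifier, and a central-difference check confirms that the final loss has a nonzero gradient with respect to the hallucinator parameter. Everything here (the linear form of G, scalar features, the single test point) is a deliberately simplified stand-in for the paper's networks:

```python
import math

def loss(w):
    """Toy end-to-end objective: a linear 'hallucinator' G(x, z) = x + w*z
    augments a 1-D two-class training set, a prototype classifier is
    built on the augmented set, and we take the cross-entropy of one
    test point (x = 0.5, true class 0)."""
    train = [(0.0, 0), (4.0, 1)]
    z = 1.0                                       # one fixed "noise" sample
    aug = train + [(x + w * z, y) for x, y in train]
    # Each class has exactly 2 points (1 real + 1 hallucinated).
    protos = {y: sum(x for x, yy in aug if yy == y) / 2 for y in (0, 1)}
    s = {y: math.exp(-(0.5 - m) ** 2) for y, m in protos.items()}
    return -math.log(s[0] / (s[0] + s[1]))

# The loss varies smoothly with the hallucinator weight w, so its
# gradient (here approximated by central differences) can train G.
eps = 1e-6
grad = (loss(0.3 + eps) - loss(0.3 - eps)) / (2 * eps)
```

In the real system the same gradient would flow through the feature extractor and hallucinator network via automatic differentiation rather than finite differences.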
Using meta-learning to train the hallucinator and the classification algorithm has two benefits. First, the hallucinator is directly trained to produce the kinds of hallucinations that are useful for class distinctions, removing the need to precisely tune realism or diversity, or the right modes of variation to hallucinate. Second, the classification algorithm is trained jointly with the hallucinator, which enables it to make allowances for any errors in the hallucination. Conversely, the hallucinator can spend its capacity on suppressing precisely those errors which throw the classification algorithm off.
Note that the training process is completely agnostic to the specific meta-learning algorithm used. We will show in our experiments that our hallucinator provides significant gains irrespective of the meta-learner.
5 Experimental Protocol
We use the benchmark proposed by Hariharan and Girshick. This benchmark captures more realistic scenarios than others based on handwritten characters or low-resolution images. The benchmark is based on ImageNet images and subsets of ImageNet classes. First, in the representation learning phase, a convolutional neural network (ConvNet) based feature extractor is trained on one set of classes with thousands of examples per class; this set is called the "base" classes $C_{\text{base}}$. Then, in the low-shot learning phase, the recognition system encounters an additional set of "novel" classes $C_{\text{novel}}$ with a small number of examples per class. It also has access to the base class training set. The system must now learn to recognize both the base and the novel classes. It is tested on a test set containing examples from both sets of classes, and it needs to output labels in the joint label space $C_{\text{base}} \cup C_{\text{novel}}$. Hariharan and Girshick report the top-5 accuracy averaged over all classes, the top-5 accuracy averaged over just base-class examples, and the top-5 accuracy averaged over just novel-class examples.
Tradeoffs between base and novel classes:
We observed that in this kind of joint evaluation, different methods had very different performance tradeoffs between the novel and base class examples and yet achieved similar performance on average. This makes it hard to meaningfully compare the performance of different methods on just the novel or just the base classes. Further, we found that by changing hyperparameter values of some meta-learners it was possible to achieve substantially different tradeoff points without substantively changing average performance. This means that hyperparameters can be tweaked to make novel class performance look better at the expense of base class performance (or vice versa).
One way to concretize this tradeoff is by incorporating a prior over base and novel classes. Consider a classifier that gives a score $s_k(x)$ for every class $k$ given an image $x$. Typically, one would convert these scores into probabilities by applying a softmax function:

$$p(y = k \mid x) = \frac{e^{s_k(x)}}{\sum_j e^{s_j(x)}} \tag{14}$$

However, we may have some prior knowledge about the probability that an image belongs to the base classes $C_{\text{base}}$ or the novel classes $C_{\text{novel}}$. Suppose that the prior probability that an image belongs to one of the novel classes is $\mu$. Then, we can update Equation (14) so that the probability mass assigned to the novel and base groups is $\mu$ and $1 - \mu$, respectively:

$$\hat{p}(y = k \mid x) = \begin{cases} \mu \cdot \dfrac{p(y = k \mid x)}{\sum_{j \in C_{\text{novel}}} p(y = j \mid x)} & \text{if } k \in C_{\text{novel}} \\[6pt] (1 - \mu) \cdot \dfrac{p(y = k \mid x)}{\sum_{j \in C_{\text{base}}} p(y = j \mid x)} & \text{if } k \in C_{\text{base}} \end{cases}$$
The prior probability $\mu$ might be known beforehand, but it can also be cross-validated to correct for inherent biases in the scores. Note, however, that in some practical settings one may not have a held-out set of categories on which to cross-validate, so resilience to the choice of this prior is important.
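One plausible implementation of such a prior correction (a sketch of the calibration described above; the class names and `mu` value are illustrative) renormalizes the probability mass within the novel and base groups to $\mu$ and $1 - \mu$:

```python
import math

def calibrated_probs(scores, novel, mu):
    """Apply a novel-class prior mu to softmax scores: first take the
    ordinary softmax, then rescale the novel-class mass to mu and the
    base-class mass to 1 - mu, preserving within-group ratios."""
    exps = {c: math.exp(s) for c, s in scores.items()}
    z = sum(exps.values())
    p = {c: e / z for c, e in exps.items()}        # plain softmax
    pn = sum(v for c, v in p.items() if c in novel)
    pb = 1.0 - pn
    return {c: (p[c] * mu / pn if c in novel else p[c] * (1 - mu) / pb)
            for c in p}

scores = {'dog': 2.0, 'cat': 1.0, 'axolotl': 0.5}  # 'axolotl' is novel
p = calibrated_probs(scores, novel={'axolotl'}, mu=0.5)
```

Sweeping `mu` traces out exactly the base/novel tradeoff curve discussed in the text.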
Figure 3 shows the impact of this prior on matching networks in the evaluation proposed by Hariharan and Girshick. Note that the overall accuracy remains fairly stable, even as novel class accuracy rises and base class accuracy falls. Such prior probabilities for calibration were proposed for the zero-shot learning setting by Chao et al.
A new evaluation:
The existence of this tunable tradeoff between base and novel classes makes it hard to make apples-to-apples comparisons of novel class performance if the model is tasked with making predictions in the joint label space. Instead, we use a new evaluation protocol that evaluates four sets of numbers:
First, the model is given test examples from the novel classes, and must pick a label from the novel classes only. That is, the label space is restricted to $C_{\text{novel}}$ (note that doing so is equivalent to setting $\mu = 1$ for prototypical networks, but not for matching networks and prototype matching networks, because of the contextual embeddings). We report the top-5 accuracy on the novel classes in this setting.
Next, the model is given test examples from the base classes, and the label space is restricted to the base classes. We report the top-5 accuracy in this setting.
Finally, the model is given test examples from both the base and novel classes in equal proportion, and it has to predict labels from the joint label space. We report the top-5 accuracy averaged across all examples. We present numbers both with and without a novel class prior $\mu$; the former cross-validates $\mu$ to achieve the highest average top-5 accuracy.
Note that, as in prior work, we use a disjoint set of classes for cross-validation and testing. This prevents hyperparameter choices for the hallucinator, meta-learner, and novel class prior from becoming overfit to the novel classes that are seen for the first time at test time.
6.1 Implementation Details
Unlike prior work on meta-learning, which experiments with small images and few classes [35, 30, 9, 23], we use high resolution images and our benchmark involves hundreds of classes. This leads to some implementation challenges. Each iteration of meta-learning at the very least has to compute features for the training set $S_{\text{train}}$ and the test set $S_{\text{test}}$. If there are 100 classes with 10 examples each, then this amounts to 1000 images, which no longer fits in memory. Training a modern deep convolutional network with tens of layers from scratch on a meta-learning objective may also lead to a hard learning problem.
Instead, we first train a convolutional network based feature extractor on a simple classification objective on the base classes $C_{\text{base}}$. Then we extract and save these features to disk, and use these pre-computed features as inputs. For most experiments, consistent with prior work, we use a small ResNet-10 architecture. Later, we show some experiments using the deeper ResNet-50 architecture.
Hallucinator architecture and initialization:
For our hallucinator $G$, we use a three-layer MLP with ReLU as the activation function. We add a ReLU at the end, since the pre-trained features are known to be non-negative. All hidden layers have a dimensionality of 512 for ResNet-10 features and 2048 for ResNet-50 features. Inspired by prior work, we initialize the weights of our hallucinator network as block diagonal identity matrices. This significantly outperformed standard initialization methods such as random Gaussian, since the hallucinator can "copy" its seed examples to produce a reasonable generation immediately from initialization.
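A minimal sketch of such an initialization, with toy dimensions in place of the 512-dimensional hidden layers (pure Python; the helper names are ours, not the paper's):

```python
def identity_init(in_dim, out_dim):
    """Block-diagonal identity initialization: each input coordinate is
    tiled across the output, so at initialization the layer (before any
    training) approximately copies its input through."""
    W = [[0.0] * in_dim for _ in range(out_dim)]
    for i in range(out_dim):
        W[i][i % in_dim] = 1.0          # stack identity blocks row-wise
    return W

def apply_layer(W, x):
    """One ReLU layer: max(0, W x)."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W]

W = identity_init(4, 8)                 # toy: 4-d input, 8-d hidden
h = apply_layer(W, [1.0, 2.0, 3.0, 4.0])
```

At initialization the non-negative seed features pass through unchanged (here, duplicated), so the hallucinator starts from "copying" and only gradually learns to deviate.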
| Method | Novel n=1 | n=2 | n=5 | n=10 | n=20 | All n=1 | n=2 | n=5 | n=10 | n=20 | All w/ prior n=1 | n=2 | n=5 | n=10 | n=20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PMN w/ G* (ResNet-10) | 45.8 | 57.8 | 69.0 | 74.3 | 77.4 | 57.6 | 64.7 | 71.9 | 75.2 | 77.5 | 56.4 | 63.3 | 70.6 | 74.0 | 76.2 |
| PN w/ G* (ResNet-10) | 45.0 | 55.9 | 67.3 | 73.0 | 76.5 | 56.9 | 63.2 | 70.6 | 74.5 | 76.5 | 55.6 | 62.1 | 69.3 | 73.1 | 75.4 |
| LogReg w/ Analogies (ResNet-10) | 40.7 | 50.8 | 62.0 | 69.3 | 76.5 | 52.2 | 59.4 | 67.6 | 72.8 | 76.9 | 53.2 | 59.1 | 66.8 | 71.7 | 76.3 |
| PMN w/ G* (ResNet-50) | 54.7 | 66.8 | 77.4 | 81.4 | 83.8 | 65.7 | 73.5 | 80.2 | 82.8 | 84.5 | 64.4 | 71.8 | 78.7 | 81.5 | 83.3 |
| PN w/ G* (ResNet-50) | 53.9 | 65.2 | 75.7 | 80.2 | 82.8 | 65.2 | 72.0 | 78.9 | 81.7 | 83.1 | 63.9 | 70.5 | 77.5 | 80.6 | 82.4 |
Table 1: Top-5 accuracy for $n = 1, 2, 5, 10, 20$ examples per novel class. PN: prototypical networks; MN: matching networks; PMN: prototype matching networks; LogReg: logistic regression. Methods with "w/ G" use a meta-learned hallucinator.
As in prior work, we run five trials for each setting of $n$ (the number of examples per novel class) and present the average performance. Different approaches are comparably good on base classes, achieving 92% top-5 accuracy. We focus on novel classes, since they matter most in low-shot learning. Table 1 contains a summary of the top-5 accuracy for novel classes and for the joint space, both with and without a cross-validated prior. Standard deviations for all numbers are on the order of 0.2%. We discuss specific results, baselines, and ablations below.
Impact of hallucination:
We first compare meta-learners with and without hallucination to judge the impact of hallucination. We look at prototypical networks (PN) and prototype matching networks (PMN) for this comparison. Figure 4 shows the improvement in top-5 accuracy we get from hallucination on top of the original meta-learner performance. The actual numbers are shown in Table 1.
We find that our hallucination strategy improves novel class accuracy significantly, by up to 6 points for prototypical networks and 2 points for prototype matching networks. This suggests that our approach is general and can work with different meta-learners. While the improvement shrinks as more novel-category training examples become available, the gains remain significant up to fairly large values of $n$ for both prototypical networks and prototype matching networks.
Accuracy in the joint label space (right half of Figure 4) shows the same trend. However, note that the gains from hallucination decrease significantly when we cross-validate for an appropriate novel-class prior (shown in dotted lines). This suggests that part of the effect of hallucination is to provide resilience to mis-calibration. This is important in practice where it might not be possible to do extensive cross-validation; in this case, meta-learners with hallucination demonstrate significantly higher accuracy than their counterparts without hallucination.
Comparison to prior work:
Figure 5 and Table 1 compare our best approach (prototype matching networks with hallucination) with previously published approaches to low-shot learning. These include prototypical networks, matching networks, and the following baselines:
Logistic regression: This baseline simply trains a linear classifier on top of a pre-trained ConvNet-based feature extractor that was trained on the base classes.
Logistic regression with analogies: This baseline uses the procedure described by Hariharan and Girshick to hallucinate additional examples. These additional examples are added to the training set and used to train the linear classifier.
Our approach easily outperforms all baselines, providing almost a 2 point improvement across the board on the novel classes, and similar improvements in the joint label space even after allowing for cross-validation of the novel category prior. Our approach is thus state-of-the-art.
Another intriguing finding is that our proposed prototype matching network outperforms matching networks on novel classes as more novel class examples become available (Table 1). On the joint label space, prototype matching networks are better across the board.
Unpacking the performance gain:
To unpack where our performance gain is coming from, we perform a series of ablations to answer the following questions.
Are sophisticated hallucination architectures necessary?
In the semantic feature space learned by a convolutional network, a simple jittering of the training examples might be enough. We created several baseline hallucinators that performed such jittering by: (a) adding Gaussian noise with a diagonal covariance matrix estimated from feature vectors of the base classes, (b) using dropout (PN/PMN w/ Dropout), and (c) generating new examples through a weighted average of real ones (PN/PMN w/ Weighted). For the Gaussian hallucinator, we evaluated both a covariance matrix shared across classes and class-specific covariances; the shared covariance outperformed class-specific covariances by 0.7 points, and we report the better results. We tried both retraining the meta-learner with this Gaussian hallucinator and using a pre-trained meta-learner: PN/PMN w/ Gaussian uses a pre-trained meta-learner, and PN/PMN w/ Gaussian(tr) retrains the meta-learner. As shown in Figure 6, while such hallucinations sometimes help a little, they often hurt significantly, and they lag the accuracy of our approach by at least 3 points. This shows that generating useful hallucinations is not easy and requires sophisticated architectures.
Is meta-learning the hallucinator necessary?
Simply passing Gaussian noise through an untrained convolutional network can produce complex distributions. In particular, ReLU activations might ensure the hallucinations are non-negative, like the real examples. To see the impact of our training, we compared hallucinations from (a) an untrained $G$ and (b) a $G$ pre-trained on analogies and kept fixed, with our meta-trained version. Figure 6 shows the impact of these baseline hallucinators (labeled PN/PMN w/ init G and PN/PMN w/ Analogies, respectively). These baselines hurt accuracy significantly, suggesting that meta-training the hallucinator is important.
Does the hallucinator produce diverse outputs?
A persistent problem with generative models is that they fail to capture multiple modes. If this were the case here, then any one hallucination should look much like the others, and simply replicating a single hallucination should be enough. We compared our approach with: (a) a deterministic baseline that uses our trained hallucinator but a fixed noise vector (PN/PMN w/ det. G), and (b) a baseline that uses replicated hallucinations during both training and testing (PN/PMN w/ det. G(tr)). These baselines had a very small but negative effect. This suggests that our hallucinator produces useful, diverse samples.
Visualizing the learned hallucinations:
Figure 7 shows t-SNE visualizations of hallucinated examples for novel classes from our learned hallucinator and a baseline Gaussian hallucinator for prototypical networks. As before, we used statistics from the base class distribution for the Gaussian hallucinator. Note that t-SNE tends to expand out parts of the space where examples are heavily clustered together. Thus, the fact that the cloud of hallucinations for the Gaussian hallucinator is pulled away from the class distributions suggests that these hallucinations are very close to each other and far away from the rest of the class. In contrast, our hallucinator matches the class distributions more closely, and with different seed examples captures different parts of the space. Interestingly, our generated examples tend to cluster around the class boundaries. This might be an artifact of t-SNE, or perhaps a consequence of discriminative training of the hallucinator. However, our hallucinations are still fairly clustered; increasing the diversity of these hallucinations is an avenue for future work.
Representations from deeper models:
All experiments until now used a feature representation trained with the ResNet-10 architecture. The bottom half of Table 1 shows results with features from a ResNet-50 architecture. As expected, all accuracies are higher, but our hallucination strategy still provides gains on top of both prototypical networks and prototype matching networks.
7 Conclusion

In this paper, we have presented an approach to low-shot learning that uses a trained hallucinator to generate additional examples. Our hallucinator is trained end-to-end through meta-learning, and we show significant gains on top of multiple meta-learning methods. Our best proposed model achieves state-of-the-art performance on a realistic benchmark by a comfortable margin. Future work involves pinning down exactly the effect of the hallucinated examples.
Acknowledgments: We thank Liangyan Gui, Larry Zitnick, Piotr Dollár, Kaiming He, and Georgia Gkioxari for valuable and insightful discussions. This work was supported in part by ONR MURI N000141612007 and U.S. Army Research Laboratory (ARL) under the Collaborative Technology Alliance Program, Cooperative Agreement W911NF-10-2-0016. We also thank NVIDIA for donating GPUs and AWS Cloud Credits for Research program.
-  E. Bart and S. Ullman. Cross-generalization: Learning novel classes from a single example by feature replacement. In CVPR, 2005.
-  L. Bertinetto, J. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. Learning feed-forward one-shot learners. In NIPS, 2016.
-  W.-L. Chao, S. Changpinyo, B. Gong, and F. Sha. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. In ECCV, 2016.
-  Z. Dai, Z. Yang, F. Yang, W. W. Cohen, and R. Salakhutdinov. Good semi-supervised learning that requires a bad GAN. In NIPS, 2017.
-  M. Dixit, R. Kwitt, M. Niethammer, and N. Vasconcelos. AGA: Attribute-Guided Augmentation. In CVPR, 2017.
-  H. Edwards and A. Storkey. Towards a neural statistician. In ICLR, 2017.
-  L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. TPAMI, 2006.
-  M. Fink. Object classification from a single example utilizing class relevance metrics. NIPS, 2005.
-  C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
-  D. George, W. Lehrach, K. Kansky, M. Lázaro-Gredilla, C. Laan, B. Marthi, X. Lou, Z. Meng, Y. Liu, H. Wang, A. Lavin, and D. S. Phoenix. A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs. Science, 2017.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
-  B. Hariharan and R. Girshick. Low-shot visual recognition by shrinking and hallucinating features. In ICCV, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
-  D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
-  G. Koch, R. Zemel, and R. Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, 2015.
-  B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. One-shot learning by inverting a compositional causal process. In NIPS. 2013.
-  B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.
-  Q. V. Le, N. Jaitly, and G. E. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
-  E. G. Miller, N. E. Matsakis, and P. A. Viola. Learning from one example through shared densities on transforms. In CVPR, 2000.
-  A. Opelt, A. Pinz, and A. Zisserman. Incremental learning of object detectors using a visual shape alphabet. In CVPR, 2006.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
-  S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
-  D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
-  R. Salakhutdinov, J. Tenenbaum, and A. Torralba. One-shot learning with a hierarchical nonparametric Bayesian model. In Unsupervised and Transfer Learning Challenges in Machine Learning, 2012.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
-  A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, 2016.
-  L. A. Schmidt. Meaning and compositionality as statistical induction of categories and constraints. PhD thesis, Massachusetts Institute of Technology, 2009.
-  F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 2015.
-  J. Snell, K. Swersky, and R. S. Zemel. Prototypical networks for few-shot learning. In NIPS, 2017.
-  Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Web-scale training for face identification. In CVPR, 2015.
-  S. Thrun. Is learning the n-th thing any easier than learning the first? NIPS, 1996.
-  S. Thrun. Lifelong learning algorithms. Learning to learn, 8:181–209, 1998.
-  L. van der Maaten and G. Hinton. Visualizing data using t-SNE. JMLR, 9:2579–2605, 2008.
-  O. Vinyals, C. Blundell, T. P. Lillicrap, K. Kavukcuoglu, and D. Wierstra. Matching networks for one shot learning. In NIPS, 2016.
-  Y.-X. Wang and M. Hebert. Learning from small sample sets by combining unsupervised meta-training with CNNs. In NIPS, 2016.
-  Y.-X. Wang and M. Hebert. Learning to learn: Model regression networks for easy small sample learning. In ECCV, 2016.
-  Y.-X. Wang, D. Ramanan, and M. Hebert. Learning to model the tail. In NIPS, 2017.
-  A. Wong and A. L. Yuille. One shot learning via compositions of meaningful patches. In ICCV, 2015.