Diversity Transfer Network for Few-Shot Learning

12/31/2019 ∙ by Mengting Chen, et al. ∙ Horizon Robotics ∙ Huazhong University of Science & Technology

Few-shot learning is a challenging task that aims at training a classifier for unseen classes with only a few training examples. The main difficulty of few-shot learning lies in the lack of intra-class diversity within the insufficient training samples. To alleviate this problem, we propose a novel generative framework, Diversity Transfer Network (DTN), that learns to transfer latent diversities from known categories and composite them with support features to generate diverse samples for novel categories in feature space. The learning problem of sample generation (i.e., diversity transfer) is solved by minimizing an effective meta-classification loss in a single-stage network, instead of the generative loss used in previous works. In addition, an organized auxiliary task co-training scheme over known categories is proposed to stabilize the meta-training process of DTN. We perform extensive experiments and ablation studies on three datasets, i.e., miniImageNet, CIFAR100 and CUB. The results show that DTN, with single-stage training and faster convergence, obtains state-of-the-art results among feature-generation-based few-shot learning methods. Code and supplementary material are available at: https://github.com/Yuxin-CV/DTN


Introduction

Deep neural networks (DNNs) have shown tremendous success in solving many challenging real-world problems when a large amount of training data is available [10, 24, 8]. Common practice suggests that models with more parameters have greater capacity to fit data, and more training data usually provides better generalization. However, DNNs struggle to generalize given only a few training samples, while humans excel at learning new concepts from just a few examples [2]. Few-shot learning has therefore been proposed to close the performance gap between machine learners and human learners. In the canonical setting of few-shot learning, there are a training set $\mathcal{D}_{train}$ (seen, known) and a testing set $\mathcal{D}_{test}$ (unseen, novel) with disjoint categories. Models are trained on $\mathcal{D}_{train}$ and tested in an N-way K-shot scheme [27], where the model must classify a query into one of N novel categories correctly when only K samples of each novel category are given. This unique setting poses an unprecedented challenge in fully utilizing the prior information in $\mathcal{D}_{train}$, which corresponds to the known or historical information of a human learner. Common approaches to address this challenge either learn a good metric for novel tasks [25, 27, 26] or train a meta-learner for fast adaptation [5, 16, 21].

Recently, the generation-based approach has become an effective solution for few-shot learning [7, 23, 29, 32], since it directly alleviates the problem of lacking training samples. We propose a Diversity Transfer Network (DTN) for sample generation. In DTN, the offset between a random sample pair from a known category is composited with a support sample of a novel category in the latent feature space. The generated features, together with the support features, are then averaged to form the proxy of the novel category. Finally, query samples are evaluated against the proxies. Only if the generated samples follow the distribution of the real samples, i.e., are sufficiently diverse, can the meta-classifier (i.e., the proxy-based classifier) be robust enough to classify queries correctly.

In addition to the new sample generation scheme, we utilize an effective meta-training curriculum called OAT (Organized Auxiliary task co-Training), inspired by the auxiliary task co-training in TADAM [18] and curriculum learning [1]. OAT organizes auxiliary tasks and meta-tasks reasonably and effectively reduces training complexity. Experiments show that with OAT, our DTN converges much faster than with the naïve meta-training strategy (i.e., meta-training from scratch), the multi-stage training strategy used in the Δ-encoder [23], or the auxiliary task co-training strategy used in TADAM.

The main components of DTN are integrated into a single network and can be optimized in an end-to-end fashion. Thus, DTN is very simple to implement and easy to train. Our experimental results show that this simple method outperforms many previous works on a variety of datasets.

Related Work

Metric Learning Based Approaches

Metric learning is the most common and straightforward solution for few-shot learning. An embedding function is learned from a myriad of instances of known categories. Then simple metrics, such as Euclidean distance [25] and cosine distance [27, 20, 19], are used to build nearest-neighbor classifiers for instances of unseen categories. Furthermore, to model the contextual information among support images and query images, a bidirectional LSTM and an attention mechanism are adopted in Matching Network [27]. Besides measuring the distances from a query to its support images, another line of work [25, 20, 19], including Gidaris & Komodakis (2018), compares the query to the center of the support images of each class in feature space; the center is usually termed a proxy of the class. Specifically, squared Euclidean distance is used in Prototypical Network [25], and cosine distance is used in the other works. Some works [25, 19] directly calculate proxies by averaging the embedding features, while others [20], as well as Gidaris & Komodakis (2018), take a small network to predict proxies. Based on Prototypical Network, TADAM [18] further proposes a dynamically task-conditioned feature extractor by predicting layer-level element-wise scale and shift vectors for each convolutional layer. Different from simple metrics, Relation Network [26] takes a neural network as a non-linear metric and directly predicts the similarities between the query and support images. TPN [13] performs transductive learning on a similarity graph that contains both query and support images to obtain high-order similarities.

Meta-Learning Based Approaches

Meta-learning approaches have been widely used in few-shot learning scenarios, aiming to learn a meta-learner that can solve a novel task quickly through fast adaptation. Meta Network [16] and adaResNet [17] are memory-based methods. Example-level and task-level information in Meta Network are preserved in fast and slow weights, respectively. AdaResNet performs rapid adaptation via conditionally shifted neurons, which modify activation values with task-specific shifts retrieved from a memory module. An LSTM-based update rule for the parameters of a classifier is proposed in [21], where both short-term knowledge within a task and long-term knowledge common among all tasks are learned. MAML [5], LEO [22] and MT-net [11] all differentiate through gradient update steps to optimize performance after fine-tuning. While MAML operates directly in the high-dimensional parameter space, LEO performs meta-learning within a low-dimensional latent space. Different from MAML, which assumes a fixed model, MT-net chooses a subset of its weights to fine-tune. Franceschi et al. (2018) propose a method based on bi-level programming that unifies gradient-based hyper-parameter optimization and meta-learning.

Generation Based Approaches

Sample synthesis using generative models has recently emerged as a popular direction for few-shot learning [33, 6]. How to synthesize new samples based on a few examples remains an interesting open problem. AGA [4] and FATTEN [12] are attribute-guided (w.r.t. pose and depth) augmentation methods in feature space that leverage a corpus with attribute annotations. Hariharan and Girshick [7] try to transfer transformations from a pair of examples of a known category to a "seed" example of a novel class; finding specific generation targets requires a carefully designed pipeline with heuristic steps.

The Δ-encoder [23] also tries to extract intra-class deformations between image pairs sampled from the same class. Wang et al. [29] propose to generate samples by adding random noise to support features. Different from the previous augmentation methods, MetaGAN [32] generates fake samples that need to be discriminated by the classifier, which sharpens the decision boundaries of novel categories.

Our proposed DTN shares a philosophical similarity with image hallucination [7] and the Δ-encoder [23], with distinct differences in the following aspects. The first difference is that DTN does not require specific target points to be set for the generator. More specifically, the Δ-encoder takes a pair of images from the same class and learns to infer the diversity between them by reconstructing one image of the pair from the other. The image hallucination method collects quadruples for training based on clustering and traversal; each quadruple contains two image pairs from two classes, and a generation network is trained to predict one sample of the quadruple when the other three are given as input. The synthesized samples are then used to train a linear classifier. The input of the generator in DTN is also a triplet, as in [7], but the generated sample is used directly to construct the meta-classifier, and the generator is optimized by minimizing the meta-classification loss instead of matching specific generation targets. Secondly, DTN integrates feature extraction, feature generation and meta-learning into a single network and enjoys the simplicity and effectiveness of end-to-end training, while [7, 23] are stage-wise methods.

Figure 1: Illustration of the proposed Diversity Transfer Network. The branch indicated by orange arrows is the meta-task, which is trained in a meta-learning way: the solid orange arrows indicate the process of meta-training, while the dashed orange arrows indicate the process of meta-testing. During meta-training, the features of the support images and reference images produced by the feature extractor are fed into the feature generator to generate new features. The parameters of the meta-classifier are formed from the averaged proxies of the support features and generated features. The query image is then fed in to evaluate the performance of the meta-classifier during meta-testing. The branch indicated by grey arrows is the auxiliary task, which aims to accelerate convergence and improve generalization ability.

More recent works based on sample generation and data augmentation are IDeMe-Net [3] and SalNet [31]. The former utilizes an additional deformation sub-network with a large number of parameters to synthesize diverse deformed images; the latter needs to pre-train a saliency network on the MSRA-B dataset. In contrast to these approaches, our method is based on a simple diversity-transfer generator that learns a better proxy of each category with fewer parameters and faster convergence. Besides, our method can be regarded as an instance of compositional learning [30] in the latent feature space.

Method

Problem Definition

Different from the conventional classification task, where the training set and the testing set consist of samples from the same classes, few-shot learning addresses the problem where the label spaces of $\mathcal{D}_{train}$ and $\mathcal{D}_{test}$ are disjoint. We follow the standard N-way K-shot classification scenario defined in [27] to study the few-shot learning problem. An N-way K-shot task is termed an episode. An episode is formed by first sampling N classes from the training/testing set. Then K images sampled from each of the N classes constitute the support set $\mathcal{S} = \{s_{i,k}\}$, where $i = 1, \dots, N$ and $k = 1, \dots, K$. For the sake of simplicity, we take N-way 1-shot (i.e., K = 1) classification as the running example in the following sections, and the support set simplifies to $\mathcal{S} = \{s_1, \dots, s_N\}$. The query sample $q$ is sampled from the remaining images of the N classes. The goal is to classify the query into one of the N classes correctly, based only on the support set and the prior meta-knowledge learned from the training set $\mathcal{D}_{train}$.
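To make the episode protocol concrete, here is a minimal Python sketch of N-way K-shot episode sampling. The names (sample_episode, images_by_class, n_query) are illustrative, not from the paper, and the query-set size is an assumption.

```python
# A hedged sketch of N-way K-shot episode sampling, assuming the dataset is
# a dict mapping class label -> list of images (or image paths).
import random

def sample_episode(images_by_class, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode: a support set and a query set."""
    classes = random.sample(list(images_by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        images = random.sample(images_by_class[cls], k_shot + n_query)
        # The first K images form the support set, the rest are queries.
        support += [(img, episode_label) for img in images[:k_shot]]
        query += [(img, episode_label) for img in images[k_shot:]]
    return support, query

# Example: one 5-way 1-shot episode over a toy dataset.
toy = {c: [f"class{c}_img{i}" for i in range(20)] for c in range(10)}
support_set, query_set = sample_episode(toy, n_way=5, k_shot=1)
```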

An Overview of Diversity Transfer Network

The overall structure of the Diversity Transfer Network (DTN) is shown in Fig. 1. DTN contains four modules and is organized into two task branches. The branch indicated by orange arrows is the meta-task, which is trained in a meta-learning way. The input of the meta-task consists of three parts: support images $\{s_i\}_{i=1}^{N}$, a query image $q$, and reference image pairs $\{(r_{j,1}, r_{j,2})\}_{j=1}^{M}$. All images are mapped to ℓ2-normalized feature vectors by a feature extractor $F$. $f^r_{j,1}$ and $f^r_{j,2}$ are the feature vectors of two reference images; they come from the same category and make up a reference pair. The diversity of the pair is transferred to the support feature $f_i$ to generate a new feature $\hat{f}_{i,j}$ via the feature generator $G$. The generated feature is supposed to belong to the same category as $f_i$. For each support feature, M samples are generated based on it. Since a meta-task is an N-way K-shot image classification task, the meta-classifier is an N-way classifier consisting of a weight matrix $W$ and a trainable temperature τ. The values in $W$ are determined by the proxies formed from the support features and the features generated from them. The meta-classifier is differentiable, so the feature extractor $F$ and the feature generator $G$ can be updated by standard back-propagation according to a loss function defined by the cosine similarity between the query and the proxies. The branch indicated by grey arrows in Fig. 1 is the auxiliary task, which aims to accelerate and stabilize the training of DTN. It is a conventional classification task over all categories of the training set $\mathcal{D}_{train}$.

Figure 2: Feature generator in DTN. The three input features are mapped into a latent space by the mapping function $g_1$. The diversity (i.e., offset) between the reference features is then added to the support feature in this space, and the result is mapped by $g_2$ back to the same size as the inputs. The output is a generated feature that is supposed to be a sample belonging to the same category as the support feature.

Feature Generation via Diversity Transfer

Each image is mapped to a feature vector by the feature extractor $F$. $f^q$, $f_i$ and $(f^r_{j,1}, f^r_{j,2})$ are the feature vectors of the query image $q$, the support image $s_i$ and the reference image pair $(r_{j,1}, r_{j,2})$, respectively. For a specific support feature $f_i$, during both the meta-training and meta-testing phases, the reference image pairs are always sampled from the training set $\mathcal{D}_{train}$ (seen, known). Specifically, we first randomly sample M classes from the whole set of training classes with replacement. For each sampled class, we then randomly sample two different images $r_{j,1}$ and $r_{j,2}$ to form a reference pair. We never sample images from $\mathcal{D}_{test}$ (unseen, novel) during this process. The conventional few-shot evaluation setting, termed the N-way K-shot setting, requires building an N-way classifier with the support of only K samples per novel class plus the prior meta-knowledge from the whole training set $\mathcal{D}_{train}$. Therefore, our sampling method strictly complies with the few-shot learning protocol.
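As an illustration of this sampling protocol, the following sketch draws M reference pairs under the stated rules (classes sampled with replacement, two distinct images per sampled class); the function and variable names are ours, not the paper's.

```python
# Hedged sketch of reference-pair sampling from the training set only.
import random

def sample_reference_pairs(train_images_by_class, m=8):
    pairs = []
    train_classes = list(train_images_by_class)
    for _ in range(m):
        cls = random.choice(train_classes)                    # with replacement
        a, b = random.sample(train_images_by_class[cls], 2)   # two distinct images
        pairs.append((a, b))
    return pairs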

As shown in Fig. 2, the feature generator of DTN consists of two mapping functions, $g_1$ and $g_2$. The three input features are first mapped into a latent space by $g_1$. The elementwise difference $g_1(f^r_{j,1}) - g_1(f^r_{j,2})$ measures the diversity between the two reference features. It is applied to the support feature by a simple linear combination $g_1(f_i) + g_1(f^r_{j,1}) - g_1(f^r_{j,2})$. After mapping the result by $g_2$, we get a feature $\hat{f}_{i,j}$ that has the same size as the input and should belong to the same category as the support feature $f_i$. More specifically:

$$\hat{f}_{i,j} = g_2\big(g_1(f_i) + g_1(f^r_{j,1}) - g_1(f^r_{j,2})\big) \qquad (1)$$

Given M different reference pairs for a single support feature $f_i$, there will be M generated features that enrich the diversity of category $i$. They help construct a more robust classifier for unseen categories. When K > 1, each of the K support samples is taken as a "seed" and M samples are generated from it, so there are K support samples and K·M generated samples for each novel category.
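The generator of Eq. (1) can be sketched in PyTorch as below. The FC-plus-leaky-ReLU-plus-dropout structure of $g_1$ and $g_2$ follows the Architectures paragraph later in the paper; the hidden width, dropout rate, leaky-ReLU slope and the ℓ2 re-normalization of the output are our assumptions.

```python
# A minimal PyTorch sketch of the diversity-transfer generator in Eq. (1).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiversityTransferGenerator(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=64, p_drop=0.5):
        super().__init__()
        self.g1 = nn.Sequential(nn.Linear(feat_dim, hidden_dim),
                                nn.LeakyReLU(0.1), nn.Dropout(p_drop))
        self.g2 = nn.Sequential(nn.Linear(hidden_dim, feat_dim),
                                nn.LeakyReLU(0.1), nn.Dropout(p_drop))

    def forward(self, f_support, f_ref1, f_ref2):
        # Transfer the offset between the reference pair onto the support
        # feature in the latent space, then map back to feature space.
        z = self.g1(f_support) + self.g1(f_ref1) - self.g1(f_ref2)
        return F.normalize(self.g2(z), dim=-1)  # keep features unit-norm (assumption)

# One support feature, M = 4 reference pairs -> 4 generated features.
gen = DiversityTransferGenerator()
f_s = F.normalize(torch.randn(1, 64), dim=-1).expand(4, -1)
f_r1 = F.normalize(torch.randn(4, 64), dim=-1)
f_r2 = F.normalize(torch.randn(4, 64), dim=-1)
f_hat = gen(f_s, f_r1, f_r2)   # shape (4, 64)
```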

Methods Ref. Backbone 5-way 1-shot 5-way 5-shot
Matching Network [27] NeurIPS'16 64-64-64-64
Meta-Learn LSTM [21] ICLR'17 64-64-64-64
MAML [5] ICML'17 32-32-32-32
Prototypical Network [25] NeurIPS'17 64-64-64-64
Relation Network [26] CVPR'18 64-96-128-256
MT-net [11] ICML'18 64-64-64-64 -
MetaGAN [32] NeurIPS'18 64-96-128-256
Qiao et al. [20] CVPR'18 64-64-64-64
Gidaris & Komodakis CVPR'18 64-64-64-64
DTN (Ours) 64-64-64-64
Gidaris & Komodakis CVPR'18 ResNet-12
adaResNet [17] ICML'18 ResNet-12
TADAM [18] NeurIPS'18 ResNet-12
Qiao et al. [20] CVPR'18 WRN-28-10
STANet (Yan et al.) AAAI'19 ResNet-12
TPN [13] ICLR'19 ResNet-12
LEO [22] ICLR'19 WRN-28-10
Δ-encoder [23] NeurIPS'18 VGG-16
IDeMe-Net [3] CVPR'19 ResNet-18
SalNet Intra-class Hal. [31] CVPR'19 ResNet-101
Deep DTN (Ours) ResNet-12
Footnotes: generation-based approaches; using a deformation sub-network; using a saliency network pre-trained on MSRA-B.
Table 1: Few-shot image classification accuracies on miniImageNet. '-': not reported.

Meta-Learning Based on Averaged Proxies

The meta-task branch of DTN is shown in Fig. 1, indicated by orange arrows. The orange solid arrows and dashed arrows indicate the processes of meta-training and meta-testing, respectively. Each image is mapped to a feature vector by $F$. Similar to [20, 19] and Gidaris & Komodakis (2018), all the features here are ℓ2-normalized vectors. The support feature $f_i$ and all the reference feature pairs are fed into the generator $G$ to generate M new features, so we get M + 1 features for the i-th category. The meta-task is an N-way classification task, so the meta-classifier is represented by a matrix $W$, in which each row $w_i$ can be viewed as a proxy [15] of the i-th category. After obtaining all the features for category i, the i-th row of $W$, termed the averaged proxy, is the ℓ2-normalized average of those features:

$$\bar{w}_i = \frac{1}{M+1}\Big(f_i + \sum_{j=1}^{M} \hat{f}_{i,j}\Big) \qquad (2)$$
$$w_i = \bar{w}_i \,\big/\, \lVert \bar{w}_i \rVert_2 \qquad (3)$$

All the averaged proxies are ℓ2-normalized vectors, so the meta-classifier essentially becomes a cosine-similarity based classification model. After constructing the meta-classifier, the ℓ2-normalized query feature $f^q$ is fed into it for evaluation. The prediction is the combination of the classification scores of each category. To further increase stability and robustness when dealing with a large number of categories, we adopt a learnable temperature τ in our meta-task loss, as in [19], where τ is updated by back-propagation during training. The meta-task loss is defined as follows:

$$\mathcal{L}_{meta} = -\log \frac{\exp\!\big(\tau\,\langle f^q, w_y\rangle\big)}{\sum_{i=1}^{N} \exp\!\big(\tau\,\langle f^q, w_i\rangle\big)} \qquad (4)$$
where y is the category label of the query and ⟨·, ·⟩ denotes the inner product, which equals the cosine similarity for unit-norm vectors.
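Eqs. (2)-(4) can be combined into a short PyTorch sketch that builds the averaged proxies and scores a query by temperature-scaled cosine similarity. The tensor layout, and the generalization to K > 1 support samples per class, are our choices.

```python
# Hedged sketch of Eqs. (2)-(4). All inputs are assumed L2-normalized.
import torch
import torch.nn.functional as F

def averaged_proxies(support_feats, generated_feats):
    """support_feats: (N, K, d); generated_feats: (N, K, M, d) -> (N, d)."""
    n, k, m, d = generated_feats.shape
    pooled = torch.cat([support_feats,
                        generated_feats.reshape(n, k * m, d)], dim=1)
    return F.normalize(pooled.mean(dim=1), dim=-1)       # Eqs. (2) and (3)

def meta_loss(query_feats, query_labels, proxies, tau):
    # Cosine similarity reduces to a dot product for unit-norm vectors.
    logits = tau * query_feats @ proxies.t()             # (Q, N)
    return F.cross_entropy(logits, query_labels)         # Eq. (4)

# Toy 5-way 1-shot example with M = 4 generated features per support sample.
N, K, M, d = 5, 1, 4, 64
sup = F.normalize(torch.randn(N, K, d), dim=-1)
gen = F.normalize(torch.randn(N, K, M, d), dim=-1)
proxies = averaged_proxies(sup, gen)
queries = F.normalize(torch.randn(10, d), dim=-1)
loss = meta_loss(queries, torch.randint(0, N, (10,)), proxies, tau=10.0)
```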

Organized Auxiliary Task Co-training

In order to accelerate convergence and achieve better generalization, the meta-learning network in DTN is jointly trained with an auxiliary task. The auxiliary task is conventional classification over all categories in $\mathcal{D}_{train}$. It shares the feature extractor $F$ with the meta-task branch. Different from the meta-classifier $W$, which consists of the averaged proxies, the auxiliary classifier $W_a$ after the feature extractor is randomly initialized and updated via back-propagation. The mini-batch $\{(x_b, y_b)\}_{b=1}^{B}$ is randomly sampled from the training set $\mathcal{D}_{train}$, where B is the batch size. The auxiliary task loss $\mathcal{L}_{aux}$ has the same form as the meta-task loss $\mathcal{L}_{meta}$:

$$\mathcal{L}_{aux} = -\log \frac{\exp\!\big(\tau_a\,\langle f_b, w^a_{y_b}\rangle\big)}{\sum_{c} \exp\!\big(\tau_a\,\langle f_b, w^a_c\rangle\big)} \qquad (5)$$

where $f_b$ is the feature of one of the training samples in the mini-batch, $y_b$ is its label, $w^a_c$ is the c-th row of $W_a$, and the temperature $\tau_a$ is learnable.
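A minimal sketch of this auxiliary head under the same cosine-classifier convention follows; initializing the learnable temperature to 10 is an assumption, and the 64 training classes match the miniImageNet split used later.

```python
# Hedged sketch of the auxiliary cosine classifier: a randomly initialized
# weight matrix over all training classes, sharing the feature extractor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryCosineHead(nn.Module):
    def __init__(self, feat_dim=64, num_train_classes=64):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_train_classes, feat_dim))
        self.tau = nn.Parameter(torch.tensor(10.0))  # learnable temperature

    def forward(self, feats):
        # Normalize both sides so the logits are scaled cosine similarities.
        w = F.normalize(self.weight, dim=-1)
        return self.tau * F.normalize(feats, dim=-1) @ w.t()

logits = AuxiliaryCosineHead()(torch.randn(8, 64))
loss = F.cross_entropy(logits, torch.randint(0, 64, (8,)))  # Eq. (5)
```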

In TADAM [18], the auxiliary task is sampled with a probability that is annealed exponentially. We observe some positive effects from this training strategy compared with naïve meta-training and multi-stage training in our DTN. However, this approach has a shortcoming: the randomness in both the frequency and the order of the two tasks affects the final result to some extent, and the realized distribution of auxiliary tasks is unpredictable rather than annealed exponentially, especially when the number of training epochs is not very large. Another problem brought by the randomness is that it is hard to determine the training schedule, e.g., the learning rate and the number of training epochs, since the permutation of auxiliary tasks and meta-tasks varies with the random seed. We empirically find that the stochastic auxiliary task co-training strategy used in TADAM results in a large fluctuation in meta-classification accuracy across random seeds (see Table 3 for details). This randomness makes the choice of hyperparameters, as well as the training schedule, more difficult.

Therefore, we propose the OAT (Organized Auxiliary task co-Training) strategy, which organizes auxiliary tasks and meta-tasks in a more orderly and more reasonable manner. More specifically, there are two kinds of training epochs: meta-training epochs (denoted M) and auxiliary training epochs (denoted A). We group u consecutive training epochs into one training unit; the i-th training unit has $m_i$ meta-training epochs and $u - m_i$ auxiliary training epochs. The array of $m_i$ is denoted $(m_1, \dots, m_T)$, where T is the total number of training units, so the total number of training epochs is $uT$. The whole training sequence can be expressed as follows:

$$S = \big[\,\underbrace{M \cdots M}_{m_1}\,\underbrace{A \cdots A}_{u-m_1},\ \underbrace{M \cdots M}_{m_2}\,\underbrace{A \cdots A}_{u-m_2},\ \ldots,\ \underbrace{M \cdots M}_{m_T}\,\underbrace{A \cdots A}_{u-m_T}\,\big] \qquad (6)$$

By changing u and $m_i$, we can obtain training sequences with different frequencies and orders of the two tasks, which proves more manageable and effective than the training strategy used in TADAM. Intuitively, we would like to gradually add harder few-shot classification tasks into a series of simpler auxiliary classification tasks, so the setting of u and $m_i$ is quite simple and straightforward: we choose u = 5 and m = (0, 0, 4, 4, 3, 3) for training DTN, though a more careful schedule might achieve better performance. The whole training sequence is therefore organized as follows:

$$S = \big[\,\underbrace{A \cdots A}_{10},\ \underbrace{M \cdots M}_{4},\ A,\ \underbrace{M \cdots M}_{4},\ A,\ \underbrace{M \cdots M}_{3},\ A, A,\ \underbrace{M \cdots M}_{3},\ A, A\,\big] \qquad (7)$$
Methods CIFAR100 CUB
Nearest neighbor
Meta-Learn LSTM [21] -
Matching Network [27]
MAML [5]
Δ-encoder [23]
Deep DTN (Ours)
Table 2: The 5-way 1-shot / 5-way 5-shot image classification accuracies on CIFAR100 and CUB. '-': not reported.

Initially, the auxiliary tasks can be considered a simpler curriculum [1]; later, they bring regularization effects to the meta-tasks. Ablation studies show that, compared with the training strategy used in TADAM, DTN trained with OAT obtains better and more robust results with faster convergence.
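A small sketch of how an OAT epoch sequence can be built from u and $(m_1, \dots, m_T)$ per Eq. (6) follows. The schedule below reproduces the 30-epoch OAT sequence reported in Table 3; treat both the within-unit ordering and the concrete values as our reconstruction, not verbatim from the paper.

```python
# Sketch of the OAT schedule of Eq. (6): the i-th unit of u epochs contains
# m_i meta-training epochs ('M') followed by u - m_i auxiliary epochs ('A').
def oat_schedule(u, m):
    epochs = []
    for m_i in m:
        assert 0 <= m_i <= u
        epochs += ['M'] * m_i + ['A'] * (u - m_i)
    return epochs

schedule = oat_schedule(u=5, m=[0, 0, 4, 4, 3, 3])   # reconstructed values
print(''.join(schedule))
# AAAAAAAAAAMMMMAMMMMAMMMAAMMMAA  -> runs 10-4-1-4-1-3-2-3-2, as in Table 3
```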

Figure 3: Visualization of generated samples, support samples, and real samples. The light dots indicate real samples; the shapes (circle, square, triangle, pentagon and diamond) with black borders indicate support samples, which are also real samples; and the shapes without borders indicate generated samples. Several samples are generated for each support sample. The top row shows the results of 5-way 1-shot learning and the bottom row the results of 5-way 5-shot learning. The data in the left two columns are from the training set, and the data in the right two columns are from the testing set.

Experiments

Implementation Details

Dataset. The proposed method is evaluated on three datasets: miniImageNet, CIFAR100 and CUB. The miniImageNet dataset has been widely used in few-shot learning since it was first proposed in [27]. It contains 64, 16 and 20 classes for training, validation and testing, respectively. The hyper-parameters are optimized on the validation set, after which the validation set is merged into the training set to produce the final results. The CIFAR100 dataset [9] contains 60000 images of 100 classes. We use 64, 16, and 20 classes for training, validation, and testing, respectively. The CUB dataset [28] is a fine-grained dataset of 200 bird categories. It is divided into training, validation, and testing sets with 100, 50, and 50 categories, respectively. The splits of CIFAR100 and CUB follow [23].

Architectures. The feature extractor $F$ of DTN is a CNN with four convolutional modules. Each module contains a 3×3 convolutional layer with 64 channels, followed by a batch normalization (BN) layer, a ReLU non-linearity and a 2×2 max-pooling layer. This feature extractor is the same as those in former methods, e.g., [27, 25], for fair comparison. Many other works use deeper networks for feature extraction to achieve better accuracy, e.g., [17, 18]. To compare with them, we also implement our algorithm with the ResNet-12 architecture [8]. The output of the feature extractor is a fixed-length feature vector. The mapping function $g_1$ in the feature generator is a fully-connected (FC) layer followed by a leaky ReLU activation and a dropout layer. The mapping function $g_2$ has the same settings as $g_1$, except that the number of units of its FC layer matches the input feature dimension.
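For concreteness, here is a hedged PyTorch sketch of the 4-CONV backbone described above. The 3×3 kernels and 2×2 pooling follow the standard 4-CONV few-shot backbone; the global average pooling at the end, producing a 64-d vector, is our assumption about the exact head.

```python
# A hedged sketch of the 4-module CONV feature extractor F: four blocks of
# 3x3 conv (64 channels) + BN + ReLU + 2x2 max-pooling, then pooling,
# flattening and L2 normalization.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch=64):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
                         nn.MaxPool2d(2))

class FourConvExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.Sequential(conv_block(3), conv_block(64),
                                    conv_block(64), conv_block(64))

    def forward(self, x):
        h = self.blocks(x)                          # (B, 64, H/16, W/16)
        h = F.adaptive_avg_pool2d(h, 1).flatten(1)  # (B, 64), assumed head
        return F.normalize(h, dim=-1)               # L2-normalized features

feats = FourConvExtractor()(torch.randn(2, 3, 84, 84))  # (2, 64)
```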

Random seed | Training sequence of AT (30 training epochs in total) | Results of AT (5-way 1-shot / 5-way 5-shot) | Training sequence of OAT (30 training epochs in total) | Results of OAT (5-way 1-shot / 5-way 5-shot)
Seed 1 | 13-1-2-1-1-1-2-1-2-1-5 | | 10-4-1-4-1-3-2-3-2 (same for all seeds) |
Seed 2 | 11-1-7-1-7-3 | | |
Seed 3 | 13-1-9-1-4-2 | | |
Seed 4 | 18-1-2-1-1-1-2-2-1-1 | | |
Seed 5 | 14-1-4-1-3-2-3-2 | | |
Table 3: Ablation studies on the fluctuation of the results obtained by AT and OAT on miniImageNet. AT: auxiliary task co-training strategy used in TADAM. OAT: organized auxiliary task co-training. A training sequence is represented by the lengths of its alternating runs of auxiliary and meta-training epochs, following the notation introduced in the previous section; the training sequence of AT is completely determined by the random seed, whereas the OAT sequence is fixed. The results show that, compared with the AT strategy, the model trained by OAT obtains better and more robust results with far smaller fluctuation across seeds.
Methods 5-way 1-shot 5-way 5-shot
1 Gaussian noise generator
2 Δ-encoder†
3 DTN w/ two-stage training
4 DTN w/ OAT (Ours)

†Our reimplementation, which outperforms the original.

Table 4: Ablation studies for different feature generators on miniImageNet.
Methods 5-way 1-shot 5-way 5-shot
1 DTN w/ naïve meta-training
2 DTN w/ two-stage training
3 DTN w/ AT
4 DTN w/ OAT (Ours)
Table 5: Ablation studies for different training strategies on miniImageNet. AT: auxiliary task co-training strategy used in TADAM. OAT: organized auxiliary task co-training.

Results

Quantitative Results. Table 1 provides comparative results on the miniImageNet dataset. All results are reported with confidence intervals following the setting in [27]. Under the 4-CONV feature extractor setting, our approach significantly outperforms the previous state-of-the-art works, especially in the 5-way 1-shot task. As for comparisons with models using deep feature extractors, deep DTN also surpasses the other alternatives in the 5-way 1-shot scenario and achieves very competitive results under the 5-way 5-shot setting. These results confirm that our feature generation method is extremely useful for learning with scarce data, i.e., the 5-way 1-shot scenario. DTN is also one of the simplest and most lightweight feature generation methods: it learns to enrich intra-class diversity and does not rely on any extra information from other datasets, such as the salient-object information in [31].

Table 2 shows that DTN also achieves large improvements on the CIFAR100 and CUB datasets compared with existing state-of-the-art methods in both the 5-way 1-shot and 5-way 5-shot tasks, which confirms that DTN is generally useful across different few-shot learning scenarios.

Visualization Results. To better understand the results, Fig. 3 shows t-SNE [14] visualizations of generated samples, support samples and real samples. Our method can greatly enrich the diversity of an unseen class when only a single or a few support examples are given. Most of the generated samples fit the distribution of the real samples, which means that the category information of each support sample is well preserved by the generated samples, and they stay close to the center of the real distribution even when the support sample lies on the edge. From the diagrams, it can also be seen that the features generated from the support samples can cover the major distribution of the real samples, which helps build a more robust classifier for unseen classes.
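A visualization like Fig. 3 can be produced with an off-the-shelf t-SNE, for example as in this sketch; random arrays stand in for the real, support, and generated features, and the styling is illustrative.

```python
# Minimal sketch of the t-SNE visualization, assuming feature arrays of
# shape (n, d) are available as numpy arrays.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

real = np.random.randn(200, 64)        # placeholder real-sample features
support = np.random.randn(5, 64)       # placeholder support features
generated = np.random.randn(5 * 8, 64) # placeholder generated features

emb = TSNE(n_components=2, random_state=0).fit_transform(
    np.vstack([real, support, generated]))
r, s = len(real), len(real) + len(support)
plt.scatter(emb[:r, 0], emb[:r, 1], s=8, alpha=0.3, label='real')
plt.scatter(emb[r:s, 0], emb[r:s, 1], marker='s', edgecolors='k', label='support')
plt.scatter(emb[s:, 0], emb[s:, 1], marker='^', label='generated')
plt.legend(); plt.show()
```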

Ablation Study

In this section, we study the impact of the feature generator, the training strategy and the number of generated features. We conduct the following ablation studies with the deep feature extractor on miniImageNet. All results are summarized in Tables 3, 4, 5 and 6.

First, we compare different feature generators. For fairness, we use exactly the same meta-classifier (a cosine-similarity based classifier) and the same training strategy (two-stage training); only the feature generators differ. All models are trained until convergence. Experiments show that the diversity transfer generator outperforms the Gaussian-noise seeded generator in both the 5-way 1-shot and 5-way 5-shot settings (Table 4, Rows 1 and 3), as well as the Δ-encoder (Table 4, Rows 2 and 3).

Second, we study the effects of different training strategies. OAT (Table 5, Row 4) clearly surpasses naïve meta-training (Table 5, Row 1) and two-stage training (Table 5, Row 2). As mentioned before, a large fluctuation in meta-classification accuracy is observed when DTN is trained with the auxiliary task sampled via a probability over 30 epochs (see Table 3 for details). The result becomes better and more stable if the total number of training epochs is increased to 60 (Table 5, Row 3), but it is still worse than the result obtained by DTN trained with OAT for only 30 epochs (Table 5, Row 4). A comparison of the fluctuation of results under the two training strategies is detailed in Table 3.

Finally, in Table 6 we study the impact of the number of generated features. The results gradually improve as the number of generated features increases, and no further improvement is observed once the number of generated features exceeds 64. We attribute this to the fact that 64 generated features already fit the real sample distribution well.

Number of generated features | 5-way 1-shot | 5-way 5-shot
Table 6: Ablation studies for DTN trained with different numbers of generated features on miniImageNet. Numbers in "( )" are differences in meta-classification accuracy compared with the result at the reference number of generated features.

Conclusion and Future Work

In this work, we propose a novel generative model, the Diversity Transfer Network (DTN), for few-shot image recognition. It learns transferable diversity from known categories and augments unseen categories via sample generation. DTN achieves competitive performance on three benchmarks. We believe the proposed generative method can be applied to various problems challenged by scarce supervision, e.g., semi-supervised learning, active learning and imitation learning. We will explore these interesting research directions in future work.

Acknowledgment

This work was supported by National Natural Science Foundation of China (NSFC) (No. 61733007, No. 61572207 and No. 61876212), National Key R&D Program of China (No. 2018YFB1402600) and HUST-Horizon Computer Vision Research Center.

References

  • [1] Y. Bengio, J. Louradour, R. Collobert, and J. Weston (2009) Curriculum learning. In International Conference on Machine Learning (ICML).
  • [2] P. Bloom (2000) How children learn the meanings of words.
  • [3] Z. Chen (2019) Image deformation meta-networks for one-shot learning. In Computer Vision and Pattern Recognition (CVPR).
  • [4] M. Dixit, R. Kwitt, M. Niethammer, and N. Vasconcelos (2017) AGA: Attribute-guided augmentation. In Computer Vision and Pattern Recognition (CVPR).
  • [5] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (ICML).
  • [6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Neural Information Processing Systems (NIPS).
  • [7] B. Hariharan and R. Girshick (2017) Low-shot visual recognition by shrinking and hallucinating features. In International Conference on Computer Vision (ICCV).
  • [8] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
  • [9] A. Krizhevsky and G. Hinton (2009) Learning multiple layers of features from tiny images. Technical report.
  • [10] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Neural Information Processing Systems (NIPS).
  • [11] Y. Lee and S. Choi (2018) Gradient-based meta-learning with learned layerwise metric and subspace. In International Conference on Machine Learning (ICML).
  • [12] B. Liu, X. Wang, M. Dixit, R. Kwitt, and N. Vasconcelos (2018) Feature space transfer for data augmentation. In Computer Vision and Pattern Recognition (CVPR).
  • [13] Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. J. Hwang, and Y. Yang (2019) Learning to propagate labels: Transductive propagation network for few-shot learning. In International Conference on Learning Representations (ICLR).
  • [14] L. van der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9, pp. 2579–2605.
  • [15] Y. Movshovitz-Attias, A. Toshev, T. K. Leung, S. Ioffe, and S. Singh (2017) No fuss distance metric learning using proxies. In International Conference on Computer Vision (ICCV).
  • [16] T. Munkhdalai and H. Yu (2017) Meta networks. In International Conference on Machine Learning (ICML).
  • [17] T. Munkhdalai, X. Yuan, S. Mehri, and A. Trischler (2018) Rapid adaptation with conditionally shifted neurons. In International Conference on Machine Learning (ICML).
  • [18] B. Oreshkin, P. Rodríguez López, and A. Lacoste (2018) TADAM: Task dependent adaptive metric for improved few-shot learning. In Neural Information Processing Systems (NIPS).
  • [19] H. Qi, M. Brown, and D. G. Lowe (2018) Low-shot learning with imprinted weights. In Computer Vision and Pattern Recognition (CVPR).
  • [20] S. Qiao, C. Liu, W. Shen, and A. L. Yuille (2018) Few-shot image recognition by predicting parameters from activations. In Computer Vision and Pattern Recognition (CVPR).
  • [21] S. Ravi and H. Larochelle (2017) Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR).
  • [22] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell (2019) Meta-learning with latent embedding optimization. In International Conference on Learning Representations (ICLR).
  • [23] E. Schwartz, L. Karlinsky, J. Shtok, S. Harary, M. Marder, A. Kumar, R. Feris, R. Giryes, and A. Bronstein (2018) Delta-encoder: An effective sample synthesis method for few-shot object recognition. In Neural Information Processing Systems (NIPS).
  • [24] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556.
  • [25] J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In Neural Information Processing Systems (NIPS).
  • [26] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales (2018) Learning to compare: Relation network for few-shot learning. In Computer Vision and Pattern Recognition (CVPR).
  • [27] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. (2016) Matching networks for one shot learning. In Neural Information Processing Systems (NIPS).
  • [28] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie (2011) The Caltech-UCSD Birds-200-2011 dataset. Technical report.
  • [29] Y. Wang, R. Girshick, M. Hebert, and B. Hariharan (2018) Low-shot learning from imaginary data. In Computer Vision and Pattern Recognition (CVPR).
  • [30] A. L. Yuille (2011) Towards a theory of compositional learning and encoding of objects. In ICCV Workshops, pp. 1448–1455.
  • [31] H. Zhang, J. Zhang, and P. Koniusz (2019) Few-shot learning via saliency-guided hallucination of samples. In Computer Vision and Pattern Recognition (CVPR).
  • [32] R. Zhang, T. Che, Z. Ghahramani, Y. Bengio, and Y. Song (2018) MetaGAN: An adversarial approach to few-shot learning. In Neural Information Processing Systems (NIPS).
  • [33] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In International Conference on Computer Vision (ICCV).