Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models

06/18/2020 · Siavash Khodadadeh et al., University of Central Florida

Unsupervised meta-learning approaches rely on synthetic meta-tasks that are created using techniques such as random selection, clustering and/or augmentation. Unfortunately, clustering and augmentation are domain-dependent, and thus they require either manual tweaking or expensive learning. In this work, we describe an approach that generates meta-tasks using generative models. A critical component is a novel approach of sampling from the latent space that generates objects grouped into synthetic classes forming the training and validation data of a meta-task. We find that the proposed approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), outperforms or is competitive with current unsupervised learning baselines on few-shot classification tasks on the most widely used benchmark datasets. In addition, the approach promises to be applicable without manual tweaking over a wider range of domains than previous approaches.


1 Introduction

Meta-learning algorithms for neural networks [18, 8, 29] prepare networks to quickly adapt to unseen tasks. This is done in a meta-training phase that typically involves a large number of supervised learning tasks. Very recently, several approaches have been proposed that perform the meta-training by generating synthetic training tasks from an unsupervised dataset. This requires us to generate samples with specific pairwise information: in-class pairs of samples that are, with high likelihood, in the same class, and out-of-class pairs that are, with high likelihood, not in the same class. For instance, UMTRA [12] and AAL [1] achieve this through random selection from a domain with many classes for out-of-class pairs and through augmentation for in-class pairs. CACTUs [10] creates synthetic labels through unsupervised clustering of the domain. Unfortunately, these algorithms depend on domain-specific expertise for the appropriate clustering and augmentation techniques.

In this paper, we rely on recent advances in the field of generative models, such as variants of generative adversarial networks (GANs) and variational autoencoders (VAEs), to generate the in-class and out-of-class pairs of meta-training data. The fundamental idea of our approach is that in-class pairs are close, while out-of-class pairs are far away, in the latent space representation of the generative model. Thus, we can generate in-class pairs by interpolating between two out-of-class samples in the latent space and choosing interpolation ratios that put the new sample close to one of the endpoints. From this latent sample, the generative model creates the new in-class object. Our approach requires minimal domain-specific tweaking, and the necessary tweaks are human-comprehensible. For instance, we need to choose latent space distance thresholds that ensure that the anchor samples fall in different classes, as well as interpolation ratio thresholds that ensure that a new sample remains in the same class as the nearest endpoint. Another benefit of the approach is that it can take advantage of off-the-shelf, pre-trained generative models.

The main contributions of this paper can be summarized as follows:

  • We describe an algorithm, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), that creates training data for a downstream meta-learning algorithm starting from an unlabeled dataset by taking advantage of interpolation in the latent space of a generative model.

  • We show that on the most widely used few-shot learning datasets, LASIUM outperforms or performs competitively with other unsupervised meta-learning algorithms, significantly outperforms transfer learning in all cases, and in a number of cases approaches the performance of supervised meta-learning algorithms.

2 Related Work

Meta-learning or “learning to learn” in the field of neural networks is an umbrella term that covers a variety of techniques that involve training a neural network over the course of a meta-training phase, such that when presented with the target task, the network is able to learn it much more efficiently than an unprepared network would. Such techniques have been proposed since the 1980s [27, 2, 19, 30]. In recent years, meta-learning has seen a resurgence, through approaches that either “learn to optimize” [8, 24, 17, 20, 26, 22] or learn embedding functions in a non-parametric setting [29, 32, 25, 14]. Hybrids between these two approaches have also been proposed [31, 33].

Most approaches use labeled data during the meta-learning phase. While in some domains there is an abundance of labeled datasets, in many domains such labeled data is difficult to acquire. Unsupervised meta-learning approaches aim to learn from an unsupervised dataset from a domain similar to that of the target task. Typically these approaches generate synthetic few-shot learning tasks for the meta-learning phase through a variety of techniques. CACTUs [10] uses a progressive clustering method. UMTRA [12] utilizes the statistical diversity properties and domain-specific augmentations to generate synthetic training and validation data. AAL [1] uses augmentation of the unlabeled training set to generate the validation data. The accuracy of these approaches was shown to be comparable with, but lower than, supervised meta-learning approaches, with the advantage of requiring orders of magnitude less labeled training data. A common weakness of these approaches is that the techniques used to generate the synthetic tasks (clustering, augmentation, random sampling) are highly domain dependent.

Our proposed approach, LASIUM, takes advantage of generative models trained on the specific domain to create the in-class and out-of-class pairs of meta-training data. The most successful neural-network-based generative models in recent years are variational autoencoders (VAEs) [5] and generative adversarial networks (GANs) [9]. The implementation variants of the LASIUM algorithm described in this paper rely on the original VAE model and on two specific variations of the GAN concept, respectively. MSGAN (mode-seeking GAN) [16] aims to solve the missing mode problem of conditional GANs through a regularization term that maximizes the distance between the generated images with respect to the distance between their corresponding input latent codes. Progressive GANs [11] grow both the generator and the discriminator progressively, an approach resembling the layer-wise training of autoencoders.

3 Method

3.1 Preliminaries

We define an $N$-way, $K$-shot supervised classification task, $\mathcal{T}$, as a set composed of $N \times K$ data points $(x, y)$ such that there are exactly $K$ samples for each categorical label $y \in \{1, \ldots, N\}$. During meta-learning, an additional validation set, $\mathcal{T}_{val}$, is attached to each task, containing another $N \times K_{val}$ data points separate from the ones in $\mathcal{T}$. We have exactly $K_{val}$ samples for each class in $\mathcal{T}_{val}$ as well.

It is straightforward to package $N$-way, $K$-shot tasks with $\mathcal{T}$ and $\mathcal{T}_{val}$ from a labeled dataset. However, in the unsupervised meta-learning setting, a key challenge is how to automatically construct tasks from the unlabeled dataset $\mathcal{D}$.
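To make the labeled-case packaging concrete, the following minimal sketch builds one such task from a labeled dataset; the function name and data layout are our own illustration, not from the paper.

```python
# A minimal sketch (names and data layout are our assumptions): packaging an
# N-way, K-shot task plus its validation set from a labeled dataset.
import random
from collections import defaultdict

def make_task(labeled_data, n_way=5, k_shot=1, k_val=15):
    """labeled_data: iterable of (x, y) pairs drawn from many classes."""
    by_class = defaultdict(list)
    for x, y in labeled_data:
        by_class[y].append(x)
    classes = random.sample(sorted(by_class), n_way)   # pick N classes
    train, val = [], []
    for task_label, c in enumerate(classes):
        xs = random.sample(by_class[c], k_shot + k_val)
        # the original labels are discarded; only the task-local label matters
        train += [(x, task_label) for x in xs[:k_shot]]
        val += [(x, task_label) for x in xs[k_shot:]]
    return train, val
```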

3.2 Generating meta-tasks using generative models

We have seen that in order to generate the training data for the meta-learning phase, we need to generate $N$-way training tasks with $K$ training and $K_{val}$ validation samples. The label associated with the classes in these tasks is not relevant, as it will be discarded after the meta-learning phase. Our objective is simply to generate samples $x_i^{(j)}$ with $i \in \{1 \ldots N\}$ and $j \in \{1 \ldots K + K_{val}\}$ with the following properties: (a) all the samples are different; (b) any two samples with the same index $i$ are in-class samples; and (c) any two samples with different indices $i$ are out-of-class samples. In the absence of human-provided labels, the class structure of the domain is defined only implicitly by the sample selection procedure. Previous approaches to unsupervised meta-learning chose samples directly from the training data $\mathcal{D}$, or created new samples through augmentation. For instance, we can define the class structure of the domain by assuming that certain types of augmentations keep the samples in-class with the original sample. One challenge of such approaches is that the choice of the augmentation is domain dependent, and the augmentation itself can be a complex mathematical operation.

In this paper, we approach the sample selection problem differently. Instead of sampling from $\mathcal{D}$, we use the unsupervised dataset to train a generative model $G$. Generative models represent the full probability distribution of a domain and allow us to sample new instances from that distribution. For many models, this sampling process can be a computationally expensive, iterative process. Many successful neural-network-based generative models use the reparametrization trick for training and sampling, which concentrates the random component of the model in a latent representation $z$. By choosing the latent representation $z$ from a simple (uniform or normal) distribution, we can obtain a sample from the complex distribution by passing $z$ through a deterministic generator $G(z)$. Two of the most popular generative models, variational autoencoders (VAEs) and generative adversarial networks (GANs), follow this model.

The idea of the LASIUM algorithm is that, given a generator component $G$, nearby latent space values $z_i$ and $z_j$ map to in-class samples $G(z_i)$ and $G(z_j)$. Conversely, $z_i$ and $z_j$ values that are far away from each other map to out-of-class samples. Naturally, we still need to define what we mean by “near” and “far” in the latent space and how to choose the corresponding values. However, this is a significantly simpler task than, for instance, defining the set of complex augmentations that might retain class membership.


Figure 1: $N$-way, $K$-shot task generation with $K_{val}$ images for validation by a pre-trained GAN generator $G$. a) Sample $N$ random latent vectors. b) Generate new vectors by one of the proposed in-class sampling strategies. c) Generate images from all of the latent vectors and split them into a train and a validation set to construct a task. The images in this figure have been generated by our algorithm. The colored edge of each image indicates that it was generated from its corresponding latent vector.



Figure 2: $N$-way, $K$-shot task generation by a VAE on the Omniglot dataset with $K_{val}$ images for the validation set of each task. a) Sample $N$ images from the dataset. b) Encode the images into the latent space and check that their representations are sufficiently distant from each other. c) Use the proposed in-class sampling techniques to generate new latent vectors. d) Generate images from the latent vectors and put them, alongside the images sampled in step a, into the train and validation set to construct a task.

Training a generative model Our method for generating meta-tasks is agnostic to the choice of training algorithm for the generative model and can use either a VAE or a GAN with minimal adjustments. In our VAE experiments, we used a network trained with the standard VAE training algorithm [5]. For the experiments with GANs, we used two different methods: mode-seeking GANs (MSGAN) [16] and progressive growing of GANs (ProGAN) [11].

Algorithm 1 describes the steps of our method. We will delve into each step in the following parts of this section.

require: Unlabeled dataset $\mathcal{D}$, pre-trained generator $G$, in-class sampling policy $P$
require: $K$, $K_{val}$: number of samples for train and validation during meta-learning
require: $N$: class count, $B$: meta-batch size
$\mathcal{B} \leftarrow \emptyset$ ;  // meta-batch of tasks
for $i$ in $1 \ldots B$ do
      Sample $N$ class-vectors in the latent space of $G$ and add them to task-vectors
      for $j$ in $1 \ldots K + K_{val} - 1$ do
            Generate $N$ new-vectors $= P(\text{class-vectors}, j)$ and add them to task-vectors
      end for
      Generate images by feeding task-vectors to generator $G$
      Construct task $\mathcal{T}_i$ by putting the first $N \times K$ images in the task train set and the last $N \times K_{val}$ images in the task validation set; add $\mathcal{T}_i$ to $\mathcal{B}$
end for
return $\mathcal{B}$
Algorithm 1: LASIUM for unsupervised meta-learning task generation
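As a concrete, simplified rendering of Algorithm 1, the Python sketch below builds a meta-batch of tasks; it assumes a generator `G` mapping a latent vector to an image, an in-class sampling policy `P` as defined below, and an anchor-sampling routine (one possibility is sketched in the next paragraph). All function names are ours.

```python
# A sketch of Algorithm 1 (names are our assumptions): build a meta-batch of
# N-way tasks from a pre-trained generator G and an in-class sampling policy P.
def lasium_meta_batch(G, P, sample_anchors, n_way, k, k_val, batch_size, z_dim):
    tasks = []
    for _ in range(batch_size):
        anchors = sample_anchors(n_way, z_dim)   # N mutually distant class-vectors
        groups = [[z] for z in anchors]          # one vector group per class
        for j in range(k + k_val - 1):           # in-class companions per anchor
            for i, z_new in enumerate(P(anchors, j)):
                groups[i].append(z_new)
        images = [[G(z) for z in group] for group in groups]
        train = [(imgs[:k], label) for label, imgs in enumerate(images)]
        val = [(imgs[k:], label) for label, imgs in enumerate(images)]
        tasks.append((train, val))
    return tasks
```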

Sampling out-of-class instances from the latent space representation: Our sampling techniques differ slightly depending on whether we are using a GAN or a VAE. For a GAN, we use rejection sampling to find latent space vectors that are at a pairwise distance of at least a threshold - see Figure 1(a). When using a VAE, we also have an encoder network that allows us to map from the domain to the latent space. Taking advantage of this, we can additionally sample data points from our unlabeled dataset and embed them into the latent space. If the latent space representations of these images are too close to each other, we re-sample; otherwise, we use the images and their representations and continue the following steps exactly as in the GAN case - see Figure 2(a) and (b). We will refer to the vectors selected here as anchor vectors.
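A sketch of the GAN-case rejection sampling follows; the Euclidean metric, the threshold value, and the retry cap are our assumptions, not values from the paper.

```python
# Sketch of anchor selection by rejection sampling: draw N latent vectors and
# accept them only if every pairwise distance exceeds a threshold.
import numpy as np

def sample_anchors(n_way, z_dim, threshold=3.0, max_tries=1000):
    for _ in range(max_tries):
        z = np.random.normal(size=(n_way, z_dim))
        dists = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        if dists[~np.eye(n_way, dtype=bool)].min() > threshold:
            return z
    raise RuntimeError("no sufficiently distant anchors found; lower the threshold")
```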

Generating in-class latent space vectors Next, having sampled $N$ anchor vectors from the latent space representation, we aim to generate new vectors from the latent space such that each generated image belongs to the same class as $G(z_i)$ for the corresponding $i \in \{1 \ldots N\}$. This process needs to be repeated $K + K_{val} - 1$ times.

The sampling strategy $P$ takes as input the sampled anchor vectors $z_1, \ldots, z_N$ and an index $j$, and returns $N$ new vectors $z_1^{(j)}, \ldots, z_N^{(j)}$ such that $G(z_i)$ and $G(z_i^{(j)})$ are an in-class pair for each $i$. This ensures that no two groups belong to the same class and creates $N$ groups of $K + K_{val}$ vectors in our latent space. We feed these vectors to our generator to get $N$ groups of images. From each group, we pick the first $K$ images for the train set and the last $K_{val}$ for the validation set.

What remains is to define the strategy to sample the individual in-class vectors. We propose three different sampling strategies, all of which can be seen as variations of the idea of latent space interpolation sampling. This motivates the name of the algorithm LAtent Space Interpolation Unsupervised Meta-learning (LASIUM).


Figure 3: Latent space representation visualization of proposed strategies for generating in-class candidates. Left: LASIUM-N, adding random noise to the sample vector. Middle: LASIUM-RO, interpolate with random out-of-class samples. Right: LASIUM-OC, interpolate with other classes’ samples.

LASIUM-N (adding Noise): This technique generates in-class samples by adding Gaussian noise to the anchor vector: $z_i^{(j)} = z_i + \epsilon^{(j)}$, where $\epsilon^{(j)} \sim \mathcal{N}(0, \sigma^2 I)$ (see Figure 3-Left). In the context of LASIUM, we can see this as an interpolation between the anchor vector and a noise vector, with the interpolation factor determined by $\sigma$. For the impact of different choices of $\sigma$, see the ablation study in Section 4.6.

LASIUM-RO (with Random Out-of-class samples) To generate a new in-class sample for anchor vector $z_i$, we first find a random out-of-class sample $v^{(j)}$, and choose an interpolated version closer to the anchor: $z_i^{(j)} = z_i + \alpha (v^{(j)} - z_i)$ (see Figure 3-Middle). Here, $\alpha$ is a hyperparameter which can be tuned to define the size of the class. As we are in a comparatively high-dimensional latent space (in our case, 512 dimensions), we need relatively large values of $\alpha$, such as $\alpha = 0.4$, to define classes of reasonable size. This model effectively allows us to define complex augmentations (such as a person seen without glasses, or in changed lighting) with only one scalar hyperparameter to tune. By interpolating towards another sample we ensure that we are staying on the manifold that defines the dataset (in the case of Figure 3, human faces).

LASIUM-OC (with Other Classes’ samples) This technique is similar to LASIUM-RO, but instead of interpolating towards a randomly generated out-of-class vector, we interpolate towards vectors already chosen for the other classes in the same task (see Figure 3-Right). This confines the selection of the samples to the convex hull defined by the initial anchor points. The intuition behind this approach is that choosing the samples this way focuses the attention of the meta-learner on the hard-to-distinguish samples that lie between the classes of the few-shot learning task (e.g., they share certain attributes).
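The three strategies can be sketched as below; the exact interpolation formulas and the anchor pairing used for LASIUM-OC reflect our reading of the descriptions above, with hyperparameter values taken from the ablation study (Table 7).

```python
# Sketches of the three in-class sampling policies; `anchors` is an (N, z_dim)
# array, and each function returns one new in-class vector per anchor.
import numpy as np

def lasium_n(anchors, sigma=1.0):
    # LASIUM-N: add Gaussian noise to each anchor vector
    return anchors + np.random.normal(scale=sigma, size=anchors.shape)

def lasium_ro(anchors, alpha=0.4):
    # LASIUM-RO: interpolate each anchor towards a fresh out-of-class vector
    v = np.random.normal(size=anchors.shape)
    return anchors + alpha * (v - anchors)

def lasium_oc(anchors, alpha=0.4):
    # LASIUM-OC: interpolate each anchor towards another anchor of the task
    partners = np.roll(anchors, shift=1, axis=0)  # one simple pairing choice
    return anchors + alpha * (partners - anchors)
```

A policy $P$ in the sense of Algorithm 1 is then obtained by calling one of these functions once per repetition, e.g. `P = lambda anchors, j: lasium_ro(anchors)`.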

4 Experiments

We tested the proposed algorithms on four few-shot learning benchmarks: (a) the $N$-way Omniglot [13], a benchmark for few-shot handwritten character recognition; (b) the $N$-way CelebA few-shot identity recognition; and (c) the CelebA attributes dataset [15], proposed as a few-shot learning benchmark by [10], which comprises binary classification (2-way) tasks in which each task is defined by selecting 3 different attributes and 3 boolean values corresponding to each attribute. Every image in a given task-specific class shares these attribute values, while sharing none of them with images in the other class. Last but not least, we evaluate our results on (d) the mini-ImageNet [23] few-shot learning benchmark.

We partition each dataset into meta-training, meta-validation, and meta-testing splits between classes. To evaluate our method, we use the classes in the test set to generate 1000 tasks as described in Section 3.2. We set $K_{val}$ to be 15. We average the accuracy on all tasks and report a 95% confidence interval. To ensure that comparisons are fair, we use the same random seed in the whole task generation process. For the Omniglot dataset, we report the results for $K = 1$ and $K = 5$. For CelebA identity recognition, we report our results for $K = 1$, $K = 5$, and $K = 15$. For CelebA attributes, we follow the 2-way, 5-shot tasks proposed by [10].

4.1 Baselines

As baseline algorithms for our approach, we follow the practice of recent papers in the unsupervised meta-learning literature. The simplest baseline is to train the same network architecture from scratch using only the $N \times K$ images of the target task. More advanced baselines can be obtained by learning an unsupervised embedding on $\mathcal{D}$ and using it for downstream task training. We used ACAI [3], BiGAN [6, 7], and DeepCluster [4] as representatives of the unsupervised learning literature. On top of these embeddings, we report accuracy for $k$-nearest neighbors, a linear classifier, a multi-layer perceptron (MLP) with dropout, and cluster matching.

The direct competition for our approach are the current state-of-the-art algorithms in unsupervised meta-learning. We compare our results with CACTUs-MAML [10], CACTUs-ProtoNets [10], and UMTRA [12]. Finally, it is useful to compare our approach with algorithms that require supervised data. We include results for standard supervised transfer learning from VGG-19 pre-trained on ImageNet [28] and two supervised meta-learning algorithms, MAML [8] and ProtoNets [29].

4.2 Neural network architectures

Since excessive tuning of hyperparameters can lead to the overestimation of the performance of a model [21], we keep the hyperparameters of the unsupervised meta-learning as constant as possible (including the MAML and ProtoNets model architectures) in all experiments. Our model architecture consists of four stacked convolutional blocks. Each block comprises 64 filters that carry out $3 \times 3$ convolutions, followed by batch normalization, a ReLU non-linearity, and $2 \times 2$ max-pooling. For the MAML experiments, classification is performed by a fully connected layer, whereas for the ProtoNets model we compute distances based on the feature vectors produced by the last convolution module, without any dense layers. The input size to our model is $84 \times 84 \times 3$ for CelebA and $28 \times 28 \times 1$ for Omniglot.
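A Keras sketch of this backbone, under a standard reading of the block description, could look as follows (the padding choice is our assumption):

```python
# Sketch of the four-block convolutional backbone; the dense head is the MAML
# variant, while ProtoNets would use the flattened features directly.
import tensorflow as tf
from tensorflow.keras import layers

def conv_backbone(input_shape, n_way):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for _ in range(4):
        x = layers.Conv2D(64, 3, padding="same")(x)  # 64 filters, 3x3
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.MaxPool2D(2)(x)                   # 2x2 max-pooling
    features = layers.Flatten()(x)
    logits = layers.Dense(n_way)(features)           # fully connected classifier
    return tf.keras.Model(inputs, logits)

model = conv_backbone((28, 28, 1), n_way=5)          # Omniglot-sized input
```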

For Omniglot, our VAE model is constructed symmetrically. The encoder is composed of four convolutional blocks, with batch normalization and ReLU activation following each of them. A dense layer is connected at the end, such that given an input image of shape $28 \times 28 \times 1$, the encoder produces a latent vector. On the other side, the decoder starts from a dense layer whose output is fed into four modules, each of which consists of a transposed convolutional layer, batch normalization, and the ReLU non-linearity. All convolutional and transposed convolutional layers share the same kernel size, channel count, and stride. Hence, the generated image has the size $28 \times 28 \times 1$, identical to the input images. This VAE model is trained for 1000 epochs with a learning rate of 0.001.
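A minimal sketch of this VAE follows; the latent size, channel widths, and stride pattern are our assumptions where the text omits them, and a full VAE head would also emit a log-variance alongside the mean.

```python
# Sketch of the symmetric Omniglot VAE; latent size, channel count, and the
# stride pattern are our assumptions.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 20  # assumption

def make_encoder():
    img = tf.keras.Input(shape=(28, 28, 1))
    x = img
    for stride in (2, 2, 1, 1):                      # 28 -> 14 -> 7 -> 7 -> 7
        x = layers.Conv2D(64, 3, strides=stride, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    x = layers.Flatten()(x)
    return tf.keras.Model(img, layers.Dense(LATENT_DIM)(x))

def make_decoder():
    z = tf.keras.Input(shape=(LATENT_DIM,))
    x = layers.Dense(7 * 7 * 64)(z)
    x = layers.Reshape((7, 7, 64))(x)
    for stride in (1, 2, 2, 1):                      # 7 -> 7 -> 14 -> 28 -> 28
        x = layers.Conv2DTranspose(64, 3, strides=stride, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    img = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(z, img)
```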

Our GAN generator receives an input whose size is the dimensionality of the latent space and feeds it into a dense layer. After applying a Leaky ReLU, we reshape the output of the dense layer to 128 channels of shape $7 \times 7$. Then we feed it into two upsampling blocks, where each block has a transposed convolution with 128 channels and stride 2. Finally, we feed the outcome of the upsampling blocks into a convolution layer with 1 channel and a sigmoid activation. The discriminator takes a $28 \times 28 \times 1$ input and feeds it into three convolution layers with 64, 128, and 128 channels. We apply a Leaky ReLU activation after each convolution layer. Finally, we apply a global 2D max-pooling layer and feed its output into a dense layer with 1 neuron to classify the input as real or fake. We use the same loss function for training as described in [16].
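The following Keras sketch matches our reading of this description; the latent size, kernel sizes, strides, and Leaky ReLU slope are assumptions where the text omits them.

```python
# Sketch of the Omniglot GAN pair; latent size, kernel sizes, and the Leaky
# ReLU slope are our assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def make_generator(latent_dim=128):                   # latent size: assumption
    z = tf.keras.Input(shape=(latent_dim,))
    x = layers.Dense(7 * 7 * 128)(z)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Reshape((7, 7, 128))(x)
    for _ in range(2):                                # two upsampling blocks
        x = layers.Conv2DTranspose(128, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    img = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(z, img)                     # 28x28x1 output

def make_discriminator():
    img = tf.keras.Input(shape=(28, 28, 1))
    x = img
    for channels in (64, 128, 128):
        x = layers.Conv2D(channels, 3, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    x = layers.GlobalMaxPool2D()(x)
    logit = layers.Dense(1)(x)                        # real vs. fake score
    return tf.keras.Model(img, logit)
```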

For the CelebA GAN experiments, we use the pre-trained network architecture described in [11]. For the VAE, we use the same architecture as described for the Omniglot VAE, with one more convolutional block and more channels to handle the larger input size. The exact architecture is described in Section 4.6.

4.3 Results on Omniglot

Table 1 shows the results on the Omniglot dataset. We find that the LASIUM-RO-GAN-MAML configuration outperforms all the unsupervised approaches, including the meta-learning based ones such as CACTUs [10] and UMTRA [12]. Beyond the increase in performance, we must note that the competing approaches use more domain-specific knowledge (in the case of UMTRA, augmentations; in the case of CACTUs, learned clustering). We also find that on this benchmark, LASIUM outperforms transfer learning using the much larger VGG-19 network.

As expected, even the best LASIUM result is worse than the supervised meta-learning models. However, we need to consider that the unsupervised meta-learning approaches use several orders of magnitude fewer labels. For instance, the 95.29% accuracy of LASIUM-RO-GAN-MAML was obtained with only 25 labels, while the supervised approaches used 25,000.

Algorithm Feature Extractor $K=1$ $K=5$
Training from scratch
K-nearest neighbors ACAI
Linear Classifier ACAI
MLP with dropout ACAI
Cluster matching ACAI
K-nearest neighbors BiGAN
Linear Classifier BiGAN
MLP with dropout BiGAN
Cluster matching BiGAN
CACTUs-MAML BiGAN
CACTUs-MAML ACAI
UMTRA-MAML
LASIUM-RO-GAN-MAML
LASIUM-N-VAE-MAML
CACTUs-ProtoNets BiGAN
CACTUs-ProtoNets ACAI
LASIUM-RO-GAN-ProtoNets
LASIUM-OC-VAE-ProtoNets
Transfer Learning (VGG-19)
Supervised MAML
Supervised ProtoNets
Table 1: Accuracy results on the Omniglot dataset, averaged over 1000 $N$-way, $K$-shot downstream tasks with $K_{val} = 15$ for each task. ± indicates the 95% confidence interval. The top three unsupervised results are reported in bold.

4.4 Results on CelebA

Table 2 shows our results on the CelebA identity recognition tasks, where the objective is to recognize $N$ different people, given $K$ images of each. We find that on this benchmark as well, the LASIUM-RO-GAN-MAML configuration performs better than the other unsupervised meta-learning models as well as transfer learning with VGG-19; it only falls slightly behind LASIUM-RO-GAN-ProtoNets in the one-shot case. As we have discussed for the Omniglot results, the performance remains lower than that of the supervised meta-learning approaches, which use several orders of magnitude more labeled data.

Finally, Table 3 shows our results for the CelebA attributes benchmark introduced in [10]. A peculiarity of this dataset is that, because of the way in which classes are defined based on the attributes, the classes are unbalanced in the dataset, making the job of synthetic task selection more difficult. We find that LASIUM-N-GAN-MAML obtains the second best performance on this test, within the confidence interval of the winner, CACTUs-MAML with BiGAN. In this benchmark, transfer learning with the VGG-19 network performed better than all unsupervised meta-learning approaches, possibly due to existing representations of the discriminating attributes in that much more complex network.

Algorithm $K=1$ $K=5$ $K=15$
Training from scratch
CACTUs
UMTRA
LASIUM-RO-GAN-MAML
LASIUM-RO-VAE-MAML
LASIUM-RO-GAN-ProtoNets
LASIUM-RO-VAE-ProtoNets
Transfer Learning (VGG-19)
Supervised MAML
Supervised ProtoNets
Table 2: Accuracy results on CelebA for different unsupervised methods. The results are averaged over 1000 $N$-way, $K$-shot downstream tasks with $K_{val} = 15$ for each task. ± indicates the 95% confidence interval. The top three unsupervised results are reported in bold.
Algorithm Feature Extractor Accuracy
Training from scratch N/A
K-nearest neighbors BiGAN
Linear Classifier BiGAN
MLP with dropout BiGAN
Cluster matching BiGAN
K-nearest neighbors DeepCluster
Linear Classifier DeepCluster
MLP with dropout DeepCluster
Cluster matching DeepCluster
CACTUs MAML BiGAN
CACTUs MAML DeepCluster
LASIUM-N-GAN-MAML N/A
CACTUs ProtoNets BiGAN
CACTUs ProtoNets DeepCluster
LASIUM-N-GAN-ProtoNets N/A
Transfer Learning (VGG-19) N/A
Supervised MAML N/A
Supervised ProtoNets N/A
Table 3: Results on the CelebA attributes benchmark: 2-way, 5-shot tasks with $K_{val} = 5$. The results are averaged over 1000 downstream tasks and ± indicates the 95% confidence interval. The top three unsupervised results are reported in bold.

4.5 Results on mini-ImageNet

In this section, we evaluate our algorithm on the mini-ImageNet benchmark. Its complexity is high due to the use of ImageNet images. In total, there are 100 classes with 600 color images of size $84 \times 84$ per class. These 100 classes are divided into 64, 16, and 20 classes, respectively, for sampling tasks for meta-training, meta-validation, and meta-testing. A big difference between mini-ImageNet and CelebA is that we have to classify a group of concepts instead of just the identity of a subject. This makes interpreting the latent space a bit trickier. For example, it is not rational to interpolate between a bird and a piano. However, the assumption that nearby latent vectors belong to nearby instances is still valid. Thereby, we can be confident that by not straying too far from the current latent vector, we generate something which belongs to the same class (identity).

For mini-ImageNet we use the pre-trained BigBiGAN network (https://tfhub.dev/deepmind/bigbigan-resnet50/1). Our experiments show that our method is very effective and can outperform state-of-the-art algorithms. See Table 4 for the results on the mini-ImageNet benchmark. Figure 4 demonstrates tasks constructed for mini-ImageNet by LASIUM-N.


Figure 4: Train and validation tasks for mini-ImageNet constructed by LASIUM-N.
Algorithm Embedding $K=1$ $K=5$ $K=20$ $K=50$
Training from scratch N/A
K-nearest neighbors BiGAN
Linear Classifier BiGAN
MLP with dropout BiGAN
Cluster matching BiGAN
K-nearest neighbors DeepCluster
Linear Classifier DeepCluster
MLP with dropout DeepCluster
Cluster matching DeepCluster
CACTUs MAML BiGAN
CACTUs MAML DeepCluster
UMTRA MAML N/A
LASIUM-N-GAN-MAML N/A
CACTUs ProtoNets BiGAN
CACTUs ProtoNets DeepCluster
LASIUM-N-GAN-ProtoNets N/A
Supervised MAML N/A
Supervised ProtoNets N/A
Table 4: Results on the mini-ImageNet benchmark for $N$-way, $K$-shot tasks with $K_{val} = 15$. The results are averaged over 1000 downstream tasks and ± indicates the 95% confidence interval. The top three unsupervised results are reported in bold.

4.6 Hyperparameters and ablation studies

In this section, we report the hyperparameters of LASIUM-MAML in Table 5 and LASIUM-ProtoNets in Table 6 for Omniglot, CelebA, CelebA attributes and mini-ImageNet datasets.

We also report ablation studies on different strategies for task construction in Table 7. We run all the algorithms for just 1000 iterations and compare them. We also apply a small shift to the Omniglot images.

Hyperparameter Omniglot CelebA CelebA attributes mini-ImageNet
Number of classes ($N$) 5 5 2 5
Input size $28 \times 28 \times 1$ $84 \times 84 \times 3$ $84 \times 84 \times 3$ $84 \times 84 \times 3$
Inner learning rate 0.4 0.05 0.05 0.05
Meta learning rate 0.001 0.001 0.001 0.001
Meta-batch size 4 4 4 4
$K$ (meta-learning) 1 1 5 1
$K_{val}$ (meta-learning) 5 5 5 5
$K_{val}$ (evaluation) 15 15 5 15
Meta-adaptation steps 5 5 5 5
Evaluation adaptation steps 50 50 50 50
Table 5: LASIUM-MAML hyperparameters summary
Hyperparameter Omniglot CelebA CelebA attributes mini-ImageNet
Number of classes ($N$) 5 5 2 5
Input size $28 \times 28 \times 1$ $84 \times 84 \times 3$ $84 \times 84 \times 3$ $84 \times 84 \times 3$
Meta learning rate 0.001 0.001 0.001 0.001
Meta-batch size 4 4 4 4
$K$ (meta-learning) 1 1 5 1
$K_{val}$ (meta-learning) 5 5 5 5
$K_{val}$ (evaluation) 15 15 5 15
Table 6: LASIUM-ProtoNets hyperparameters summary
Sampling Strategy Hyperparameters GAN-MAML VAE-MAML GAN-Proto VAE-Proto
LASIUM-N $\sigma=0.5$
LASIUM-N $\sigma=1.0$
LASIUM-N $\sigma=2.0$
LASIUM-RO $\alpha=0.2$
LASIUM-RO $\alpha=0.4$
LASIUM-OC $\alpha=0.2$
LASIUM-OC $\alpha=0.4$
Table 7: Accuracy of the different proposed strategies on Omniglot. For the sake of comparison, we stop meta-learning after 1000 iterations. Results are reported on 1000 tasks with a 95% confidence interval.

5 Conclusion

We described LASIUM, an unsupervised meta-learning algorithm for few-shot classification. The algorithm is based on interpolation in the latent space of a generative model to create synthetic meta-tasks. In contrast to other approaches, LASIUM requires minimal domain-specific knowledge. We found that LASIUM outperforms state-of-the-art unsupervised algorithms on the Omniglot and CelebA identity recognition benchmarks and competes very closely with CACTUs on the CelebA attributes learning benchmark.

6 Acknowledgements

This work was supported in part by the National Science Foundation under Grant Number IIS-1409823.

References

  • [1] A. Antoniou and A. Storkey (2019) Assume, augment and learn: unsupervised few-shot meta-learning via random labels and data augmentation. arXiv preprint arXiv:1902.09884.
  • [2] Y. Bengio, S. Bengio, and J. Cloutier (1990) Learning a synaptic learning rule. Université de Montréal, Département d’Informatique et de Recherche Opérationelle.
  • [3] D. Berthelot, C. Raffel, A. Roy, and I. Goodfellow (2019) Understanding and improving interpolation in autoencoders via an adversarial regularizer. In Int’l Conf. on Learning Representations (ICLR).
  • [4] M. Caron, P. Bojanowski, A. Joulin, and M. Douze (2018) Deep clustering for unsupervised learning of visual features. In Proc. of the European Conf. on Computer Vision (ECCV), pp. 132–149.
  • [5] D. P. Kingma and M. Welling (2014) Auto-encoding variational bayes. In Proc. of the Int’l Conf. on Learning Representations (ICLR).
  • [6] J. Donahue, P. Krähenbühl, and T. Darrell (2017) Adversarial feature learning. In Int’l Conf. on Learning Representations (ICLR).
  • [7] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville (2017) Adversarially learned inference. In Int’l Conf. on Learning Representations (ICLR).
  • [8] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proc. of the Int’l Conf. on Machine Learning (ICML), pp. 1126–1135.
  • [9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2672–2680.
  • [10] K. Hsu, S. Levine, and C. Finn (2019) Unsupervised learning via meta-learning. In Int’l Conf. on Learning Representations (ICLR).
  • [11] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2018) Progressive growing of GANs for improved quality, stability, and variation. In Proc. of the Int’l Conf. on Learning Representations (ICLR).
  • [12] S. Khodadadeh, L. Bölöni, and M. Shah (2019) Unsupervised meta-learning for few-shot image classification. In Advances in Neural Information Processing Systems (NeurIPS), pp. 10132–10142.
  • [13] B. Lake, R. Salakhutdinov, J. Gross, and J. Tenenbaum (2011) One shot learning of simple visual concepts. In Proc. of the Annual Meeting of the Cognitive Science Society, Vol. 33.
  • [14] Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. Hwang, and Y. Yang (2019) Learning to propagate labels: transductive propagation network for few-shot learning. In Int’l Conf. on Learning Representations (ICLR).
  • [15] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proc. of the Int’l Conf. on Computer Vision (ICCV).
  • [16] Q. Mao, H. Lee, H. Tseng, S. Ma, and M. Yang (2019) Mode seeking generative adversarial networks for diverse image synthesis. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 1429–1437.
  • [17] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel (2017) Meta-learning with temporal convolutions. arXiv preprint arXiv:1707.03141.
  • [18] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel (2018) A simple neural attentive meta-learner. In Int’l Conf. on Learning Representations (ICLR).
  • [19] D. K. Naik and R. J. Mammone (1992) Meta-neural networks that learn by learning. In Proc. of the IJCNN Int’l Joint Conf. on Neural Networks, Vol. 1, pp. 437–442.
  • [20] A. Nichol, J. Achiam, and J. Schulman (2018) On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.
  • [21] A. Oliver, A. Odena, C. A. Raffel, E. D. Cubuk, and I. Goodfellow (2018) Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems (NeurIPS), pp. 3235–3246.
  • [22] A. Rajeswaran, C. Finn, S. M. Kakade, and S. Levine (2019) Meta-learning with implicit gradients. In Advances in Neural Information Processing Systems (NeurIPS), pp. 113–124.
  • [23] S. Ravi and H. Larochelle (2016) Optimization as a model for few-shot learning. In Proc. of the Int’l Conf. on Learning Representations (ICLR).
  • [24] S. Ravi and H. Larochelle (2016) Optimization as a model for few-shot learning. In Int’l Conf. on Learning Representations (ICLR).
  • [25] M. Ren, S. Ravi, E. Triantafillou, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel (2018) Meta-learning for semi-supervised few-shot classification. In Int’l Conf. on Learning Representations (ICLR).
  • [26] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell (2019) Meta-learning with latent embedding optimization. In Int’l Conf. on Learning Representations (ICLR).
  • [27] J. Schmidhuber (1987) Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-… hook. Ph.D. thesis, Technische Universität München.
  • [28] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In Int’l Conf. on Learning Representations (ICLR).
  • [29] J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NeurIPS), pp. 4077–4087.
  • [30] S. Thrun and L. Pratt (1998) Learning to learn. Kluwer Academic Publishers.
  • [31] E. Triantafillou, T. Zhu, V. Dumoulin, P. Lamblin, U. Evci, K. Xu, R. Goroshin, C. Gelada, K. Swersky, P. Manzagol, and H. Larochelle (2020) Meta-Dataset: a dataset of datasets for learning to learn from few examples. In Int’l Conf. on Learning Representations (ICLR).
  • [32] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. (2016) Matching networks for one shot learning. In Advances in Neural Information Processing Systems (NeurIPS), pp. 3630–3638.
  • [33] D. Wang, Y. Cheng, M. Yu, X. Guo, and T. Zhang (2019) A hybrid approach with optimization-based and metric-based meta-learner for few-shot learning. Neurocomputing 349, pp. 202–211.