Regularizing Meta-Learning via Gradient Dropout

04/13/2020 · by Hung-Yu Tseng, et al.

With the growing attention on learning to learn new tasks using only a few examples, meta-learning has been widely applied to numerous problems such as few-shot classification, reinforcement learning, and domain generalization. However, meta-learning models are prone to overfitting when there are not sufficient training tasks for the meta-learners to generalize. Although existing approaches such as Dropout are widely used to address the overfitting problem, these methods are typically designed for regularizing models of a single task in supervised training. In this paper, we introduce a simple yet effective method to alleviate the risk of overfitting for gradient-based meta-learning. Specifically, during the gradient-based adaptation stage, we randomly drop the inner-loop gradient of each parameter in deep neural networks, such that the augmented gradients improve generalization to new tasks. We present a general form of the proposed gradient dropout regularization and show that this term can be sampled from either the Bernoulli or Gaussian distribution. To validate the proposed method, we conduct extensive experiments and analysis on numerous computer vision tasks, demonstrating that the gradient dropout regularization mitigates the overfitting problem and improves the performance of various gradient-based meta-learning frameworks.


1 Introduction

In recent years, significant progress has been made in meta-learning, also known as learning to learn. In one common setting, given only a few training examples, meta-learning aims to learn new tasks rapidly by leveraging the experience acquired from known tasks. It is a vital machine learning problem due to its potential to reduce the amount of data and time needed to adapt an existing system. Numerous recent methods demonstrate how to adopt meta-learning algorithms to solve various learning problems, such as few-shot classification [5, 30, 31], reinforcement learning [9, 24], and domain generalization [2, 19].

Despite the demonstrated success, meta-learning frameworks are prone to overfitting [12] when there are not sufficient training tasks for the meta-learners to generalize. For instance, few-shot classification on the mini-ImageNet [39] dataset contains only 64 training categories. Since training tasks can only be sampled from this small set of classes, meta-learning models may overfit and fail to generalize to new testing tasks.

Significant efforts have been made to address the overfitting issue in the supervised learning framework, where the model is developed to learn a single task (e.g., recognizing the same set of categories in both the training and testing phases). The Dropout [33] method randomly drops (zeroes) intermediate activations in deep neural networks during the training stage. Relaxing the limitation of binary dropout, the Gaussian dropout [41] scheme augments activations with multiplicative noise sampled from a Gaussian distribution. Numerous methods [7, 18, 36, 40, 46] further improve the Dropout method by injecting structural noise or scheduling the dropout process to facilitate training. Nevertheless, these methods are developed to regularize models that learn a single task, which may not be effective for meta-learning frameworks.

In this paper, we address the overfitting issue [12] in gradient-based meta-learning. As shown in Figure 1(a), given a new task, the meta-learning framework aims to adapt the model parameters $\theta$ to task-specific parameters $\theta'$ via the gradients computed from the few examples (support data $\mathcal{S}$). This gradient-based adaptation process is also known as the inner-loop optimization. To alleviate the overfitting issue, one straightforward approach is to apply an existing dropout method to the model weights directly. However, there are two sets of model parameters, $\theta$ and $\theta'$, in the inner-loop optimization. As such, during the meta-training stage, applying normal dropout would cause inconsistent randomness, i.e., dropped neurons, between these two sets of model parameters. To tackle this issue, we propose a dropout method on the gradients in the inner-loop optimization, denoted as DropGrad, to regularize the training procedure. This approach naturally bridges $\theta$ and $\theta'$, and thereby involves only a single source of randomness for the dropout regularization. We also note that our method is model-agnostic and generalizes to various gradient-based meta-learning frameworks [1, 5, 20]. In addition, we demonstrate that the proposed dropout term can be formulated in a general form, where either the Bernoulli or Gaussian distribution can be used to sample the noise, as illustrated in Figure 1(b).

To evaluate the proposed DropGrad method, we conduct experiments on numerous computer vision tasks, including few-shot classification on the mini-ImageNet [39] dataset, online object tracking [23], and few-shot viewpoint estimation [37], showing that the DropGrad scheme can be applied to and improve different tasks. In addition, we present a comprehensive analysis by using various meta-learning frameworks, adopting different dropout probabilities, and examining which layers to apply gradient dropout to. To further demonstrate the generalization ability of DropGrad, we perform a challenging cross-domain few-shot classification task, in which the meta-training and meta-testing sets are from two different distributions, i.e., the mini-ImageNet and CUB [42] datasets. We show that with the proposed method, the performance is significantly improved under the cross-domain setting. We make the source code publicly available to stimulate future research in this field: https://github.com/hytseng0509/DropGrad

In this paper, we make the following contributions:

  • We propose a simple yet effective gradient dropout approach to improve the generalization ability of gradient-based meta-learning frameworks.

  • We present a general form for gradient dropout and show that both binary and Gaussian sampling schemes mitigate the overfitting issue.

  • We demonstrate the effectiveness and generalizability of the proposed method via extensive experiments on numerous computer vision tasks.

2 Related Work

Meta-Learning.

Meta-learning aims to adapt the past knowledge learned from previous tasks to new tasks with few training instances. Most meta-learning algorithms can be categorized into three groups: 1) memory-based approaches [27, 30] utilize recurrent networks to process the few training examples of new tasks sequentially; 2) metric-based frameworks [22, 31, 34, 38, 39] make predictions by comparing features encoded from the input data and the training instances in a generic metric space; 3) gradient-based methods [1, 5, 6, 12, 20, 25, 29] learn to optimize the model via gradient descent with few examples, which is the focus of this work. In the third group, the MAML [5] approach learns a model initialization (i.e., initial parameters) that is amenable to fast fine-tuning with few instances. In addition to the model initialization, the MetaSGD [20] method learns a set of learning rates for different model parameters. Furthermore, the MAML++ [1] algorithm makes several improvements over the MAML method to facilitate the training process and yield additional performance gains. However, these methods are still prone to overfitting, as the dataset of training tasks is insufficient for the model to adapt well. Recently, Kim et al. [12] and Rusu et al. [29] address this issue via a Bayesian approach and latent embeddings, respectively. Nevertheless, these methods employ additional parameters or networks, which entail significant computational overhead and may not be applicable to arbitrary frameworks. In contrast, the proposed regularization does not impose any overhead and thus can be readily integrated into the gradient-based models mentioned above.

Dropout Regularization.

Built upon the Dropout [33] method, various schemes [7, 8, 18, 36, 40] have been proposed to regularize the training process of deep neural networks for supervised learning. The core idea is to inject noise into intermediate activations when training deep neural networks. Several recent studies improve the regularization of convolutional neural networks by making the injected noise structural. For instance, the SpatialDropout [36] method drops entire channels from an activation map, the DropPath [18, 46] scheme discards an entire layer, and the DropBlock [7] algorithm zeroes multiple contiguous regions in an activation map. Nevertheless, these approaches are designed for deep neural networks that aim to learn a single task, e.g., recognizing a fixed set of categories. In contrast, our algorithm aims to regularize gradient-based meta-learning frameworks that suffer from overfitting at the task level, e.g., when introduced to new tasks.

3 Gradient Dropout Regularization


Figure 1: Illustration of the proposed method. (a) The proposed DropGrad method imposes a noise term to augment the gradient in the inner-loop optimization during the meta-training stage. (b) The DropGrad method samples the noise term from either the Bernoulli or Gaussian distribution, in which the Gaussian distribution provides a better way to account for uncertainty.

Before introducing details of our proposed dropout regularization on gradients, we first review the gradient-based meta-learning framework.

3.1 Preliminaries for Meta-Learning

In meta-learning, multiple tasks are divided into meta-training, meta-validation, and meta-testing sets. Each task consists of a support set $\mathcal{S}$ and a query set $\mathcal{Q}$, where each set contains input data $x$ and the corresponding ground truth $y$. The support set represents the few labeled examples available for learning, while the query set indicates the data to be predicted.
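To make this structure concrete, the following minimal sketch (our illustration, not part of the released implementation) represents a task as a support/query pair; the field names are our own:

```python
from typing import NamedTuple
import torch

class Task(NamedTuple):
    """A meta-learning task: a small labeled support set used for
    adaptation, and a query set on which the adapted model is evaluated."""
    support_x: torch.Tensor  # support inputs, e.g., [N*K, C, H, W]
    support_y: torch.Tensor  # support labels, e.g., [N*K]
    query_x: torch.Tensor    # query inputs
    query_y: torch.Tensor    # query labels
```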

Given a novel task and a parametric model $f_\theta$, the objective of a gradient-based approach during the meta-training stage is to minimize the prediction loss on the query set $\mathcal{Q}$ according to the signals provided by the support set $\mathcal{S}$, so that the model can be adapted. Figure 1(a) shows an overview of the MAML [5] method, which offers a general formulation of gradient-based frameworks. In each iteration of the meta-training phase, we first randomly sample a task $\mathcal{T} = \{\mathcal{S}, \mathcal{Q}\}$ from the meta-training set. We then adapt the initial parameters $\theta$ to task-specific parameters $\theta'$ via gradient descent:

$$\theta' = \theta - \alpha \circ g, \tag{1}$$

where $\alpha$ is the learning rate for the gradient-based adaptation and $\circ$ is the operation of element-wise product, i.e., the Hadamard product. The term $g$ in (1) is the set of gradients computed according to the objective $\ell$ of the model $f_\theta$ on the support set $\mathcal{S}$:

$$g = \nabla_\theta\, \ell(\theta; \mathcal{S}). \tag{2}$$

We call the step of (1) the inner-loop optimization; typically, (1) is applied for a small number of gradient steps. After the gradient-based adaptation, the initial parameters $\theta$ are optimized according to the loss of the adapted model $f_{\theta'}$ on the query set $\mathcal{Q}$:

$$\theta \leftarrow \theta - \eta\, \nabla_\theta\, \ell(\theta'; \mathcal{Q}), \tag{3}$$

where $\eta$ is the learning rate for meta-training. During the meta-testing stage, the model is adapted according to the support set $\mathcal{S}$, and the prediction on the query data is made without accessing the ground truth in the query set. We note that several methods are built upon the above formulation introduced in the MAML method. For example, the learning rate $\alpha$ for the gradient-based adaptation can itself be treated as an optimization objective [1, 20], and the initial parameters $\theta$ can be made conditional on the support set $\mathcal{S}$ rather than generic [29].
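As a reference, the following PyTorch sketch illustrates (1)-(3) for a single task with one inner-loop step. It is a simplified illustration under our own conventions: `model_fn` is an assumed functional interface that runs the network with an explicit list of parameters, and `create_graph=True` keeps the inner step differentiable for the outer update.

```python
import torch
import torch.nn.functional as F

def inner_adapt(params, model_fn, support_x, support_y, alpha):
    """One inner-loop step: theta' = theta - alpha * g (Eqs. (1)-(2))."""
    loss = F.cross_entropy(model_fn(params, support_x), support_y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [w - alpha * g for w, g in zip(params, grads)]

def outer_step(params, model_fn, task, alpha, meta_optimizer):
    """Outer-loop update of the initialization theta (Eq. (3))."""
    theta_prime = inner_adapt(params, model_fn,
                              task.support_x, task.support_y, alpha)
    meta_loss = F.cross_entropy(model_fn(theta_prime, task.query_x),
                                task.query_y)
    meta_optimizer.zero_grad()
    meta_loss.backward()  # backpropagates through the inner-loop step
    meta_optimizer.step()
```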

3.2 Gradient Dropout

The main idea is to impose uncertainty on the core objective during the meta-training step, i.e., the gradient in the inner-loop optimization, such that $\theta'$ receives gradients with noise to improve the generalization of gradient-based models. As described in Section 3.1, adapting the model to $\theta'$ involves the gradient update in the inner-loop optimization formulated in (2). Based on this observation, we propose to randomly drop entries of the inner-loop gradient $g$ in (2), as illustrated in Figure 1. Specifically, we augment the gradient $g$ as follows:

$$\hat{g} = g \circ n, \tag{4}$$

where $n$ is a noise regularization term sampled from a pre-defined distribution. With the formulation of (4), in the following we introduce two noise regularization strategies that sample $n$ from different distributions, i.e., the Bernoulli and Gaussian distributions.

Binary DropGrad.

We randomly zero each entry of the gradient with probability $p$; the process can be formulated as:

$$n_b = \frac{z}{1-p}, \quad z \sim \mathrm{Bernoulli}(1-p), \tag{5}$$

where the denominator $1-p$ is the normalization factor, which keeps the expectation of the augmented gradient equal to that of the original gradient. Note that, different from the Dropout [33] method, which randomly drops intermediate activations of a supervised learning network under a single-task setting, we perform the dropout at the gradient level.

Gaussian DropGrad.

One limitation of the Binary DropGrad scheme is that the noise term is applied in a binary form, which is either $0$ or $\frac{1}{1-p}$. To address this limitation and provide better regularization with uncertainty, we extend the Bernoulli distribution to a Gaussian formulation. Since the expectation and variance of the noise term $n_b$ in the Binary DropGrad method are $1$ and $\frac{p}{1-p}$, respectively, we can instead augment the gradient with noise sampled from the Gaussian distribution:

$$n_g \sim \mathcal{N}\!\left(1, \frac{p}{1-p}\right). \tag{6}$$

As a result, the two noise terms $n_b$ and $n_g$ are statistically comparable under the same dropout probability $p$. In Figure 1(b), we illustrate the difference between the Binary DropGrad and Gaussian DropGrad approaches. We also show the process of applying the proposed regularization to the MAML [5] method in Algorithm 1, while similar procedures can be applied to other gradient-based meta-learning frameworks, such as MetaSGD [20] and MAML++ [1].
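A minimal PyTorch sketch of the two noise terms is given below; the function name and interface are our own, but the sampling follows (5) and (6) directly:

```python
import torch

def drop_grad(g, p, mode="gaussian"):
    """Augment a gradient tensor g with DropGrad noise (Eq. (4)).

    Both variants have expectation 1 and variance p / (1 - p), so they
    are statistically comparable for the same dropout probability p.
    """
    if p <= 0.0:
        return g  # p = 0 turns DropGrad off
    if mode == "binary":
        # n_b = z / (1 - p), z ~ Bernoulli(1 - p)   (Eq. (5))
        n = torch.bernoulli(torch.full_like(g, 1.0 - p)) / (1.0 - p)
    else:
        # n_g ~ N(1, p / (1 - p))                   (Eq. (6))
        n = 1.0 + (p / (1.0 - p)) ** 0.5 * torch.randn_like(g)
    return g * n
```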

Require: a set of training tasks, adaptation learning rate $\alpha$, meta-learning rate $\eta$
randomly initialize $\theta$
while training do
    randomly sample a task $\mathcal{T} = \{\mathcal{S}, \mathcal{Q}\}$ from the meta-training set
    compute $g = \nabla_\theta\, \ell(\theta; \mathcal{S})$ according to (2)
    sample the noise term $n$ according to (5) or (6)
    augment the gradient: $\hat{g} = g \circ n$
    adapt the parameters: $\theta' = \theta - \alpha \circ \hat{g}$
    update the initialization: $\theta \leftarrow \theta - \eta\, \nabla_\theta\, \ell(\theta'; \mathcal{Q})$
end while
Algorithm 1: Applying DropGrad on MAML [5]
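Building on the sketches above, one meta-training iteration of Algorithm 1 could look as follows. Note that the noise is applied only to the inner-loop gradients, while the outer-loop update of $\theta$ remains unchanged; this is again a hedged sketch with our assumed `model_fn` interface.

```python
import torch
import torch.nn.functional as F

def maml_dropgrad_step(params, model_fn, task, alpha, p, meta_optimizer,
                       mode="gaussian"):
    """One meta-training iteration of Algorithm 1 (single inner step)."""
    # Inner loop: compute g on the support set and augment it (Eqs. (2), (4)).
    loss = F.cross_entropy(model_fn(params, task.support_x), task.support_y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    theta_prime = [w - alpha * drop_grad(g, p, mode)
                   for w, g in zip(params, grads)]
    # Outer loop: update the initialization on the query set (Eq. (3)).
    meta_loss = F.cross_entropy(model_fn(theta_prime, task.query_x),
                                task.query_y)
    meta_optimizer.zero_grad()
    meta_loss.backward()
    meta_optimizer.step()
```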

4 Experimental Results

In this section, we evaluate the effectiveness of the proposed DropGrad method by conducting extensive experiments on three learning problems: few-shot classification, online object tracking, and few-shot viewpoint estimation. In addition, for the few-shot classification experiments, we analyze the effect of using binary versus Gaussian noise, which layers DropGrad should be applied to, and the performance in the cross-domain setting.

4.1 Few-Shot Classification

Few-shot classification aims to recognize a set of new categories, e.g., five categories (5-way classification), with only a few, e.g., one (1-shot) or five (5-shot), example images from each category. In this setting, the support set $\mathcal{S}$ contains the few images of the new categories and the corresponding categorical annotations. We conduct experiments on the mini-ImageNet [39] dataset, which is widely used for evaluating few-shot classification approaches. As a subset of ImageNet [4], the mini-ImageNet dataset contains 100 categories and 600 images for each category. We use the 5-way evaluation protocol in [26] and split the dataset into 64 training, 16 validation, and 20 testing categories.
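For illustration, an N-way K-shot episode can be sampled as in the sketch below (our own simplified sampler; the query-set size is an assumption, as protocols differ):

```python
import random
import torch

def sample_episode(images_by_class, n_way=5, k_shot=5, n_query=16):
    """Sample an N-way K-shot episode: K support and n_query query images
    per class, with labels remapped to 0..n_way-1 within the episode."""
    classes = random.sample(sorted(images_by_class), n_way)
    support, query = [], []
    for label, c in enumerate(classes):
        imgs = random.sample(images_by_class[c], k_shot + n_query)
        support += [(img, label) for img in imgs[:k_shot]]
        query += [(img, label) for img in imgs[k_shot:]]
    sx, sy = zip(*support)
    qx, qy = zip(*query)
    return (torch.stack(sx), torch.tensor(sy),
            torch.stack(qx), torch.tensor(qy))
```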

Implementation Details.

We apply the proposed DropGrad regularization method to train the following gradient-based meta-learning frameworks: MAML [5], MetaSGD [20], and MAML++ [1]. We use the implementation from Chen et al. [3] for MAML (https://github.com/wyharveychen/CloserLookFewShot) and use our own implementation for MetaSGD. We use the ResNet-18 [10] model as the backbone network for both MAML and MetaSGD. As for MAML++, we use the original source code (https://github.com/AntreasAntoniou/HowToTrainYourMAMLPytorch). Similar to recent studies [29], we also pre-train the feature extractor of ResNet-18 by minimizing the classification loss on the 64 training categories of the mini-ImageNet dataset for the MetaSGD method, which is denoted by MetaSGD*.

For all the experiments, we use the default hyper-parameter settings provided by the original implementation. Moreover, we select the model according to the validation performance for evaluation (i.e., early stopping).

Figure 2: Comparison between the proposed Binary and Gaussian DropGrad methods. We compare the 1-shot (left) and 5-shot (right) performance of MAML [5] trained with the two forms of DropGrad under various dropout rates on mini-ImageNet.

Comparison between Binary and Gaussian DropGrad.

We first evaluate how the proposed Binary and Gaussian DropGrad methods perform on the MAML framework with different values of the dropout probability $p$. Figure 2 shows that both methods are effective for moderate dropout rates, while setting the dropout rate to $0$ turns the proposed DropGrad method off. Since the problem of learning from only one instance (1-shot) is more complicated, the overfitting effect is less severe compared to the 5-shot setting. As a result, applying the DropGrad method with a larger dropout rate degrades the 1-shot performance. Moreover, the Gaussian DropGrad method consistently outperforms the binary case on both 1-shot and 5-shot tasks, owing to its better regularization with uncertainty. We therefore apply the Gaussian DropGrad method with a dropout rate selected from the effective range in the following experiments.

Comparison with Existing Dropout Methods.

To show that the proposed DropGrad method is effective for gradient-based meta-learning frameworks, we compare it with two existing dropout schemes applied to the network activations in both $f_\theta$ and $f_{\theta'}$. We choose the Dropout [33] and SpatialDropout [36] methods, since the former is a commonly-used approach while the latter is shown to be effective on convolutional feature maps. On 5-shot classification on the mini-ImageNet dataset, MAML with the proposed DropGrad achieves the best accuracy, compared with SpatialDropout and vanilla Dropout. This demonstrates the benefit of the proposed DropGrad method, which effectively tackles the issue of inconsistent randomness between the two sets of parameters $\theta$ and $\theta'$ in the inner-loop optimization of gradient-based meta-learning frameworks.

Model | 1-shot | 5-shot
MAML [5]
MAML w/ Gaussian DropGrad
MetaSGD [20]
MetaSGD w/ Gaussian DropGrad
MetaSGD*
MetaSGD* w/ Gaussian DropGrad
MAML++ [1]
MAML++ w/ Gaussian DropGrad
Table 1: Few-shot classification results on mini-ImageNet. The Gaussian DropGrad method improves the performance of gradient-based models on both 1-shot and 5-shot classification tasks.
Figure 3: Validation loss over training epochs. We show the validation curves of the MAML (left) and MetaSGD (right) frameworks trained on the 5-shot mini-ImageNet task.

Overall Performance on the Mini-ImageNet Dataset.

Table 1 shows the results of applying the proposed Gaussian DropGrad method to different frameworks. The results validate that the proposed regularization scheme consistently improves the performance of various gradient-based meta-learning approaches. In addition, we present the curves of validation loss over training epochs for MAML and MetaSGD on the 5-shot classification task in Figure 3. We observe that the overfitting problem is more severe in training the MetaSGD method, since it has more parameters to be optimized. The DropGrad regularization method mitigates the overfitting issue and facilitates the training procedure.

Origin | FC | Block + FC | Full | Block + Conv | Conv
Table 2: Performance of applying DropGrad to different layers. We conduct experiments on the 5-shot classification task using MAML on mini-ImageNet. It is more helpful to drop the gradients closer to the output layers (e.g., FC and Block + FC).

Layers to Apply DropGrad.

We study which layers of the network to apply the DropGrad regularization to. The backbone ResNet-18 model contains a convolutional layer (Conv) followed by four residual blocks (Block1, Block2, Block3, Block4) and a fully-connected layer (FC) as the classifier. We apply the Gaussian DropGrad method to different parts of the ResNet-18 model for MAML on the 5-shot classification task. The results are presented in Table 2. We find that it is more critical to drop the gradients closer to the output layers (e.g., FC and Block + FC). Applying the DropGrad method to the input side (e.g., Block + Conv and Conv), however, may even negatively affect training and degrade the performance. This can be explained by the fact that features closer to the output side are more abstract and thus tend to overfit. As the DropGrad regularization term adds only negligible overhead, we apply our method to all layers (the Full setting) in the experiments unless otherwise mentioned.

Setting ($\alpha$, inner-loop iterations)
MAML [5]
MAML w/ DropGrad
Table 3: 5-shot classification results of MAML under various hyper-parameter settings. We study the learning rate $\alpha$ and the number of iterations in the inner-loop optimization of MAML on the mini-ImageNet dataset.

Hyper-Parameter Analysis.

In all experiments shown in Section 4, we use the default hyper-parameter values from the original implementations of the adopted methods. In this experiment, we explore the hyper-parameter choices for MAML [5]. Specifically, we conduct an ablation study on the learning rate $\alpha$ and the number of inner-loop optimization steps in MAML. As shown in Table 3, the proposed DropGrad method improves the performance consistently under different sets of hyper-parameters.

Model | 1-shot | 5-shot
MAML [5]
MAML w/ Dropout [33]
MAML w/ DropGrad
MetaSGD [20]
MetaSGD w/ Dropout [33]
MetaSGD w/ DropGrad
MetaSGD*
MetaSGD* w/ DropGrad
MAML++ [1]
MAML++ w/ Dropout [33]
MAML++ w/ DropGrad
Table 4: Cross-domain performance for few-shot classification. We use the mini-ImageNet and CUB datasets for the meta-training and meta-testing steps, respectively. The improvement of applying the proposed DropGrad method is more significant in the cross-domain cases than the intra-domain ones.
Figure 4: Class activation maps (CAMs) for cross-domain 5-shot classification. The mini-ImageNet and CUB datasets are used for the meta-training and meta-testing steps, respectively. Models trained with the proposed DropGrad (the third row for each example) focus more on the objects than the original models (the second row for each example).

4.2 Cross-Domain Few-Shot Classification

To further evaluate how the proposed DropGrad method improves the generalization ability of gradient-based meta-learning models, we conduct a cross-domain experiment, in which the meta-testing set is from an unseen domain. We use the cross-domain scenario introduced by Chen et al. [3], where the meta-training step is performed on the mini-ImageNet [39] dataset while the meta-testing evaluation is conducted on the CUB [11] dataset. Note that, different from Chen et al. [3], who select the model according to the validation performance on the CUB dataset, we pick the model via the validation performance on the mini-ImageNet dataset. The reason is that our goal is to analyze the generalization ability to the unseen domain, and thus we do not utilize any information from the CUB dataset.

Table 4 shows the results using the Gaussian DropGrad method. Since the domain shift in the cross-domain scenario is larger than that in the intra-domain case (i.e., both training and testing tasks are sampled from the mini-ImageNet dataset), the performance gains of applying the proposed DropGrad method reported in Table 4 are more significant than those in Table 1. The results demonstrate that the DropGrad scheme is able to effectively regularize the gradients and transfer them for learning new tasks in an unseen domain.

To further understand the improvement by the proposed method under the cross-domain setting, we visualize the class activation maps (CAMs) [45] of images from the unseen domain (CUB). More specifically, at testing time, we adapt the learner model with the support set $\mathcal{S}$. We then compute the class activation maps of the data in the query set $\mathcal{Q}$ from the last convolutional layer of the updated learner model $f_{\theta'}$. Figure 4 shows the results for the MAML, MetaSGD, and MetaSGD* approaches. The models trained with the proposed regularization method show activations on more discriminative regions. This suggests that the proposed regularization improves the generalization ability of gradient-based schemes, and thus enables these methods to adapt to novel tasks sampled from an unseen domain.
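For reference, a CAM can be computed by projecting the classifier weights of a given class onto the last convolutional feature map. The sketch below is our own simplified version and assumes a global-average-pooling architecture, as in the original CAM formulation [45]:

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx, out_size):
    """CAM for one class.

    features:  [C, h, w] activations of the last conv layer for one image
    fc_weight: [num_classes, C] weights of the final fully-connected layer
    """
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], features)
    cam = F.relu(cam)
    cam = cam / (cam.max() + 1e-8)  # normalize to [0, 1]
    return F.interpolate(cam[None, None], size=out_size,
                         mode="bilinear", align_corners=False)[0, 0]
```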

Comparison with the Existing Dropout Approach.

We also compare the proposed DropGrad approach with the existing Dropout [33] method under the cross-domain setting. We apply the Dropout scheme to the network activations in both $f_\theta$ and $f_{\theta'}$, using the dropout rate suggested by Ghiasi et al. [7]. As the results in Table 4 show, the proposed DropGrad method performs favorably against the Dropout approach. The larger performance gain from the DropGrad approach validates the effectiveness of imposing uncertainty on the inner-loop gradient for gradient-based meta-learning frameworks. On the other hand, applying the conventional Dropout causes inconsistent randomness between the two sets of parameters $\theta$ and $\theta'$, which makes it less effective than the proposed scheme.

Model Precision Success rate
MetaCREST [23]
MetaCREST w/ DropGrad
MetaSDNet [23]
MetaSDNet w/ DropGrad
Table 5: Precision and success rate on the OTB2015 dataset. The DropGrad method can be applied to visual tracking and improve the tracking performance.
Figure 5: Qualitative results of object online tracking on the OTB2015 dataset. Red boxes are the ground-truth, yellow boxes represent the original results, and green boxes stand for the results where the DropGrad method is applied. Models trained with the proposed DropGrad scheme are able to track objects more accurately.

4.3 Online Object Tracking

Visual object tracking aims to localize one particular object in a video sequence given the bounding box annotation in the first frame. To adapt the model to subsequent frames, one approach is to apply online adaptation during tracking. The Meta-Tracker [23] method uses meta-learning to improve two state-of-the-art online trackers, the correlation-based CREST [32] and the detection-based MDNet [21], which are denoted as MetaCREST and MetaSDNet, respectively. Based on the error signals from future frames, the Meta-Tracker updates the model during offline meta-training and obtains a robust initial network that generalizes well over future frames. We apply the proposed DropGrad method to train the MetaCREST and MetaSDNet models and evaluate them on the OTB2015 [43] dataset.

Implementation Details.

We train the models using the original source code (https://github.com/silverbottlep/meta_trackers). For meta-training, we use a subset of a large-scale video detection dataset [28] and the sequences from the VOT2013 [16], VOT2014 [17], and VOT2015 [15] datasets, excluding the sequences in the OTB2015 database, based on the same settings as the Meta-Tracker [23]. We apply the Gaussian DropGrad method with a fixed dropout rate. We use the default hyper-parameter settings and evaluate the performance with the models at the last training iteration.

Object Tracking Results.

The results of online object tracking on the OTB2015 dataset are presented in Table 5. The one-pass evaluation (OPE) protocol without restarts at failures is used in the experiments. We measure the precision and success rate based on the center location error and the bounding-box overlap ratio, respectively. The precision is calculated with a center-error threshold of 20 pixels, and the success rate is averaged over overlap thresholds ranging from 0 to 1 with a step of 0.05. Applying the proposed DropGrad method consistently improves the precision and success rate of both the MetaCREST and MetaSDNet trackers.
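The two metrics can be computed as in the following sketch (our own minimal implementation of the standard OTB conventions):

```python
import numpy as np

def precision(center_errors, threshold=20.0):
    """Fraction of frames whose predicted center lies within `threshold`
    pixels of the ground-truth center."""
    return float(np.mean(np.asarray(center_errors) <= threshold))

def success_rate(overlaps, step=0.05):
    """Average success over IoU thresholds from 0 to 1: for each threshold,
    the fraction of frames whose bounding-box overlap exceeds it."""
    overlaps = np.asarray(overlaps)
    thresholds = np.arange(0.0, 1.0 + step, step)
    return float(np.mean([(overlaps > t).mean() for t in thresholds]))
```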

We present sample results of online object tracking in Figure 5. We apply the proposed DropGrad method to the MetaCREST and MetaSDNet methods and evaluate these models on the OTB2015 dataset. Compared with the original MetaCREST and MetaSDNet, the models trained with the DropGrad method track objects more accurately.

4.4 Few-Shot Viewpoint Estimation

Viewpoint estimation aims to estimate the viewpoint (i.e., the 3D rotation) between the camera and an object of a specific category in the image. Given a few examples of a novel category with viewpoint annotations, few-shot viewpoint estimation attempts to predict the viewpoint of arbitrary objects of the same category. In this problem, the support set $\mathcal{S}$ contains a few images of a new class and the corresponding viewpoint annotations. We conduct experiments on the ObjectNet3D [44] dataset, a viewpoint estimation benchmark that contains 100 categories. Using the same evaluation protocol as in [37], we split the categories into disjoint sets for training and testing.

Implementation Details.

We apply the proposed DropGrad method to the MetaView [37] approach, a meta-Siamese viewpoint estimator that applies gradient-based adaptation to novel categories. We obtain the source code from the authors and keep all default settings for training. We apply the Gaussian DropGrad scheme with a fixed dropout rate. Since no validation set is available, we pick the model trained in the last epoch for evaluation.

Viewpoint Estimation Results.

We show the viewpoint estimation results in Table 6. The evaluation metrics include Acc30 and MedErr, which represent the percentage of viewpoint predictions with rotation error under 30° and the median rotation error, respectively. The overall performance is improved by applying the proposed DropGrad method to the MetaView model during training.

Model | Acc30 (%) | MedErr (°)
MetaView [37]
MetaView w/ DropGrad
Table 6: Viewpoint estimation results. The DropGrad method can be applied to few-shot viewpoint estimation frameworks to mitigate the overfitting problem.

5 Conclusions

In this work, we propose a simple yet effective gradient dropout approach for regularizing the training of gradient-based meta-learning frameworks. The core idea is to impose uncertainty by augmenting the gradient in the adaptation step during meta-training. We propose two forms of noise regularization terms, based on the Bernoulli and Gaussian distributions, and demonstrate that the proposed DropGrad improves model performance on three learning tasks. In addition, extensive analysis and studies are provided to further understand the benefit of our method. A study on cross-domain few-shot classification also shows that the DropGrad method is able to mitigate the overfitting issue under a larger domain gap.

References

  • [1] A. Antoniou, H. Edwards, and A. Storkey (2019) How to train your maml. In ICLR, Cited by: §1, §2, §3.1, §3.2, §4.1, Table 1, Table 4.
  • [2] Y. Balaji, S. Sankaranarayanan, and R. Chellappa (2018) MetaReg: towards domain generalization using meta-regularization. In NeurIPS, Cited by: §1.
  • [3] W. Chen, Y. Liu, Z. Kira, Y. Wang, and J. Huang (2019) A closer look at few-shot classification. In ICLR, Cited by: Appendix B, Appendix B, §4.1, §4.2.
  • [4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In CVPR, Cited by: §4.1.
  • [5] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, Cited by: Appendix B, Appendix C, Appendix C, §1, §1, §2, §3.1, §3.2, Figure 2, §4.1, §4.1, Table 1, Table 3, Table 4, 1.
  • [6] C. Finn, K. Xu, and S. Levine (2018) Probabilistic model-agnostic meta-learning. In NeurIPS, Cited by: §2.
  • [7] G. Ghiasi, T. Lin, and Q. V. Le (2018) DropBlock: a regularization method for convolutional networks. In NeurIPS, Cited by: §1, §2, §4.2.
  • [8] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio (2013) Maxout networks. In ICML, Cited by: §2.
  • [9] A. Gupta, R. Mendonca, Y. Liu, P. Abbeel, and S. Levine (2018) Meta-reinforcement learning of structured exploration strategies. In NeurIPS, Cited by: §1.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §4.1.
  • [11] N. Hilliard, L. Phillips, S. Howland, A. Yankov, C. D. Corley, and N. O. Hodas (2018) Few-shot learning with metric-agnostic conditional embeddings. arXiv preprint arXiv:1802.04376. Cited by: §4.2.
  • [12] T. Kim, J. Yoon, O. Dia, S. Kim, Y. Bengio, and S. Ahn (2018) Bayesian model-agnostic meta-learning. In NeurIPS, Cited by: §1, §1, §2.
  • [13] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR, Cited by: Appendix B, Appendix B.
  • [14] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi (1983) Optimization by simulated annealing. Science 220 (4598), pp. 671–680. Cited by: Appendix E.
  • [15] M. Kristan, J. Matas, A. Leonardis, M. Felsberg, L. Č. Zajc, G. Fernandez, T. Vojir, G. Häger, and G. N. et al. (2015) The visual object tracking vot2015 challenge results. In ICCV Workshop, Cited by: §4.3.
  • [16] M. Kristan, R. Pflugfelder, A. Leonardis, J. Matas, F. Porikli, L. Č. Zajc, G. Nebehay, G. Fernandez, and T. V. et al. (2013) The visual object tracking vot2013 challenge results. In ICCV Workshop, Cited by: §4.3.
  • [17] M. Kristan, R. Pflugfelder, A. Leonardis, J. Matas, L. Č. Zajc, G. Nebehay, T. Vojir, G. Fernandez, and A. L. et al. (2014) The visual object tracking vot2014 challenge results. In ECCV Workshop, Cited by: §4.3.
  • [18] G. Larsson, M. Maire, and G. Shakhnarovich (2017) Fractalnet: ultra-deep neural networks without residuals. In ICLR, Cited by: §1, §2.
  • [19] D. Li, Y. Yang, Y. Song, and T. M. Hospedales (2018) Learning to generalize: meta-learning for domain generalization. In AAAI, Cited by: §1.
  • [20] Z. Li, F. Zhou, F. Chen, and H. Li (2017) Meta-sgd: learning to learn quickly for few shot learning. arXiv preprint arXiv:1707.09835. Cited by: Appendix B, §1, §2, §3.1, §3.2, §4.1, Table 1, Table 4.
  • [21] H. Nam and B. Han (2016) Learning multi-domain convolutional neural networks for visual tracking. In CVPR, Cited by: §4.3.
  • [22] B. Oreshkin, P. R. López, and A. Lacoste (2018) Tadam: task dependent adaptive metric for improved few-shot learning. In NeurIPS, Cited by: §2.
  • [23] E. Park and A. C. Berg (2018) Meta-tracker: fast and robust online adaptation for visual object trackers. In ECCV, Cited by: Appendix B, §1, §4.3, §4.3, Table 5.
  • [24] K. Rakelly, A. Zhou, D. Quillen, C. Finn, and S. Levine (2019) Efficient off-policy meta-reinforcement learning via probabilistic context variables. In ICML, Cited by: §1.
  • [25] S. Ravi and A. Beatson (2019) Amortized bayesian meta-learning. In ICLR, Cited by: §2.
  • [26] S. Ravi and H. Larochelle (2017) Optimization as a model for few-shot learning. In ICLR, Cited by: Appendix B, §4.1.
  • [27] D. J. Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra (2016) One-shot generalization in deep generative models. JMLR 48. Cited by: §2.
  • [28] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet large scale visual recognition challenge. IJCV. Cited by: §4.3.
  • [29] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell (2019) Meta-learning with latent embedding optimization. In ICLR, Cited by: §2, §3.1, §4.1.
  • [30] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap (2016) Meta-learning with memory-augmented neural networks. In ICML, Cited by: §1, §2.
  • [31] J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In NIPS, Cited by: §1, §2.
  • [32] Y. Song, C. Ma, L. Gong, J. Zhang, R. W. H. Lau, and M. Yang (2017) CREST: convolutional residual learning for visual tracking. In ICCV, Cited by: §4.3.
  • [33] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. JMLR 15 (1), pp. 1929–1958. Cited by: §1, §2, §3.2, §4.1, §4.2, Table 4.
  • [34] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales (2018) Learning to compare: relation network for few-shot learning. In CVPR, Cited by: §2.
  • [35] E. Todorov, T. Erez, and Y. Tassa (2012) Mujoco: a physics engine for model-based control. In IEEE International Conference on Intelligent Robots and Systems, Cited by: Appendix C.
  • [36] J. Tompson, R. Goroshin, A. Jain, Y. LeCun, and C. Bregler (2015) Efficient object localization using convolutional networks. In CVPR, Cited by: §1, §2, §4.1.
  • [37] H. Tseng, S. De Mello, J. Tremblay, S. Liu, S. Birchfield, M. Yang, and J. Kautz (2019) Few-shot viewpoint estimation. In BMVC, Cited by: §1, §4.4, §4.4, Table 6.
  • [38] H. Tseng, H. Lee, J. Huang, and M. Yang (2020) Cross-domain few-shot classification via learned feature-wise transformation. In ICLR, Cited by: §2.
  • [39] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. (2016) Matching networks for one shot learning. In NIPS, Cited by: §1, §1, §2, §4.1, §4.2.
  • [40] L. Wan, M. Zeiler, S. Zhang, Y. Le Cun, and R. Fergus (2013) Regularization of neural networks using dropconnect. In ICML, Cited by: §1, §2.
  • [41] S. Wang and C. Manning (2013) Fast dropout training. In ICML, Cited by: §1.
  • [42] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona (2010) Caltech-ucsd birds 200. Technical report Technical Report CNS-TR-2010-001, California Institute of Technology. Cited by: §1.
  • [43] Y. Wu, J. Lim, and M. Yang (2015) Object tracking benchmark. TPAMI. Cited by: §4.3.
  • [44] Y. Xiang, W. Kim, W. Chen, J. Ji, C. Choy, H. Su, R. Mottaghi, L. Guibas, and S. Savarese (2016) ObjectNet3D: a large scale database for 3d object recognition. In ECCV, Cited by: §4.4.
  • [45] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In CVPR, Cited by: Appendix D, §4.2.
  • [46] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le (2018) Learning transferable architectures for scalable image recognition. In CVPR, Cited by: §1, §2.

Appendix A Appendix

In this appendix, we first supplement the implementation details. We then present additional experimental results of reinforcement learning and cross-domain few-shot classification. Finally, we compare the proposed DropGrad regularization algorithm with the simulated annealing methods.

Appendix B Supplementary Implementation Details

Few-Shot Classification.

We use the implementation from [3] to train and evaluate MAML [5] on few-shot classification tasks (https://github.com/wyharveychen/CloserLookFewShot). We adapt the same implementation for MetaSGD [20] ourselves. We verify our implementation by evaluating the MetaSGD model using the Conv4 backbone, which is the same backbone network adopted in the original paper; the 5-way classification results on the mini-ImageNet dataset [26] from our implementation are comparable to those reported in the original paper.

To train both the MAML and MetaSGD models, we keep the default settings of the original implementation by [3]. We apply the Adam [13] optimizer, train the models for a fixed number of epochs, and do not apply any learning rate decay strategy.

Online Object Tracking.

We conduct the online object tracking experiments based on the PyTorch implementation by [23]. For MetaSDNet, the first three convolutional layers of VGG-16 are used as the feature extractor. During meta-training, the last three fully-connected layers are randomly initialized; we only update these fully-connected layers in the first 5,000 iterations and then train the entire network for the remaining iterations. We adopt the Adam optimizer [13] and decrease the initial learning rate partway through training. For MetaCREST, we also use the Adam optimizer.

Appendix C Reinforcement Learning

We adopt the few-shot reinforcement learning (RL) setting as in [5], which aims to make the system adapt to new experiences and quickly learn the corresponding policy with limited prior experience (i.e., trajectories). In this setting, the support set contains a few trajectories and the corresponding rewards, while the query set is formed by a set of new trajectories sampled from the running policy. We conduct the experiment with the locomotion tasks simulated by the MuJoCo [35] simulator. Two environments are considered: the HalfCheetah and Ant robots with forward/backward movement, i.e., HalfCheetah-Dir and Ant-Dir.

Implementation Details.

We adopt the MAML-TRPO [5] framework as the baseline method. Since the rewards are usually not differentiable, policy gradients are calculated for adapting the RL models to new experiences in both the inner- and outer-loop optimization. To apply the proposed DropGrad scheme in the RL framework, we augment the policy gradients computed from the rewards on the support set during the inner-loop optimization. We use a public PyTorch implementation with the default hyper-parameter settings (https://github.com/tristandeleu/pytorch-maml-rl).

Reinforcement Learning Results.

In Figure 6, we present the rewards after the model is optimized with the few trajectories, i.e., in each iteration we perform a one-step policy gradient update for the inner-loop optimization. In both environments, the training process with the proposed DropGrad regularization method converges to more favorable rewards than the original training without the proposed regularization. This improvement could be attributed to the uncertainty imposed on the gradients, which encourages better exploration of the policy.

Figure 6: Few-shot reinforcement learning results. Two settings, HalfCheetah-Dir (left) and Ant-Dir (right), are considered in our experiments using the MAML-TRPO framework. We show the reward curves after the model is updated with the few trajectories; both rewards converge favorably against the original training.

Appendix D Cross-Domain Few-Shot Classification

In Figure 7, Figure 8 and Figure 9, we provide more results of the class activation maps (CAMs) [45] for the cross-domain few-shot classification task. The meta-training and meta-testing steps are conducted on the mini-ImageNet and CUB datasets, respectively. We apply the DropGrad scheme on the MAML, MetaSGD, and MetaSGD* approaches. The results show that models trained with the proposed DropGrad regularization focus on more discriminative regions.

Figure 7: Class activation maps (CAMs) for cross-domain 5-shot classification. The mini-ImageNet and CUB datasets are used for the meta-training and meta-testing steps, respectively. Models trained with the proposed DropGrad (the third row for each example) focus more on the objects than the original models (the second row for each example).
Figure 8: Class activation maps (CAMs) for cross-domain 5-shot classification. The mini-ImageNet and CUB datasets are used for the meta-training and meta-testing steps, respectively. Models trained with the proposed DropGrad (the third row for each example) focus more on the objects than the original models (the second row for each example).
Figure 9: Class activation maps (CAMs) for cross-domain 5-shot classification. The mini-ImageNet and CUB datasets are used for the meta-training and meta-testing steps, respectively. Models trained with the proposed DropGrad (the third row for each example) focus more on the objects than the original models (the second row for each example).

Appendix E Comparison to Simulated Annealing

The proposed DropGrad algorithm is also related to simulated annealing (SA) [14]. While conceptually similar to a certain extent, the goals and formulations are significantly different. SA modulates the optimization by exploring uncertain solutions to escape from local minima during the training stage. On the other hand, our DropGrad method drops the inner-loop gradient to introduce uncertainty into the adaptation step of the gradient-based meta-learning framework.