1 Introduction
The ability to learn from a few examples is a hallmark of human intelligence, yet it remains a challenge for modern machine learning systems. This problem has received significant attention from the machine learning community recently, where few-shot learning is cast as a meta-learning problem (e.g., [22, 8, 33, 28]). The goal is to minimize generalization error across a distribution of tasks with few training examples. Typically, these approaches are composed of an embedding model that maps the input domain into a feature space and a base learner that maps the feature space to task variables. The meta-learning objective is to learn an embedding model such that the base learner generalizes well across tasks. While many choices for base learners exist, nearest-neighbor classifiers and their variants (e.g., [28, 33]) are popular as the classification rule is simple and the approach scales well in the low-data regime. However, discriminatively trained linear classifiers often outperform nearest-neighbor classifiers (e.g., [4, 16]) in the low-data regime, as they can exploit the negative examples, which are often more abundant, to learn better class boundaries. Moreover, they can effectively use high-dimensional feature embeddings, as model capacity can be controlled by appropriate regularization such as weight sparsity or norm.
Hence, in this paper, we investigate linear classifiers as the base learner for a meta-learning based approach to few-shot learning. The approach is illustrated in Figure 1, where a linear support vector machine (SVM) is used to learn a classifier given a set of labeled training examples, and the generalization error is computed on a novel set of examples from the same task. The key challenge is computational, since the meta-learning objective of minimizing the generalization error across tasks requires training a linear classifier in the inner loop of optimization (see Section 3). However, the objective of linear models is convex and can be solved efficiently. We observe two additional properties arising from convexity that allow efficient meta-learning: implicit differentiation of the optimization [2, 11] and the low-rank nature of the classifier in the few-shot setting. The first property allows the use of off-the-shelf convex optimizers to estimate the optima and implicitly differentiate the optimality, or Karush-Kuhn-Tucker (KKT), conditions to train the embedding model. The second property means that the number of optimization variables in the dual formulation is far smaller than the feature dimension in few-shot learning. To this end, we incorporate a differentiable quadratic programming (QP) solver [1] which allows end-to-end learning of the embedding model with various linear classifiers, e.g., multi-class support vector machines (SVMs) [5] or linear regression, for few-shot classification tasks. Making use of these properties, we show that our method is practical and offers substantial gains over nearest-neighbor classifiers at a modest increase in computational cost (see Table 3). Our method achieves state-of-the-art performance on 5-way 1-shot and 5-shot classification for popular few-shot benchmarks including miniImageNet [33, 22], tieredImageNet [23], CIFAR-FS [3], and FC100 [20].

2 Related Work
Meta-learning studies what aspects of the learner (commonly referred to as bias or prior) affect generalization across a distribution of tasks [26, 31, 32]. Meta-learning approaches for few-shot learning can be broadly categorized into three groups. Gradient-based methods [22, 8] use gradient descent to adapt the embedding model parameters (e.g., all layers of a deep network) given training examples. Nearest-neighbor methods [33, 28] learn a distance-based prediction rule over the embeddings. For example, prototypical networks [28] represent each class by the mean embedding of its examples, and the classification rule is based on the distance to the nearest class mean. Another example is matching networks [33], which learn a kernel density estimate of the class densities using the embeddings over training data (the model can also be interpreted as a form of attention over training examples). Model-based methods [18, 19] learn a parameterized predictor to estimate model parameters, e.g., a recurrent network that predicts parameters analogous to a few steps of gradient descent in parameter space. While gradient-based methods are general, they are prone to overfitting as the embedding dimension grows [18, 25]. Nearest-neighbor approaches offer simplicity and scale well in the few-shot setting. However, nearest-neighbor methods have no mechanism for feature selection and are not very robust to noisy features.
Our work is related to techniques for backpropagation through optimization procedures. Domke [6] presented a generic method based on unrolling gradient descent for a fixed number of steps and using automatic differentiation to compute gradients. However, the trace of the optimizer (i.e., the intermediate values) needs to be stored in order to compute the gradients, which can be prohibitive for large problems. The storage overhead was considered in more detail by Maclaurin et al. [15], who studied low-precision representations of the optimization trace of deep networks. If the argmin of the optimization can be found analytically, such as in unconstrained quadratic minimization problems, then it is also possible to compute the gradients analytically. This has been applied to learning in low-level vision problems [30, 27]. A concurrent and closely related work [3] uses this idea to learn few-shot models using ridge-regression base learners, which have closed-form solutions. We refer readers to Gould et al. [11], which provides an excellent survey of techniques for differentiating argmin and argmax problems.

Our approach advocates the use of linear classifiers, which can be formulated as convex learning problems. In particular, the objective is a quadratic program (QP), which can be solved efficiently to its global optimum using gradient-based techniques. Moreover, the solution of a convex problem can be characterized by its Karush-Kuhn-Tucker (KKT) conditions, which allow us to backpropagate through the learner using the implicit function theorem [12]. Specifically, we use the formulation of Amos and Kolter [1], which provides efficient GPU routines for computing solutions to QPs and their gradients. While they applied this framework to learn representations for constraint satisfaction problems, it is also well-suited for few-shot learning, as the problem sizes that arise are typically small.

While our experiments focus on linear classifiers with hinge loss and $\ell_2$ regularization, our framework can be used with other loss functions and non-linear kernels. For example, the ridge regression learner used in [3] can be implemented within our framework, allowing a direct comparison.

3 Meta-learning with Convex Base Learners
We first derive the meta-learning framework for few-shot learning following prior work (e.g., [28, 22, 8]) and then discuss how convex base learners, such as linear SVMs, can be incorporated.
3.1 Problem formulation
Given a training set $\mathcal{D}^{train} = \{(x_t, y_t)\}_{t=1}^{T}$, the goal of the base learner $\mathcal{A}$ is to estimate parameters $\theta$ of the predictor $y = f(x; \theta)$ so that it generalizes well to the unseen test set $\mathcal{D}^{test} = \{(x_t, y_t)\}_{t=1}^{Q}$. It is often assumed that the training and test sets are sampled from the same distribution and that the domain is mapped to a feature space using an embedding model $f_\phi$ parameterized by $\phi$. For optimization-based learners, the parameters are obtained by minimizing the empirical loss over training data along with a regularization term that encourages simpler models. This can be written as:

$$\theta = \mathcal{A}(\mathcal{D}^{train}; \phi) = \arg\min_\theta \; \mathcal{L}^{base}(\mathcal{D}^{train}; \theta, \phi) + \mathcal{R}(\theta) \qquad (1)$$

where $\mathcal{L}^{base}$ is a loss function, such as the negative log-likelihood of labels, and $\mathcal{R}(\theta)$ is a regularization term. Regularization plays an important role in generalization when training data is limited.
Meta-learning approaches for few-shot learning aim to minimize the generalization error across tasks sampled from a task distribution. Concretely, this can be thought of as learning over a collection of tasks $\{(\mathcal{D}_i^{train}, \mathcal{D}_i^{test})\}_{i=1}^{I}$, often referred to as a meta-training set. The tuple $(\mathcal{D}_i^{train}, \mathcal{D}_i^{test})$ describes a training and a test dataset, i.e., a task. The objective is to learn an embedding model $\phi$ that minimizes generalization (or test) error across tasks given a base learner $\mathcal{A}$. Formally, the learning objective is:

$$\min_\phi \; \mathbb{E}_{\mathcal{T}} \left[ \mathcal{L}^{meta}(\mathcal{D}^{test}; \theta, \phi), \ \text{where} \ \theta = \mathcal{A}(\mathcal{D}^{train}; \phi) \right] \qquad (2)$$
Figure 1 illustrates the training and testing for a single task. Once the embedding model $f_\phi$ is learned, its generalization is estimated on a set of held-out tasks (often referred to as a meta-test set), computed as:

$$\mathbb{E}_{\mathcal{S}} \left[ \mathcal{L}^{meta}(\mathcal{D}^{test}; \theta, \phi), \ \text{where} \ \theta = \mathcal{A}(\mathcal{D}^{train}; \phi) \right] \qquad (3)$$
Following prior work [22, 8], we call the stages of estimating the expectations in Equations 2 and 3 meta-training and meta-testing, respectively. During meta-training, we keep an additional held-out meta-validation set to choose the hyperparameters of the meta-learner and pick the best embedding model.
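The bi-level structure above can be sketched concretely. Below is a minimal, self-contained NumPy illustration (ours, not the paper's implementation): a toy linear map plays the role of the embedding $f_\phi$, a nearest-class-mean rule plays the role of the base learner $\mathcal{A}$, and the meta-loss is evaluated on the query set. All names are illustrative; in practice an autodiff framework would backpropagate the meta-loss through both the base learner and the embedding to update $\phi$.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, phi):
    """Toy embedding model: a single linear layer parameterized by phi."""
    return x @ phi

def base_learner(support_x, support_y, n_way):
    """Nearest-class-mean base learner: returns one prototype per class."""
    return np.stack([support_x[support_y == k].mean(axis=0) for k in range(n_way)])

def meta_loss(prototypes, query_x, query_y):
    """Negative log-likelihood of a softmax over negative squared distances."""
    d = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -d
    logits = logits - logits.max(axis=1, keepdims=True)   # stabilize log-sum-exp
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -logp[np.arange(len(query_y)), query_y].mean()

# One meta-training step on a single 5-way 2-shot task (the gradient step on
# phi is omitted; an autodiff framework would backpropagate meta_loss through
# base_learner and embed to update phi):
phi = rng.standard_normal((8, 4))
support_x, support_y = rng.standard_normal((10, 8)), np.repeat(np.arange(5), 2)
query_x, query_y = rng.standard_normal((15, 8)), np.repeat(np.arange(5), 3)
loss = meta_loss(base_learner(embed(support_x, phi), support_y, 5),
                 embed(query_x, phi), query_y)
```

Swapping `base_learner` for a regularized linear classifier, as this paper advocates, leaves the outer structure unchanged.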
3.2 Episodic sampling of tasks
Standard few-shot learning benchmarks such as miniImageNet [22] evaluate models in $K$-way, $N$-shot classification tasks. Here $K$ denotes the number of classes, and $N$ denotes the number of training examples per class. Few-shot learning techniques are evaluated for small values of $N$, typically $N \in \{1, 5\}$. In practice, these datasets do not explicitly contain tuples $(\mathcal{D}_i^{train}, \mathcal{D}_i^{test})$; instead, each task for meta-learning is constructed "on the fly" during the meta-training stage, commonly described as an episode.

For example, in prior work [33, 22], a task (or episode) $\mathcal{T}_i = (\mathcal{D}_i^{train}, \mathcal{D}_i^{test})$ is sampled as follows. For each episode, $K$ categories from the overall set of training categories $\mathcal{C}^{train}$ are first sampled (with replacement across episodes); then a training (support) set $\mathcal{D}_i^{train}$ consisting of $N$ images per category is sampled; and finally, a test (query) set $\mathcal{D}_i^{test}$ consisting of $Q$ images per category is sampled.

We emphasize that we need to sample images without replacement, i.e., $\mathcal{D}_i^{train} \cap \mathcal{D}_i^{test} = \emptyset$, to optimize the generalization error. In the same manner, the meta-validation set and meta-test set are constructed on the fly from $\mathcal{C}^{val}$ and $\mathcal{C}^{test}$, respectively. In order to measure the embedding model's generalization to unseen categories, $\mathcal{C}^{train}$, $\mathcal{C}^{val}$, and $\mathcal{C}^{test}$ are chosen to be mutually disjoint.
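The episodic sampling procedure above can be sketched as follows. This is an illustrative helper (ours, not from the paper), with the disjointness of support and query enforced by construction.

```python
import numpy as np

def sample_episode(labels, n_way, k_shot, n_query, rng):
    """Sample one few-shot episode: n_way classes, k_shot support and n_query
    query images per class, with the support and query sets disjoint."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.extend(idx[:k_shot])                 # first k_shot images: support
        query.extend(idx[k_shot:k_shot + n_query])   # next n_query images: query
    return np.array(support), np.array(query)

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(20), 30)   # toy dataset: 20 classes, 30 images each
s, q = sample_episode(labels, n_way=5, k_shot=1, n_query=15, rng=rng)
assert set(s.tolist()).isdisjoint(q.tolist())   # D_i^train and D_i^test disjoint
```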
3.3 Convex base learners
The choice of the base learner $\mathcal{A}$ has a significant impact on Equation 2. The base learner that computes $\theta = \mathcal{A}(\mathcal{D}^{train}; \phi)$ has to be efficient, since the expectation has to be computed over a distribution of tasks. Moreover, to estimate the parameters $\phi$ of the embedding model, the gradients of the task test error $\mathcal{L}^{meta}$ with respect to $\phi$ have to be efficiently computed. This has motivated simple base learners such as the nearest class mean [28], for which the parameters of the base learner are easy to compute and the objective is differentiable.
We consider base learners based on multi-class linear classifiers (e.g., support vector machines (SVMs) [5, 34], logistic regression, and ridge regression) where the base learner's objective is convex. For example, a $K$-class linear SVM can be written as $\theta = \{w_k\}_{k=1}^{K}$. The Crammer and Singer [5] formulation of the multi-class SVM is:

$$\theta = \mathcal{A}(\mathcal{D}^{train}; \phi) = \arg\min_{\{w_k\}} \min_{\{\xi_n\}} \; \frac{1}{2} \sum_k \|w_k\|_2^2 + C \sum_n \xi_n \qquad (4)$$
$$\text{subject to} \quad w_{y_n} \cdot f_\phi(x_n) - w_k \cdot f_\phi(x_n) \geq 1 - \delta_{y_n,k} - \xi_n, \quad \forall n, k$$

where $\mathcal{D}^{train} = \{(x_n, y_n)\}$, $C$ is the regularization parameter, and $\delta_{\cdot,\cdot}$ is the Kronecker delta function.
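For intuition, the Crammer-Singer objective can also be minimized directly in its equivalent unconstrained hinge-loss form, since the optimal slack is $\xi_n = \max_k (w_k \cdot f_\phi(x_n) - w_{y_n} \cdot f_\phi(x_n) + 1 - \delta_{y_n,k})$. The following toy NumPy sketch (our illustration; the paper instead solves the QP with a differentiable solver) does this with subgradient descent on easy data.

```python
import numpy as np

def multiclass_svm(X, y, n_class, C=0.1, lr=0.1, steps=500):
    """Subgradient descent on the unconstrained Crammer-Singer objective:
    0.5 * sum_k ||w_k||^2 + C * sum_n max_k (w_k.x_n - w_{y_n}.x_n + 1 - delta_{y_n,k})."""
    n = X.shape[0]
    W = np.zeros((n_class, X.shape[1]))
    for _ in range(steps):
        scores = X @ W.T                              # (n, n_class)
        margins = scores - scores[np.arange(n), y][:, None] + 1.0
        margins[np.arange(n), y] -= 1.0               # subtract the Kronecker delta
        k_star = margins.argmax(1)                    # most-violating class per example
        G = W.copy()                                  # gradient of the L2 regularizer
        for i in np.flatnonzero(margins[np.arange(n), k_star] > 0):
            G[k_star[i]] += C * X[i]                  # hinge subgradient
            G[y[i]] -= C * X[i]
        W -= lr * G
    return W

# Three well-separated 2-D clusters, 10 points per class (illustrative data):
rng = np.random.default_rng(0)
centers = np.array([[0.0, 5.0], [5.0, 0.0], [-5.0, 0.0]])
X = np.concatenate([rng.normal(c, 0.1, size=(10, 2)) for c in centers])
y = np.repeat(np.arange(3), 10)
W = multiclass_svm(X, y, n_class=3)
```

A QP solver reaches the same optimum in far fewer, more expensive iterations, which is what makes it practical inside the meta-learning inner loop.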
Gradients of the SVM objective.
From Figure 1, we see that in order to make our system end-to-end trainable, we require the solution of the SVM solver to be differentiable with respect to its input, i.e., we should be able to compute $\partial \theta / \partial f_\phi(x_n)$. The objective of the SVM is convex and has a unique optimum. This allows the use of the implicit function theorem (e.g., [12, 7, 2]) on the optimality (KKT) conditions to obtain the necessary gradients. For the sake of completeness, we derive the form of the theorem for convex optimization problems as stated in [2]. Consider the following convex optimization problem:
$$\text{minimize} \quad f_0(z, x) \qquad \text{subject to} \quad f(z, x) \preceq 0, \quad h(z, x) = 0 \qquad (5)$$

where the vector $z \in \mathbb{R}^d$ is the optimization variable of the problem and the vector $x \in \mathbb{R}^e$ is the input parameter of the optimization problem, which is $f_\phi(\mathcal{D}^{train})$ in our case. We can optimize the objective by solving for the saddle point $(\tilde{z}, \tilde{\lambda}, \tilde{\nu})$ of the following Lagrangian:

$$L(z, \lambda, \nu, x) = f_0(z, x) + \lambda^\top f(z, x) + \nu^\top h(z, x) \qquad (6)$$

In other words, we can obtain the optimum of the objective function by solving $g(\tilde{z}, \tilde{\lambda}, \tilde{\nu}, x) = 0$, where

$$g(z, \lambda, \nu, x) = \begin{bmatrix} \nabla_z L(z, \lambda, \nu, x) \\ \mathbf{diag}(\lambda)\, f(z, x) \\ h(z, x) \end{bmatrix} \qquad (7)$$

Given a function $f: \mathbb{R}^n \to \mathbb{R}^m$, denote its Jacobian $\partial f(x) / \partial x \in \mathbb{R}^{m \times n}$ as $\nabla_x f(x)$.
Theorem 1 (from Barratt [2]). Suppose $g(\tilde{z}, \tilde{\lambda}, \tilde{\nu}, x) = 0$. Then, when all derivatives exist,

$$\nabla_x \tilde{z} = -\nabla_z g(\tilde{z}, \tilde{\lambda}, \tilde{\nu}, x)^{-1} \, \nabla_x g(\tilde{z}, \tilde{\lambda}, \tilde{\nu}, x) \qquad (8)$$
This result is obtained by applying the implicit function theorem to the KKT conditions. Thus, once we compute the optimal solution $\tilde{z}$, we can obtain a closed-form expression for its gradient with respect to the input data. This obviates the need for backpropagating through the entire optimization trajectory, since the solution does not depend on the trajectory or the initialization owing to its uniqueness. This also saves memory, an advantage that convex problems have over generic optimization problems.
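The gradient in Theorem 1 can be checked numerically on a small unconstrained problem. The following NumPy sketch (our illustration) solves a regularized least-squares problem, differentiates its stationarity condition $g(z, x) = 0$ implicitly, and verifies the resulting Jacobian against finite differences of the solver output; note that no optimization trace is stored.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.5
A = rng.standard_normal((6, 3))
x = rng.standard_normal(6)

def solve(x):
    """Unconstrained convex problem: z*(x) = argmin_z 0.5*lam*||z||^2 + 0.5*||A z - x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ x)

# Implicit differentiation of the optimality condition g(z, x) = 0, with
# g(z, x) = (A^T A + lam*I) z - A^T x, gives dz*/dx = -(dg/dz)^{-1} (dg/dx):
J_implicit = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T)

# Finite-difference check against the solver itself:
eps = 1e-6
J_fd = np.stack([(solve(x + eps * e) - solve(x - eps * e)) / (2 * eps)
                 for e in np.eye(6)], axis=1)
assert np.allclose(J_implicit, J_fd, atol=1e-5)
```

A differentiable QP solver such as [1] performs the analogous computation for inequality-constrained problems using the full KKT system of Equation 7.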
Time complexity.
The forward pass (i.e., the computation of Equation 4) using our approach requires solving a QP, whose complexity scales as $O(m^3)$, where $m$ is the number of optimization variables. This time is dominated by factorizing the KKT matrix required for the primal-dual interior point method. The backward pass requires the solution to Equation 8 in Theorem 1, whose complexity is $O(m^2)$ given the factorization already computed in the forward pass. Both the forward and backward passes can be expensive when the dimension of the embedding is large.
Dual formulation.
The dual formulation of the objective in Equation 4 allows us to address the poor dependence on the embedding dimension, and can be written as follows. Let

$$w_k(\alpha^k) = \sum_n \alpha_n^k f_\phi(x_n) \qquad (9)$$

We can instead optimize in the dual space:

$$\max_{\{\alpha^k\}} \; \left[ -\frac{1}{2} \sum_k \|w_k(\alpha^k)\|_2^2 + \sum_n \alpha_n^{y_n} \right] \qquad (10)$$
$$\text{subject to} \quad \alpha_n^{y_n} \leq C, \quad \alpha_n^k \leq 0 \ \forall k \neq y_n, \quad \sum_k \alpha_n^k = 0 \ \forall n$$
This results in a quadratic program (QP) over the dual variables $\{\alpha^k\}_{k=1}^{K}$. We note that the size of the optimization variable is the number of training examples times the number of classes, which is often much smaller than the feature dimension in few-shot learning. We solve the dual QP of Equation 10 using [1], which implements a differentiable GPU-based QP solver. In practice (as seen in Table 3), the time taken by the QP solver is comparable to the time taken to compute features using the ResNet-12 architecture, so the overall speed per iteration is not significantly different from that of methods based on simple base learners such as the nearest class prototype (mean) used in Prototypical Networks [28].
Concurrent to our work, Bertinetto et al. [3] employed ridge regression as the base learner, which has a closed-form solution. Although ridge regression may not be best suited for classification problems, their work showed that training models by minimizing squared error with respect to one-hot labels works well in practice. The resulting optimization for ridge regression is also a QP and can be implemented within our framework as:

$$\theta = \mathcal{A}(\mathcal{D}^{train}; \phi) = \arg\max_{\{\alpha^k\}} \; \left[ \sum_n \alpha_n^{y_n} - \frac{1}{2} \sum_k \|w_k(\alpha^k)\|_2^2 - \frac{\lambda}{2} \sum_{n,k} (\alpha_n^k)^2 \right] \qquad (11)$$

where $w_k(\alpha^k)$ is defined as in Equation 9. A comparison of the linear SVM and ridge regression in Section 4 shows a slight advantage for the linear SVM formulation.
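The payoff of the dual view is easy to verify numerically: for ridge regression, the primal $d \times d$ system and the $n \times n$ dual (Gram-matrix) system yield exactly the same weights, while the dual involves only $n$ variables regardless of the embedding dimension. A minimal NumPy check (our illustration, using the standard primal/dual ridge identity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 5, 1000, 50.0     # few examples, high-dimensional embedding
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Primal ridge regression: solve a d x d system (expensive when d is large).
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Dual ridge regression: solve an n x n system over the Gram matrix X X^T and
# recover w = X^T alpha; only n dual variables, independent of d.
alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
w_dual = X.T @ alpha

assert np.allclose(w_primal, w_dual)
```

The multi-class SVM dual of Equation 10 enjoys the same property, with $nK$ dual variables in place of $n$.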
3.4 Metalearning objective
To measure the performance of the model, we evaluate the negative log-likelihood of the test data sampled from the same task. Hence, we can re-express the meta-learning objective of Equation 2 as:

$$\mathcal{L}^{meta}(\mathcal{D}^{test}; \theta, \phi, \gamma) = \sum_{(x, y) \in \mathcal{D}^{test}} \left[ -\gamma\, w_y \cdot f_\phi(x) + \log \sum_k \exp\big(\gamma\, w_k \cdot f_\phi(x)\big) \right] \qquad (12)$$

where $\theta = \mathcal{A}(\mathcal{D}^{train}; \phi) = \{w_k\}_{k=1}^{K}$ and $\gamma$ is a learnable scale parameter. Prior work in few-shot learning [20, 3, 10] suggests that adjusting the prediction score by a learnable scale parameter provides better performance with nearest class mean and ridge regression base learners. We empirically find that inserting $\gamma$ is beneficial for meta-learning with the SVM base learner as well. While other choices of test loss, such as hinge loss, are possible, the log-likelihood worked best in our experiments.
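Equation 12 is a softmax cross-entropy over base-learner scores rescaled by $\gamma$. A minimal NumPy sketch (our illustration; averaging over examples rather than summing is a convention here):

```python
import numpy as np

def meta_loss(scores, y, gamma):
    """Mean negative log-likelihood of a softmax over base-learner scores
    w_k . f_phi(x), rescaled by a learnable scale parameter gamma (Equation 12)."""
    z = gamma * scores
    z = z - z.max(axis=1, keepdims=True)          # stabilize the log-sum-exp
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

# Toy scores for two correctly-ranked test examples (3-way task):
scores = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.2,  0.3]])
y = np.array([0, 1])
# A larger gamma sharpens the softmax, lowering the loss on correct predictions:
assert meta_loss(scores, y, gamma=10.0) < meta_loss(scores, y, gamma=1.0)
```

During meta-training, $\gamma$ is learned jointly with $\phi$ by gradient descent on this loss.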
4 Experiments
We first describe the network architecture and optimization details used in our experiments (Section 4.1). We then present results on standard few-shot classification benchmarks, including derivatives of ImageNet (Section 4.2) and CIFAR (Section 4.3), followed by a detailed analysis of the impact of various base learners on accuracy and speed using the same embedding network and training setup (Sections 4.4-4.6).

4.1 Implementation details
Meta-learning setup. We use a ResNet-12 network following [20, 18] in our experiments. Let R(k) denote a residual block that consists of three 3×3 convolutions with k filters, batch normalization, and Leaky ReLU(0.1); let MP denote 2×2 max pooling. We use DropBlock regularization [9], a form of structured Dropout; let DB(k, b) denote a DropBlock layer with keep_rate=k and block_size=b. The network architecture for the ImageNet derivatives is:

R(64)-MP-DB(0.9,1)-R(160)-MP-DB(0.9,1)-R(320)-MP-DB(0.9,5)-R(640)-MP-DB(0.9,5),

while the network architecture used for the CIFAR derivatives is:

R(64)-MP-DB(0.9,1)-R(160)-MP-DB(0.9,1)-R(320)-MP-DB(0.9,2)-R(640)-MP-DB(0.9,2).

We do not apply global average pooling after the last residual block.
As an optimizer, we use SGD with Nesterov momentum of 0.9 and weight decay of 0.0005. Each mini-batch consists of 8 episodes. The model was meta-trained for 60 epochs, with each epoch consisting of 1000 episodes. The learning rate was initially set to 0.1 and then changed to 0.006, 0.0012, and 0.00024 at epochs 20, 40, and 50, respectively, following the practice of [10]. During meta-training, we adopt horizontal flip, random crop, and color (brightness, contrast, and saturation) jitter data augmentation as in [10, 21]. For experiments on miniImageNet with ResNet-12, we use label smoothing with $\epsilon = 0.1$. Unlike [28], which used higher-way classification for meta-training than for meta-testing, we use 5-way classification in both stages following recent work [10, 20]. Each class contains 6 test (query) samples during meta-training and 15 test samples during meta-testing. Our meta-trained model was chosen based on 5-way 5-shot test accuracy on the meta-validation set.
Meta-training shot. For prototypical networks, we match the meta-training shot to the meta-testing shot, following the usual practice [28, 10]. For SVM and ridge regression, we observe that keeping the meta-training shot higher than the meta-testing shot leads to better test accuracies, as shown in Figure 2. Hence, during meta-training, we set the training shot to 15 for miniImageNet with ResNet-12; 5 for miniImageNet with the 4-layer CNN (in Table 3); 10 for tieredImageNet; 5 for CIFAR-FS; and 15 for FC100.
Base-learner setup. For linear classifier training, we use the quadratic programming (QP) solver OptNet [1]. The regularization parameter $C$ of the SVM was set to 0.1, and the regularization parameter $\lambda$ of ridge regression was set to 50.0. For the nearest class mean (prototypical networks), we use the squared Euclidean distance normalized with respect to the feature dimension.
Early stopping. Although we can run the optimizer until convergence, in practice we found that running the QP solver for a fixed small number of iterations (just three) works well. Early stopping acts as an additional regularizer and even leads to slightly better performance.
4.2 Experiments on ImageNet derivatives
The miniImageNet dataset [33] is a standard benchmark for few-shot image classification, consisting of 100 randomly chosen classes from ILSVRC-2012 [24]. These classes are randomly split into 64, 16, and 20 classes for meta-training, meta-validation, and meta-testing, respectively. Each class contains 600 images of size 84×84. Since the class splits were not released in the original publication [33], we use the commonly used split proposed in [22].
The tieredImageNet benchmark [23] is a larger subset of ILSVRC-2012 [24], composed of 608 classes grouped into 34 high-level categories. These are divided into 20 categories for meta-training, 6 categories for meta-validation, and 8 categories for meta-testing, corresponding to 351, 97, and 160 classes, respectively. This dataset aims to minimize the semantic similarity between the splits. All images are of size 84×84.
model | backbone | miniImageNet 1-shot | miniImageNet 5-shot | tieredImageNet 1-shot | tieredImageNet 5-shot
Meta-Learning LSTM* [22] | 64-64-64-64 | 43.44 ± 0.77 | 60.60 ± 0.71 | - | -
Matching Networks* [33] | 64-64-64-64 | 43.56 ± 0.84 | 55.31 ± 0.73 | - | -
MAML [8] | 32-32-32-32 | 48.70 ± 1.84 | 63.11 ± 0.92 | 51.67 ± 1.81 | 70.30 ± 1.75
Prototypical Networks*† [28] | 64-64-64-64 | 49.42 ± 0.78 | 68.20 ± 0.66 | 53.31 ± 0.89 | 72.69 ± 0.74
Relation Networks* [29] | 64-96-128-256 | 50.44 ± 0.82 | 65.32 ± 0.70 | 54.48 ± 0.93 | 71.32 ± 0.78
R2D2 [3] | 96-192-384-512 | 51.2 ± 0.6 | 68.8 ± 0.1 | - | -
Transductive Prop Nets [14] | 64-64-64-64 | 55.51 ± 0.86 | 69.86 ± 0.65 | 59.91 ± 0.94 | 73.30 ± 0.75
SNAIL [18] | ResNet-12 | 55.71 ± 0.99 | 68.88 ± 0.92 | - | -
Dynamic Few-shot [10] | 64-64-128-128 | 56.20 ± 0.86 | 73.00 ± 0.64 | - | -
AdaResNet [19] | ResNet-12 | 56.88 ± 0.62 | 71.94 ± 0.57 | - | -
TADAM [20] | ResNet-12 | 58.50 ± 0.30 | 76.70 ± 0.30 | - | -
Activation to Parameter† [21] | WRN-28-10 | 59.60 ± 0.41 | 73.74 ± 0.19 | - | -
LEO† [25] | WRN-28-10 | 61.76 ± 0.08 | 77.59 ± 0.12 | 66.33 ± 0.05 | 81.44 ± 0.09
MetaOptNet-RR (ours) | ResNet-12 | 61.41 ± 0.61 | 77.88 ± 0.46 | 65.36 ± 0.71 | 81.34 ± 0.52
MetaOptNet-SVM (ours) | ResNet-12 | 62.64 ± 0.61 | 78.63 ± 0.46 | 65.99 ± 0.72 | 81.56 ± 0.53
MetaOptNet-SVM-trainval (ours)† | ResNet-12 | 64.09 ± 0.62 | 80.00 ± 0.45 | 65.81 ± 0.74 | 81.75 ± 0.53

Table 1: Average 5-way few-shot classification accuracies (%) with 95% confidence intervals on the miniImageNet and tieredImageNet meta-test splits. a-b-c-d denotes a 4-layer convolutional network with a, b, c, and d filters in each layer.
*Results from [22]. †Used the union of the meta-training set and meta-validation set to meta-train the meta-learner. "RR" stands for ridge regression.

Results. Table 1 summarizes the results on 5-way miniImageNet and tieredImageNet, where our method achieves state-of-the-art performance on both benchmarks. Note that LEO [25] makes use of an encoder and a relation network in addition to the WRN-28-10 backbone to produce a sample-dependent initialization for gradient descent. TADAM [20] employs a task embedding network (TEN) block for each convolutional layer, which predicts element-wise scale and shift vectors.
We also note that [25, 21] pre-train the WRN-28-10 feature extractor [36] to jointly classify all 64 classes in the miniImageNet meta-training set and then freeze the network during meta-training. [20] makes use of a similar strategy based on standard classification: they co-train the feature embedding on the few-shot classification task (5-way) and a standard classification task (64-way). In contrast, our system is meta-trained end-to-end, explicitly training the feature extractor to work well on few-shot learning tasks with regularized linear classifiers. This strategy allows us to clearly see the effect of meta-learning. Our method is arguably simpler and achieves strong performance.
4.3 Experiments on CIFAR derivatives
The CIFAR-FS dataset [3] is a recently proposed few-shot image classification benchmark, consisting of all 100 classes from CIFAR-100 [13]. The classes are randomly split into 64, 16, and 20 for meta-training, meta-validation, and meta-testing, respectively. Each class contains 600 images of size 32×32.
The FC100 dataset [20] is another dataset derived from CIFAR-100 [13], containing 100 classes grouped into 20 superclasses. These are partitioned into 60 classes from 12 superclasses for meta-training, 20 classes from 4 superclasses for meta-validation, and 20 classes from 4 superclasses for meta-testing. The goal is to minimize the semantic overlap between the splits, similar to tieredImageNet. Each class contains 600 images of size 32×32.
Results. Table 2 summarizes the results on the 5-way classification tasks, where our method, MetaOptNet-SVM, achieves state-of-the-art performance. On the harder FC100 dataset, the gap between the various base learners is more significant, which highlights the advantage of complex base learners in the few-shot setting.
model | backbone | CIFAR-FS 1-shot | CIFAR-FS 5-shot | FC100 1-shot | FC100 5-shot
MAML* [8] | 32-32-32-32 | 58.9 ± 1.9 | 71.5 ± 1.0 | - | -
Prototypical Networks*† [28] | 64-64-64-64 | 55.5 ± 0.7 | 72.0 ± 0.6 | 35.3 ± 0.6 | 48.6 ± 0.6
Relation Networks* [29] | 64-96-128-256 | 55.0 ± 1.0 | 69.3 ± 0.8 | - | -
R2D2 [3] | 96-192-384-512 | 65.3 ± 0.2 | 79.4 ± 0.1 | - | -
TADAM [20] | ResNet-12 | - | - | 40.1 ± 0.4 | 56.1 ± 0.4
ProtoNets (our backbone) [28] | ResNet-12 | 72.2 ± 0.7 | 83.5 ± 0.5 | 37.5 ± 0.6 | 52.5 ± 0.6
MetaOptNet-RR (ours) | ResNet-12 | 72.6 ± 0.7 | 84.3 ± 0.5 | 40.5 ± 0.6 | 55.3 ± 0.6
MetaOptNet-SVM (ours) | ResNet-12 | 72.0 ± 0.7 | 84.2 ± 0.5 | 41.1 ± 0.6 | 55.5 ± 0.6
MetaOptNet-SVM-trainval (ours)¶ | ResNet-12 | 72.8 ± 0.7 | 85.0 ± 0.5 | 47.2 ± 0.6 | 62.5 ± 0.6

Table 2: Average 5-way few-shot classification accuracies (%) with 95% confidence intervals on the CIFAR-FS and FC100 meta-test splits.
model | miniImageNet 1-shot acc. (%) / time (ms) | miniImageNet 5-shot acc. (%) / time (ms) | tieredImageNet 1-shot acc. (%) / time (ms) | tieredImageNet 5-shot acc. (%) / time (ms)

4-layer conv (feature dimension = 1600)
Prototypical Networks [17, 28] | 53.47 / 6 | 70.68 / 7 | 54.28 / 6 | 71.42 / 7
MetaOptNet-RR (ours) | 53.23 / 20 | 69.51 / 27 | 54.63 / 21 | 72.11 / 28
MetaOptNet-SVM (ours) | 52.87 / 28 | 68.76 / 37 | 54.71 / 28 | 71.79 / 38

ResNet-12 (feature dimension = 16000)
Prototypical Networks [17, 28] | 59.25 / 60 | 75.60 / 66 | 61.74 / 61 | 80.00 / 66
MetaOptNet-RR (ours) | 61.41 / 68 | 77.88 / 75 | 65.36 / 69 | 81.34 / 77
MetaOptNet-SVM (ours) | 62.64 / 78 | 78.63 / 89 | 65.99 / 78 | 81.56 / 90

Table 3: Comparison of base learners: 5-way test accuracy and per-episode time on the miniImageNet and tieredImageNet meta-test splits.
4.4 Comparisons between base learners
Table 3 shows the results when we vary the base learner for two different embedding architectures. When we use a standard 4-layer convolutional network where the feature dimension is low (1600), we do not observe a substantial benefit from adopting discriminative classifiers for few-shot learning. Indeed, the nearest class mean classifier [17] has proven to work well with low-dimensional features, as shown in Prototypical Networks [28]. However, when the embedding dimension is much higher (16000), SVMs yield better few-shot accuracy than the other base learners. Thus, regularized linear classifiers provide robustness when high-dimensional features are available.

The added benefits come at a modest increase in computational cost. For ResNet-12, compared to the nearest class mean classifier, the additional overhead is around 13% for the ridge regression base learner and around 30-50% for the SVM base learner. As seen in Figure 2, the performance of our model in both 1-shot and 5-shot regimes generally increases with the meta-training shot. This makes the approach more practical, as we can meta-train the embedding once with a high shot for all meta-testing shots.

As noted in the FC100 experiment, the SVM base learner appears to be beneficial when the semantic overlap between test and train is smaller. We hypothesize that the class embeddings are significantly more compact for training data than for test data (e.g., see [35]); hence, flexibility in the base learner provides robustness to noisy embeddings and improves generalization.
4.5 Reducing metaoverfitting
Augmenting the meta-training set. Despite sampling tasks, at the end of meta-training MetaOptNet-SVM with ResNet-12 achieves nearly 100% test accuracy on all the meta-training datasets except tieredImageNet. To alleviate overfitting, similarly to [25, 21], we use the union of the meta-training and meta-validation sets to meta-train the embedding, keeping hyperparameters such as the number of epochs identical to the previous setting. In particular, we terminate meta-training after 21 epochs for miniImageNet, 52 epochs for tieredImageNet, 21 epochs for CIFAR-FS, and 21 epochs for FC100. Tables 1 and 2 show the results with the augmented meta-training sets, denoted MetaOptNet-SVM-trainval. On the miniImageNet, CIFAR-FS, and FC100 datasets, we observe improvements in test accuracy. On tieredImageNet, the difference is negligible; we suspect this is because our system has not yet entered the regime of overfitting (in fact, we observe 94% test accuracy on the tieredImageNet meta-training set). Our results suggest that meta-learning the embedding with more meta-training "classes" helps reduce overfitting to the meta-training set.
Various regularization techniques. Table 4 shows the effect of regularization methods on MetaOptNet-SVM with ResNet-12. We note that early works on few-shot learning [28, 8] did not employ any of these techniques. We observe that without regularization, the performance of ResNet-12 reduces to that of the 4-layer convolutional network with 64 filters per layer shown in Table 3. This shows the importance of regularization for meta-learners. We expect that the performance of few-shot learning systems can be further improved by introducing novel regularization methods.
4.6 Efficiency of dual optimization
To see whether the dual optimization is indeed effective and efficient, we measure accuracies on the meta-test set with a varying number of QP solver iterations. Each iteration of the QP solver [1] involves computing updates for the primal and dual variables via LU decomposition of the KKT matrix. The results are shown in Figure 3. The QP solver reaches the optimum of the ridge regression objective in just one iteration; alternatively, one can use its closed-form solution as in [3]. We also observe that for 1-shot tasks, the QP SVM solver reaches optimal accuracies in one iteration, although the KKT conditions are not yet exactly satisfied at that point. For 5-shot tasks, even if we run the QP SVM solver for only one iteration, we achieve better accuracies than the other base learners. When the SVM solver is limited to one iteration, one episode takes 69 ± 17 ms for a 1-shot task and 80 ± 17 ms for a 5-shot task, which is on par with the computational cost of the ridge regression solver (Table 3). These experiments show that solving the dual objectives for SVM and ridge regression is very effective in few-shot settings.
5 Conclusion
In this paper, we presented a meta-learning approach with convex base learners for few-shot learning. The dual formulation and the KKT conditions can be exploited to enable computationally and memory-efficient meta-learning that is especially well-suited to few-shot learning problems. Linear classifiers offer better generalization than nearest-neighbor classifiers at a modest increase in computational cost (as seen in Table 3). Our experiments suggest that regularized linear models allow significantly higher embedding dimensions with reduced overfitting. For future work, we aim to explore other convex base learners such as kernel SVMs, which would allow model capacity to be increased incrementally as more training data becomes available for a task.





regularization techniques enabled (✓) | 1-shot | 5-shot
(none) | 51.13 | 70.88
✓ | 55.80 | 75.76
✓ | 56.65 | 73.72
✓ ✓ | 60.33 | 76.61
✓ ✓ ✓ | 61.11 | 77.40
✓ ✓ ✓ ✓ | 62.64 | 78.63
✓ ✓ ✓ ✓ ✓ | 64.09 | 80.00

Table 4: Effect of cumulatively adding regularization techniques to MetaOptNet-SVM with ResNet-12 (accuracies in %; see Section 4.5).
Acknowledgements. The authors thank Yifan Xu, Jimmy Yan, Weijian Xu, Justin Lazarow, and Vijay Mahadevan for valuable discussions. Also, we appreciate the anonymous reviewers for their helpful and constructive comments and suggestions. Finally, we would like to thank Chuyi Sun for help with Figure 1.
References

[1] Brandon Amos and J. Zico Kolter. OptNet: Differentiable optimization as a layer in neural networks. In ICML, 2017.
[2] Shane Barratt. On the differentiability of the solution to convex optimization problems. arXiv:1804.05098, 2018.
[3] Luca Bertinetto, João F. Henriques, Philip H. S. Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. In ICLR, 2019.
[4] Rich Caruana, Nikos Karampatziakis, and Ainur Yessenalina. An empirical evaluation of supervised learning in high dimensions. In ICML, 2008.
[5] Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-based vector machines. J. Mach. Learn. Res., 2:265–292, Mar. 2002.
[6] Justin Domke. Generic methods for optimization-based modeling. In AISTATS, 2012.
[7] Asen L. Dontchev and R. Tyrrell Rockafellar. Implicit Functions and Solution Mappings. Springer Monogr. Math., 2009.
[8] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
[9] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V. Le. DropBlock: A regularization method for convolutional networks. In NeurIPS, 2018.
[10] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In CVPR, 2018.
[11] Stephen Gould, Basura Fernando, Anoop Cherian, Peter Anderson, Rodrigo Santa Cruz, and Edison Guo. On differentiating parameterized argmin and argmax problems with application to bi-level optimization. arXiv:1607.05447, 2016.
[12] Steven G. Krantz and Harold R. Parks. The Implicit Function Theorem: History, Theory, and Applications. Springer Science & Business Media, 2012.
[13] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-100 (Canadian Institute for Advanced Research).
[14] Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, and Yi Yang. Transductive propagation network for few-shot learning. In ICLR, 2019.
[15] Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In ICML, 2015.
[16] Tomasz Malisiewicz, Abhinav Gupta, and Alexei A. Efros. Ensemble of exemplar-SVMs for object detection and beyond. In ICCV, 2011.
[17] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriella Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Trans. Pattern Anal. Mach. Intell., 35(11):2624–2637, Nov. 2013.
[18] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In ICLR, 2018.
[19] Tsendsuren Munkhdalai, Xingdi Yuan, Soroush Mehri, and Adam Trischler. Rapid adaptation with conditionally shifted neurons. In ICML, 2018.
[20] Boris N. Oreshkin, Pau Rodríguez, and Alexandre Lacoste. TADAM: Task dependent adaptive metric for improved few-shot learning. In NeurIPS, 2018.
[21] Siyuan Qiao, Chenxi Liu, Wei Shen, and Alan L. Yuille. Few-shot image recognition by predicting parameters from activations. In CVPR, 2018.
[22] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
[23] Mengye Ren, Sachin Ravi, Eleni Triantafillou, Jake Snell, Kevin Swersky, Josh B. Tenenbaum, Hugo Larochelle, and Richard S. Zemel. Meta-learning for semi-supervised few-shot classification. In ICLR, 2018.
[24] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. Int. J. Comput. Vision, 115(3):211–252, Dec. 2015.
[25] Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In ICLR, 2019.
[26] Jürgen Schmidhuber. Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-meta…-hook. Diploma thesis, Technische Universität München, Germany, 1987.
[27] Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In CVPR, 2014.
[28] Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning. In NIPS, 2017.
[29] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, 2018.
[30] Marshall F. Tappen, Ce Liu, Edward H. Adelson, and William T. Freeman. Learning Gaussian conditional random fields for low-level vision. In CVPR, 2007.
[31] Sebastian Thrun. Lifelong Learning Algorithms, pages 181–209. Springer US, Boston, MA, 1998.
[32] Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, Jun. 2002.
[33] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In NIPS, 2016.
[34] Jason Weston and Chris Watkins. Support vector machines for multi-class pattern recognition. In European Symposium on Artificial Neural Networks, 1999.
[35] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
[36] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.