
Large Margin Few-Shot Learning

The key issue of few-shot learning is learning to generalize. In this paper, we propose a large margin principle to improve the generalization capacity of metric based methods for few-shot learning. To realize it, we develop a unified framework to learn a more discriminative metric space by augmenting the softmax classification loss function with a large margin distance loss function for training. Extensive experiments on two state-of-the-art few-shot learning models, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.


1 Introduction

Few-shot learning Fei-Fei et al. (2006) is a very challenging problem as it aims to learn from very few labeled examples. Due to data scarcity, training a conventional end-to-end supervised model such as a deep learning model Krizhevsky et al. (2012); He et al. (2016) from scratch will easily lead to overfitting, and techniques such as data augmentation and regularization cannot solve this problem.

One successful perspective for tackling few-shot learning is meta-learning. Unlike traditional supervised learning, which requires a large labeled set for training, meta-learning trains a classifier that can generalize to new tasks by distilling knowledge from a large number of similar tasks and then transferring that knowledge to quickly adapt to new tasks. Several directions have been explored for meta-learning, including learning to fine-tune Ravi & Larochelle (2017); Finn et al. (2017); Munkhdalai & Yu (2017); Li et al. (2017), sequence based methods Santoro et al. (2016); Mishra et al. (2018), and metric based learning Vinyals et al. (2016); Koch et al. (2015).

Metric based few-shot learning has attracted a lot of interest recently Vinyals et al. (2016); Snell et al. (2017); Fort (2017); Sung et al. (2018); Garcia & Bruna (2018); Mehrotra & Dukkipati (2017); Koch et al. (2015); Ren et al. (2018), probably due to its simplicity and effectiveness. The basic idea is to learn a metric which maps similar samples close together and dissimilar ones far apart in the metric space, so that a query can be easily classified. Various metric based methods such as siamese networks Koch et al. (2015), matching networks Vinyals et al. (2016), prototypical networks Snell et al. (2017), and graph neural networks Garcia & Bruna (2018) differ in their ways of learning the metric.

The success of metric based methods relies on learning a discriminative metric space. However, due to data scarcity in the training tasks, it is difficult to learn a good metric space. To reach the full potential of metric based few-shot learning, we propose a large margin principle for learning a more discriminative metric space. The key insight is that samples from different classes should be mapped as far apart as possible in the metric space to improve generalization and prevent overfitting. The large margin constraint has not been enforced in existing metric based methods.

To fill this gap, we develop a unified framework to impose the large margin constraint. In particular, we augment the linear classification loss function of a metric learning method with a distance loss function – the triplet loss Schroff et al. (2015) – to train a more discriminative metric space. Our framework is simple, robust, very easy to implement, and can potentially be applied to many metric learning methods that adopt a linear classifier. Applications to two state-of-the-art metric learning methods – graph neural networks Garcia & Bruna (2018) and prototypical networks Snell et al. (2017) – show that the large margin constraint can substantially improve the generalization capacity of the original models with little computational overhead. Besides the triplet loss, we also explore other loss functions to enforce the large margin constraint. All experimental results confirm the effectiveness of the large margin principle.

Figure 1: Training and testing process of few-shot learning.

Although large margin methods have been widely studied in many areas of machine learning, this paper is, to the best of our knowledge, the first to investigate their applicability and usefulness in few-shot learning (meta-learning). It should be noted that the few-shot learning problem considered here has a very different setup from attribute-based few-shot learning Li & Guo (2015) or zero-shot learning Lampert et al. (2009); Akata et al. (2013); Fu et al. (2015); Xian et al. (2018). The contributions of this paper include 1) proposing a large margin principle to improve metric based few-shot learning, 2) developing an effective and efficient framework for large margin few-shot learning, and 3) conducting extensive experiments to validate our proposals.

2 Large Margin Few-Shot Learning

2.1 Few-Shot Learning

Few-shot learning aims to train a classifier which can quickly adapt to new classes and learn from only a few examples. It consists of two phases: meta-training and meta-testing (Fig. 1). In meta-training, a large amount of training data $D_{train} = \{(x_i, y_i)\}_{i=1}^{n}$ from a set of classes $C_{train}$ is used for training a classifier, where $x_i$ is the feature vector of an example, $y_i \in C_{train}$ is the label, and $n$ is the number of training examples. In meta-testing, a support set $S$ of labeled examples from a set of new classes $C_{test}$ is given, i.e., $y \in C_{test}$ for every $(x, y) \in S$ and $C_{train} \cap C_{test} = \emptyset$. The goal is to predict the labels of a query set $Q = \{x_q\}_{q=1}^{T}$, where $T$ is the number of queries. If the support set contains $K$ examples from each of $N$ classes, i.e., $|S| = NK$, the few-shot learning problem is called $N$-way $K$-shot learning. Typically, $K$ is a small number such as 1 or 5.

To improve generalization, an episode-based training strategy Vinyals et al. (2016) is commonly adopted to better exploit the training set $D_{train}$. In particular, in meta-training, the model is trained on a series of support/query sets, where a support set is formed by sampling $K$ examples from each of $N$ classes in $D_{train}$, and a query set is formed by sampling from the rest of those classes' samples. The purpose is to mimic the test scenario during training.
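The episodic strategy above can be sketched as follows. This is a minimal illustration in plain Python; the function and variable names are our own choices, not from the paper:

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way, k_shot, n_query, rng=random):
    """Sample one N-way K-shot episode (support set + query set).

    dataset: list of (feature, label) pairs from the meta-training classes.
    Returns (support, query), each a list of (feature, label) pairs.
    """
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = rng.sample(sorted(by_class), n_way)          # pick N classes
    support, query = [], []
    for c in classes:
        picks = rng.sample(by_class[c], k_shot + n_query)  # disjoint draw
        support += [(x, c) for x in picks[:k_shot]]
        query += [(x, c) for x in picks[k_shot:]]
    return support, query
```

Each call yields one episode whose support and query sets are disjoint and share the same $N$ sampled classes, mimicking the meta-test layout.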

2.2 Large Margin Principle

Figure 2: Large margin few-shot learning. (a) Classifier trained without the large margin constraint. (b) Classifier trained with the large margin constraint. (c) Gradient of the triplet loss.

How Few-Shot Learning Works. To learn quickly from only a couple of examples whose classes are unseen in meta-training, a model should acquire some transferable knowledge in meta-training. In metric-based few-shot learning Koch et al. (2015); Garcia & Bruna (2018); Snell et al. (2017), the basic idea is to learn a nonlinear mapping $f$ which can model the class relationship among data samples, i.e., similar samples are mapped to nearby points in the metric space while dissimilar ones are mapped far apart. Usually, the mapping $f$ embeds a sample into a relatively low dimensional space, and the embedded point is then classified by a linear classifier, e.g., the softmax classifier. Note that the softmax classifier can be considered as the last fully connected layer of a neural network. Both the mapping $f$ and the classifier parameters are learned by minimizing the cross entropy loss:

$$\mathcal{L}_{softmax} = -\frac{1}{n}\sum_{i=1}^{n} \log \frac{\exp(w_{y_i}^\top f(x_i))}{\sum_{k}\exp(w_k^\top f(x_i))} \qquad (1)$$

where $w_k$ is the classifier weight vector corresponding to the $k$-th column of the weight matrix $W$ of the softmax classifier. Without loss of generality, we omit the bias to simplify the analysis. Note that $w_k$ can be considered as the class center of the samples of class $k$ in the embedding space.

After learning $f$ and $W$, the model can be used for testing. Fig. 2 shows a 3-way 5-shot test case, where the support samples are indicated by dots and the query sample is indicated by a cross. Samples in the same class are indicated by the same color. We can see that the samples of each class are mapped to cluster around the corresponding classifier weight vector $w_k$. However, the query sample, which belongs to class 1, may be wrongly classified to class 2, due to the small margin between $w_1$ and $w_2$.

How Can It Work Better?

As each training episode consists of very few samples per class, the standard error of the sample mean is high Mood & Graybill (1974). In other words, the class average may be a poor estimate of the true class center, and some samples may not represent their own class well. Hence, the model may not be able to learn a discriminative metric space.

To alleviate this problem and improve the model's generalization capacity on new classes, we propose to enforce a large margin between the classifier weight vectors (or class centers). The idea is that samples from different classes should be mapped as far apart as possible in the metric space. As shown in Fig. 2, the query sample can be correctly classified by enlarging the margin between $w_1$ and $w_2$. It is worth noting that the large margin principle makes the classifier weight vectors distributed in a balanced manner (Fig. 2), which leads to balanced decision boundaries.

2.3 Model

To enforce the large margin constraint, we propose to augment the classification loss function with a large margin loss function, and the total loss is given by

$$\mathcal{L} = \mathcal{L}_{softmax} + \lambda \mathcal{L}_{margin} \qquad (2)$$

where $\lambda$ is a balancing parameter. We choose the triplet loss Schroff et al. (2015) as the large margin distance function, which acts on the embeddings of the training samples in the metric space:

$$\mathcal{L}_{margin} = \frac{1}{N_t}\sum_{(x_a, x_p, x_n)} \Big[\, \|f(x_a)-f(x_p)\|_2^2 - \|f(x_a)-f(x_n)\|_2^2 + m \,\Big]_+ \qquad (3)$$

where $m$ is a parameter for the margin and $[\cdot]_+ = \max(\cdot, 0)$. $(x_a, x_p, x_n)$ forms a triplet, where $x_a$ is called the anchor sample, $x_p$ is the positive sample w.r.t. the anchor, and $x_n$ is the negative sample w.r.t. the anchor. $x_a$ and $x_p$ have the same label, while $x_a$ and $x_n$ have different labels. $N_t$ is the number of triplets. Any sample in the training set can be chosen as an anchor. Once an anchor is selected, multiple positive and negative samples are paired with it to form multiple triplets.
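To make the augmented objective (2)–(3) concrete, here is a minimal numpy sketch; the function names and toy shapes are our own choices, and gradient computation and optimization are omitted:

```python
import numpy as np

def softmax_loss(F, W, y):
    """Cross-entropy loss of a bias-free softmax classifier.
    F: (n, d) embeddings, W: (d, C) class weight vectors, y: (n,) labels."""
    logits = F @ W
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

def triplet_loss(F, triplets, margin):
    """Hinged triplet loss over (anchor, positive, negative) index triples."""
    a, p, n = np.asarray(triplets).T
    d_ap = ((F[a] - F[p]) ** 2).sum(axis=1)
    d_an = ((F[a] - F[n]) ** 2).sum(axis=1)
    return np.maximum(d_ap - d_an + margin, 0.0).mean()

def total_loss(F, W, y, triplets, margin, lam):
    """Augmented objective: classification loss + lambda * margin loss."""
    return softmax_loss(F, W, y) + lam * triplet_loss(F, triplets, margin)
```

When the clusters are already separated by more than the margin, the hinge zeroes out the triplet term and only the classification loss drives training.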

Intuitively, the augmented triplet loss will help train a more discriminative mapping which embeds samples of the same class closer and those of different classes farther in the metric space. Next we provide a theoretical analysis to show how the triplet loss reshapes the embeddings.

2.4 Analysis

We analyze the influence of the augmented triplet loss on the embeddings by studying the gradient of $\mathcal{L}_{margin}$ with respect to $f_i = f(x_i)$, the embedding of sample $x_i$, during back propagation.

Since terms with zero loss in (3) have no effect on the gradients, we only need to consider the triplets whose loss within the square brackets is positive, so the hinge operation can be removed. To find the gradient with respect to $f_i$, we need to collect the terms in which $x_i$ is the anchor sample, the positive sample, or the negative sample. We partition the samples into three multisets. The first set $P_i$ contains all samples that are paired with $x_i$ and have the same label as $x_i$. The second set $N_i$ contains all samples that are paired with $x_i$ but have a different label from $x_i$. The third set contains the samples that are not paired with $x_i$. The multiplicity of an element in these multisets is the number of triplets in which the element is paired with $x_i$. Note that if a sample $x_j \in P_i$, the distance $\|f_i - f_j\|_2^2$ is added to the loss, while if $x_j \in N_i$, the distance is subtracted from the loss. After some rearrangement, (3) can be written as

$$\mathcal{L}_{margin} = \frac{1}{N_t}\Big( \sum_{x_j \in P_i} \|f_i - f_j\|_2^2 \;-\; \sum_{x_j \in N_i} \|f_i - f_j\|_2^2 \Big) + \text{const} \qquad (4)$$

where const denotes a constant independent of $f_i$. Then the gradient of $\mathcal{L}_{margin}$ with respect to $f_i$ can be derived as:

$$\frac{\partial \mathcal{L}_{margin}}{\partial f_i} = \frac{2}{N_t}\Big( \sum_{x_j \in P_i} (f_i - f_j) \;-\; \sum_{x_j \in N_i} (f_i - f_j) \Big) \qquad (5)$$
$$\phantom{\frac{\partial \mathcal{L}_{margin}}{\partial f_i}} = \frac{2}{N_t}\Big( |P_i|\,(f_i - c_{P_i}) \;-\; |N_i|\,(f_i - c_{N_i}) \Big) \qquad (6)$$

where $c_{P_i}$ is the center of the embedded points in $P_i$, and $c_{N_i}$ is the center of the embedded points in $N_i$. By (6), the negative gradient, i.e., the direction in which $f_i$ moves during gradient descent, consists of two parts. The first part is a vector pointing from $f_i$ to $c_{P_i}$, the center of the embedded points in $P_i$, which pulls $f_i$ toward its own class during training, as indicated by the brown arrow in Fig. 2. The other part is a vector pointing from $c_{N_i}$, the center of the embedded points in $N_i$, to $f_i$, which pushes $f_i$ away from other classes, as indicated by the red arrow in Fig. 2. This shows that the augmented triplet loss can effectively enforce the large margin constraint.
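The center form of the gradient in (6) can be checked numerically. The sketch below (our own code, not from the paper) compares the analytic gradient of the active, unhinged triplet terms involving one embedding against central finite differences:

```python
import numpy as np

def partial_loss(f_i, P, N):
    """Triplet-loss terms involving the embedding f_i, hinge assumed active:
    distances to same-label partners P are added, distances to
    different-label partners N are subtracted (1/N_t factor and const dropped)."""
    return ((f_i - P) ** 2).sum() - ((f_i - N) ** 2).sum()

def analytic_grad(f_i, P, N):
    """Center form of the gradient: 2|P|(f_i - c_P) - 2|N|(f_i - c_N)."""
    return 2 * len(P) * (f_i - P.mean(axis=0)) - 2 * len(N) * (f_i - N.mean(axis=0))

rng = np.random.default_rng(0)
f_i = rng.normal(size=3)
P = rng.normal(size=(4, 3))   # embeddings paired with f_i, same label
N = rng.normal(size=(6, 3))   # embeddings paired with f_i, different label

eps, num = 1e-4, np.zeros(3)
for k in range(3):            # central finite differences
    e = np.zeros(3); e[k] = eps
    num[k] = (partial_loss(f_i + e, P, N) - partial_loss(f_i - e, P, N)) / (2 * eps)

assert np.allclose(num, analytic_grad(f_i, P, N), atol=1e-6)
```

Because the unhinged loss is quadratic in the embedding, the finite-difference and analytic gradients agree to numerical precision.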

2.5 Discussion

When Does It Work? The working assumption of the triplet loss is that the similarity/dissimilarity between the embedded points can be measured by Euclidean distance. If the embedded points lie on a nonlinear manifold in the metric space, Euclidean distance cannot reflect their similarity. This indicates that the embedded points should be linearly separable in the metric space, which suggests that the triplet loss should work with a linear model such as the softmax classifier in the metric space.

Computational Overhead. The computational overhead of the augmented triplet loss lies in two aspects: triplet selection and loss computation. It is well known that online selection of triplets can be very time-consuming when the training set is large. However, in few-shot learning, the number of samples for each class is fixed in each update iteration, so we can use an offline strategy for triplet selection. In fact, we only need to form the triplets once, and then store the pairings and use them for indexing the embeddings in each update. In this way, the computational overhead for triplet selection is negligible. For the loss computation, the time complexity of each update is $O(N_t d)$, where $N_t$ is the number of triplets and $d$ is the dimensionality of the embeddings. Hence, the computation is efficient if the embeddings are low-dimensional (see Sections 3.3 and 4.3 for the reported running times).

Alternative Loss Functions. Besides the triplet loss, we also explore other loss functions for incorporating the large margin constraint into few-shot learning, including the normalized triplet loss, the normalized contrastive loss Hadsell et al. (2006); Sun et al. (2014), the NormFace loss Wang et al. (2017), the CosFace loss Wang et al. (2018b), and the ArcFace loss Deng et al. (2018). We discuss and compare these methods in Section 6.

3 Large Margin Graph Neural Networks

In this section, we apply the proposed large margin framework to the recently proposed graph neural networks (GNN) Garcia & Bruna (2018) for few-shot learning, which achieved state-of-the-art performance on several benchmark datasets for both few-shot learning and semi-supervised few-shot learning.

3.1 Graph Neural Networks

In the training of GNN, each mini-batch consists of multiple episodes, and the number of episodes is the batch size. For each episode, the meta-training samples are given as:

$$\mathcal{T} = \{ S, U, Q \} \qquad (7)$$

where $S = \{(x_1, y_1), \ldots, (x_s, y_s)\}$ is the support set with labels and $Q = \{\bar{x}\}$ is the query set with only one query. As we also consider the semi-supervised setting for few-shot learning, $U = \{\tilde{x}_1, \ldots, \tilde{x}_u\}$ is the set of samples without labels. For few-shot learning, $U$ is $\emptyset$; for semi-supervised few-shot learning, $U \neq \emptyset$. For each sample, the initial feature vector is given by the concatenation:

$$x_i^{(0)} = \big(\phi(x_i),\; h(y_i)\big)$$

where $\phi$ is a trainable convolutional neural network, and $h(y_i)$ is a one-hot vector encoding the label information. Denote by $X^{(0)}$ the initial feature matrix.

A graph is constructed by taking each sample as a node, and the adjacency matrix is updated from the current features by:

$$\tilde{A}^{(\ell)}_{ij} = \psi\big(\,|x_i^{(\ell)} - x_j^{(\ell)}|\,\big)$$

where $\psi$ is a multilayer perceptron with learnable parameters and $|\cdot|$ is the element-wise absolute value function. The operator family is formed by $\mathcal{A} = \{\tilde{A}^{(\ell)}, \mathbf{1}\}$, where $\mathbf{1}$ is an all-ones matrix. The new features are obtained by combining the features of adjacent nodes and then applying a projection layer:

$$X^{(\ell+1)} = \rho\Big( \sum_{B \in \mathcal{A}} B\, X^{(\ell)}\, \theta_B^{(\ell)} \Big)$$

where $\theta_B^{(\ell)} \in \mathbb{R}^{d_\ell \times d_{\ell+1}}$ holds the learnable parameters of the projection layer, $d_\ell$ is the length of $x^{(\ell)}$, $d_{\ell+1}$ is the length of $x^{(\ell+1)}$, and $\rho$ is a nonlinearity. This process can be repeated several times. After $L$ iterations, we obtain the final embedding matrix $X^{(L)}$.
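A toy numpy version of one such graph layer is sketched below. The MLP, the softmax row-normalization of the learned adjacency, and the normalized all-ones operator are our simplifications for illustration, not necessarily the exact choices of Garcia & Bruna (2018):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def adjacency(X, mlp):
    """Learned adjacency A_ij = mlp(|x_i - x_j|), row-normalized with softmax."""
    n = len(X)
    diff = np.abs(X[:, None, :] - X[None, :, :])   # (n, n, d) pairwise |x_i - x_j|
    A = mlp(diff.reshape(n * n, -1)).reshape(n, n)
    A = np.exp(A - A.max(axis=1, keepdims=True))
    return A / A.sum(axis=1, keepdims=True)        # softmax over neighbors

def gnn_layer(X, mlp, theta_A, theta_J):
    """One graph layer: aggregate node features with the learned operator and
    a (normalized) all-ones operator, project, and apply a nonlinearity."""
    n = len(X)
    A = adjacency(X, mlp)
    J = np.ones((n, n)) / n                        # all-ones operator, scaled
    return relu(A @ X @ theta_A + J @ X @ theta_J)
```

Stacking several such layers, with the label one-hot features concatenated at the input, yields the final embedding matrix that the classifier consumes.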

To classify the embeddings, GNN uses a parametric softmax classifier. Namely, for $N$-way learning, in each training episode, the probability of the query $\bar{x}$ being classified as the $k$-th class is:

$$p(\bar{y} = k \mid \bar{x}) = \frac{\exp(w_k^\top f(\bar{x}))}{\sum_{k'=1}^{N}\exp(w_{k'}^\top f(\bar{x}))}$$

where $f(\bar{x})$ is the final GNN embedding of the query and $w_k$ is the $k$-th classifier weight vector. For all episodes in a mini-batch, the softmax loss is:

$$\mathcal{L}_{softmax} = -\frac{1}{B}\sum_{b=1}^{B} \log p\big(\bar{y}_b = y_b \mid \bar{x}_b\big)$$

where $\bar{y}_b$ is the predicted label for the query $\bar{x}_b$ in the $b$-th episode, $B$ is the batch size, and $y_b$ is the ground truth label.

3.2 Large Margin Graph Neural Networks

GNN models the class relationship of input samples using graph operators, and maps samples with the same label representation – the one-hot vector – to a fixed weight vector in the embedding space. To make GNN more discriminative, we augment its objective function with a triplet loss to train a large margin graph neural network (L-GNN). The triplet loss is defined as in (3), where $f(x_i)$ is the embedding of $x_i$ produced by GNN.

Triplet Selection. In the training of GNN, each mini-batch consists of multiple episodes. For $N$-way $K$-shot learning, the support set of each episode is formed by sampling $K$ examples from each of the $N$ classes, resulting in $NK$ examples. The label representation of each example is an $N$-dimensional one-hot vector. For any two examples, if their label representations are the same, they should be mapped to the same class, and vice versa. To form the triplets, each example in the support set serves as an anchor; we sample a fixed number of positive examples from the training batch which have the same label representation as the anchor, and, for each positive example, a fixed number of negative examples which have a different label representation from the anchor. Hence, the number of triplets per anchor is the product of these two counts, and the total number of triplets grows linearly with the batch size. In our experiments on Mini-Imagenet, forming the triplets takes negligible time. Since the selection of triplets only needs to be done once at the beginning of training, it incurs almost no computational overhead.
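The offline selection can be sketched as follows. The function below is our own illustration with hypothetical per-anchor counts; it builds index triplets once from the fixed episode layout, so every later update only indexes into the current embeddings:

```python
from itertools import product

def build_triplets(labels, n_pos, n_neg):
    """Form (anchor, positive, negative) index triplets once from a fixed
    episode layout. `labels` gives the class of each slot in the episode;
    because the layout is identical for every mini-batch, the stored index
    triplets can be reused to gather embeddings at every update."""
    idx_by_class = {}
    for i, y in enumerate(labels):
        idx_by_class.setdefault(y, []).append(i)
    triplets = []
    for a, y in enumerate(labels):
        positives = [p for p in idx_by_class[y] if p != a][:n_pos]
        negatives = [q for q in range(len(labels)) if labels[q] != y][:n_neg]
        triplets += [(a, p, q) for p, q in product(positives, negatives)]
    return triplets
```

For a 3-way 2-shot layout with one positive and four negatives per anchor, this produces 6 × 1 × 4 = 24 triplets, computed a single time before training starts.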

3.3 Experimental Results

Datasets. The experiments are conducted on two benchmark datasets: Omniglot and Mini-Imagenet. The Omniglot dataset consists of 1623 characters from 50 different alphabets, drawn by different individuals. Each character has 20 samples, and each sample is rotated by 90, 180, and 270 degrees to enlarge the dataset by four times. The dataset is split into 1200 characters plus rotations for training and the remaining 423 characters plus rotations for testing. On Omniglot, we consider 5-way and 20-way settings for both 1-shot and 5-shot learning. The Mini-Imagenet dataset is composed of images from the Imagenet dataset, and it has 100 classes with 600 samples per class. It is split into 64, 16, and 20 disjoint classes for training, validation, and testing respectively. On Mini-Imagenet, we consider 5-way and 10-way settings for both 1-shot and 5-shot learning. Fig. 3 shows some image samples from Omniglot and Mini-Imagenet.

(a) Samples of Omniglot.
(b) Samples of Mini-Imagenet.
Figure 3: Image samples of Omniglot and Mini-Imagenet.

Parameter Setup. To make sure our method works well in practice, we use fixed parameters in all experiments. We fix the balancing parameter $\lambda$ in (2) across all experiments. For the triplet loss, we set the margin as $m = \frac{1}{n}\sum_{i=1}^{n}\|f(x_i)\|_2$, the average of the $\ell_2$-norm of all embeddings in one mini-batch with randomly initialized model parameters at the beginning of training, where $n$ is the number of samples in one mini-batch.

Table 1: Few-shot learning on Omniglot (5-way and 20-way) and Mini-Imagenet (5-way and 10-way), for both 1-shot and 5-shot, comparing GNN and L-GNN. Results are averaged and reported with confidence intervals. '–': not reported.
Table 2: 5-way 5-shot semi-supervised few-shot learning on Omniglot and Mini-Imagenet with different fractions of labeled samples, comparing GNN and L-GNN. "Trained only with labeled samples" means that the unlabeled samples are not used in training and testing; "Semi-supervised" means that the unlabeled samples are used in training and testing. Results are averaged and reported with confidence intervals.

Results on Few-Shot Learning. The results on the two benchmark datasets are shown in Table 1. On Omniglot, although GNN has already achieved very high accuracy, our method L-GNN can still improve it further, especially on the more challenging 20-way classification tasks. On Mini-Imagenet, L-GNN improves GNN on every learning task, especially on the more difficult 10-way classification tasks, where the largest improvement is 2.3% in absolute accuracy. The results clearly demonstrate the benefit of incorporating the large margin loss in training.

Results on Semi-Supervised Few-Shot Learning. Table 2 shows the results of semi-supervised 5-way 5-shot learning on Omniglot and Mini-Imagenet. We can see that our method L-GNN consistently outperforms GNN on all semi-supervised classification tasks. The improvements are significant on Mini-Imagenet. This shows that adding large margin constraint helps learn a better embedding space for both labeled and unlabeled data, and again demonstrates the effectiveness of our method.

Parameter Sensitivity. We also study the sensitivity of L-GNN to the balancing parameter $\lambda$ and the margin $m$. The experimental results for 1-shot and 5-shot learning are shown in Fig. 4. Our method L-GNN consistently improves GNN in all cases over a wide range of $\lambda$ and $m$, demonstrating its robustness.

Running Time. The computational overhead of our method is very small. We evaluate the running time on an Intel(R) Xeon(R) CPU E5-2640 v4 (2.40GHz) with a GeForce GTX 1080 Ti. For few-shot learning on Mini-Imagenet, one update of L-GNN takes only marginally longer than one update of GNN, incurring a small computational overhead.

4 Large Margin Prototypical Networks

Figure 4: Test accuracy of GNN and L-GNN on Mini-Imagenet as (a) the balancing parameter $\lambda$ and (b) the margin $m$ change. (Left: 1-shot, right: 5-shot)

In this section, we apply the proposed large margin framework to the popular prototypical networks (PN) Snell et al. (2017) for few-shot learning. PN is very easy to implement and efficient to train, and it achieves very competitive performance on several benchmark datasets.

4.1 Prototypical Networks

PN is constructed by the following steps. A training set $D = \{(x_i, y_i)\}_{i=1}^{n}$ of labeled examples is given. First, randomly sample $N$ classes from $D$, and denote by $V$ the set of the sampled class indices. Denote by $v_k$ the $k$-th element in $V$ and by $D_{v_k}$ the subset of all training samples with label $v_k$. For each class in $V$, randomly select some samples from $D_{v_k}$ to form $S_k$, a subset of the support set; and randomly select some other samples from $D_{v_k}$ to form $Q_k$, a subset of the query set, making sure that $S_k \cap Q_k = \emptyset$. Then, for each class, compute the prototype $c_k = \frac{1}{|S_k|}\sum_{(x_i, y_i) \in S_k} f(x_i)$, where the mapping $f$ is typically a convolutional neural network with learnable parameters.

PN uses a non-parametric softmax classifier. Namely, for a query sample $x$ in $Q_k$, the probability of it being classified to the $k$-th class is:

$$p(y = k \mid x) = \frac{\exp\big(-d(f(x), c_k)\big)}{\sum_{k'}\exp\big(-d(f(x), c_{k'})\big)} \qquad (8)$$

where $d$ is a metric measuring the distance between any two vectors, which can be cosine distance or Euclidean distance. For all query samples in an episode, the classification loss is:

$$\mathcal{L}_{softmax} = -\frac{1}{|Q|}\sum_{(x, y) \in Q} \log p(y \mid x) \qquad (9)$$

If $d$ is squared Euclidean distance, PN is actually a linear model in the embedding space Snell et al. (2017). Since

$$-\|f(x) - c_k\|_2^2 = 2 c_k^\top f(x) - c_k^\top c_k - f(x)^\top f(x) = w_k^\top f(x) + b_k - f(x)^\top f(x)$$

where $w_k = 2 c_k$ and $b_k = -c_k^\top c_k$, and the term $-f(x)^\top f(x)$ is the same for all classes, (8) can be considered as a linear classifier.
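This linear view is easy to verify numerically. The numpy sketch below (our own illustration) computes prototypes and PN log-probabilities, then checks that a linear classifier with $w_k = 2 c_k$ and $b_k = -\|c_k\|^2$ yields identical log-probabilities:

```python
import numpy as np

def prototypes(S, y, n_way):
    """Per-class mean embeddings from support embeddings S (n, d), labels y."""
    return np.stack([S[y == k].mean(axis=0) for k in range(n_way)])

def pn_log_probs(Q, C):
    """Log-probabilities of query embeddings Q (m, d): softmax over
    negative squared Euclidean distances to the prototypes C (n_way, d)."""
    logits = -((Q[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    logits = logits - logits.max(axis=1, keepdims=True)
    return logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
S = rng.normal(size=(10, 4))           # 5-way 2-shot support embeddings
y = np.repeat(np.arange(5), 2)
Q = rng.normal(size=(3, 4))            # query embeddings
C = prototypes(S, y, 5)

# Linear view: -||q - c_k||^2 = w_k^T q + b_k - ||q||^2 with w_k = 2 c_k and
# b_k = -||c_k||^2; the -||q||^2 term is shared by all classes, so the
# softmax probabilities match those of the linear classifier exactly.
W, b = 2 * C, -(C ** 2).sum(axis=1)
lin = Q @ W.T + b
lin = lin - lin.max(axis=1, keepdims=True)
lin_logp = lin - np.log(np.exp(lin).sum(axis=1, keepdims=True))
assert np.allclose(pn_log_probs(Q, C), lin_logp)
```

This is why the triplet loss, which assumes Euclidean geometry in the embedding space, pairs naturally with Euclidean-distance PN.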

4.2 Large Margin Prototypical Networks

PN models the class relationship between the query and support samples by measuring the distance between the query and the class centers of the support samples in the embedding space. To make PN more discriminative, we augment its objective function with a triplet loss to train a large margin prototypical network (L-PN). The triplet loss is defined as in (3), where $f(x_i)$ is the embedding of the input sample $x_i$ produced by PN.

Triplet Selection. The implementation of prototypical networks does not use mini-batches; each update iteration uses a single episode. Take $K$-shot learning for example: in one update iteration, for each class, the general practice Snell et al. (2017) is to sample $K$ support examples and some extra query examples, so the number of samples in each class is fixed. For each sample (anchor) in the support set and the query set, we sample a fixed number of positive examples from the class of the anchor; and, for each positive sample, a fixed number of negative examples from the other classes. Hence, a fixed number of triplets is formed for each anchor, and the total number of triplets grows linearly with the number of classes $N$. In our experiments on Mini-Imagenet, forming the triplets takes negligible time. Since the selection of triplets only needs to be done once at the beginning of training, it incurs almost no computational overhead.

4.3 Experimental Results

Table 3: Few-shot learning on Omniglot (5-way and 20-way) and Mini-Imagenet (5-way and 10-way), for both 1-shot and 5-shot, comparing PN and L-PN with Euclidean and cosine distance. Results are averaged and reported with confidence intervals. '–': not reported.

Results on Few-Shot Learning. The experiments are also conducted on Omniglot and Mini-Imagenet, and the parameter setup is the same as in Section 3.3. The results are shown in Table 3. On Omniglot, L-PN performs comparably with PN. On Mini-Imagenet, L-PN consistently improves PN on every learning task. For PN with Euclidean distance, the improvement on 1-shot learning is more significant than on 5-shot learning, likely because averaging multiple support samples of the same class already helps PN learn a more discriminative embedding.

For PN with cosine distance, L-PN improves PN by a huge margin. Originally, cosine PN performs much worse than Euclidean PN. Incorporating the large margin loss greatly boosts its performance and makes it comparable with, or even better than, Euclidean PN. This shows that the large margin distance loss function helps cosine PN learn a much better embedding space and presumably alleviates the vanishing gradient problem in training.

Parameter Sensitivity. We also study the sensitivity of L-PN to the balancing parameter $\lambda$ and the margin $m$. The results are shown in Fig. 5. We can see that for 1-shot learning, L-PN consistently outperforms PN as $\lambda$ and $m$ change. For 5-shot learning, L-PN outperforms PN as the margin changes from 5 to 50, and it is only slightly worse than PN when the balancing parameter is larger than 5. Overall, the results show that L-PN is robust.

Running Time. The computational overhead of L-PN is very small. For few-shot learning on Mini-Imagenet, one update of L-PN takes only marginally longer than one update of PN, incurring a small computational overhead.

5 Related Works

5.1 Few-Shot Learning

Early work on few-shot learning focused on using generative models Fei-Fei et al. (2006) and inference methods Lake et al. (2011). With the recent success of deep learning, few-shot learning has been studied heavily with deep models and has made encouraging progress. Methods for few-shot learning can be roughly categorized as metric based methods, learning to fine-tune methods, and sequence based methods.

Metric Based Methods. The basic idea of metric based methods is to learn a metric to measure the similarity between samples Vinyals et al. (2016); Sung et al. (2018). Koch et al. (2015) proposed siamese neural networks for one-shot learning, which employ a unique structure to naturally rank similarity between inputs. Mehrotra & Dukkipati (2017) proposed to use residual blocks to improve the expressive ability of siamese networks, arguing that a learnable and more expressive similarity objective is an essential part of few-shot learning. Bertinetto et al. (2016) proposed to learn the parameters of a deep model (the pupil network) in one shot by constructing a second neural network which predicts the parameters of the pupil network from a single sample; to make this approach feasible, they proposed a number of factorizations of the pupil network's parameters. Vinyals et al. (2016) proposed to learn a matching network, which maps a small labeled support set and an unlabeled example to its label using a recurrent neural network. The network employs attention and memory mechanisms to enable rapid learning and is based on the principle that test and train conditions must match. They also proposed an episode-based training procedure for few-shot learning which has been followed by many papers.

Snell et al. (2017) proposed prototypical networks, which perform few-shot classification by computing distances to prototype representations of each class, where a prototype is the average of the embedding vectors of the samples from the same class. This was extended to Gaussian prototypical networks by Fort (2017). Ren et al. (2018) used prototypical networks for semi-supervised few-shot learning. Kaiser et al. (2017) proposed to achieve few-shot learning and life-long learning through continuous updating of memory during the learning process; their large-scale memory module uses fast nearest-neighbor algorithms and achieves lifelong learning without resetting the memory module during training. Garcia & Bruna (2018) proposed graph neural networks for few-shot classification, which use a graph structure to model the relations between samples and can be extended to semi-supervised few-shot learning and active learning.

Sung et al. (2018) argued that the embedding space should be classified by a non-linear classifier and proposed to compare a support set and a query using a relation network where a distance criterion is learned via a trainable neural network to measure the similarity between two samples. Our large margin method can be potentially applied to almost all these models.

Learning to Fine-Tune Methods. Munkhdalai & Yu (2017) proposed meta networks, which learn meta-level knowledge across tasks and can produce a new model through fast parameterization. Ravi & Larochelle (2017) proposed an LSTM-based meta-learner model that learns both an initial condition and a general optimization strategy for few-shot learning, which can be used to update the learner network (classifier) at test time. Finn et al. (2017) proposed a model-agnostic meta-learning (MAML) approach which learns the initialization of a model such that, starting from this initialization, the model can quickly adapt to new tasks with a small number of gradient steps. It can be incorporated into many learning problems, such as classification, regression, and reinforcement learning.

Li et al. (2017) proposed Meta-SGD, which learns not only the initialization but also the update direction and the learning rate of the stochastic gradient descent algorithm. Experiments show that it can learn faster and more accurately than MAML.

Sequence Based Methods. Sequence based methods for few-shot learning accumulate knowledge learned in the past and enable generalization on new samples with the learned knowledge. Santoro et al. (2016) introduced an external memory on recurrent neural networks to make predictions with only a few samples. With the external memory, it offers the ability to quickly encode and retrieve new information. Mishra et al. (2018) proposed a meta-learner architecture which uses temporal convolution and attention mechanism to accumulate past information. It can quickly incorporate past experience and can be applied to both few-shot learning and reinforcement learning.

Figure 5: Test accuracy of PN and L-PN on Mini-Imagenet as (a) the balancing parameter $\lambda$ and (b) the margin $m$ change. (Left: 1-shot, right: 5-shot)

5.2 Large Margin Learning

Large margin methods Platt et al. (2000); Tsochantaridis et al. (2005); Weinberger et al. (2005); Zien & Candela (2005); Parameswaran & Weinberger (2010); Zhang et al. (2011); Schroff et al. (2015) have been widely used in machine learning, including multiclass classification Platt et al. (2000), multi-task learning Parameswaran & Weinberger (2010), transfer learning Zhang et al. (2011), etc. Due to the vast literature on large margin methods, we only review the most relevant works. Weinberger et al. (2005) proposed to learn a Mahalanobis distance metric for $k$-nearest neighbor classification using semidefinite programming; the main idea is that the $k$ nearest neighbors of a sample should always belong to the same class, while samples from different classes should be separated by a large margin. Parameswaran & Weinberger (2010) extended the large margin nearest neighbor algorithm to the multi-task paradigm. Schroff et al. (2015) proposed a large margin method for face recognition which uses the triplet loss to learn a mapping from face images to a compact Euclidean space.

Zien & Candela (2005) proposed to maximize the Jensen-Shannon divergence for large margin nonlinear embeddings. The idea is to learn the embeddings of data with fixed decision boundaries, which is the opposite of the usual classification process. Our method is similar in spirit in adding a large margin prior to learn the embeddings. There are also works applying large margin methods to attribute-based zero-shot Akata et al. (2013) and few-shot learning Li & Guo (2015), but their problem setups are very different from the few-shot (meta) learning considered in this paper.

A number of recent works Ranjan et al. (2017); Hadsell et al. (2006); Sun et al. (2014); Liu et al. (2016, 2017a); Wang et al. (2018a, b); Deng et al. (2018); Liu et al. (2017b) realize large margin embedding by defining various loss functions. Hadsell et al. (2006) first proposed the contrastive loss and applied it to dimensionality reduction. Sun et al. (2014) combined the cross entropy loss and the contrastive loss to learn deep face representations; by combining an identification task with a verification task, it reduces intra-personal variations and enlarges inter-personal differences. Liu et al. (2016) proposed the large margin softmax loss for training convolutional neural networks, which explicitly encourages inter-class separability and intra-class compactness by defining an adjustable, multiplicative margin. Motivated by the idea that the learned features should be both discriminative and polymerized, Liu et al. (2017b) introduced the congenerous cosine algorithm to optimize the cosine similarity among data.

Wang et al. (2017) proposed the normface loss by introducing a vector for each class and optimizing the cosine similarity. Wang et al. (2018b) proposed the cosface loss by defining an additive margin on the cosine space of normalized embeddings and weight vectors. Deng et al. (2018) extended the cosface loss to the arcface loss by setting an additive margin on the angular space instead of the cosine space. All these loss functions can be applied to implement the large margin prior for few-shot learning. We discuss and compare these methods in the next section.

6 Discussion

Model                          Dist       1-shot    5-shot
PN                             Euclid.
L-PN ()                        Euclid.
PN + normalized triplet        Euclid.
GNN
L-GNN ()
GNN + normalized triplet
GNN + normalized contrastive
GNN + normface
GNN + cosface ()
GNN + arcface ()
Table 4: 5-way few-shot learning on Mini-Imagenet. Results are averaged with confidence intervals; some entries are not reported, and some settings fail to converge.

In this section, we implement and compare several of the aforementioned loss functions for large margin few-shot learning, including the normalized triplet loss, the normalized contrastive loss Hadsell et al. (2006); Sun et al. (2014), the normface loss Wang et al. (2017), the cosface loss Wang et al. (2018b), and the arcface loss Deng et al. (2018). We test these models for 1-shot and 5-shot learning on Mini-Imagenet, and the results are summarized in Tables 4, 5, 6, and 7.

All these models normalize both the weight vectors of the softmax classifier and the embeddings:

$$\tilde{W}_j = \frac{W_j}{\|W_j\|_2}, \qquad \tilde{f} = \frac{f}{\|f\|_2}.$$

After normalization, the cosine of the angle between the $j$-th weight vector and the embedding vector is $\cos\theta_j = \tilde{W}_j^{\top}\tilde{f}$. By introducing a scale factor $s$, the softmax loss can be rewritten as:

$$\mathcal{L}_{\mathrm{softmax}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{s\cos\theta_{y_i}}}{\sum_{j} e^{s\cos\theta_{j}}} \qquad (10)$$
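As a concrete illustration, the scaled cosine softmax of Eq. (10) can be sketched in NumPy as follows (the function name and the default scale are illustrative, not from the paper):

```python
import numpy as np

def normalized_softmax_loss(W, f, y, s=10.0):
    """Softmax loss over cosine similarities of L2-normalized class
    weight vectors W (C x d) and embeddings f (N x d), scaled by a
    factor s, as in Eq. (10)."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    fn = f / np.linalg.norm(f, axis=1, keepdims=True)
    logits = s * fn @ Wn.T                       # N x C matrix of s * cos(theta_j)
    logits -= logits.max(axis=1, keepdims=True)  # subtract row max for stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()
```

Normalizing both factors makes the logits depend only on the angles between embeddings and class weights, so the scale $s$ controls the sharpness of the resulting distribution.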

Normalized triplet loss. Similarly, the normalized triplet loss can be defined as:

$$\mathcal{L}_{\mathrm{triplet}} = \frac{1}{T}\sum_{(a,p,n)} \max\left(0,\; \|\tilde{f}_a - \tilde{f}_p\|_2^2 - \|\tilde{f}_a - \tilde{f}_n\|_2^2 + m\right) \qquad (11)$$

where $m$ is the margin and $T$ is the number of triplets. The normalized triplet loss (11) can be combined with (10) to train large margin graph neural networks (GNN), and the total training loss is:

$$\mathcal{L} = \mathcal{L}_{\mathrm{softmax}} + \lambda\,\mathcal{L}_{\mathrm{triplet}} \qquad (12)$$

Similarly, it can also be incorporated into the softmax loss to train large margin prototypical networks (PN), with the normalized embeddings scaled by the factor $s$.

We test GNN and PN with the augmented normalized triplet loss on Mini-Imagenet. For the experiments in Table 4, we set the margin $m$ to the average $\ell_2$-norm of all embeddings in one mini-batch, computed with randomly initialized parameters at the start of training. The loss weight and the scale factor $s$ are fixed across all experiments, with $s$ chosen following Wang et al. (2017). We also test the normalized triplet loss with different values of these hyperparameters, as summarized in Table 6. The results show that the normalized triplet loss consistently improves PN and GNN for all learning tasks, demonstrating the benefits of the large margin principle for few-shot learning. For PN, the normalized triplet loss performs even better than the unnormalized triplet loss, which shows that under some circumstances normalizing the embedding space helps train a better classifier. However, for GNN, the normalized triplet loss is not as robust as the unnormalized triplet loss: as shown in Tables 5 and 6, the latter consistently outperforms the former.
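A minimal NumPy sketch of the normalized triplet term in Eq. (11), using squared Euclidean distances on unit-normalized embeddings (the function names and the default margin are illustrative, not from the paper):

```python
import numpy as np

def l2n(x):
    """L2-normalize each row."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def normalized_triplet_loss(anchor, pos, neg, m=0.5):
    """Hinge triplet loss on L2-normalized embeddings, Eq. (11):
    the anchor-positive distance must be smaller than the
    anchor-negative distance by at least margin m, averaged over
    T triplets (rows of the three T x d arrays)."""
    a, p, n = l2n(anchor), l2n(pos), l2n(neg)
    d_ap = ((a - p) ** 2).sum(axis=1)
    d_an = ((a - n) ** 2).sum(axis=1)
    return np.maximum(0.0, d_ap - d_an + m).mean()
```

In training, this term would be added to the scaled softmax loss with a weight, as in Eq. (12).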

Model     1-shot    5-shot
GNN
L-GNN
Table 5: 5-way few-shot learning using L-GNN (with the unnormalized triplet loss) on Mini-Imagenet. Results are averaged with confidence intervals.

Normalized contrastive loss. The contrastive loss was first introduced by Hadsell et al. (2006) and is defined as:

$$\mathcal{L}_{\mathrm{contrastive}} = \frac{1}{N}\sum_{i=1}^{N}\Big[\, y_i\, d_i^2 + (1-y_i)\max(0,\, m - d_i)^2 \Big], \qquad d_i = \|f_{i,1} - f_{i,2}\|_2 \qquad (13)$$

where $m$ is the margin and $y_i = 1$ if the two samples of the $i$-th pair belong to the same class, and $y_i = 0$ otherwise. The idea is to pull neighbors together and push non-neighbors apart. The normalized contrastive loss is defined similarly by replacing the embeddings $(f_{i,1}, f_{i,2})$ with the normalized ones $(\tilde{f}_{i,1}, \tilde{f}_{i,2})$. Like the normalized triplet loss, the normalized contrastive loss can be combined with the softmax loss to train large margin PN and large margin GNN.
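The pairwise loss of Eq. (13) can be sketched in NumPy as follows (the `normalize` flag and default margin are illustrative; setting `normalize=True` gives the normalized variant used here):

```python
import numpy as np

def contrastive_loss(f1, f2, y, m=1.0, normalize=True):
    """Contrastive loss of Hadsell et al. (2006), Eq. (13):
    similar pairs (y=1) are pulled together, dissimilar pairs
    (y=0) are pushed at least margin m apart. f1, f2 are N x d
    arrays of paired embeddings; y is a length-N 0/1 array."""
    if normalize:
        f1 = f1 / np.linalg.norm(f1, axis=1, keepdims=True)
        f2 = f2 / np.linalg.norm(f2, axis=1, keepdims=True)
    d = np.linalg.norm(f1 - f2, axis=1)       # Euclidean pair distances
    return (y * d**2 + (1 - y) * np.maximum(0.0, m - d)**2).mean()
```

Note that dissimilar pairs contribute nothing once they are farther apart than $m$, which is what gives the loss its margin behavior.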

We test GNN with the augmented normalized contrastive loss on Mini-Imagenet, with the same parameter setup as the normalized triplet loss; the results are summarized in Tables 4 and 6. The normalized contrastive loss consistently improves GNN for all learning tasks, which again confirms that the large margin principle is useful for few-shot learning. We can see from Tables 5 and 6 that the normalized contrastive loss is comparable to the unnormalized triplet loss for 5-shot learning, but not as robust for the more challenging 1-shot learning. We also tested the unnormalized contrastive loss, but found it very unstable and prone to diverge during training.

Normface loss. The normface loss Wang et al. (2017) was proposed to improve the performance of face verification; it identifies and studies the issues raised by normalizing the embeddings and the weight vectors of the softmax classifier. Four different loss functions were proposed by Wang et al. (2017), and here we use the best model reported there. The normface loss consists of two parts. One is the softmax loss (10), and the other is another form of the contrastive loss:

$$\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N}\Big[\, \|\tilde{f}_i - \tilde{W}_{y_i}\|_2^2 + \sum_{j\neq y_i}\max\big(0,\, m - \|\tilde{f}_i - \tilde{W}_j\|_2\big)^2 \Big] \qquad (14)$$

which is obtained by replacing one embedding in the normalized contrastive loss with the normalized weight vector $\tilde{W}_j$ of the softmax classifier. The normface loss can be combined with the softmax loss (10) to train large margin GNN. However, it cannot be directly applied to PN because PN uses a non-parametric classifier.
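A sketch of this class-weight contrastive term: the following assumes, as in the C-contrastive loss of Wang et al. (2017), that each sample is pulled toward its own normalized class weight and pushed away from every other class weight; the function name, default margin, and the exact summation over classes are assumptions for illustration.

```python
import numpy as np

def normface_contrastive(W, f, y, m=1.0):
    """Contrastive term of the normface loss, Eq. (14): the second
    embedding of each pair is replaced by a normalized class weight
    vector. W is C x d, f is N x d, y holds target class indices."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    fn = f / np.linalg.norm(f, axis=1, keepdims=True)
    total = 0.0
    for i in range(len(f)):
        for j in range(len(W)):
            d = np.linalg.norm(fn[i] - Wn[j])
            if j == y[i]:
                total += d ** 2                  # pull toward own class weight
            else:
                total += max(0.0, m - d) ** 2    # push away from other classes
    return total / len(f)
```

Because the "negative" in each pair is a learned class weight rather than another sample, the loss requires a parametric classifier, which is why it does not apply directly to PN.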

We test GNN with the augmented normface loss on Mini-Imagenet, with the same experimental setup as the normalized triplet loss. Results in Tables 4, 5, and 6 show that the normface loss significantly improves GNN for all learning tasks, and is robust and comparable to the unnormalized triplet loss. This suggests that the normface loss may be a good alternative for implementing the large margin principle in metric based few-shot learning methods that use a parametric classifier.

              Normalized triplet loss   Normalized contrastive loss   Normface loss
GNN (1-shot)
GNN (5-shot)
Table 6: 5-way few-shot learning using GNN with various loss functions on Mini-Imagenet. Results are averaged with confidence intervals.

Cosface loss. Wang et al. (2018b) proposed the cosface loss, which is defined as:

$$\mathcal{L}_{\mathrm{cosface}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{s(\cos\theta_{y_i} - m)}}{e^{s(\cos\theta_{y_i} - m)} + \sum_{j\neq y_i} e^{s\cos\theta_{j}}} \qquad (15)$$

It introduces an additive margin $m$ in the cosine space of the normalized embeddings and weight vectors of the softmax classifier, which renders the learned features more discriminative. The cosface loss can be applied to train large margin GNN. However, it cannot be directly applied to PN, as PN uses a non-parametric classifier.
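The additive cosine margin of Eq. (15) amounts to a one-line change to the scaled cosine softmax: subtract $m$ from the target-class cosine before the softmax. A sketch (the default values of $s$ and $m$ are placeholders, not values from the paper):

```python
import numpy as np

def cosface_loss(W, f, y, s=10.0, m=0.35):
    """Cosface loss of Wang et al. (2018b), Eq. (15): an additive
    margin m is subtracted from the target-class cosine before
    applying the scaled softmax."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    fn = f / np.linalg.norm(f, axis=1, keepdims=True)
    cos = fn @ Wn.T                              # N x C cosine similarities
    cos[np.arange(len(y)), y] -= m               # additive cosine margin
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()
```

Handicapping the target class in this way forces the learned cosine similarity to the correct class to exceed all others by at least $m$.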

We test GNN with the cosface loss on Mini-Imagenet. The margin $m$ used for the experiments in Table 4 is fixed, and different values of $m$ are tested in Table 7. For all experiments, we set the scale factor $s$ as in the normalized triplet loss. Results in Table 4 show that the cosface loss can perform better than GNN when $m$ is chosen properly, which again shows the usefulness of the large margin principle. However, the selection of $m$ is non-trivial. Wang et al. (2018b) suggest a proper range of $m$ for face recognition, but for few-shot learning, as shown in Table 7, the performance of the cosface loss decreases significantly as $m$ increases. This shows that the cosface loss is sensitive to the margin and, overall, is not comparable to the unnormalized triplet loss.

Arcface loss. Deng et al. (2018) proposed the arcface loss, which is defined as:

$$\mathcal{L}_{\mathrm{arcface}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{s\cos(\theta_{y_i} + m)}}{e^{s\cos(\theta_{y_i} + m)} + \sum_{j\neq y_i} e^{s\cos\theta_{j}}} \qquad (16)$$

The arcface loss extends the cosface loss by defining the margin in the angular space instead of the cosine space; the angular margin has a clearer geometric interpretation than the cosine margin. It was reported in Deng et al. (2018) that the arcface loss obtains more discriminative deep features than other multiplicative angular margin and additive cosine margin methods. Like the cosface loss, it can be applied to train large margin GNN, but cannot be directly applied to PN.
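The angular margin of Eq. (16) can be sketched by recovering the target angle with an arccos, adding $m$, and mapping back to a cosine (again, the default $s$ and $m$ are placeholders, not values from the paper):

```python
import numpy as np

def arcface_loss(W, f, y, s=10.0, m=0.2):
    """Arcface loss of Deng et al. (2018), Eq. (16): the margin m
    is added to the target-class angle theta before taking the
    cosine, i.e. cos(theta_y) becomes cos(theta_y + m)."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    fn = f / np.linalg.norm(f, axis=1, keepdims=True)
    cos = np.clip(fn @ Wn.T, -1.0, 1.0)          # guard arccos against rounding
    theta = np.arccos(cos)
    idx = np.arange(len(y))
    cos[idx, y] = np.cos(theta[idx, y] + m)      # additive angular margin
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, y].mean()
```

The arccos/cos round-trip is what distinguishes arcface from cosface: the penalty is uniform in angle rather than in cosine, but it also makes the loss surface steeper near the boundary, consistent with the training instability observed below.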

We test GNN with the arcface loss on Mini-Imagenet. The margin $m$ used for the experiments in Table 4 is fixed, and different values of $m$, as suggested by Deng et al. (2018), are tested in Table 7. For all experiments, we set the scale factor $s$ as in the normalized triplet loss. The results show that the arcface loss can perform better than GNN when it converges, which again confirms the effectiveness of the large margin principle. However, as shown in Table 7, the arcface loss diverges in training in most of the tested cases and converges in only one. This shows that the arcface loss is very sensitive to $m$ and is not comparable to the unnormalized triplet loss for few-shot learning.

To summarize, the experiments show that all large margin losses can substantially improve the original few-shot learning models, demonstrating the benefits of the large margin principle. Compared with the other loss functions, the unnormalized triplet loss has two clear advantages. First, it is more general and can be easily incorporated into metric based few-shot learning methods; as noted above, the normface, cosface, and arcface losses cannot be directly applied to methods that use non-parametric classifiers. Second, it is more robust than the normalized triplet loss, the normalized contrastive loss, the cosface loss, and the arcface loss. In terms of running time, the computational overheads of all these loss functions are small, similar to that of the unnormalized triplet loss.

         Cosface loss           Arcface loss
         1-shot    5-shot       1-shot    5-shot
GNN
Table 7: 5-way few-shot learning using GNN with various loss functions on Mini-Imagenet. Results are averaged with confidence intervals; some settings fail to converge.

7 Conclusions

This paper proposes a large margin principle for metric based few-shot learning and demonstrates its usefulness in improving the generalization capacity of two state-of-the-art methods. Our framework is simple, efficient, robust, and can be applied to benefit many existing and future few-shot learning methods. Future work includes developing theoretical guarantees for large margin few-shot learning and applying our method to solve real-world problems.

References

  • Akata et al. (2013) Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. Label-embedding for attribute-based classification. In CVPR, 2013.
  • Bertinetto et al. (2016) Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. In NIPS, 2016.
  • Deng et al. (2018) Jiankang Deng, Jia Guo, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In arXiv:1801.07698, 2018.
  • Fei-Fei et al. (2006) Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. In TPAMI, 2006.
  • Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
  • Fort (2017) Stanislav Fort. Gaussian prototypical networks for few-shot learning on omniglot. In NIPS workshop, 2017.
  • Fu et al. (2015) Yanwei Fu, Timothy M. Hospedales, Tao Xiang, and Shaogang Gong. Transductive multi-view zero-shot learning. In TPAMI, 2015.
  • Garcia & Bruna (2018) Victor Garcia and Joan Bruna. Few-shot learning with graph neural networks. In ICLR, 2018.
  • Hadsell et al. (2006) Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • Kaiser et al. (2017) Łukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. Learning to remember rare events. In ICLR, 2017.
  • Koch et al. (2015) Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML, 2015.
  • Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • Lake et al. (2011) Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In CogSci, 2011.
  • Lampert et al. (2009) Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
  • Li & Guo (2015) Xin Li and Yuhong Guo. Max-margin zero-shot learning for multi-class classification. In AISTATS, 2015.
  • Li et al. (2017) Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-sgd: Learning to learn quickly for few-shot learning. In arXiv:1707.09835, 2017.
  • Liu et al. (2016) Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. Large-margin softmax loss for convolutional neural networks. In ICML, 2016.
  • Liu et al. (2017a) Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. Sphereface: Deep hypersphere embedding for face recognition. In CVPR, 2017a.
  • Liu et al. (2017b) Yu Liu, Hongyang Li, and Xiaogang Wang. Rethinking feature discrimination and polymerization for large-scale recognition. In NIPS Workshop, 2017b.
  • Mehrotra & Dukkipati (2017) Akshay Mehrotra and Ambedkar Dukkipati. Generative adversarial residual pairwise networks for one shot learning. In arXiv:1703.08033, 2017.
  • Mishra et al. (2018) Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In ICLR, 2018.
  • Mood & Graybill (1974) Alexander M. Mood and Franklin A. Graybill. Introduction to the theory of statistics. In McGraw Hill: New York, 1974.
  • Munkhdalai & Yu (2017) Tsendsuren Munkhdalai and Hong Yu. Meta networks. In ICML, 2017.
  • Parameswaran & Weinberger (2010) Shibin Parameswaran and Kilian Q. Weinberger. Large margin multi-task metric learning. In NIPS, 2010.
  • Platt et al. (2000) John C. Platt, Nello Cristianini, and John Shawe-Taylor. Large margin dags for multiclass classification. In NIPS, 2000.
  • Ranjan et al. (2017) Rajeev Ranjan, Carlos D. Castillo, and Rama Chellappa. L2-constrained softmax loss for discriminative face verification. In arXiv:1703.09507, 2017.
  • Ravi & Larochelle (2017) Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
  • Ren et al. (2018) Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, and Richard S. Zemel. Meta-learning for semi-supervised few-shot classification. In ICLR, 2018.
  • Santoro et al. (2016) Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, 2016.
  • Schroff et al. (2015) Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, 2015.
  • Snell et al. (2017) J. Snell, K. Swersky, and R. S. Zemel. Prototypical networks for few-shot learning. In NIPS, 2017.
  • Sun et al. (2014) Yi Sun, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation by joint identification-verification. In NIPS, 2014.
  • Sung et al. (2018) Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, 2018.
  • Tsochantaridis et al. (2005) Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. In JMLR, 2005.
  • Vinyals et al. (2016) O. Vinyals, C. Blundell, T. Lillicrap, and D. Wierstra. Matching networks for one shot learning. In NIPS, 2016.
  • Wang et al. (2017) Feng Wang, Xiang Xiang, Jian Cheng, and Alan L. Yuille. Normface: L2 hypersphere embedding for face verification. In ACM MM, 2017.
  • Wang et al. (2018a) Feng Wang, Weiyang Liu, Haijun Liu, and Jian Cheng. Additive margin softmax for face verification. In arXiv:1801.05599, 2018a.
  • Wang et al. (2018b) Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In CVPR, 2018b.
  • Weinberger et al. (2005) Kilian Q. Weinberger, John Blitzer, and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. In NIPS, 2005.
  • Xian et al. (2018) Yongqin Xian, Christoph H. Lampert, Bernt Schiele, and Zeynep Akata. Zero-shot learning - a comprehensive evaluation of the good, the bad and the ugly. In TPAMI, 2018.
  • Zhang et al. (2011) Dan Zhang, Jingrui He, Yan Liu, Luo Si, and Richard D. Lawrence. Multi-view transfer learning with a large margin approach. In KDD, 2011.
  • Zien & Candela (2005) Alexander Zien and Joaquin Quinonero Candela. Large margin non-linear embedding. In ICML, 2005.