1 Introduction
Current state of the art on image classification [41, 11, 15], object detection [21, 36, 10], semantic segmentation [52, 2, 20], and practically most tasks with some degree of learning involved, relies on deep neural networks. These are powerful high-capacity models with trainable parameters ranging from millions to tens of millions, which require vast amounts of annotated data to fit. When such data is plentiful, supervised learning is the solution of choice.
Tasks and classes with limited available data, i.e. from the long tail [48], are highly problematic for this type of approach. The performance of deep neural networks poses several challenges in the low-data regime, in particular in terms of overfitting and generalization. The subject of few-shot learning is learning to recognize previously unseen classes with very few annotated examples. This is not a new problem [4], yet there is a recent resurgence of interest through meta-learning [18, 44, 39, 1, 5], inspired by early work in learning-to-learn [43, 13].
In meta-learning settings, even when there is a single large training set with a fixed number of classes, it is treated as a collection of datasets of different classes, where each class has a few annotated examples. This is done so that both meta-learning and meta-testing are performed in a similar manner [44, 39, 5]. However, this choice does not always come with the best performance. We argue that a simple conventional pipeline using all available classes and data with a parametric classifier is effective and appealing.

Most few-shot learning approaches do not deal explicitly with spatial information, since feature maps are usually flattened or pooled before the classification layer. We show that performing a dense classification over feature maps leads to more precise classification and consistently improves performance on standard benchmarks.
While incremental learning shares aspects with few-shot learning, by learning to adapt to new tasks using the same network [26, 25] or by extending an existing network with new layers and parameters for each new task [38], few of these ideas have been adopted in few-shot learning. The main impediment is the reduced number of training examples, which makes it difficult to properly define a new task. We propose a solution for leveraging incremental learning ideas in few-shot learning.
Contributions: We present the following contributions. First, we propose a simple extension of few-shot learning pipelines consisting of dense classification over feature maps. Through localized supervision, it enables reaping additional knowledge from the limited training data. Second, we introduce neural implants, which are layers attached to an already trained network, enabling it to quickly adapt to new tasks with few examples. Both are easy to implement and show consistent performance gains.
2 Problem formulation and background
Problem formulation. We are given a collection of training examples $X := (x_1, \dots, x_n)$, with each $x_i \in \mathcal{X}$, and corresponding labels $Y := (y_1, \dots, y_n)$, with each $y_i \in C$, where $C := [c]$ is a set of base classes (we use the notation $[m] := \{1, \dots, m\}$ for $m \in \mathbb{N}$). On this training data we are allowed to learn a representation of the domain such that we can solve new tasks. We call this representation learning stage 1.
In few-shot learning, one new task is the following: we are given a collection of few support examples $X' := (x'_1, \dots, x'_{n'})$, with each $x'_i \in \mathcal{X}$, and corresponding labels $Y' := (y'_1, \dots, y'_{n'})$, with each $y'_i \in C'$, where $C'$ is a set of $c'$ novel classes disjoint from $C$, and $n' \ll n$; with this new data, the objective is to learn a classifier that maps a new query example from $\mathcal{X}$ to a label prediction in $C'$. This latter classifier learning, which does not exclude continuing the representation learning, we shall call stage 2.
Classification is called $c'$-way, where $c'$ is the number of novel classes; in case there is a fixed number $k$ of support examples per novel class, it is called $k$-shot. As in standard classification, there is typically a collection of queries for the evaluation of each task. Few-shot learning is typically evaluated on a large number of new tasks, with queries and support examples randomly sampled from the novel-class data.
Network model. We consider a model that is conceptually composed of two parts: an embedding network and a classifier. The embedding network $\phi_\theta : \mathcal{X} \to \mathbb{R}^{r \times d}$ maps the input to an embedding, where $\theta$ denotes its parameters. Since we shall be studying the spatial properties of the input, the embedding is not a vector but rather a tensor, where $r$ represents the spatial dimensions and $d$ the feature dimensions. For a 2d input image and a convolutional network, for instance, the embedding is a 3d tensor in $\mathbb{R}^{w \times h \times d}$, taken as the activation of the last convolutional layer, where $r = w \times h$ is its spatial resolution. The embedding can still be a vector in the special case $r = 1$.

The classifier network can be of any form and depends on the particular model, but it is applied on top of $\phi_\theta$ and its output represents confidence over the $c$ (resp. $c'$) base (resp. novel) classes. If we denote by $f_\theta : \mathcal{X} \to \mathbb{R}^c$ (resp. $\mathbb{R}^{c'}$) the network function mapping the input to class confidence, then a prediction for input $x \in \mathcal{X}$ is made by assigning the label of maximum confidence, $\arg\max_j f_\theta^j(x)$ (given a vector $a$, $a^j$ denotes its $j$-th element; similarly, $f_\theta^j(x)$ denotes the $j$-th element of $f_\theta(x)$).
Prototypical networks. Snell et al. [42] introduce a simple classifier for novel classes that computes a single prototype per class and then classifies a query to the nearest prototype. More formally, given $X'$ and an index set $S \subset [n']$, let $S_j := \{i \in S : y'_i = j\}$ index the support examples in $X'$ labeled in class $j$. The prototype of class $j$ is given by the average of the embeddings of those examples,

$p_j := \frac{1}{|S_j|} \sum_{i \in S_j} \phi_\theta(x'_i)$  (1)

for $j \in C'$. Then, the network function is defined as

$f_\theta(x) := \sigma\left( \left[ s(\phi_\theta(x), p_j) \right]_{j=1}^{c'} \right)$  (2)

for $x \in \mathcal{X}$, where $P := (p_1, \dots, p_{c'})$ is the collection of prototypes, the notation $[g(j)]_{j=1}^{m} := (g(1), \dots, g(m))$ applies to any expression $g$ of variable $j$, $s$ is a similarity function that may be cosine similarity or negative squared Euclidean distance, and $\sigma$ is the softmax function defined by

$\sigma(a) := \frac{\left( e^{a_1}, \dots, e^{a_m} \right)}{\sum_{j=1}^{m} e^{a_j}}$  (3)

for $a \in \mathbb{R}^m$ and $m \in \mathbb{N}$.

Given a new task with support data $(X', Y')$ over novel classes $C'$ (stage 2), the full index set $S = [n']$ is used, and computing the class prototypes (1) is the only learning to be done.
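As a concrete illustration, the prototype computation (1) and the softmax-over-similarities classifier (2)-(3) can be sketched in a few lines of NumPy; this is a toy sketch with illustrative names and random data, not the authors' implementation:

```python
import numpy as np

def softmax(a):
    # Numerically stable softmax, as in eq. (3).
    e = np.exp(a - a.max())
    return e / e.sum()

def prototypes(support_emb, support_labels, num_classes):
    # Eq. (1): each class prototype is the mean embedding of its support examples.
    return np.stack([support_emb[support_labels == j].mean(axis=0)
                     for j in range(num_classes)])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def proto_classify(query_emb, protos):
    # Eq. (2): softmax over similarities between the query and the prototypes.
    return softmax(np.array([cosine(query_emb, p) for p in protos]))

# Toy 2-way task: two well-separated classes in R^4 (embeddings, not images).
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0, 0.1, (5, 4)) + np.array([1., 0., 0., 0.]),
                          rng.normal(0, 0.1, (5, 4)) + np.array([0., 1., 0., 0.])])
labels = np.array([0] * 5 + [1] * 5)
P = prototypes(support, labels, 2)
probs = proto_classify(np.array([1.0, 0.0, 0.0, 0.0]), P)
```

With cosine similarity as $s$, the query is simply assigned to the class of the most similar prototype.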
When learning from the training data $(X, Y)$ over base classes $C$ (stage 1), a number of fictitious tasks called episodes are generated by randomly sampling a number of classes from $C$ and then a number of examples in each class from $X$, with their labels from $Y$. These collections, denoted respectively $\tilde{X}$ and $\tilde{Y}$, each of length $\tilde{n}$, are treated as support examples and queries of supposedly novel classes, where labels are now available for the queries and the objective is that queries are classified correctly. The set $[\tilde{n}]$ is partitioned into a support set $S$ and a query set $Q$. Class prototypes are computed on index set $S$ according to (1) and the network function $f_\theta$ is defined on these prototypes by (2). The network is then trained by minimizing over $\theta$ the cost function

$J(\tilde{X}, \tilde{Y}; \theta) := \sum_{i \in Q} \ell(f_\theta(\tilde{x}_i), \tilde{y}_i)$  (4)

on the query set $Q$, where $\ell$ is the cross-entropy loss

$\ell(a, y) := -\log \sigma_y(a)$  (5)

for $a \in \mathbb{R}^m$, $y \in [m]$ and $m \in \mathbb{N}$.
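The episode loss (4)-(5) is just cross-entropy summed over the query set. A minimal sketch, with illustrative names and assuming the per-query class scores (logits) have already been computed by the network:

```python
import numpy as np

def cross_entropy(logits, y):
    # Eq. (5): -log softmax_y(logits), computed in a numerically stable way.
    a = logits - logits.max()
    return -(a[y] - np.log(np.exp(a).sum()))

def episode_cost(query_logits, query_labels):
    # Eq. (4): sum of per-query cross-entropy losses over the query set Q.
    return sum(cross_entropy(l, y) for l, y in zip(query_logits, query_labels))

# Two queries over three fictitious classes.
logits = [np.array([2.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0])]
cost = episode_cost(logits, [0, 1])
```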
Learning with imprinted weights. Qi et al. [32] follow a simpler approach when learning on the training data $(X, Y)$ over base classes $C$ (stage 1). In particular, they use a fully-connected layer without bias as a parametric linear classifier on top of the embedding function $\phi_\theta$, followed by softmax, and they train in a standard supervised classification setting. More formally, let $w_j \in \mathbb{R}^d$ be the weight parameter of class $j$, for $j \in C$. Then, similarly to (2), the network function is defined by

$f_{\theta,W}(x) := \sigma\left( \left[ s_\tau(\phi_\theta(x), w_j) \right]_{j=1}^{c} \right)$  (6)

for $x \in \mathcal{X}$, where $W := (w_1, \dots, w_c)$ is the collection of class weights and $s_\tau$ is the scaled cosine similarity

$s_\tau(x, y) := \tau \langle \hat{x}, \hat{y} \rangle$  (7)

for $x, y \in \mathbb{R}^d$, where $\hat{x} := x / \|x\|$ is the normalized counterpart of $x$, $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ denote Frobenius inner product and norm respectively, and $\tau$ is a trainable scale parameter. Then, training amounts to minimizing over $\theta, W$ the cost function

$J(X, Y; \theta, W) := \sum_{i=1}^{n} \ell(f_{\theta,W}(x_i), y_i).$  (8)

Given a new task with support data $(X', Y')$ over novel classes $C'$ (stage 2), class prototypes $P := (p_1, \dots, p_{c'})$ are computed on $X'$ according to (1) and they are imprinted in the classifier, that is, $W$ is replaced by $(W, P)$. The network can now make predictions on base and novel classes. The network is then fine-tuned based on (8), which aligns the class weights with the prototypes, at the cost of having to store and retrain on the entire training data $(X, Y)$.
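The scaled cosine classifier (6)-(7) and the imprinting step can be sketched as follows; this is a NumPy toy with illustrative names (not the implementation of [32]), and the scale value is arbitrary:

```python
import numpy as np

def scaled_cosine_scores(emb, W, tau=10.0):
    # Eqs. (6)-(7): tau times the cosine similarity between the embedding
    # and every class weight w_j (all vectors l2-normalized first).
    e = emb / np.linalg.norm(emb)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    return tau * Wn @ e

def imprint(W_base, protos):
    # Stage 2: append l2-normalized novel-class prototypes as new class
    # weights, so the classifier covers base and novel classes jointly.
    P = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return np.concatenate([W_base, P], axis=0)

W = np.eye(3, 4)                                   # 3 base classes in R^4
protos = np.array([[0., 0., 0., 2.], [1., 1., 0., 0.]])  # 2 novel prototypes
W_all = imprint(W, protos)                         # 5 classes total
scores = scaled_cosine_scores(np.array([1., 0., 0., 0.]), W_all)
```

Softmax over these scores recovers the prediction of (6); fine-tuning would then continue to minimize (8).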
Few-shot learning without forgetting. Gidaris and Komodakis [6], concurrently with [32], develop a similar model that is able to classify examples of both base and novel classes. The main difference to [33] is that only the weight parameters of the base classes are stored, and not the entire training data. They use the same parametric linear classifier as [32] in both stages, and they also use episode-style training like [42] in stage 2.
3 Method
Given training data of base classes (stage 1), we use a parametric classifier as in [32, 6], which however applies at all spatial locations rather than following flattening or pooling; a very simple idea that we call dense classification, discussed in §3.1. Given support data of novel classes (stage 2), we learn in episodes as in prototypical networks [42], but on the true task. As discussed in §3.2, the embedding network learned in stage 1 remains fixed, but new layers called implants are trained to learn task-specific features. Finally, §3.3 discusses inference on novel class queries.
3.1 Dense classification
As discussed in §2, the embedding network $\phi_\theta$ maps the input to an embedding that is a tensor. There are two common ways of handling this high-dimensional representation, as illustrated in Figure 1.

The first is to apply one or more fully-connected layers, for instance in the networks C64F and C128F in few-shot learning [44, 42, 6]. This can be seen as flattening the activation into a long vector and multiplying with a weight vector of the same length per class; alternatively, the weight parameter is a tensor of the same dimension as the embedding. This representation is discriminative, but not invariant.

The second way is to apply global pooling and reduce the embedding to a smaller vector of length $d$, for instance in the small ResNet architectures used more recently in few-shot learning [27, 6, 31]. This reduces dimensionality significantly, so it makes sense if $d$ is large enough. It is an invariant representation, but less discriminative.
In this work we follow a different approach that we call dense classification, illustrated in Figure 2. We view the embedding $\phi_\theta(x)$ as a collection of vectors $\phi_\theta^{(k)}(x) \in \mathbb{R}^d$ for $k \in [r]$ (given a tensor $a \in \mathbb{R}^{r \times d}$, we denote by $a^{(k)} \in \mathbb{R}^d$ the $k$-th $d$-dimensional slice along the first group of dimensions, for $k \in [r]$). For a 2d image input and a convolutional network, $\phi_\theta(x)$ consists of the activations of the last convolutional layer, that is, a tensor in $\mathbb{R}^{w \times h \times d}$, where $r = w \times h$ is its spatial resolution. Then, $\phi_\theta^{(k)}(x)$ is an embedding in $\mathbb{R}^d$ that represents a single spatial location $k$ on the tensor.
When learning from the training data $(X, Y)$ over base classes $C$ (stage 1), we adopt the simple approach of training a parametric linear classifier on top of the embedding function $\phi_\theta$, like [32] and the initial training of [6]. The main difference in our case is that the weight parameters do not have the same dimensions as $\phi_\theta(x)$; they are rather vectors in $\mathbb{R}^d$ and they are shared over all spatial locations. More formally, let $w_j \in \mathbb{R}^d$ be the weight parameter of class $j$, for $j \in C$. Then, similarly to (6), the classifier mapping $f_{\theta,W} : \mathcal{X} \to \mathbb{R}^{r \times c}$ is defined by

$[f_{\theta,W}(x)]^{(k)} := \sigma\left( \left[ s_\tau(\phi_\theta^{(k)}(x), w_j) \right]_{j=1}^{c} \right)$  (9)

for $k \in [r]$, where $W := (w_1, \dots, w_c)$ is the collection of class weights and $s_\tau$ is the scaled cosine similarity defined by (7), with $\tau$ being a learnable parameter as in [32, 6]; temperature scaling is frequently encountered in various forms in several works, to enable soft labeling [12] or to improve cosine similarity in the final layer [46, 31, 6, 32, 14]. Here $f_{\theta,W}(x)$ is an $r \times c$ tensor: index $k$ ranges over spatial locations and $j$ over classes.

This operation is a $1 \times 1$ convolution followed by depthwise softmax. Then, $[f_{\theta,W}(x)]^{(k)}$ at spatial location $k$ is a vector in $\mathbb{R}^c$ representing confidence over the $c$ classes. On the other hand, $[f_{\theta,W}(x)]_j$ is a vector in $\mathbb{R}^r$ representing confidence of class $j$ as a function of spatial location (given a tensor $a \in \mathbb{R}^{r \times c}$, we denote by $a_j \in \mathbb{R}^r$ the $j$-th $r$-dimensional slice along the second group of dimensions, for $j \in [c]$). For a 2d image input, $[f_{\theta,W}(x)]_j$ is like a class activation map (CAM) [54] for class $j$, that is, a 2d map roughly localizing the response to the class, but differs in that softmax suppresses all but the strongest responses at each location.
Given the definition (9) of $f_{\theta,W}$, training amounts to minimizing over $\theta, W$ the cost function

$J(X, Y; \theta, W) := \sum_{i=1}^{n} \sum_{k=1}^{r} \ell\left( [f_{\theta,W}(x_i)]^{(k)}, y_i \right)$  (10)

where $\ell$ is the cross-entropy loss (5).
The loss function applies to all spatial locations, so the classifier is encouraged to make correct predictions everywhere. Learning a new task with support data over novel classes (stage 2) and inference are discussed in §3.2.2 and §3.3 respectively.
Discussion. The same situation arises in semantic segmentation [23, 30], where, given per-pixel labels, the loss function applies per pixel and the network learns to make localized predictions on upsampled feature maps rather than just classify. In our case there is just one image-level label, and the low image resolution of few-shot learning settings allows us to assume that the label applies to all locations, due to the large receptive field.
Dense classification improves the spatial distribution of class activations, as shown in Figure 3. By encouraging all spatial locations to be classified correctly, we are encouraging the embedding network to identify all parts of the object of interest rather than just the most discriminative details. Since each location on a feature map corresponds to a region in the image where only part of the object may be visible, our model behaves like implicit data augmentation of exhaustive shifts and crops over a dense grid with a single forward pass of each example in the network.
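To make the difference with pooling concrete, here is a minimal NumPy sketch of the dense classifier (9) and loss (10), treating the embedding as an $(r, d)$ matrix of location-wise vectors; the function names and the toy scale value are illustrative, not the authors' code:

```python
import numpy as np

def dense_scores(feat, W, tau=10.0):
    # feat: (r, d) embedding tensor (spatial locations x channels);
    # W: (c, d) class weights shared across all locations.
    # Eq. (9): scaled cosine scores at every spatial location, which is
    # equivalent to a 1x1 convolution over the feature map.
    f = feat / np.linalg.norm(feat, axis=1, keepdims=True)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    return tau * f @ Wn.T                 # shape (r, c)

def dense_loss(feat, W, y, tau=10.0):
    # Eq. (10): cross-entropy applied independently at each location,
    # all supervised with the single image-level label y.
    s = dense_scores(feat, W, tau)
    s = s - s.max(axis=1, keepdims=True)
    log_probs = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return -log_probs[:, y].sum()

W = np.array([[1., 0., 0.], [0., 1., 0.]])       # c=2 classes, d=3
feat = np.tile(np.array([1., 0., 0.]), (4, 1))   # r=4 locations, all class 0
scores = dense_scores(feat, W)
loss_correct = dense_loss(feat, W, 0)
loss_wrong = dense_loss(feat, W, 1)
```

Summing the per-location losses penalizes every location that fails to predict the image label, which is exactly the localized supervision described above.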
3.2 Implanting
From the learning on the training data of base classes (stage 1) we only keep the embedding network and discard the classification layer. The assumption is that features learned on base classes are generic enough to be used for other classes, at least in the bottom layers [51]. However, given a new few-shot task on novel classes (stage 2), we argue that we can take advantage of the support data to find new features that are discriminative for the task at hand, at least in the top layers.
3.2.1 Architecture
We begin with the embedding network , which we call base network. We widen this network by adding new convolution kernels in a number of its top convolutional layers. We call these new neurons implants. While learning the implants, we keep the base network parameters frozen, which preserves the representation of the base classes.
Let $a_l$ denote the output activation of convolutional layer $l$ in the base network. The implant for this layer, if it exists, is a distinct convolutional layer with output activation $a'_l$. Then, the input of an implant at the next layer is the depthwise concatenation $[a_l, a'_l]$ if $a'_l$ exists, and just $a_l$ otherwise. If $\theta'_l$ are the parameters of the $l$-th implant, then we denote by $\theta' := \{\theta'_l\}_{l=l_0}^{L}$ the set of all new parameters, where $l_0$ is the first layer with an implant and $L$ the network depth. The widened embedding network is denoted by $\phi_{\theta,\theta'}$.
As illustrated in Figure 4, we are creating a new stream of data in parallel to the base network. The implant stream is connected to the base stream at multiple top layers and leverages the previously learned features by learning additional connections for the new tasks.
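The data flow above can be sketched with plain linear+ReLU layers standing in for convolutions; this is an illustrative toy of the two-stream wiring, not the paper's architecture code:

```python
import numpy as np

def layer(params, x):
    # Stand-in for a convolutional layer: a linear map followed by ReLU.
    return np.maximum(params @ x, 0)

def implanted_forward(base_params, implant_params, x):
    # The base stream uses only frozen parameters; each implant receives the
    # depthwise concatenation [a_l, a'_l] of base and implant activations
    # (or just a_l at the first implanted layer), as in Sec. 3.2.1.
    a, a_imp = x, None
    for theta, theta_imp in zip(base_params, implant_params):
        inp = a if a_imp is None else np.concatenate([a, a_imp])
        a_next = layer(theta, a)        # base stream: frozen parameters
        a_imp = layer(theta_imp, inp)   # implant stream: trainable parameters
        a = a_next
    return np.concatenate([a, a_imp])   # widened embedding

rng = np.random.default_rng(0)
x = np.array([1., 2., 3., 4.])
base = [np.eye(4), np.eye(4)]                          # frozen toy base layers
implants = [rng.normal(size=(2, 4)),                   # first implant: d -> m
            rng.normal(size=(2, 6))]                   # next: (d + m) -> m
emb = implanted_forward(base, implants, x)
```

Because the base stream never reads from the implants, freezing its parameters leaves the original embedding of the base classes untouched.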
Why implanting? In several few-shot learning works, in particular in metric learning, it is common to focus on the top layer of the network and learn or generate a new classifier for the novel classes. The reason behind this choice reflects a major challenge in few-shot learning: deep neural networks are prone to overfitting. With implanting, we attempt to diminish this risk by adding a limited number of new parameters while keeping the previously trained ones intact. Useful visual representations learned from base classes can be quickly squashed during fine-tuning on the novel classes. With implants, we freeze them and train only the new neurons added to the network, maximizing the contribution of the knowledge acquired from the base classes.
3.2.2 Training
Learning the implants only makes sense when a new task is given, with support data $(X', Y')$ over novel classes $C'$ (stage 2). Here we use an approach similar to prototypical networks [42], in the sense that we generate a number of fictitious subtasks of the new task; the main difference is that we are now working on the novel classes.
We choose the simple approach of using each one of the given examples alone as a query in one subtask while all the rest are used as support examples. This involves no sampling and the process is deterministic. Because only one example is missing from the true support examples, each subtask approximates the true task very well.
In particular, for each $i \in [n']$, we define a query set $Q_i := \{i\}$ and a support set $S_i := [n'] \setminus \{i\}$. We compute class prototypes on index set $S_i$ according to (1), where we replace $\phi_\theta$ by $\phi_{\theta,\theta'}$ and $\theta'$ are the implanted parameters. We define the widened network function $f_{\theta,\theta'}$ on these prototypes by (2) with a similar replacement. We then freeze the base network parameters $\theta$ and train the implants by minimizing a cost function like (4). Similarly to (4), and taking all subtasks into account, the overall cost function we are minimizing over $\theta'$ is given by

$J(X', Y'; \theta') := \sum_{i=1}^{n'} \ell\left( f_{\theta,\theta'}(x'_i), y'_i \right)$  (11)

where $\ell$ is the cross-entropy loss (5) and the prototypes used for query $x'_i$ are computed on $S_i$.

In (11), activations are assumed flattened or globally pooled. Alternatively, we can densely classify them and apply the loss function to all spatial locations independently. Combining with (10), the cost function in this case is

$J(X', Y'; \theta') := \sum_{i=1}^{n'} \sum_{k=1}^{r} \ell\left( [f_{\theta,\theta'}(x'_i)]^{(k)}, y'_i \right).$  (12)

Prototypes in (11) or (12) are recomputed at each iteration based on the current version of the implants. Note that this training setup does not apply to the 1-shot scenario, as it requires at least two support samples per class.
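The deterministic leave-one-out subtask construction described above can be written directly (index sets only; names are illustrative):

```python
def leave_one_out_subtasks(n_support):
    # Sec. 3.2.2: example i in turn is the single query (Q_i = {i}) and all
    # remaining support examples form the support set S_i = [n'] \ {i}.
    # No sampling is involved: the construction is deterministic.
    return [({i}, set(range(n_support)) - {i}) for i in range(n_support)]

tasks = leave_one_out_subtasks(4)
```

Each of the $n'$ subtasks then contributes one cross-entropy term to (11) or (12), with prototypes recomputed on its own support set.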
3.3 Inference on novel classes
Inference is the same whether the embedding network has been implanted or not; here we adopt the prototypical network model too. What we have found to work best is to perform global pooling of the embeddings of the support examples and compute class prototypes $P := (p_1, \dots, p_{c'})$ by (1). Given a query $x \in \mathcal{X}$, the standard prediction is then to assign it to the nearest prototype,

$\arg\max_{j \in C'} \, s\left( \phi_\theta(x), p_j \right)$  (13)

where $s$ is cosine similarity [42]. Alternatively, we can densely classify the embedding $\phi_\theta(x)$, soft-assigning the embedding of each spatial location independently and then averaging over all locations, according to

$f_\theta(x) := \frac{1}{r} \sum_{k=1}^{r} \sigma\left( \left[ s_\tau(\phi_\theta^{(k)}(x), p_j) \right]_{j=1}^{c'} \right)$  (14)

where $s_\tau$ is the scaled cosine similarity (7), and finally classify to $\arg\max_{j \in C'} f_\theta^j(x)$.
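A minimal NumPy sketch of the dense inference rule (14); the embedding network and prototype computation are assumed given, and the scale value is illustrative:

```python
import numpy as np

def softmax_rows(a):
    # Row-wise numerically stable softmax.
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def dense_predict(feat, protos, tau=10.0):
    # Eq. (14): soft-assign every spatial location (row of feat) to the
    # class prototypes, average the per-location distributions over all
    # r locations, then take the argmax over novel classes.
    f = feat / np.linalg.norm(feat, axis=1, keepdims=True)
    P = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    probs = softmax_rows(tau * f @ P.T)    # shape (r, c')
    return probs.mean(axis=0).argmax()

protos = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])   # c'=2 prototypes
feat = np.array([[0.9, 0.1, 0., 0.],                      # r=3 locations,
                 [1.0, 0.05, 0., 0.],                     # all near class 0
                 [0.8, 0.2, 0., 0.]])
pred = dense_predict(feat, protos)
```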
4 Related work
Metric learning is common in few-shot learning. Multiple improvements of the standard softmax and cross-entropy loss have been proposed to this end [49, 22, 53, 45, 9]. Traditional methods like siamese networks are also considered [3, 40, 18], along with models that learn by comparing multiple samples at once [44, 50, 42]. Learning to generate new samples [9] is another direction. Our solution is related to prototypical networks [42] and matching networks [44], but we rather use a parametric classifier.
Meta-learning is the basis of a large portion of the few-shot learning literature. Recent approaches can be roughly classified as: optimization-based methods, which learn to initialize the parameters of a learner such that it becomes faster to fine-tune [5, 28, 29]; memory-based methods, leveraging memory modules to store training samples or to encode adaptation algorithms [39, 35]; data generation methods, which learn to generate new samples [47]; and parameter generating methods, which learn to generate the weights of a classifier [6, 33] or the parameters of a network with multiple layers [1, 7, 48, 8]. The motivation behind the latter is that it should be easier to generate new parameters than to fine-tune a large network or to train a new classifier from scratch. By generating a single linear layer at the end of the network [6, 32, 33], one neglects useful coarse visual information found in intermediate layers. We plug our neural implants at multiple depth levels, taking advantage of such features during fine-tuning and learning new ones.
Network adaptation is common when learning a new task or new domain. One solution is to learn to mask part of the network, keeping useful neurons and retraining or fine-tuning the remaining neurons on the new task [25, 26]. Rusu et al. [38] rather widen the network by adding new neurons in parallel to the old ones at every layer. New neurons receive data from all hidden states, while previously generated weights are frozen when training for the new task. Our neural implants are related to [38], as we add new neurons in parallel and freeze the old ones. Unlike [38], we focus on low-data regimes: we keep the number of new implanted neurons small, to diminish overfitting risks and train faster, and add them only at top layers, taking advantage of generic visual features from bottom layers [51].
5 Experiments
We evaluate our method extensively on the miniImageNet and FC100 datasets. We describe the experimental setup and report results below.
5.1 Experimental setup
Networks. In most experiments we use a ResNet12 network [31] as our embedding network, composed of four residual blocks [11], each having three 3×3 convolutional layers with batch normalization [16] and swish-1 activation function [34]. Each block is followed by 2×2 max-pooling. The shortcut connections have a convolutional layer to adapt to the right number of channels. The first block has 64 channels, which is doubled at each subsequent block such that the output has depth 512. We also test dense classification on a lighter network, C128F [6], composed of four convolutional layers, the first (last) two having 64 (128) channels, each followed by 2×2 max-pooling.

Table 1: Accuracy (%) on miniImageNet novel classes for different embedding networks, with global average pooling (GAP) or dense classification (DC) in stage 1.

| Network  | Pooling | 1-shot       | 5-shot       | 10-shot      |
| C128F    | GAP     | 54.28 ± 0.18 | 71.60 ± 0.13 | 76.92 ± 0.12 |
| C128F    | DC      | 49.84 ± 0.18 | 69.64 ± 0.15 | 74.61 ± 0.13 |
| ResNet12 | GAP     | 58.61 ± 0.18 | 76.40 ± 0.13 | 80.76 ± 0.11 |
| ResNet12 | DC      | 61.26 ± 0.20 | 79.01 ± 0.13 | 83.04 ± 0.12 |
Table 2: Accuracy (%) for different pooling strategies of support examples and queries at testing, with stage 1 training using global average pooling or dense classification. Column headers give support pooling / query pooling.

| Stage 1 training       | Tested on      | GMP / GMP    | GMP / DC     | GAP / GAP    | GAP / DC     |
| Global average pooling | Base classes   | 63.55 ± 0.20 | 77.17 ± 0.11 | 79.37 ± 0.09 | 77.15 ± 0.11 |
|                        | Novel classes  | 72.25 ± 0.13 | 70.71 ± 0.14 | 76.40 ± 0.13 | 73.28 ± 0.14 |
|                        | Both classes   | 37.74 ± 0.07 | 38.65 ± 0.05 | 56.25 ± 0.10 | 54.80 ± 0.09 |
| Dense classification   | Base classes   | 79.28 ± 0.10 | 80.67 ± 0.10 | 80.61 ± 0.10 | 80.70 ± 0.10 |
|                        | Novel classes  | 79.01 ± 0.13 | 77.93 ± 0.13 | 78.55 ± 0.13 | 78.95 ± 0.13 |
|                        | Both classes   | 42.45 ± 0.07 | 57.98 ± 0.10 | 67.53 ± 0.10 | 67.78 ± 0.10 |

GMP: global max-pooling; GAP: global average pooling; DC: dense classification. Bold values in the original denote accuracies within the confidence interval of the best one.
Table 3: Accuracy (%) for different pooling strategies of support examples and queries during stage 2 training, and of queries at testing.

| Stage 2 support | Stage 2 queries | Testing: GAP | Testing: GMP | Testing: DC  |
| GMP             | GMP             | 79.03 ± 0.19 | 78.92 ± 0.19 | 79.04 ± 0.19 |
| GMP             | DC              | 79.06 ± 0.19 | 79.37 ± 0.18 | 79.15 ± 0.19 |
| GAP             | GAP             | 79.62 ± 0.19 | 74.57 ± 0.22 | 79.77 ± 0.19 |
| GAP             | DC              | 79.56 ± 0.19 | 74.58 ± 0.22 | 79.52 ± 0.19 |
Datasets. We use miniImageNet [44], a subset of ImageNet ILSVRC-12 [37] containing 60,000 images of resolution 84×84, uniformly distributed over 100 classes. We use the split proposed in [35]: 64 classes for training, 16 for validation and 20 for testing.

We also use FC100, a few-shot version of CIFAR-100 recently proposed by Oreshkin et al. [31]. Similarly to miniImageNet, CIFAR-100 [19] has 100 classes of 600 images each, although the resolution is 32×32. The split is 60 classes for training, 20 for validation and 20 for testing. Given that all classes are grouped into 20 superclasses, this split does not separate superclasses: classes are more similar within each split and the semantic gap between base and novel classes is larger.
Evaluation protocol. The training set comprises the images of the base classes $C$. To generate the support set of a few-shot task on novel classes $C'$, we randomly sample $c'$ classes from the validation or test set and from each class we sample $k$ images. We report the average accuracy and the corresponding 95% confidence interval over a number of such tasks. More precisely, for all implanting experiments we sample 5,000 few-shot tasks with 30 queries per class, while for all other experiments we sample 10,000 tasks. Using the same task sampling, we also consider few-shot tasks involving base classes $C$, following the benchmark of [6]. We sample a set of extra images from the base classes to form a test set for this evaluation, which is performed in two ways: independently of the novel classes, and jointly on the union $C \cup C'$. In the latter case, base prototypes learned at stage 1 are concatenated with novel prototypes [6].
Implementation details. In stage 1, we train the embedding network for 8,000 (12,500) iterations with mini-batch size 200 (512) on miniImageNet (FC100). On miniImageNet, we use stochastic gradient descent with Nesterov momentum. On FC100, we rather use the Adam optimizer [17]. We initialize the scale parameter $\tau$ differently on miniImageNet and FC100. For a given few-shot task in stage 2, the implants are learned over 50 epochs with the AdamW optimizer [24] and a fixed scale parameter.

5.2 Results
Networks. In Table 1 we compare ResNet12 to C128F, with and without dense classification. We observe that dense classification improves classification accuracy on novel classes for ResNet12, but is detrimental for the small network. C128F is only 4 layers deep and the receptive field at the last layer is significantly smaller than that of ResNet12, which is 12 layers deep. It is thus likely that units from the last feature map correspond to non-object areas in the image. Regardless of the choice of using dense classification or not, ResNet12 has a large performance gap over C128F. For the following experiments, we use exclusively ResNet12 as our embedding network.
Dense classification. To evaluate stage 1, we skip stage 2 and directly perform testing. In Table 2 we evaluate 5-way 5-shot classification on miniImageNet with global average pooling and dense classification at stage 1 training, while exploring different pooling strategies at inference. We also tried using global max-pooling at stage 1 training and got similar results to global average pooling. Dense classification in stage 1 training outperforms global average pooling in all cases by a large margin. It also improves the ability of the network to integrate new classes without forgetting the base ones. Using dense classification at testing as well, the accuracy on both classes is 67.78%, outperforming the best result of 59.35% reported by [6]. At testing, dense classification of the queries with global average pooling of the support samples is the best overall choice. One exception is global max-pooling on both the support and query samples, which gives the highest accuracy on novel classes, but the difference is insignificant.
Table 4: Accuracy (%) on 5-way miniImageNet classification, comparing our models and baselines to prior work.

| Method             | 1-shot       | 5-shot       | 10-shot      |
| GAP                | 58.61 ± 0.18 | 76.40 ± 0.13 | 80.76 ± 0.11 |
| DC (ours)          | 62.53 ± 0.19 | 78.95 ± 0.13 | 82.66 ± 0.11 |
| DC + WIDE          | 61.73 ± 0.19 | 78.25 ± 0.14 | 82.03 ± 0.12 |
| DC + IMP (ours)    | –            | 79.77 ± 0.19 | 83.83 ± 0.16 |
| MAML [5]           | 48.70 ± 1.8  | 63.10 ± 0.9  | –            |
| PN [42]            | 49.42 ± 0.78 | 68.20 ± 0.66 | –            |
| Gidaris et al. [6] | 55.45 ± 0.7  | 73.00 ± 0.6  | –            |
| PN [31]            | 56.50 ± 0.4  | 74.20 ± 0.2  | 78.60 ± 0.4  |
| TADAM [31]         | 58.50        | 76.70        | 80.80        |
Table 5: Accuracy (%) on 5-way FC100 classification.

| Method          | 1-shot       | 5-shot       | 10-shot      |
| GAP             | 41.02 ± 0.17 | 56.63 ± 0.16 | 61.65 ± 0.15 |
| DC (ours)       | 42.04 ± 0.17 | 57.05 ± 0.16 | 61.91 ± 0.16 |
| DC + IMP (ours) | –            | 57.63 ± 0.23 | 62.91 ± 0.22 |
| PN [31]         | 37.80 ± 0.40 | 53.30 ± 0.50 | 58.70 ± 0.40 |
| TADAM [31]      | 40.10 ± 0.40 | 56.10 ± 0.40 | 61.60 ± 0.50 |
Implanting. In stage 2, we add implants of 16 channels to all convolutional layers of the last residual block of our embedding network, pretrained in stage 1 on the base classes with dense classification. The implants are trained on the few examples of the novel classes and then used as an integral part of the widened embedding network at testing. In Table 3, we evaluate different pooling strategies for support examples and queries in stage 2. Average pooling on both is the best choice, which we keep in the following.
Ablation study. In the top part of Table 4 we compare our best solutions with a number of baselines on 5-way miniImageNet classification. One baseline is the embedding network trained with global average pooling in stage 1. Dense classification remains our best training option. In stage 2, the implants are able to further improve on the results of dense classification. To illustrate that our gain does not come just from having more parameters and greater feature dimensionality, another baseline widens the last residual block of the network by 16 channels in stage 1. It turns out that such widening does not bring any improvement on novel classes. Similar conclusions can be drawn from the top part of Table 5, showing corresponding results on FC100. The differences between solutions are less striking here. This may be attributed to the lower resolution of CIFAR-100, allowing for less gain from either dense classification or implanting, since there may be fewer features to learn.
Comparison with the state of the art. In the bottom part of Table 4 we compare our model with previous few-shot learning methods on the same 5-way miniImageNet classification. All our solutions outperform other methods on 1-, 5- and 10-shot classification by a large margin. Our implanted network sets a new state of the art for 5-way 5-shot classification of miniImageNet. Note that the prototypical network on ResNet12 [31] already gives very competitive performance. TADAM [31] builds on top of this baseline to achieve the previous state of the art. In this work we rather use a cosine classifier in stage 1. This setting is our baseline GAP and already gives similar performance to TADAM [31]. Dense classification and implanting are both able to improve on this baseline. Our best results are at least 3% above TADAM [31] in all settings. Finally, in the bottom part of Table 5 we compare our model on 5-way FC100 classification against the prototypical network [31] and TADAM [31]. Our model outperforms TADAM here too, though by a smaller margin.
6 Conclusion
In this work we contribute to few-shot learning by building upon a simplified process for learning on the base classes using a standard parametric classifier. We investigate, for the first time in few-shot learning, the activation maps, and devise a new way of handling spatial information with a dense classification loss that is applied to each spatial location independently, improving the spatial distribution of the activations and the performance on new tasks. Notably, the performance benefit comes with deeper network architectures and high-dimensional embeddings. We further adapt the network to new tasks by implanting neurons with a limited number of new parameters, without changing the original embedding. Overall, this yields a simple architecture that outperforms previous methods by a large margin and sets a new state of the art on standard benchmarks.
References
 [1] L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. Learning feed-forward one-shot learners. In NIPS, 2016.
 [2] L.C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
 [3] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, 2005.
 [4] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Trans. PAMI, 2006.
 [5] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
 [6] S. Gidaris and N. Komodakis. Dynamic few-shot visual learning without forgetting. In CVPR, 2018.
 [7] D. Ha, A. Dai, and Q. V. Le. Hypernetworks. ICLR, 2017.
 [8] C. Han, S. Shan, M. Kan, S. Wu, and X. Chen. Face recognition with contrastive convolution. In ECCV, 2018.
 [9] B. Hariharan and R. B. Girshick. Low-shot visual recognition by shrinking and hallucinating features. ICCV, 2017.
 [10] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. ICCV, 2017.
 [11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
 [12] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
 [13] S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, 2001.
 [14] E. Hoffer, I. Hubara, and D. Soudry. Fix your classifier: the marginal value of training the last weight layer. ICLR, 2018.
 [15] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, 2017.
 [16] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
 [17] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv, 2014.
 [18] G. Koch, R. Zemel, and R. Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICMLW, 2015.
 [19] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
 [20] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia. Path aggregation network for instance segmentation. In CVPR, 2018.
 [21] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In ECCV, 2016.
 [22] W. Liu, Y. Wen, Z. Yu, and M. Yang. Large-margin softmax loss for convolutional neural networks. In ICML, 2016.
 [23] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
 [24] I. Loshchilov and F. Hutter. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101, 2017.
 [25] A. Mallya, D. Davis, and S. Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In ECCV, 2018.
 [26] A. Mallya and S. Lazebnik. PackNet: Adding multiple tasks to a single network by iterative pruning. In CVPR, 2018.
 [27] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel. Meta-learning with temporal convolutions. arXiv preprint arXiv:1707.03141, 2017.
 [28] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel. A simple neural attentive meta-learner. In ICLR, 2018.
 [29] A. Nichol, J. Achiam, and J. Schulman. On first-order meta-learning algorithms. CoRR, abs/1803.02999, 2018.
 [30] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In ICCV, 2015.
 [31] B. N. Oreshkin, A. Lacoste, and P. Rodriguez. TADAM: Task dependent adaptive metric for improved few-shot learning. arXiv preprint arXiv:1805.10123, 2018.
 [32] H. Qi, M. Brown, and D. G. Lowe. Low-shot learning with imprinted weights. In CVPR, 2018.
 [33] S. Qiao, C. Liu, W. Shen, and A. Yuille. Few-shot image recognition by predicting parameters from activations. In CVPR, 2018.
 [34] P. Ramachandran, B. Zoph, and Q. V. Le. Searching for activation functions. In ICLR, 2018.
 [35] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
 [36] J. Redmon and A. Farhadi. YOLO9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242, 2016.
 [37] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. arXiv, 2014.
 [38] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
 [39] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, 2016.
 [40] F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 2015.
 [41] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
 [42] J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In NIPS, 2017.
 [43] S. Thrun. Lifelong learning algorithms. In Learning to learn. 1998.
 [44] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In NIPS, 2016.
 [45] W. Wan, Y. Zhong, T. Li, and J. Chen. Rethinking feature distribution for loss functions in image classification. In CVPR, 2018.
 [46] F. Wang, X. Xiang, J. Cheng, and A. L. Yuille. NormFace: L2 hypersphere embedding for face verification. In ACM Multimedia, 2017.
 [47] Y.-X. Wang, R. Girshick, M. Hebert, and B. Hariharan. Low-shot learning from imaginary data. In CVPR, 2018.
 [48] Y.-X. Wang, D. Ramanan, and M. Hebert. Learning to model the tail. In NIPS, 2017.
 [49] Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, 2016.
 [50] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, 2018.
 [51] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
 [52] F. Yu, V. Koltun, and T. A. Funkhouser. Dilated residual networks. In CVPR, 2017.
 [53] Y. Zheng, D. K. Pal, and M. Savvides. Ring loss: Convex feature normalization for face recognition. In CVPR, 2018.

 [54] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In CVPR, 2016.