MultiGrain: a unified image embedding for classes and instances

02/14/2019 ∙ by Maxim Berman, et al.

MultiGrain is a network architecture producing compact vector representations that are suited both for image classification and particular object retrieval. It builds on a standard classification trunk. The top of the network produces an embedding containing coarse and fine-grained information, so that images can be recognized based on the object class, particular object, or whether they are distorted copies. Our joint training is simple: we minimize a cross-entropy loss for classification and a ranking loss that determines if two images are identical up to data augmentation, with no need for additional labels. A key component of MultiGrain is a pooling layer that allows us to take advantage of high-resolution images with a network trained at a lower resolution. When fed to a linear classifier, the learned embeddings provide state-of-the-art classification accuracy. For instance, we obtain 79.3% top-1 accuracy with a ResNet-50 learned on Imagenet, which is a +1.7% improvement over the AutoAugment method. When compared with cosine similarity, the same embeddings perform on par with the state of the art for image retrieval at moderate resolutions.




1 Introduction

Image recognition is central to computer vision, with dozens of new approaches being proposed every year, each optimized for particular aspects of the problem. From coarse to fine, we may distinguish the recognition of (a) classes, where one looks for a certain type of object regardless of intra-class variations, (b) instances, where one looks for a particular object despite changes in the viewing conditions, and (c) copies, where one looks for a copy of a specific image despite edits. While these problems are in many ways similar, the standard practice is to use specialized, and thus incompatible, image representations for each case.

Specialized representations may be accurate, but constitute a significant bottleneck in some applications. Consider for example image retrieval, where the goal is to match a query image to a large database of other images. Very often one would like to search the same database with multiple granularities, by matching the query by class, instance, or copy. The performance of an image retrieval system depends primarily on the image embeddings it uses. These strike a trade-off between database size, matching and indexing speed, and retrieval accuracy. Adopting multiple embeddings, narrowly optimized for each type of query, means multiplying the resource usage.

Figure 1: Our goal is to extract an image descriptor incorporating different levels of granularity, so that we can solve, in particular, classification and particular object recognition tasks: The descriptor is either fed to a linear classifier, or directly compared with cosine similarity.

In this paper we present a new representation, MultiGrain, that can achieve the three tasks together, regardless of differences in their semantic granularity, see fig. 1. We learn MultiGrain by jointly training an image embedding for multiple tasks. The resulting representation is compact and outperforms narrowly-trained embeddings.

Instance retrieval has a wide range of industrial applications, including detection of copyrighted images and exemplar-based recognition of unseen objects. In settings where billions of images have to be treated, it is of interest to obtain image embeddings suitable for more than one recognition task. For instance, an image storage platform is likely to perform some classification of the input images, aside from detecting copies or instances of the same object. An embedding relevant to all these tasks advantageously reduces both the computing time per image and the storage space.

In this perspective, convolutional neural networks (CNNs) trained only for classification already go a long way towards universal feature extractors. The fact that we can learn image embeddings that are simultaneously good for classification and instance retrieval is surprising but not contradictory. In fact, there is a logical dependence between the tasks: images that contain the same instance also contain, by definition, the same class; and copied images contain the same instance. This is in contrast to multi-task settings where tasks are in competition and are thus difficult to combine. Instead, class, instance, and copy congruency all lead to embeddings that should be close in feature space. Still, the degree of similarity differs across cases, with classification requiring more invariance to appearance variations and copy detection requiring sensitivity to small image details.

In order to learn an image representation that satisfies the different trade-offs, we start from an existing image classification network. We use a generalized mean layer that converts a spatial activation map to a fixed-size vector. Most importantly, we show that it is an effective way to learn an architecture that can adapt to different resolutions at test time and offer higher accuracy. This circumvents the massive engineering and computational effort needed to learn networks for larger input resolutions [He2016IdentityMI].

The joint training of classification and instance recognition objectives is based on cross-entropy and contrastive losses, respectively. Remarkably, instance recognition is learned for free, without using labels specific to instance recognition or image retrieval: we simply use the identity of the images as labels, and data augmentation as a way to generate different versions of each image.

In summary, our main contributions are as follows:

  • We introduce the MultiGrain architecture, which outputs an image embedding incorporating different levels of granularity. Our dual classification+instance objective improves the classification accuracy on its own.

  • We show that part of this gain is due to the batching strategy, where each batch contains repeated instances of its images with different data augmentations for the purpose of the retrieval loss;

  • We incorporate a pooling layer inspired by image retrieval. It provides a significant boost in classification accuracy when provided with high-resolution images.

Overall, our architecture offers competitive performance both for classification and image retrieval. Notably, we report a significant boost in accuracy on Imagenet with a ResNet-50 network over the state of the art.

The paper is organized as follows. Section 2 introduces related works. Section 3 introduces our architecture, the training procedure and explains how we adapt the resolution at test time. Section 4 reports the main experiments.

2 Related work

Image classification.

Most computer vision architectures designed for a wide range of tasks leverage a trunk architecture initially designed for classification, such as Residual networks [he16resnet]. An improvement on the trunk architecture eventually translates to better accuracies in other tasks [he2017mask], as shown on the detection task of the LSVRC'15 challenge. While recent architectures [hu2018squeeze, huang2017densely, Xie2017AggregatedRT] have exhibited some additional gains, other lines of research have been investigated successfully. For instance, a recent trend [mahajan2018exploring] is to train high-capacity networks by leveraging much larger training sets of weakly annotated data. To our knowledge, the state of the art on the ImageNet ILSVRC 2012 benchmark for a model learned from scratch on Imagenet train data only is currently held by the gigantic AmoebaNet-B architecture [huang2018gpipe] (557M parameters), which takes 480x480 images as input.

In our paper, we choose ResNet-50 [he16resnet] (25.6M parameters), as this architecture is adopted in the literature in many works both on image classification and instance retrieval.

Image search: from local features to CNN.

“Image search” is a generic retrieval task that is usually associated with and evaluated on more specific problems such as landmark recognition [Jegou2008HammingEA, Philbin07], particular object recognition [nister2006scalable] or copy detection [Douze2009EvaluationOG], for which the objective is to find the images most similar to the query in a large image collection. In this paper “image retrieval” will refer to instance-level retrieval, where object instances are as broad as possible, not restricted to buildings as in the Oxford/Paris benchmarks. Effective systems for image retrieval rely on accurate image descriptors. Typically, a query image is described by an embedding vector, and the task amounts to searching the nearest neighbors of this vector in the embedding space. Possible improvements include refinement steps such as geometric verification [Philbin07], query expansion [chum2007total, TOLIAS20143466], or database-side pre-processing or augmentation [tolias2016image, turcot2009better].

Local image descriptors are traditionally aggregated to global image descriptors suited for matching in an inverted database, as in the seminal bag-of-words model [Sivic2003VideoGA]. After the emergence of convolutional neural networks (CNNs) for large-scale classification on ImageNet [krizhevsky2012imagenet, ILSVRC15], it has become apparent that CNNs trained on classification datasets are very competitive image feature extractors for various vision tasks, including instance retrieval [babenko2014neural, gong2014multi, Razavian2014CNNFO].

Specific architectures for particular object retrieval

are built upon a regular classification trunk, modified so that the pooling stage preserves more spatial locality in order to cope with small objects and clutter. For instance, a competitive baseline for instance retrieval on various datasets is the R-MAC image descriptor [Tolias2015ParticularOR]. It aggregates regionally pooled features extracted from a CNN. The authors show that this specialized pooling combined with PCA whitening [jegou2012negative] leads to efficient many-to-many comparisons between image regions, highly beneficial to image retrieval. Gordo et al. [Gordo2016DeepIR, Gordo2017EndtoEndLO] show that fine-tuning these regionally-aggregated representations end-to-end on an external image retrieval dataset using a ranking loss yields significant improvements for instance retrieval.

Radenović et al. [radenovic2018fine] show that R-MAC pooling is advantageously replaced by generalized mean pooling (see section 3.1), a spatial pooling of the features raised to an exponent over the whole image. The exponentiation localizes the features on the points of interest in the image, replacing the regional aggregation of R-MAC.

Multi-task training

is an active area of research [Kokkinos2017UberNetTA, Zamir2018TaskonomyDT], motivated by the observation that deep neural networks are transferable to a wide range of vision tasks [Razavian2014CNNFO]. Moreover, trained deep neural networks exhibit a high level of compressibility [han2015deep]. In some cases, sharing the capacity of neural networks between different tasks through shared parameters helps the learning by allowing complementary training among datasets and low-level features. Despite some successes with multi-task networks for vision such as UberNet [Kokkinos2017UberNetTA], the design and training of multi-task networks still involve numerous heuristics. Ongoing lines of work include finding the right architecture for an efficient sharing of parameters [rebuffi2018efficient], and finding the right optimization parameters for such networks in order to depart from the traditional setting of single-task single-dataset end-to-end gradient descent, and to efficiently weight the gradients in order to obtain a network that performs well on all tasks [guo2018dynamic].

Data augmentation

is a cornerstone of training in large-scale vision applications [krizhevsky2012imagenet], which improves generalization and reduces over-fitting. In a stochastic gradient descent (SGD) optimization setting, we show that including multiple data-augmented instances of the same image in one optimization batch, rather than having only distinct images in the batch, significantly enhances the effect of data augmentation and improves the generalization of the network. A related batch augmented (BA) sampling strategy was concurrently introduced by Hoffer et al. [2019arXiv190109335H]. When augmenting the size of the batches in a large-scale distributed optimization of a neural network, they show that filling these bigger batches with data-augmented copies of the images in the batch yields better generalization performance, and uses computing resources more efficiently through reduced data processing time. As discussed in section 3.3 and highlighted in our classification results (section 4.4), we show that a gain in performance under this sampling scheme is obtained using the same batch size, with a lower number of distinct images per batch. We consider this scheme of repeated augmentations (RA) within the batch as a way to boost the effect of data augmentation over the course of the optimization. Our results indicate that RA is a technique of general interest, beyond large-scale distributed training applications, for improving the generalization of neural networks.

3 Architecture design

Our goal is to develop a convolutional neural network that is suitable for both image classification and instance retrieval. In the current best practices, the architectures and training procedures used for class and instance recognition differ in a significant manner.

This section describes such technical differences, summarized in table 1, together with our solutions to bridge them. This leads us to a unified architecture, shown in fig. 2, that we jointly train for both tasks in an end-to-end manner.

                    classification     retrieval
  spatial pooling   avg. pooling       R-MAC [TOLIAS20143466] or GeM [radenovic2018fine]
  loss              cross-entropy      triplet [Gordo2016DeepIR]
  batch sampling    diverse images     similar images in batch
  whitening         no                 yes
  resolution        low                high
Table 1: Differences between classification and image retrieval: Retrieval architectures incorporate a final pooling layer that is regionalized (R-MAC) or magnifies activations (GeM). The triplet loss requires a batching strategy with pairs of matching images.

3.1 Spatial pooling operators

This section considers the final, global spatial pooling layer. Local pooling operators, usually max pooling, are found throughout the layers of most convolutional networks to achieve local invariance to small translations. By contrast, global spatial pooling converts a 3D tensor of activations produced by a convolutional trunk to a vector.


In early models such as LeNet-5 [lecun1989backpropagation] or AlexNet [krizhevsky2012imagenet], the final spatial pooling is just a linearization of the activation map. It is therefore sensitive to the absolute location. Recent architectures such as ResNet and DenseNet employ average pooling, which is permutation invariant and hence offers a more global translation invariance.

Figure 2: Overview of our Multigrain architecture.

Image retrieval

requires more localized geometric information: particular objects or landmarks are visually more similar, but the task suffers more from clutter, and a given query image has no specific training data devoted to it. This is why the pooling operator tries to favor more locality. Next we discuss the generalized mean pooling operator.

Let x ∈ R^{C×H×W} be the feature tensor computed by a convolutional neural network for a given image, where C is the number of feature channels and H and W are the height and width of the map, respectively. We denote by u ∈ Ω = {1, …, H} × {1, …, W} a “pixel” in the map, by c the channel, and by x_{cu} the corresponding tensor element: x = [x_{cu}]. The generalized mean pooling (GeM) layer computes the generalized mean of each channel in the tensor. Formally, the GeM embedding e = [e_1, …, e_C] is given by

  e_c = ( (1/|Ω|) ∑_{u∈Ω} x_{cu}^p )^{1/p},    (1)

where p ≥ 1 is a parameter. Setting this exponent as p > 1 increases the contrast of the pooled feature map and focuses on the salient features of the image [Bo2009EfficientMK, Boureau2010ATA, dollar2009integral]. GeM is a generalization of the average pooling commonly used in classification networks (p = 1) and of spatial max-pooling (p → ∞). It was employed in the original R-MAC as an approximation of max pooling [dollar2009integral], yet only recently [radenovic2018fine] was it shown to be competitive on its own with R-MAC for image retrieval.
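As a concrete illustration, equation (1) can be sketched in a few lines of NumPy (a minimal sketch; the actual layer operates on batched GPU tensors and back-propagates through p):

```python
import numpy as np

def gem_pool(x, p=3.0, eps=1e-6):
    """Generalized mean (GeM) pooling of a feature tensor x of shape (C, H, W)
    into a C-dimensional embedding: e_c = (mean_u x_cu^p)^(1/p)."""
    x = np.clip(x, eps, None)  # activations are assumed non-negative (post-ReLU)
    return np.mean(x ** p, axis=(1, 2)) ** (1.0 / p)
```

With p = 1 this reduces to average pooling; as p grows, each channel's pooled value approaches its spatial maximum.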

To the best of our knowledge, this paper is the first to apply and evaluate GeM pooling in an image classification setting. More importantly, we show later in this paper that adjusting the exponent p is an effective way to change the input image resolution between train and test time for all tasks; this also explains why image retrieval, which employs higher-resolution images, has benefited from this pooling.

3.2 Training objective

In order to combine the classification and retrieval tasks, we use a joint objective function composed of a classification loss and an instance retrieval loss. The two-branch architecture is illustrated in  fig. 2 and detailed next.

Classification loss.

For classification, we adopt the standard cross-entropy loss. Formally, let e_i be the embedding computed by the deep network for image i, w_c the parameters of a linear classifier for class c, and y_i the ground-truth class for that image. Then

  ℓ_class(e_i, y_i) = −⟨w_{y_i}, e_i⟩ + log ∑_c exp ⟨w_c, e_i⟩,    (2)

where the sum runs over all classes c. By adding a constant channel to the feature vector, the bias of the classification layer is incorporated in its weight matrix; we omit it for simplicity.
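In code, loss (2) and the bias-absorption trick look as follows (a sketch; when a bias is used, W carries an extra column matching the constant channel appended to e):

```python
import numpy as np

def cross_entropy(e, W, y):
    """-<w_y, e> + log sum_c exp <w_c, e>, for an embedding e of shape (d,),
    classifier weights W of shape (n_classes, d), and ground-truth class y.
    Appending a constant 1 to e folds the biases into one column of W."""
    scores = W @ e
    scores = scores - scores.max()  # numerical stability; the shift cancels out
    return float(-scores[y] + np.log(np.sum(np.exp(scores))))
```

For two classes with equal scores, the loss is log 2, as expected for a uniform prediction.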

Retrieval loss.

For image retrieval, the embeddings of two matching images (a positive pair) should have a distance smaller than the embeddings of non-matching images (a negative pair). This can be enforced in two ways. The contrastive loss [hadsell2006dimensionality] requires distances between positive pairs to be smaller than a threshold, and distances between negative pairs to be greater. The triplet loss instead requires an image to be closer to a positive sibling than to a negative sibling [schroff2015facenet], which is a relative property of image triplets. These losses require adjusting multiple parameters, including how pairs and triplets are sampled. These parameters are sometimes hard to tune, especially for the triplet loss.

Wu et al. [wu2017sampling] proposed an effective method that addresses these difficulties. Given a batch of images, they re-normalize their embeddings to the unit sphere, sample negative pairs as a function of the embedding similarity, and use those pairs in a margin loss, a variant of contrastive loss that shares some of the benefits of the triplet loss.

In more detail, given two images i and j in a batch with embeddings e_i and e_j, the margin loss is expressed as

  ℓ_retr(e_i, e_j) = max(0, α + y_{ij} (D(e_i, e_j) − β)),    (3)

where D(e_i, e_j) is the Euclidean distance between the normalized embeddings, the label y_{ij} is equal to 1 if the two images match and −1 otherwise, α is the margin (a constant hyper-parameter), and β is a parameter (learned during training together with the model parameters) controlling the volume of the embedding space occupied by the embedding vectors. Due to the normalization, D is equivalent to a cosine similarity, which, up to whitening (section 3.4), is also used in retrieval.
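A minimal sketch of loss (3); the α and β values below are illustrative defaults taken from Wu et al., not values prescribed here, and β is a learnable parameter in actual training:

```python
import numpy as np

def margin_loss(e_i, e_j, y, alpha=0.2, beta=1.2):
    """max(0, alpha + y * (D(e_i, e_j) - beta)) with y in {+1, -1};
    embeddings are L2-normalized before computing the Euclidean distance D."""
    e_i = e_i / np.linalg.norm(e_i)
    e_j = e_j / np.linalg.norm(e_j)
    d = np.linalg.norm(e_i - e_j)
    return max(0.0, alpha + y * (d - beta))
```

An identical positive pair (D = 0) incurs no loss, while a far-apart positive pair is penalized linearly in its distance.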

Loss (3) is computed on a subset of positive and negative pairs selected with the distance-weighted sampling of [wu2017sampling]: the conditional probability of choosing a negative j for image i is

  P(j | i) ∝ min(λ, q(D(e_i, e_j))^{−1}),

where λ is a parameter and q is the PDF of pairwise distances between points uniformly distributed on the unit sphere, which depends on the embedding dimension d.
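A sketch of the distance-weighted negative sampling, using the sphere distance density from Wu et al., q(d) ∝ d^{n−2}(1 − d²/4)^{(n−3)/2} in dimension n; λ is the clipping parameter, and the value used below is only an example:

```python
import numpy as np

def negative_sampling_probs(dists, dim, lam=0.5):
    """Probability of picking each candidate negative, proportional to
    min(lam, 1/q(d)), where q is the pairwise-distance density on the
    unit sphere in R^dim; clipping by lam avoids over-sampling outliers."""
    d = np.clip(np.asarray(dists, dtype=float), 1e-4, 2.0 - 1e-4)
    log_q = (dim - 2) * np.log(d) + 0.5 * (dim - 3) * np.log(1.0 - d ** 2 / 4.0)
    w = np.minimum(lam, np.exp(-log_q))
    return w / w.sum()
```

Because q peaks near √2 in high dimension, inverse weighting up-samples unusually close (hard) and unusually far negatives relative to uniform sampling.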

The use of distance-weighted sampling with the margin loss is well suited to our joint training setting: this framework tolerates relatively small batch sizes while requiring only a small number of positive images (3 to 5) of each instance in the batch, without the need for elaborate parameter tuning or offline sampling.

Joint loss and architecture.

The joint loss is a combination of the classification and retrieval losses, weighted by a factor λ ∈ [0, 1]. For a batch B of images, with P(B) the set of pairs considered by the retrieval loss, the joint loss writes as

  ℓ_joint(B) = (λ / |B|) ∑_{i∈B} ℓ_class(e_i, y_i) + ((1 − λ) / |P(B)|) ∑_{(i,j)∈P(B)} ℓ_retr(e_i, e_j),

i.e., the losses are normalized by the number of items in the corresponding summations.
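Putting the two terms together is then a one-liner (a sketch; λ = 0.5 is just a sample value, analyzed in section 4.3):

```python
import numpy as np

def joint_loss(class_losses, retr_losses, lam=0.5):
    """lam * mean(classification losses) + (1 - lam) * mean(retrieval losses),
    each term normalized by the number of items in its own summation."""
    return lam * float(np.mean(class_losses)) + (1.0 - lam) * float(np.mean(retr_losses))
```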

3.3 Batching with repeated augmentation (RA)

Here, we propose to use only a training dataset built for image classification, and to train instance recognition via data augmentation. The rationale is that data augmentation produces another image that contains the same object instance. This approach does not require any annotation beyond the standard classification labels.

We introduce a new sampling scheme for training with SGD and data augmentation, which we refer to as repeated augmentations (RA). In RA we form an image batch by sampling a number of distinct images from the dataset, and transform each of them several times by a set of data augmentations to fill the batch. Thus, the instance-level ground-truth is y_{ij} = 1 iff images i and j are two augmented versions of the same training image. The key difference with the standard sampling scheme in SGD is that samples are not independent, as augmented versions of the same image are highly correlated. While this strategy reduces the performance if the batch size is small, for larger batch sizes RA outperforms the standard i.i.d. scheme – while using the same batch size and learning rate for both schemes. This is different from the observation of [2019arXiv190109335H], who also consider repeated samples in a batch, but simultaneously increase the size of the latter.

We conjecture that the benefit of correlated RA samples is to facilitate learning features that are invariant to the only difference between the repeated images — the augmentations. By comparison, with standard SGD sampling, two versions of the same image are seen only in different epochs. A study of an idealized problem illustrates this phenomenon in the supplementary material.
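The RA batch construction can be sketched as follows (a minimal sketch; `m` repetitions per image is our notation here, and each copy later receives an independent data augmentation):

```python
import random

def ra_batch_indices(num_images, batch_size, m=3, rng=random):
    """Sample ceil(batch_size / m) distinct image indices and repeat each m
    times; every repeat is augmented independently when the batch is built."""
    n_distinct = -(-batch_size // m)  # ceiling division
    chosen = rng.sample(range(num_images), n_distinct)
    batch = [i for i in chosen for _ in range(m)]
    return batch[:batch_size]
```

Unlike i.i.d. sampling, each batch then contains several correlated views of the same images, which is exactly what the retrieval loss needs as positive pairs.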


3.4 PCA whitening

In order to transfer features learned via data augmentation to standard retrieval datasets, we apply a step of PCA whitening, in accordance with previous works in image retrieval [Gordo2017EndtoEndLO, jegou2012negative]. The Euclidean distance between transformed features is equivalent to the Mahalanobis distance between the input descriptors. This is done after training the network, using an external dataset of unlabelled images.

The effect of PCA whitening can be undone in the parameters of the classification layer, so that the whitened embeddings can be used for both classification and instance retrieval. In detail, let e be an image embedding vector and w_c the weight vector for class c, such that ⟨w_c, e⟩ are the outputs of the classifier as in eq. 2. The whitening operation can be written as Φ(e) = S(e − μ) [Gordo2017EndtoEndLO], given the whitening matrix S and centering vector μ; hence

  ⟨w_c, e⟩ = ⟨w′_c, Φ(e)⟩ + b_c,

where w′_c = S^{−⊤} w_c and b_c = ⟨w_c, μ⟩ are the modified weight and bias for class c. We observed that inducing decorrelation via a loss [Cogswell2016ReducingOI] is insufficient to ensure that features generalize well, which concurs with prior works [Gordo2017EndtoEndLO, radenovic2018fine].

3.5 Input sizes

The standard practice in image classification is to resize and center-crop input images to a relatively low resolution, e.g. 224x224 pixels [krizhevsky2012imagenet]. The benefits are a smaller memory footprint, faster inference, and the possibility of batching the inputs if they are cropped to a common size. On the other hand, image retrieval typically depends on finer details in the images, as an instance can be seen under a variety of scales and cover only a small number of pixels. The currently best-performing feature extractors for image retrieval therefore commonly use large input sizes for the largest side [Gordo2017EndtoEndLO, radenovic2018fine], without cropping the image to a square. This is impractical for end-to-end training of a joint classification and retrieval network.

Instead, we train our architecture at the standard resolution, and use larger input resolutions at test time only. This is possible due to a key advantage of our architecture: a network trained with a pooling exponent p at a given resolution can be evaluated at a larger resolution using a larger pooling exponent p* > p, see our validation in section 4.4.

Proxy task for cross-validation of p*.

In order to select the exponent p*, suitable for all tasks, we create a synthetic retrieval task, IN-aug, in between classification and retrieval. We sample images from the training set of ImageNet, a fixed number per class, and create 5 augmented copies of each of them, using the “full” data augmentation described before.

We evaluate the retrieval accuracy on IN-aug in a fashion similar to UKBench [nister2006scalable], with an accuracy ranging from 0 to 5 measuring how many of the 5 augmentations are ranked in the top 5 positions. We pick the best-performing p* on IN-aug, as a function of the training exponent p and of the test resolution.

The optimal p* obtained on IN-aug provides a trade-off between retrieval and classification. Experimentally, we observed that other choices are suitable for setting this parameter: fine-tuning the exponent alone by back-propagation of the cross-entropy loss, using training inputs at a given resolution, provides similar results and similar values of p*.
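The UKBench-style IN-aug score can be computed as follows (a sketch; embeddings are assumed L2-normalized so that the dot product is the cosine similarity, and `db_source_ids` records which training image each database entry was augmented from):

```python
import numpy as np

def in_aug_score(query_emb, db_embs, db_source_ids, query_source_id, k=5):
    """Count how many of the top-k retrieved database items are augmented
    copies of the query's source image (score between 0 and k)."""
    sims = db_embs @ query_emb                 # cosine similarities
    topk = np.argsort(-sims)[:k]               # indices of the k best matches
    return int(sum(db_source_ids[i] == query_source_id for i in topk))
```

Averaging this score over all queries gives the IN-aug accuracy used to select p*.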

4 Experiments and Results

After presenting the datasets, we provide a parametric study and our results in image classification and retrieval.

4.1 Experimental settings

Base architecture and training settings.

The convolutional trunk is ResNet-50 [he16resnet]. SGD starts from an initial learning rate which is reduced tenfold at fixed epochs over the course of training (a standard setting [paszke2017automatic]). The batch size is fixed and an epoch is defined as a fixed number of iterations. With uniform batch sampling, one epoch corresponds to two passes over the training set; with RA, one epoch corresponds to a fraction of the images of the training set. All classification baselines are trained using this longer schedule for a fair comparison.

Data augmentation.

We use standard flips, random resized crops [howard2013some], random lighting noise and color jittering of brightness, contrast and saturation [krizhevsky2012imagenet, howard2013some]. We refer to this set of augmentations as “full”; see details in supplemental C. As indicated in table 2, when trained with cross-entropy alone and uniform batch sampling under our chosen schedule and data augmentation, our network reaches a top-1 validation accuracy on the high end of those reported for the ResNet-50 network [goyal2017accurate, he16resnet] without specially-crafted regularization terms [zhang2018mixup] or data augmentations [cubuk2018autoaugment].

Figure 3: An off-the-shelf ResNet-50 reacts strongly on channel 909 of the last activation map for class “racing car”. The image on the left is a hard example for the class. We show channel 909 for that image, at several resolutions and with different GeM exponents p. In the low-resolution version, the cars are too small to be visible individually on the activation map. In the full-resolution version, the location of the cars is clearer. In addition, a larger p reduces the noisy detections relative to the true locations.

Pooling exponent.

During the end-to-end training of our network, we consider two settings for the pooling exponent in the GeM layer of section 3.1: p = 1 or p = 3. p = 1 corresponds to average pooling, as used in classification architectures. The relevant literature [radenovic2018fine] and our preliminary experiments on off-the-shelf classification networks suggest that p = 3 improves the retrieval performance on standard benchmarks. Figure 3 illustrates this choice. By setting p = 3, the car is detected with high confidence and without spurious detections. Boureau et al. [Boureau2010ATA] analyse average- and max-pooling of sparse features. They find that when the number of pooled features increases, it is beneficial to make them more sparse, which is consistent with the observation we make here.

Input size and cropping.

As described in section 3.5, we train our network on crops of size 224x224 pixels. For testing, we experiment with computing MultiGrain embeddings at several resolutions. At resolution 224, we follow the classical image classification protocol “resolution 224”: the smallest side of the image is resized to 224 and a 224x224 central crop is extracted. At larger resolutions, we instead follow the protocol common in image retrieval: we resize the largest side of the image to the desired number of pixels and evaluate the network on the rectangular image, without cropping.
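The two protocols differ in which side is constrained; this can be sketched as follows (the protocol names and the helper are ours, for illustration):

```python
def eval_input_size(h, w, s, protocol):
    """Output (height, width) fed to the network at test resolution s.
    'classification': smallest side resized to s, then s x s center crop.
    'retrieval': largest side resized to s, aspect ratio kept, no crop."""
    if protocol == "classification":
        return (s, s)
    scale = s / max(h, w)
    return (round(h * scale), round(w * scale))
```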

Margin loss and batch sampling.

We use 3 data-augmented repetitions per batch, and the default margin loss hyperparameters of [wu2017sampling] (details in supplementary B). As in [wu2017sampling], the distance-weighted sampling is performed independently on each of the 4 GPUs used for training.


We train our networks on the ImageNet-2012 training set of 1.2 million images labelled into 1,000 object categories [ILSVRC15]. Classification accuracies are reported on the validation images of this dataset. For image retrieval, we report the mean average precision on the Holidays dataset [Jegou2008HammingEA], with images rotated manually when necessary, as in prior evaluations on this dataset [Gordo2016DeepIR]. We also report the accuracy on the UKB object recognition benchmark [nister2006scalable], which contains object instances under 4 varying viewpoints each; each image is used as a query to find its 4 closest neighbors in embedding space; the number of correct neighbors is averaged across all images, yielding a maximum score of 4. We also report the performance of our network in a copy detection setting, indicating the mean average precision on the “strong” subset of the INRIA Copydays dataset [Douze2009EvaluationOG]. We add 10K distractor images randomly sampled from the YFCC100M large-scale collection of unlabelled images [Thomee2016YFCC100MTN]. We call the combination C10k.

The PCA whitening transformations are computed from the features of 20K images from YFCC100M, distinct from the C10k distractors.

4.2 Expanding resolution with pooling exponent

Figure 4: Retrieval and classification accuracies as a function of the pooling exponent p* and the image resolution: (a) Holidays (mAP); (b) ImageNet val (top-1). At training time, the pooling exponent was p = 3. Note the clear interaction between the resolution and the pooling exponent p*.

As our reference scheme, we train the network at resolution 224x224 with RA sampling and pooling exponent p = 3. When testing on images at the same 224x224 resolution, this gives a top-1 validation accuracy on Imagenet above the non-RA baseline, see table 2.

We now feed larger images at test time, i.e., we consider several resolutions and vary the exponent p* at test time. Figures 4(a) and 4(b) show the classification accuracy on the ImageNet validation set and the retrieval accuracy on Holidays at different resolutions, for different values of the test pooling exponent p*. As expected, at the training resolution, the pooling exponent yielding the best accuracy in classification is the exponent with which the network has been trained, p* = p. Observe that testing at a larger scale requires an exponent p* > p, both for classification and for retrieval.

In the following, we adopt the values of p* obtained by our cross-validation on IN-aug, see section 3.5.

4.3 Analysis of the tradeoff parameter λ

We now analyze the impact of the tradeoff parameter λ. Note that this parameter does not directly reflect the relative importance of the two loss terms during training, since these are not homogeneous: λ = 0.5 does not mean that they have equal importance. Figure 5 analyzes the actual relative importance of the classification and margin loss terms, by measuring the average norm of the gradient back-propagated through the network at epochs 0 and 120. One can see that λ = 0.5 means that the classification loss has slightly more weight at the beginning of the training. The classification term becomes dominant at the end of the training, meaning that the network has already learned to cancel the data augmentation.

In terms of performance, a small λ leads to poor classification accuracy. Interestingly, the classification accuracy is higher for the intermediate λ = 0.5 than for λ = 1, see table 2. Thus, the margin loss leads to a performance gain for the classification task.

Figure 5: Relative contributions of the classification and retrieval loss terms, measured as the fraction of the back-propagated gradient norm originating from each loss component. Note how the retrieval loss's influence decreases over the epochs.

We set λ = 0.5 in our following experiments, as it gives the best classification accuracy at practical resolutions. As a reference, we also report a few results with λ = 1.
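The objective behind λ is a convex combination of the two loss terms, which can be sketched as follows (the function name is ours):

```python
def multigrain_loss(class_loss, margin_loss, lam=0.5):
    """Weighted combination of the cross-entropy and ranking-margin terms.

    lam = 1 recovers pure classification training. As discussed above, lam
    does not equalize the terms' gradient magnitudes, which must be
    measured empirically (cf. figure 5).
    """
    return lam * class_loss + (1.0 - lam) * margin_loss
```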

4.4 Classification results

From now on, our MultiGrain nets are trained at resolution 224 with exponent p = 1 (standard average pooling) or p = 3 in the GeM pooling. For each evaluation resolution, the exponent p* is selected according to section 3.5, yielding a single embedding for both classification and retrieval. Table 2 presents the classification results. There is a large improvement in classification performance from our baseline ResNet-50 with p = 1 and "full" data augmentation (76.2% top-1 accuracy), to a MultiGrain model with p = 3 evaluated at a larger resolution (78.6% top-1). We identify four sources for this improvement:

  1. Repeated augmentations: adding RA batch sampling (section 3.3) yields a significant improvement.

  2. Margin loss: the retrieval loss reinforces the generalizing effect of data augmentation, giving a further gain.

  3. GeM pooling: GeM at training time (section 3.1) allows the margin loss to have a much stronger effect, thanks to the increased localization of the features.

  4. Expanding resolution: evaluating at a higher resolution adds a further gain, bringing the MultiGrain network to the 78.6% top-1 accuracy reported above. This is made possible by the p = 3 training, which yields sparser features that generalize better across resolutions, and by the p* pooling adaptation; without it, the performance at this resolution is markedly lower.

The p* selection for evaluation at higher resolutions has its limits: at very large resolutions the accuracy drops (even more so without the adaptation), due to the large discrepancy between the training and testing scales of the feature extractor.

Architecture data resol. train-time pooling
ResNet-50 full / /
MultiGrain full / /
MultiGrain full / /
MultiGrain AA / /
MultiGrain full / /
MultiGrain AA / /
MultiGrain full / /
MultiGrain AA / /
PyTorch model zoo /
mixup [zhang2018mixup] /
BA [2019arXiv190109335H] /
AutoAugment [cubuk2018autoaugment] /
Table 2: ImageNet 2012 validation performance, top-1 / top-5 accuracies (%). ResNet-50 is a classification baseline trained with cross-entropy with our training schedule, data augmentation, and uniform batch sampling. MultiGrain uses the same ResNet-50 trunk. At resolutions above 224 we evaluate with the exponent p* described in section 3.5.



AutoAugment (AA) [cubuk2018autoaugment] is a method that learns data augmentation with reinforcement-learning techniques to improve the accuracy of classification networks on ImageNet. We directly integrate the data augmentations found by their algorithm. The authors trained their ResNet-50 model with a long schedule of 270 passes over the dataset. We have observed that this longer training gives more impact to the AA-generated augmentations; we therefore also use a longer schedule, keeping the batch size unchanged.

Our method benefits from this data augmentation: MultiGrain reaches a higher top-1 accuracy at resolution 224 with p = 3 and λ = 0.5. To the best of our knowledge, this is the best top-1 accuracy reported for ResNet-50 when training and evaluating at this resolution, significantly higher than the accuracy reported with AutoAugment alone [cubuk2018autoaugment] or with mixup [zhang2018mixup]. Using a higher resolution at test time improves the accuracy further: we obtain 79.3% top-1 accuracy, a +1.7% improvement over AutoAugment alone. Our strategy of adapting the pooling exponent to a larger resolution is thus still effective, and significantly outperforms the state-of-the-art performance for a ResNet-50 learned on ImageNet at training resolution 224.

  Method resol. Holidays UKB CD10k
  MultiGrain 500 91.8 3.89 81.1
  MultiGrain 800 91.6 3.91 82.5
  MultiGrain 500 91.5 3.90 80.7
  MultiGrain 800 92.5 3.91 78.6
  Fisher vectors [jegou2012aggregating] 800 63.4 3.35 42.7
  Neural codes [babenko2014neural] 224 79.3 3.56
  ResNet-50 RMAC [Gordo2016DeepIR] 724 90.9
  ResNet-50 RMAC [Gordo2016DeepIR] 1024 93.3
  ResNet-101 RMAC [Gordo2017EndtoEndLO] 800 91.4 3.89
  GeM [radenovic2018fine] 1024 93.9
Table 3: Instance search results and baselines, on Holidays (% mAP), UKB (score out of 4), and CD10k (% mAP). We use p = 3 pooling at training time for our MultiGrain models, and set p* as given in section 3.5. GeM [radenovic2018fine] is fine-tuned at resolution 362x362 on additional images tailored to the retrieval task; their best result is obtained with multi-scale input and implies additional processing.

4.5 Retrieval results

We present our retrieval results in table 3, with an ablation study and copy-detection results in the supplemental material (appendix D). Our MultiGrain nets improve accuracy on all datasets with respect to the ResNet-50 baseline at comparable resolutions. Repeated augmentation (RA) is again a key ingredient in this context.

We compare with baselines that use no annotated retrieval dataset. [Gordo2016DeepIR, Gordo2017EndtoEndLO] report off-the-shelf network accuracies with R-MAC pooling. MultiGrain compares favorably with their results at a comparable resolution (800 pixels). They reach accuracies above 93% mAP on Holidays, but this requires a resolution of 1000 pixels or more.

It is also worth noting that we reach reasonable retrieval performance at resolution 500, which is an interesting operating point compared to the traditional inference resolutions for retrieval. Indeed, a forward pass of ResNet-50 is several times faster at resolution 500 than at resolution 1024, since the cost grows roughly quadratically with the image size. Because of this quadratic increase in timing, and the single embedding computed by MultiGrain, our solution is particularly well suited to large-scale or low-resource vision applications.
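At search time, ranking with these embeddings reduces to cosine similarity between L2-normalized vectors, as in this minimal sketch (function names are ours):

```python
import math

def l2_normalize(v):
    """Scale a vector to unit L2 norm (zero vectors are left unchanged)."""
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def rank_by_cosine(query, database):
    """Return database indices sorted by decreasing cosine similarity."""
    q = l2_normalize(query)
    sims = []
    for i, d in enumerate(database):
        dn = l2_normalize(d)
        sims.append((i, sum(a * b for a, b in zip(q, dn))))
    return [i for i, _ in sorted(sims, key=lambda t: -t[1])]
```

Since all database embeddings are normalized once offline, each query costs a single matrix-vector product, which is what makes a compact unified embedding attractive at scale.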

For comparison, we also report some older related results on the UKB and CD10k datasets, which are not competitive with MultiGrain. Neural codes [babenko2014neural] is one of the first works on retrieval with deep features. The Fisher vector [jegou2012aggregating] is a pooling method based on local SIFT descriptors.

At some resolutions, the results with the margin loss (λ = 0.5) are slightly lower than without it (λ = 1). This is partly due to the limited transfer from the IN-aug task to the variations observed in retrieval datasets.

5 Conclusion

In this work we have introduced MultiGrain, a unified embedding for image classification and instance retrieval. MultiGrain relies on a classical convolutional neural network trunk, with a GeM pooling layer topped by two heads at training time. We have found that by adjusting this pooling layer we can increase the resolution of the images used at inference time, while maintaining a small resolution at training time. We have shown that MultiGrain embeddings perform well on both classification and retrieval. Interestingly, MultiGrain also sets a new state of the art on pure classification compared to all results obtained with the same convolutional trunk. Overall, our results show that the retrieval and classification tasks can benefit from each other.

An implementation of our method is open-sourced at


Appendix A Data-augmented batches: toy model

We have observed in sections 3.3 and 4.3 that training our architecture (ResNet-50 trunk) with data-augmented batches yields improvements over the vanilla uniform sampling scheme, despite the decrease in image diversity.

This observation holds even in the absence of the ranking triplet loss, all things being equal otherwise: same number of iterations per epoch, number of epochs, learning-rate schedule, and batch size. As an example, fig. A.1 shows the evolution of the validation accuracy of our network trained under cross-entropy with our training schedule, GeM pooling, batches of size 512, and the data augmentation introduced in section 4.1, with uniform batches vs. RA batch sampling. While the initial epochs suffer from the reduced diversity of the batches compared to the uniformly-sampled variant, the reinforced effect of data augmentation compensates for this in the long run, and makes the batch-augmented variant reach a higher final accuracy.

Figure A.1: Evolution of the validation accuracy on ImageNet-val with and without data-augmented batches.

Since we observe this better performance even for a pure image classification task, an interesting question is whether this benefit is specific to our architecture and training method (batch norm, etc.), or whether it is more generally applicable. Hereafter we analyze a linear model on a synthetic classification task; the results support the second hypothesis.

We consider an idealized model of the effect of including different data-augmented instances of the same image in one batch using standard stochastic gradient descent. We create a synthetic training set, pictured in fig. A.2, of positive and negative training points sampled from two 2D Gaussian distributions, one for each ground-truth label y ∈ {−1, +1}. We sample a test dataset in the same manner.

Figure A.2: Training set for the toy model in appendix A.

We consider the SGD training of a linear SVM

f(x) = ⟨w, x⟩ + b, (A.2)

using the hinge loss

ℓ(x, y) = max(0, 1 − y f(x)).

We consider the symmetry across the x-axis,

T(x₁, x₂) = (x₁, −x₂),

as a label-preserving data augmentation suited to our synthetic dataset. We train the SVM (A.2) with one pass through the data-augmented dataset (each point together with its augmented copy), using mini-batches.
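A minimal sketch of this toy experiment follows: hinge-loss SGD with the x-axis flip as augmentation. All parameter values here are ours, chosen for illustration, not the paper's.

```python
def flip_x_axis(x):
    # label-preserving augmentation for the toy set: (x1, x2) -> (x1, -x2)
    return (x[0], -x[1])

def svm_sgd_pass(batches, lr=0.1):
    """One SGD pass over pre-built batches of ((x1, x2), y) samples,
    minimizing the hinge loss max(0, 1 - y * (<w, x> + b))."""
    w, b = [0.0, 0.0], 0.0
    for batch in batches:
        gw, gb = [0.0, 0.0], 0.0
        for x, y in batch:
            if y * (w[0] * x[0] + w[1] * x[1] + b) < 1.0:  # margin violated
                gw[0] -= y * x[0]
                gw[1] -= y * x[1]
                gb -= y
        n = len(batch)
        w[0] -= lr * gw[0] / n
        w[1] -= lr * gw[1] / n
        b -= lr * gb / n
    return w, b
```

Only the batch composition changes between the two variants compared below; the update rule is identical.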

Figure A.3: Evolution of the test accuracy of the SVM trained on the synthetic data, averaged across 100 runs.

The only difference between the two optimization schedules is the order in which the samples are batched and presented to the optimizer. We consider two batch sampling strategies:

  • Uniform sampling: we sample the elements of the batch randomly from the dataset, without replacement;

  • Paired sampling: we generate a batch by drawing random elements from the dataset and pairing each with its data augmentation, removing both elements from the pool.
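The two strategies can be sketched as follows (helper names are ours):

```python
import random

def uniform_batches(dataset, batch_size):
    """Draw batches without replacement from the full augmented dataset."""
    pool = list(dataset)
    random.shuffle(pool)
    return [pool[i:i + batch_size] for i in range(0, len(pool), batch_size)]

def paired_batches(originals, augment, batch_size):
    """Place each sample and its augmented copy in the same batch."""
    pool = list(originals)
    random.shuffle(pool)
    batches, batch = [], []
    for x in pool:
        batch.extend([x, augment(x)])
        if len(batch) >= batch_size:
            batches.append(batch)
            batch = []
    if batch:
        batches.append(batch)
    return batches
```

Both samplers present exactly the same set of samples per epoch; only their grouping into batches differs.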

Figure A.3 shows the evolution of the test accuracy over the iterations in both cases, averaged across 100 runs. It is clear that placing the data-augmented pairs in one batch accelerates the convergence of this model.

This idealized experiment demonstrates that there are cases in which the repeated augmentation scheme provides an optimization and generalization boost, and reinforces the effect of data augmentation.

Appendix B Margin loss hyper-parameters

Table B.1 gives the values of the hyper-parameters of the margin loss used during the training of our models.

parameter value
learning rate
Table B.1: Margin loss hyper-parameters
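Following the formulation of Wu et al. that the paper builds on, the margin loss on a pair of embeddings is max(0, α + y(D − β)), with β a learnable distance threshold. A sketch, with placeholder values for α and β (the exact trained values are those of Table B.1):

```python
def margin_loss(dist, positive_pair, alpha=0.2, beta=1.2):
    """Margin loss on one pair of embeddings.

    dist: Euclidean distance between the two (normalized) embeddings.
    positive_pair: True if the images are augmented copies of the same image.
    alpha is the margin; beta is a (learnable) distance threshold.
    The defaults alpha=0.2, beta=1.2 are illustrative placeholders.
    """
    y = 1.0 if positive_pair else -1.0
    return max(0.0, alpha + y * (dist - beta))
```

Positive pairs are pulled below β − α and negative pairs pushed beyond β + α; pairs already beyond their threshold contribute zero loss.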

Appendix C Data augmentation hyper-parameters

transformation parameter range
horizontal flip
random resized crop
color jitter brightness, contrast, saturation
lighting transform intensity
Table C.1: full data-augmentation transforms and parameters

Table C.1 gives the transformations in the full data augmentation used in our experiments (section 4.1), along with their parameters.
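Two of these transforms can be illustrated on a raw pixel grid as below. This is a toy sketch; the parameter defaults are placeholders, not the values of Table C.1.

```python
import random

def random_horizontal_flip(rows, p=0.5):
    """rows: list of pixel rows; mirror the image left-right with probability p."""
    return [list(reversed(r)) for r in rows] if random.random() < p else rows

def jitter_brightness(rows, max_delta=0.4):
    """Scale all pixel values by a random factor in [1 - max_delta, 1 + max_delta]."""
    factor = 1.0 + random.uniform(-max_delta, max_delta)
    return [[v * factor for v in r] for r in rows]
```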

Holidays UKB CD10k
Method 224 500 800 224 500 800 224 500 800
PyTorch model zoo
Resnet-50 trained with pooling
Resnet-50 trained with pooling
MultiGrain + AA
Table D.1: Full results including Copydays + 10k distractors (CD10k, % mAP), and an ablation study for the MultiGrain models. The PyTorch model zoo baseline simply extracts the last activation layer as a descriptor [babenko2014neural]. ResNet-50 corresponds to features extracted from a classification baseline with average or GeM pooling, trained with cross-entropy with our training schedule, data augmentation, and uniform batch sampling.

Appendix D Additional results and ablation study for Multigrain in retrieval

original evaluation
Architecture acc. (%) acc. (%) acc. (%) acc. (%) acc. (%)
NASNet-A-Mobile [zoph2018learning] 224 / / /
SENet154 [hu2018squeeze] / / /
PNASNet-5-Large [liu2018progressive] / / / /
Table E.1: Additional top-1 / top-5 validation classification accuracies obtained by finetuning the pooling exponent for higher evaluation scales on off-the-shelf networks. The first column indicates the training resolution and the accuracy we measured at this resolution with the standard evaluation protocol (resize of the largest side and center crop). The subsequent columns show the accuracy measured at higher resolutions without cropping, together with the p* found by finetuning for these resolutions (appendix E).

Table D.1 reports additional results of the MultiGrain architecture, with an ablation study analyzing the effect of each component.

As already reported in the main paper, for some datasets the choice of not using the triplet loss (λ = 1) is as good as or better than our generic choice (λ = 0.5). Of course, the embedding is then no longer multi-purpose. Overall, the different elements employed in our architecture (RA and the layers specific to MultiGrain) still give a significant improvement over simply using off-the-shelf activations, and are competitive with the state of the art at the same resolution/complexity.

Note that the AutoAugment data augmentation does not transfer well to the retrieval tasks. This can be explained by its specificity to ImageNet classification, and it shows the limitation of a particular choice of data augmentation when a single embedding for classification and retrieval datasets is desired. Learning AutoAugment specifically for the retrieval task would certainly help, but would probably also result in less general embeddings. Hence, data augmentation is a limiting factor for multi-purpose embeddings: improving it for one task like classification can hurt the performance on other tasks.

Appendix E Evaluation of off-the-shelf classifiers at higher resolutions

In this section, we present some additional classification results using off-the-shelf pretrained classification networks trained with standard average pooling (p = 1).

As outlined in sections 3.5 and 4.2, one of our contributions is a strategy for evaluating classifier networks, trained with GeM pooling at scale s and exponent p, at a higher resolution s* with an adapted exponent p*. It can be applied to pretrained networks as well.

For an evaluation scale s*, we use the alternative strategy described in section 3.5 to choose p*: we finetune the exponent p by stochastic gradient descent, backpropagating the cross-entropy loss on training images from ImageNet rescaled to the desired input resolution. Compared to a full finetuning at this input resolution, this strategy has a limited memory footprint, since the backpropagation only involves the pooling layer and the final classification layer, allowing for an efficient computation of the gradient with respect to p. Experimentally, we also found that this process converges after a few thousand training samples, whereas finetuning the classification layer would require several data-augmented epochs over the full training set.
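The idea of optimizing the exponent alone can be illustrated on a scalar GeM pooling, here with a numerical gradient and a squared loss standing in for the cross-entropy. All names and values below are ours, for illustration only:

```python
def gem(xs, p):
    # generalized-mean pooling of positive activations
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

def finetune_exponent(xs, target, p0=3.0, lr=2.0, steps=500):
    """Gradient descent on the exponent p alone, so the backward pass only
    involves the pooling and the loss (squared error stands in for the
    cross-entropy used in the paper)."""
    p, eps = p0, 1e-4
    for _ in range(steps):
        loss = lambda q: (gem(xs, q) - target) ** 2
        grad = (loss(p + eps) - loss(p - eps)) / (2 * eps)  # numerical gradient
        p -= lr * grad
    return p
```

Because only one scalar is updated, each step is far cheaper than finetuning the classification layer itself.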

The finetuning is done using SGD on batches of (non-cropped) images, with momentum and an initial learning rate decayed to zero under a polynomial schedule over T, the total number of iterations.

We select a subset of images from the training set (a fixed number per category) for the finetuning and do one pass over this reduced dataset. We use off-the-shelf pretrained convnets from the Cadene/pretrained-models.pytorch GitHub repository. Table E.1 outlines the resulting validation accuracies. We see that for each network there is a scale s* and a choice of p* that performs better than the standard evaluation.

These networks have not been trained using GeM pooling with p > 1; as exhibited in our classification results (table 2), we found this to be another key ingredient for higher scale insensitivity and better performance at larger resolutions. As in our main experiments with the MultiGrain architecture on a ResNet-50 backbone, it is likely that these networks would reach higher accuracies when trained from scratch with GeM pooling, repeated augmentations, and the margin loss. However, running training experiments on these large networks is significantly more expensive; we therefore leave this for future work.