Cross-Batch Memory for Embedding Learning

12/14/2019 ∙ Xun Wang, et al. ∙ Malong Technologies

Mining informative negative instances is of central importance to deep metric learning (DML). However, the hard-mining ability of existing DML methods is intrinsically limited by mini-batch training, where only a mini-batch of instances is accessible at each iteration. In this paper, we identify a "slow drift" phenomenon by observing that the embedding features drift exceptionally slowly even as the model parameters are updated throughout the training process. This suggests that the features of instances computed at preceding iterations closely approximate the features extracted by the current model. We propose a cross-batch memory (XBM) mechanism that memorizes the embeddings of past iterations, allowing the model to collect sufficient hard negative pairs across multiple mini-batches, and even over the whole dataset. Our XBM can be directly integrated into a general pair-based DML framework. We demonstrate that, without bells and whistles, XBM-augmented DML can boost performance considerably on image retrieval. In particular, with XBM, a simple contrastive loss achieves large R@1 improvements of 12%-22.5% on three large-scale datasets, easily surpassing the most sophisticated state-of-the-art methods by a large margin. Our XBM is conceptually simple, easy to implement in a few lines of code, and memory efficient, requiring a negligible 0.2 GB of extra GPU memory.


1 Introduction

Deep metric learning (DML) aims to learn an embedding space where instances from the same class are encouraged to be closer than those from different classes. As a fundamental problem in computer vision, DML has been applied to various tasks, including image retrieval [38, 12, 7], face recognition [37], zero-shot learning [46, 1, 16], visual tracking [17, 33] and person re-identification [43, 13].

A family of DML approaches is pair-based, with objectives defined in terms of pair-wise similarities within a mini-batch, such as contrastive loss [3], triplet loss [28], lifted-structure loss [21], N-pair loss [29], multi-similarity (MS) loss [36], etc. Moreover, most existing pair-based DML methods can be unified as weighting schemes under the general pair weighting (GPW) framework [36].

The performance of pair-based methods relies heavily on their capability of mining informative negative pairs. To collect sufficient informative negative pairs from each mini-batch, many efforts have been devoted to improving the sampling scheme, which can be categorized into two main directions: (1) sampling informative mini-batches based on the global data distribution [31, 6, 27, 9]; (2) weighting informative pairs within each individual mini-batch [21, 29, 36, 34, 39].

However, no matter how sophisticated the sampling scheme is, the hard-mining ability is essentially limited by the size of the mini-batch, which determines the number of possible training pairs. Therefore, a straightforward way to improve the sampling scheme is to enlarge the mini-batch size, which immediately boosts the performance of pair-based DML methods. We demonstrate that the performance of both a basic pair-based approach, contrastive loss, and a recent pair-weighting method, MS loss, improves strikingly when the mini-batch size grows on large-scale datasets (Figure 1, left and middle). This is not surprising, because the number of negative pairs grows quadratically with the mini-batch size. However, enlarging the mini-batch is not an ideal solution to the hard-mining problem, for two reasons: (1) the mini-batch size is limited by GPU memory and computational cost; (2) a large mini-batch (e.g. 1800 as used in [28]) often requires cross-device synchronization, which is a challenging engineering task. A naive solution for collecting abundant informative pairs is to compute the features of all instances in the training set before each iteration, and then search for hard negative pairs over the whole dataset. Obviously, this is extremely time-consuming, especially for a large-scale dataset, but it inspires us to break the limit of mining hard negatives within a single mini-batch.

In this paper, we identify an interesting “slow drift” phenomenon: the embedding of an instance actually drifts at a relatively slow rate throughout the training process. This suggests that the deep features of a mini-batch computed at past iterations closely approximate those extracted by the current model. Based on the “slow drift” phenomenon, we propose a cross-batch memory (XBM) module that records and updates the deep features of recent mini-batches, allowing informative examples to be mined across mini-batches. Our cross-batch memory provides plentiful hard negative pairs by directly connecting each anchor in the current mini-batch with embeddings from recent mini-batches.

Our XBM is conceptually simple, easy to implement and memory efficient. The memory module is updated with a simple enqueue-dequeue mechanism that reuses the features already computed at past iterations, at the cost of a negligible 0.2 GB of extra GPU memory. More importantly, our XBM can be directly integrated into most existing pair-based methods with just a few lines of code, and can boost their performance considerably. We evaluate our memory scheme with various conventional pair-based DML techniques on three widely used large-scale image retrieval datasets: Stanford Online Products (SOP) [21], In-shop Clothes Retrieval (In-shop) [19] and PKU VehicleID (VehicleID) [18]. In Figure 1 (middle and right), our approach exhibits excellent robustness and brings consistent performance improvements across all settings: under the same configurations, our memory module obtains extraordinary R@1 improvements (e.g. over 20% for contrastive loss) on all three datasets compared with the corresponding conventional pair-based methods. Furthermore, with our XBM, a simple contrastive loss can easily outperform sophisticated state-of-the-art methods, such as [36, 25, 2], by a large margin.

In parallel to our work, He et al. [10] built a dynamic dictionary as a queue of preceding mini-batches to provide a rich set of negative samples for unsupervised learning (also with a contrastive loss). However, unlike [10], which uses a specific encoding network to compute the features of the current mini-batch, our features are obtained more efficiently by taking them directly from the forward pass of the current model, with no additional computational cost. More importantly, to address feature drift, He et al. designed a momentum update that slowly progresses the key encoder to ensure consistency between iterations, whereas we identify the “slow drift” phenomenon, which suggests that the features become stable by themselves once the early phase of training is finished.

2 Related Work

Pair-based DML. Pair-based DML methods can be optimized by computing the pair-wise similarities between instances in the embedding space [8, 21, 28, 34, 29, 36]. Contrastive loss [8] is one of the classic pair-based DML methods, which learns a discriminative metric via Siamese networks. It encourages the deep features of positive pairs to be closer to each other and those of negative pairs to be farther apart than a fixed threshold. Triplet loss [28] requires the similarity of a positive pair to be higher than that of a negative pair (with the same anchor) by a given margin.

Inspired by contrastive loss and triplet loss, a number of pair-based DML algorithms have been developed to weight all pairs in a mini-batch, such as up-weighting informative pairs (e.g. N-pair loss [29], MS loss [36]) through a log-exp formulation, or sampling negative pairs uniformly w.r.t. pair-wise distance [39]. Generally, pair-based methods can be cast into a unified weighting formulation through the GPW framework [36].

However, most deep models are trained with SGD, where only a mini-batch of samples is accessible at each iteration, and the size of a mini-batch can be relatively small compared to the whole dataset, especially as the dataset grows larger. Moreover, a large fraction of the pairs becomes less informative as the model learns to embed most trivial pairs correctly. Thus, conventional pair-based DML techniques suffer from a lack of hard negative pairs, which are critical for promoting model training.

To alleviate these problems, a number of approaches have been developed to increase the potential information contained in a mini-batch, such as building a class-level hierarchical tree [6], updating class-level signatures to select hard negative instances [31], or obtaining samples from an individual cluster [27]. Unlike these approaches, which aim to enrich a single mini-batch, our XBM is designed to directly mine hard negative examples across multiple mini-batches.

Proxy-based DML. The other branch of DML methods optimizes the embedding by comparing each sample with proxies, including Proxy-NCA [20], NormSoftmax [45] and SoftTriple [24]. In fact, our XBM module can be regarded as a set of proxies to some extent. However, there are two main differences between proxy-based methods and our XBM module: (1) proxies are usually optimized along with the model weights, whereas the embeddings in our memory are taken directly from past mini-batches; (2) proxies represent class-level information, whereas the embeddings in our memory capture information for each instance. Both proxy-based methods and our XBM-augmented pair-based methods are able to capture the global distribution of the whole dataset during training.

Feature Memory Module. Non-parametric memory modules of embeddings have shown their power in various computer vision tasks [35, 42, 40, 41, 47]. For example, an external memory can be used to address the unaffordable computational demand of conventional NCA [40] in large-scale recognition, or to encourage instance invariance in domain adaptation [47, 41]. In [40], only positive pairs are optimized, while negatives are ignored. In contrast, our XBM provides a rich set of negative examples for pair-based DML methods, which is more general and makes full use of past embeddings. The key distinction is that existing memory modules either only store the embeddings of the current mini-batch [35], or maintain the whole dataset [40, 47] with moving-average updates, whereas our XBM is maintained as a dynamic queue of mini-batches, which is more flexible and applicable to extremely large-scale datasets.

3 Cross-Batch Memory Embedding Networks

Figure 2: Cross-Batch Memory (XBM) trains an embedding network by comparing each anchor with the memory bank using a pair-based loss. The memory bank is maintained as a queue, with the current mini-batch enqueued and the oldest mini-batch dequeued. Our XBM provides a large number of valid negatives for each anchor, benefiting model training with many pair-based methods.

In this section, we first analyze the limitation of existing pair-based DML methods, then we introduce the “slow drift” phenomenon, which provides the underlying evidence that supports our cross-batch mining approach. Finally, we describe our XBM module and integrate it into pair-based DML methods.

3.1 Delving into Pair-based DML

Let $\mathcal{X} = \{x_1, x_2, \dots, x_N\}$ denote the training instances, and let $y_i$ be the corresponding label of $x_i$. The embedding function, $f(\cdot; \theta)$, projects a data point $x_i$ onto a $D$-dimensional unit hyper-sphere, $v_i = f(x_i; \theta)$. We measure the similarity of a pair of instances through the cosine similarity of their embeddings. During training, we denote the affinity matrix of all pairs within the current mini-batch as $S$, whose element $S_{ij}$ is the cosine similarity between the embeddings of the $i$-th sample and the $j$-th sample: $S_{ij} = \langle v_i, v_j \rangle$.
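As a concrete illustration of this notation, the following minimal PyTorch sketch builds the mini-batch affinity matrix from L2-normalized features. The variable names are ours, and the random tensor stands in for the outputs of the embedding network.

import torch
import torch.nn.functional as F

# Illustrative sketch: build the mini-batch affinity matrix S from
# L2-normalized embeddings, so that S[i, j] = <v_i, v_j>.
embeddings = torch.randn(64, 512)      # stand-in for f(x; theta) outputs
v = F.normalize(embeddings, dim=1)     # project onto the unit hyper-sphere
S = torch.matmul(v, v.t())             # [m, m] cosine similarity matrix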

To facilitate further analysis, we examine pair-based DML methods through the GPW framework described in [36]. With GPW, a pair-based loss function can be cast into a unified pair-weighting form:

$\mathcal{L} = \sum_{i=1}^{m} \Big( \sum_{y_j \neq y_i} w_{ij} S_{ij} - \sum_{y_j = y_i} w_{ij} S_{ij} \Big), \qquad (1)$

where $m$ is the mini-batch size and $w_{ij}$ is the weight assigned to $S_{ij}$. Eq. 1 shows that any pair-based method is intrinsically a weighting scheme focusing on informative pairs. Here, we list the weighting schemes of contrastive loss, triplet loss and MS loss; a code sketch of these schemes is given after the list.

  • Contrastive loss. For each negative pair, $w_{ij} = 1$ if $S_{ij} > \lambda$, and $w_{ij} = 0$ otherwise, where $\lambda$ is a given threshold. The weights of all positive pairs are 1.

  • Triplet loss. For each negative pair, $w_{ij} = |\mathcal{P}_{ij}|$, where $\mathcal{P}_{ij}$ is the valid positive set sharing the anchor. Formally, $\mathcal{P}_{ij} = \{ x_k \mid y_k = y_i,\; S_{ik} < S_{ij} + \eta \}$, and $\eta$ is the predefined margin in triplet loss. Similarly, we can obtain the triplet weight for a positive pair.

  • MS loss. Unlike contrastive loss and triplet loss, which only assign integer weight values, MS loss [36] weights the pairs more properly by jointly considering multiple similarities. The MS weight for a negative pair is computed as:

    $w_{ij} = \frac{e^{\beta (S_{ij} - \lambda)}}{1 + \sum_{k \in \mathcal{N}_i} e^{\beta (S_{ik} - \lambda)}},$

    where $\beta$ and $\lambda$ are hyper-parameters, and $\mathcal{N}_i$ is the valid negative set of the anchor $x_i$. The MS weights of the positive pairs are defined similarly.
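The sketch below illustrates these weighting schemes for the negative pairs of a mini-batch similarity matrix. It is a toy illustration of the GPW view, not the authors' implementation: the function name, default hyper-parameter values, and the exact masking details are our own assumptions.

import torch

def pair_weights(S, labels, lam=0.5, eta=0.1, beta=50.0, scheme="contrastive"):
    # Toy GPW-style weights for the NEGATIVE pairs of a mini-batch.
    # S: [m, m] cosine similarity matrix, labels: [m] integer class labels.
    pos_mask = labels.view(-1, 1).eq(labels.view(1, -1))
    pos_mask.fill_diagonal_(False)          # ignore self-pairs
    neg_mask = labels.view(-1, 1).ne(labels.view(1, -1))

    if scheme == "contrastive":
        # weight 1 for negatives above the threshold lambda, 0 otherwise
        w = (S > lam).float() * neg_mask.float()
    elif scheme == "triplet":
        # weight = number of positives k of the same anchor with S_ik < S_ij + eta
        viol = pos_mask.unsqueeze(2) & (S.unsqueeze(2) < S.unsqueeze(1) + eta)
        w = viol.sum(dim=1).float() * neg_mask.float()
    elif scheme == "ms":
        # soft weight jointly considering all negatives of the same anchor
        e = torch.exp(beta * (S - lam)) * neg_mask.float()
        w = e / (1.0 + e.sum(dim=1, keepdim=True))
    else:
        raise ValueError(scheme)
    return w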

In fact, the main path of developing pair-based DML has been to design a better weighting mechanism for pairs within a mini-batch. With a small mini-batch (e.g. 16 or 32), sophisticated weighting schemes perform much better (Figure 1, left). However, beyond the weighting scheme, the mini-batch size is also of great importance to DML. Figure 1 (left and middle) shows that the R@1 of many pair-based methods increases considerably with a larger mini-batch size on large-scale benchmarks. Intuitively, the number of negative pairs grows quadratically with the mini-batch size, which naturally provides more informative pairs. Instead of developing yet another sophisticated but highly complicated algorithm to weight the informative pairs, our intuition is to simply collect sufficient informative negative pairs, so that a simple weighting scheme, such as contrastive loss, can easily outperform state-of-the-art weighting approaches. This provides a new path that is straightforward yet more efficient for solving the hard-mining problem in DML.

A straightforward solution for collecting more informative negative pairs is to increase the mini-batch size. However, training deep networks with a large mini-batch is limited by GPU memory and often requires massive data-flow communication between multiple GPUs. To this end, we attempt to achieve the same goal with an alternative approach that has very low GPU memory and minimal computational overhead. We propose an XBM module that allows the model to collect informative pairs over multiple past mini-batches, based on the “slow drift” phenomenon described below.

3.2 Slow Drift Phenomena

The embeddings of past mini-batches are usually considered out-of-date, since the model parameters change throughout the training process [10, 31, 24]. Such out-of-date features are typically discarded, but we show that they can be an important yet computation-free resource by identifying the “slow drift” phenomenon. We study the drifting speed of the embeddings by measuring the difference between the features of the same instance computed at different training iterations. Formally, the feature drift of an input $x$ at the $t$-th iteration with step $\Delta t$ is defined as:

$D(x, t; \Delta t) := \| f(x; \theta^{t}) - f(x; \theta^{t - \Delta t}) \|_2^2, \qquad (2)$

where $\theta^{t}$ denotes the model parameters at iteration $t$.

We train GoogleNet [32] from scratch with contrastive loss, and compute the average feature drift of a set of randomly sampled instances with different steps $\Delta t \in \{10, 100, 1000\}$ (Figure 3). The feature drift is consistently small for a small step, e.g. 10 iterations. For larger steps, e.g. 100 and 1000, the features change drastically in the early phase of training, but become relatively stable within about 3K iterations. Furthermore, when the learning rate decreases, the drift becomes extremely slow. We refer to this phenomenon as “slow drift”: after a certain number of training iterations, the embeddings of instances drift very slowly, resulting in a marginal difference between the features computed at different iterations.
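For concreteness, the drift in Eq. 2 can be measured against a snapshot of the model kept from a fixed number of iterations earlier, as in the sketch below. The function and variable names are illustrative, and embeddings are assumed to be L2-normalized as in Section 3.1.

import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def feature_drift(model, old_model, x):
    # Average squared L2 distance between current and snapshot embeddings (Eq. 2).
    cur = F.normalize(model(x), dim=1)
    old = F.normalize(old_model(x), dim=1)
    return (cur - old).pow(2).sum(dim=1).mean().item()

# usage sketch: refresh the snapshot every delta_t iterations
# if it % delta_t == 0:
#     old_model = copy.deepcopy(model).eval()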

Figure 3: Feature drift with different steps on SOP. The embeddings of training instances drift within a relatively small distance even under a large interval such as $\Delta t = 1000$.

Furthermore, we show that this “slow drift” phenomenon provides a strict upper bound on the error of the gradients of a pair-based loss. For simplicity, we consider the contrastive loss $\mathcal{L}$ of a single negative pair $(x_i, x_j)$, where $v_i$, $v_j$ are the embeddings computed by the current model and $\tilde{v}_j$ is an approximation of $v_j$ (e.g. taken from the memory); the approximate loss using $\tilde{v}_j$ is denoted $\tilde{\mathcal{L}}$.

Lemma 1.

Assume $\| v_j - \tilde{v}_j \|_2 \le \epsilon$ and that $f(\cdot; \theta)$ satisfies the Lipschitz continuity condition. Then the error of the gradients with respect to $\theta$ is

$\Big\| \frac{\partial \mathcal{L}}{\partial \theta} - \frac{\partial \tilde{\mathcal{L}}}{\partial \theta} \Big\|_2 \le C \epsilon, \qquad (3)$

where $C$ is the Lipschitz constant.

Proof and discussion of Lemma 1 are provided in the Supplementary Materials. Empirically, $C$ is often less than 1 for the backbones used in our experiments. Lemma 1 suggests that the error of the gradients is controlled by the error of the embeddings under the Lipschitz assumption. Thus, the “slow drift” phenomenon ensures that mining across mini-batches can provide negative pairs carrying valid information for pair-based methods.

In addition, we find that the “slow drift” of embeddings is not a phenomenon specific to DML; it also exists in other conventional tasks, as shown in the Supplementary Materials.

3.3 Cross-Batch Memory Module

# warm-up: train network f conventionally for K epochs
# initialize XBM as queue M (embeddings and labels of recent mini-batches)
for x, y in loader:  # x: data, y: labels
    anchors = f.forward(x)             # [m, D] L2-normalized embeddings
    # memory update: enqueue current batch, dequeue the oldest one
    enqueue(M, (anchors.detach(), y))
    dequeue(M)
    # compare anchors with the memory bank: [m, M] similarity matrix
    sim = torch.matmul(anchors, M.feats.t())
    loss = pair_based_loss(sim, y, M.labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

 

Algorithm 1 Pseudocode of XBM.

We first describe our cross-batch memory (XBM) module, including its initialization and updating mechanism. We then show that our memory module is easy to implement and can be directly integrated into existing pair-based DML frameworks as a plug-and-play module, using just a few lines of code (Algorithm 1).

XBM.

As the feature drift is relatively large in the early epochs, we warm up the neural network with 1k iterations, allowing the model to reach a certain local optimum where the embeddings become more stable. Then we initialize the memory module by computing the features of a set of randomly sampled training images with the warmed-up model. Formally, $\mathcal{M} = \{ (\tilde{v}_1, y_1), \dots, (\tilde{v}_M, y_M) \}$, where $\tilde{v}_i$ is initialized as the embedding of the $i$-th sample $x_i$, and $M$ is the memory size. We define the memory ratio $\mathcal{R}_{\mathcal{M}} := M / N$, the ratio of the memory size to the training-set size.

We maintain and update our XBM module as a queue: at each iteration, we enqueue the embeddings and labels of the current mini-batch, and dequeue the entries of the earliest mini-batch. Thus our memory module is updated with the embeddings of the current mini-batch directly, without any additional computation. Furthermore, the whole training set can be cached in the memory module, because storing the embedding features (512-d float vectors) requires very limited memory. See other update strategies in the Supplementary Materials.
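A minimal sketch of such a queue-based memory is given below. It assumes a fixed memory size, incoming embeddings that are already L2-normalized, and a memory that lives on the same device as the embeddings; the class and attribute names (XBM, feats, labels) mirror Algorithm 1 but are otherwise our own.

import torch

class XBM:
    # FIFO memory of embeddings and labels from recent mini-batches.
    def __init__(self, size, dim, device="cuda"):
        self.feats = torch.zeros(size, dim, device=device)
        self.labels = torch.zeros(size, dtype=torch.long, device=device)
        self.size = size
        self.ptr = 0      # next write position
        self.filled = 0   # number of valid entries

    @torch.no_grad()
    def enqueue_dequeue(self, feats, labels):
        # Write the current mini-batch over the oldest entries (circular buffer).
        b = feats.size(0)
        idx = torch.arange(self.ptr, self.ptr + b, device=feats.device) % self.size
        self.feats[idx] = feats.detach()
        self.labels[idx] = labels
        self.ptr = (self.ptr + b) % self.size
        self.filled = min(self.filled + b, self.size)

    def get(self):
        # Return only the slots that have been filled so far.
        return self.feats[:self.filled], self.labels[:self.filled]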

XBM augmented Pair-based DML. We perform hard negative mining with our XBM on top of pair-based DML. Based on GPW [36], a pair-based loss can be cast into the unified weighting formulation of pair-wise similarities within a mini-batch in Eq. (1), where the similarity matrix $S$ is computed within the mini-batch. To apply our XBM mechanism, we simply compute a cross-batch similarity matrix $\tilde{S}$ between the instances of the current mini-batch and the memory bank.

Formally, the memory-augmented pair-based DML can be formulated as:

$\tilde{\mathcal{L}} = \sum_{i=1}^{m} \Big( \sum_{\tilde{y}_j \neq y_i} w_{ij} \tilde{S}_{ij} - \sum_{\tilde{y}_j = y_i} w_{ij} \tilde{S}_{ij} \Big), \qquad (4)$

where $\tilde{S}_{ij} = \langle v_i, \tilde{v}_j \rangle$, with $\tilde{v}_j$ and $\tilde{y}_j$ taken from the memory $\mathcal{M}$. The memory-augmented pair-based loss in Eq. (4) has the same form as the normal pair-based loss in Eq. (1), simply computed over the new similarity matrix $\tilde{S}$. Each instance in the current mini-batch is compared with all the instances stored in the memory, enabling us to collect sufficient informative pairs for training. The gradient of the loss w.r.t. $v_i$ is

$\frac{\partial \tilde{\mathcal{L}}}{\partial v_i} = \sum_{\tilde{y}_j \neq y_i} w_{ij} \tilde{v}_j - \sum_{\tilde{y}_j = y_i} w_{ij} \tilde{v}_j, \qquad (5)$

and the gradients w.r.t. the model parameters $\theta$ can be computed through the chain rule:

$\frac{\partial \tilde{\mathcal{L}}}{\partial \theta} = \sum_{i=1}^{m} \frac{\partial \tilde{\mathcal{L}}}{\partial v_i} \frac{\partial v_i}{\partial \theta}. \qquad (6)$

Finally, the model parameters $\theta$ are optimized through stochastic gradient descent. Lemma 1 ensures that the gradient error caused by the embedding drift is strictly bounded, which minimizes its side effect on model training.
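As an illustration of Eq. (4) with the simplest weighting scheme, the sketch below computes a contrastive loss over the cross-batch similarity matrix. The function name, margin value and exact positive/negative terms are our own assumptions, and the anchors and memory features are assumed L2-normalized.

import torch
import torch.nn.functional as F

def xbm_contrastive_loss(anchors, labels, mem_feats, mem_labels, margin=0.5):
    # Cross-batch similarity matrix: current mini-batch vs. the whole memory.
    sim = torch.matmul(anchors, mem_feats.t())                 # [m, M]
    pos_mask = labels.view(-1, 1).eq(mem_labels.view(1, -1))   # same class
    # Positive pairs: pull cosine similarity towards 1.
    pos_loss = (1.0 - sim[pos_mask]).sum()
    # Negative pairs: only those above the margin contribute (hard negatives).
    neg_loss = F.relu(sim[~pos_mask] - margin).sum()
    return (pos_loss + neg_loss) / anchors.size(0)

In a training loop, anchors come from the current forward pass, while mem_feats and mem_labels are returned by the memory module, e.g. the get() method of the XBM sketch above.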

Hard Mining Ability. We investigate the hard-mining ability of our XBM mechanism by studying the number of valid negative pairs produced by the memory module at each iteration. A negative pair with a non-zero gradient is considered valid. The statistics are illustrated in Figure 4. Throughout the training procedure, our memory module steadily contributes about 1,000 hard negative pairs per iteration, whereas fewer than 10 valid pairs are generated by the original mini-batch mechanism.

Qualitative hard-mining results are shown in Figure 5. Given a bicycle image as the anchor, the mini-batch provides only a limited number of unrelated images, e.g. a roof and a sofa, as negatives. In contrast, our XBM offers both semantically bicycle-related images and other informative samples, e.g. wheels and clothes. These results clearly demonstrate that the proposed XBM can provide diverse, related, and even fine-grained samples for constructing negative pairs.

Our results confirm that (1) existing pair-based approaches suffer from the problem of lacking informative negative pairs to learn a discriminative model, and (2) our XBM module can significantly strengthen the hard mining ability of existing pair-based DML techniques in a very simple yet efficient manner. See more examples in Supplementary Materials.

Figure 4: The number of valid negative examples from the mini-batch and from the memory per iteration. The model is trained on SOP with mini-batch size 64 and GoogleNet as the backbone.
Figure 5: Given an anchor image (yellow), examples of positives (green) and of negatives from the mini-batch (gray) and from the memory (purple). The current mini-batch can only provide a few valid negatives with little information, while our XBM module provides a wide variety of informative negative examples.

4 Experiments

4.1 Implementation Details

We follow the standard settings in [21, 29, 22, 14] for fair comparison. Specifically, we adopt GoogleNet [32] as the default backbone network unless mentioned otherwise. The weights of the backbone are pre-trained on the ILSVRC 2012-CLS dataset [26]. A 512-d fully-connected layer with $\ell_2$ normalization is added after the global pooling layer, and the default embedding dimension is 512. For all datasets, the input images are first resized and then cropped to a fixed size; random crops and random flips are used as data augmentation during training. For testing, we use only a single center crop to compute the embedding of each instance, as in [21]. In all experiments, we use the Adam optimizer [15] with weight decay, and the PK sampler (P categories, K samples per category) to construct mini-batches.
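The PK sampler mentioned above can be realized, for example, as a PyTorch Sampler that draws P classes and K images per class for every mini-batch. The sketch below is a minimal illustration; the class name, default values and the sampling-with-replacement fallback are our own assumptions rather than the authors' implementation.

import random
from collections import defaultdict
from torch.utils.data import Sampler

class PKSampler(Sampler):
    # Each mini-batch holds P classes with K samples per class.
    def __init__(self, labels, p=16, k=4):
        self.p, self.k = p, k
        self.by_class = defaultdict(list)
        for idx, y in enumerate(labels):
            self.by_class[y].append(idx)
        self.classes = list(self.by_class)
        self.num_batches = len(labels) // (p * k)

    def __len__(self):
        return self.num_batches * self.p * self.k

    def __iter__(self):
        for _ in range(self.num_batches):
            for c in random.sample(self.classes, self.p):
                pool = self.by_class[c]
                # sample with replacement when a class has fewer than K images
                yield from random.choices(pool, k=self.k)

# usage sketch: DataLoader(dataset, batch_size=p * k, sampler=PKSampler(labels, p, k))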

4.2 Datasets

Our method is evaluated on three datasets that are widely used for large-scale few-shot image retrieval, and the Recall@K performance is reported. The training and testing protocols follow the standard setups:

Stanford Online Products (SOP) [21] contains 120,053 online product images in 22,634 categories. There are only 2 to 10 images for each category. Following [21], we use 59,551 images (11,318 classes) for training, and 60,502 images (11,316 classes) for testing.

In-shop Clothes Retrieval (In-shop) contains 72,712 clothing images of 7,986 classes. Following [19], we use 3,997 classes with 25,882 images as the training set. The test set is partitioned into a query set with 14,218 images of 3,985 classes, and a gallery set with 12,612 images of 3,985 classes.

PKU VehicleID (VehicleID) [18] contains 221,736 surveillance images of 26,267 vehicle categories, where 13,134 classes (110,178 images) are used for training. Following the test protocol described in [18], evaluation is conducted on predefined small, medium and large test sets, which contain 800 classes (7,332 images), 1,600 classes (12,995 images) and 2,400 classes (20,038 images), respectively.

Method | SOP (R@1/10/100/1000) | In-shop (R@1/10/20/30/40/50) | VehicleID Small (R@1/5) | Medium (R@1/5) | Large (R@1/5)
Contrastive | 64.0 / 81.4 / 92.1 / 97.8 | 77.1 / 93.0 / 95.2 / 96.1 / 96.8 / 97.1 | 79.5 / 91.6 | 76.2 / 89.3 | 70.0 / 86.0
Contrastive w/ M | 77.8 / 89.8 / 95.4 / 98.5 | 89.1 / 97.3 / 98.1 / 98.4 / 98.7 / 98.8 | 94.1 / 96.2 | 93.1 / 95.5 | 92.5 / 95.5
Triplet | 61.6 / 80.2 / 91.6 / 97.7 | 79.8 / 94.8 / 96.5 / 97.4 / 97.8 / 98.2 | 86.9 / 94.8 | 84.8 / 93.4 | 79.7 / 91.4
Triplet w/ M | 74.2 / 87.4 / 94.2 / 98.0 | 82.9 / 95.7 / 96.9 / 97.4 / 97.8 / 98.0 | 93.3 / 95.8 | 92.0 / 95.0 | 91.3 / 94.8
MS | 69.7 / 84.2 / 93.1 / 97.9 | 85.1 / 96.7 / 97.8 / 98.3 / 98.7 / 98.8 | 91.0 / 96.1 | 89.4 / 94.8 | 86.7 / 93.8
MS w/ M | 76.2 / 89.3 / 95.4 / 98.6 | 87.1 / 97.1 / 98.0 / 98.4 / 98.7 / 98.9 | 94.1 / 96.7 | 93.0 / 95.8 | 92.1 / 95.6
Table 1: Retrieval results (Recall@K, %) of memory-augmented (‘w/ M’) pair-based methods compared with their respective baselines on three datasets.

4.3 Ablation Study

We provide an ablation study on the SOP dataset with GoogleNet to verify the effectiveness of the proposed XBM module.

Memory Ratio. The search space of our cross-batch hard mining can be dynamically controlled by the memory ratio $\mathcal{R}_{\mathcal{M}}$. We illustrate the impact of the memory ratio on XBM-augmented contrastive loss on the three benchmarks (Figure 1, right). First, our method significantly outperforms the baseline, with over 20% improvement on all three datasets under various configurations of $\mathcal{R}_{\mathcal{M}}$. Second, our method with a mini-batch of 16 achieves better performance than the non-memory counterpart with a mini-batch of 256, e.g. an improvement of 71.7% → 78.2% on Recall@1, while saving GPU memory considerably.

More importantly, our XBM can boost the contrastive loss substantially even with a small $\mathcal{R}_{\mathcal{M}}$ (e.g. on In-shop, 52.0% → 79.4% on Recall@1), and its performance saturates once the memory expands to a moderate size. This makes sense: even a memory with a small ratio (e.g. 1%) already contains thousands of embeddings, which is enough to generate sufficient valid negative instances on large-scale datasets, especially fine-grained ones such as In-shop or VehicleID. Therefore, our memory scheme delivers consistent and stable performance improvements over a wide range of memory ratios.

Mini-batch Size. The mini-batch size is critical to the performance of many pair-based approaches (Figure 1, left). We further investigate its impact on our memory-augmented pair-based methods (Figure 6). Our method gains 3.2% by increasing the mini-batch size from 16 to 256, whereas the original contrastive method gains a significantly larger 25.1%. Clearly, with the proposed memory module, the impact of the mini-batch size is largely reduced. This indicates that the effect of the mini-batch size can be strongly compensated by our memory module, which provides a more principled solution to the hard-mining problem in DML.

Figure 6: Performance of contrastive loss by training with different mini-batch sizes. Unlike conventional pair-based methods, XBM augmented contrastive loss is equally effective under random shuffle mini-batch sampler (denoted with superscript *).

With General Pair-based DML. Our memory module can be directly applied to the GPW framework. We evaluate it with contrastive loss, triplet loss and MS loss. As shown in Table 1, our memory module improves the original DML approaches significantly and consistently on all benchmarks. Specifically, it boosts the Recall@1 of contrastive loss from 64.0% to 77.8% and of MS loss from 69.7% to 76.2% on SOP. Furthermore, with its sophisticated sampling and weighting, MS loss outperforms contrastive loss by 16.7% Recall@1 on the VehicleID Large test set; this large gap is simply closed by our memory module, which brings a further 5.8% improvement. MS loss gains less because it heavily up-weights extremely hard negatives, which may be outliers, whereas this harmful influence is weakened by the equal weighting scheme of contrastive loss. A detailed analysis is given in the Supplementary Materials.

The results suggest that (1) both straightforward (e.g. contrastive loss) and carefully designed (e.g. MS loss) weighting schemes can be largely improved by our memory module, and (2) with our memory module, a simple pair-weighting method (e.g. contrastive loss) can easily outperform sophisticated state-of-the-art methods such as MS loss [36] by a large margin.

Method | Time | GPU Mem. | R@1 | Gain
Contrastive, bs 64 | 2.10 h | 5.12 GB | 63.9 | -
Contrastive, bs 256 | 4.32 h | +15.7 GB | 71.7 | +7.8
Contrastive w/ XBM (1%) | 2.48 h | +0.01 GB | 69.8 | +5.9
Contrastive w/ XBM (100%) | 3.19 h | +0.20 GB | 77.4 | +13.5
Table 2: Training time and GPU memory cost for contrastive loss with mini-batch sizes 64 and 256, and for XBM (memory ratios 1% and 100%) with mini-batch size 64.

Memory and Computational Cost. We analyze the memory and computational cost of our XBM module. In terms of memory, the XBM module (the cached embeddings and labels) and the cross-batch affinity matrix $\tilde{S}$ require only a negligible 0.2 GB of extra GPU memory for caching the whole training set (Table 2). In terms of computation, the cost of computing $\tilde{S}$ increases linearly with the memory size $M$, and with a GPU implementation it takes a reasonable 34% of extra training time relative to the forward and backward procedure.
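As a back-of-the-envelope check (our own arithmetic, under the SOP setting of 59,551 training images, 512-d float32 embeddings and mini-batch size 64), the cost of caching the entire training set is on the order of the reported 0.2 GB:

# Rough memory estimate for caching the whole SOP training set in XBM.
n, d, m = 59_551, 512, 64
feats_bytes = n * d * 4          # float32 embedding bank
labels_bytes = n * 8             # int64 labels
sim_bytes = m * n * 4            # cross-batch affinity matrix per iteration
print((feats_bytes + labels_bytes + sim_bytes) / 1024 ** 3)  # ~0.13 GB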

It is also worth noting that XBM plays no role at inference time. It requires only about 1 hour of extra training time and 0.2 GB of memory to achieve a surprising 13.5% performance gain on a single GPU. Moreover, our method scales to extremely large datasets, e.g. with 1 billion samples, since an XBM with a small memory ratio can already generate a rich set of valid negatives at an acceptable cost.

4.4 Quantitative and Qualitative Results

In this section, we compare our XBM-augmented contrastive loss with state-of-the-art DML methods on three image retrieval benchmarks. Even though our method can reach better performance with a larger mini-batch size (Figure 6), we use a mini-batch of only 64, which can be trained on a single GPU even with ResNet50 [11]. Since the backbone architecture and the embedding dimension affect the recall metric, we list the results of our method under various configurations for fair comparison in Tables 3, 4 and 5. Results on more datasets are given in the Supplementary Materials.

As can be seen, with our XBM module, a contrastive loss surpasses the state-of-the-art methods on all datasets by a large margin. On SOP, our method with the ResNet50 backbone (R) outperforms the current state-of-the-art method MIC [25] by 77.2% → 80.6%. On In-shop, our method with R achieves even higher performance than FastAP [2] with R, and improves over MIC by 88.2% → 91.3%. On VehicleID, our method outperforms existing approaches considerably. For example, on the Large test set, using the same GoogleNet backbone (G), it improves the R@1 of the recent A-BIER [23] substantially, from 81.9% to 92.5%. With R, our method surpasses the best previous result of 87.5%, obtained by FastAP [2] with R, reaching 93.0%.

Figure 7 shows that our memory module helps learn a more discriminative encoder. For example, in the first row, our model is aware of the deer under the lamp, which is a distinctive characteristic of the query product, and retrieves the correct images. We also present some failure cases in the bottom rows, where our retrieved results are still visually closer to the query than those of the baseline model. More results are given in the Supplementary Materials.

Recall (%) 1 10 100 1000
HDC [44] G 69.5 84.4 92.8 97.7
A-BIER [23] G 74.2 86.9 94.0 97.8
ABE [14] G 76.3 88.4 94.8 98.2
SM [31] G 75.2 87.5 93.7 97.4
Clustering [30] B 67.0 83.7 93.2 -
ProxyNCA [20] B 73.7 - - -
HTL [6] B 74.8 88.3 94.8 98.4
MS [36] B 78.2 90.5 96.0 98.7
SoftTriple [24] B 78.6 86.6 91.8 95.4
Margin [39] R 72.7 86.2 93.8 98.0
Divide [27] R 75.9 88.4 94.9 98.1
FastAP [2] R 73.8 88.0 94.9 98.3
MIC [25] R 77.2 89.4 95.6 -
Cont. w/ M G 77.4 89.6 95.4 98.4
Cont. w/ M B 79.5 90.8 96.1 98.7
Cont. w/ M R 80.6 91.6 96.2 98.7
Table 3: Recall@K performance on SOP. ‘G’, ‘B’ and ‘R’ denote GoogleNet, InceptionBN and ResNet50 backbones, respectively; the superscript is the embedding size.
Recall (%) 1 10 20 30 40 50
HDC [44] G 62.1 84.9 89.0 91.2 92.3 93.1
A-BIER [23] G 83.1 95.1 96.9 97.5 97.8 98.0
ABE [14] G 87.3 96.7 97.9 98.2 98.5 98.7
HTL [6] B 80.9 94.3 95.8 97.2 97.4 97.8
MS [36] B 89.7 97.9 98.5 98.8 99.1 99.2
Divide [27] R 85.7 95.5 96.9 97.5 - 98.0
MIC [25] R 88.2 97.0 - 98.0 - 98.8
FastAP [2] 90.9 97.7 98.5 98.8 98.9 99.1
Cont. w/ M G 89.4 97.5 98.3 98.6 98.7 98.9
Cont. w/ M B 89.9 97.6 98.4 98.6 98.8 98.9
Cont. w/ M R 91.3 97.8 98.4 98.7 99.0 99.1
Table 4: Recall@K performance on In-Shop.
Method Small Medium Large
1 5 1 5 1 5
GS-TRS [5] 75.0 83.0 74.1 82.6 73.2 81.9
BIER [22] G 82.6 90.6 79.3 88.3 76.0 86.4
A-BIER [23] G 86.3 92.7 83.3 88.7 81.9 88.7
VANet [4] G 83.3 95.9 81.1 94.7 77.2 92.9
MS [36] B 91.0 96.1 89.4 94.8 86.7 93.8
Divide [27] R 87.7 92.9 85.7 90.4 82.9 90.2
MIC [25] R 86.9 93.4 - - 82.0 91.0
FastAP [2] 91.9 96.8 90.6 95.9 87.5 95.1
Cont. w/ M G 94.0 96.3 93.2 95.4 92.5 95.5
Cont. w/ M B 94.6 96.9 93.4 96.0 93.0 96.1
Cont. w/ M R 94.7 96.8 93.7 95.8 93.0 95.8
Table 5: Recall@K performance on VehicleID.
Figure 7: Top-4 retrieved images without and with the memory module. Correct results are highlighted in green, incorrect ones in purple.

5 Conclusions

We have presented a conceptually simple, easy to implement, and memory efficient cross-batch mining mechanism for pair-based DML. In this work, we identify the “slow drift” phenomenon, namely that embeddings drift exceptionally slowly during training. We then propose a cross-batch memory (XBM) module that dynamically updates the embeddings of instances from recent mini-batches, allowing us to collect sufficient hard negative pairs across multiple mini-batches, or even from the whole dataset. Without bells and whistles, the proposed XBM can be directly integrated into a general pair-based DML framework and significantly improves the performance of several existing pair-based methods on image retrieval. In particular, with our XBM, a basic contrastive loss easily surpasses state-of-the-art methods [36, 25, 2] by a large margin on three large-scale datasets.

This paves a new path toward solving hard negative mining, a fundamental problem for various computer vision tasks. Furthermore, we hope the dynamic memory mechanism can be extended to improve a wide variety of machine learning tasks beyond DML, since “slow drift” is a general phenomenon that is not unique to DML.

References

  • [1] M. Bucher, S. Herbin, and F. Jurie (2016) Improving semantic embedding consistency by metric learning for zero-shot classiffication. In ECCV, Cited by: §1.
  • [2] F. Cakir, K. He, X. Xia, B. Kulis, and S. Sclaroff (2019) Deep metric learning to rank. In CVPR, Cited by: Cross-Batch Memory for Embedding Learning, §1, §4.4, Table 3, Table 4, Table 5, §5.
  • [3] S. Chopra, R. Hadsell, and Y. LeCun (2005) Learning a similarity metric discriminatively, with application to face verification. In CVPR, Cited by: §1.
  • [4] R. Chu, Y. Sun, Y. Li, Z. Liu, C. Zhang, and Y. Wei (2019) Vehicle re-identification with viewpoint-aware metric learning. In ICCV, Cited by: Table 5.
  • [5] Y. Em, F. Gag, Y. Lou, S. Wang, T. Huang, and L. Duan (2017) Incorporating intra-class variance to fine-grained visual recognition. In ICME, Cited by: Table 5.
  • [6] W. Ge, W. Huang, D. Dong, and M. R. Scott (2018) Deep metric learning with hierarchical triplet loss. In ECCV, Cited by: §1, §2, Table 3, Table 4.
  • [7] A. Grabner, P. M. Roth, and V. Lepetit (2018) 3D pose estimation and 3D model retrieval for objects in the wild. In CVPR, Cited by: §1.
  • [8] R. Hadsell, S. Chopra, and Y. LeCun (2006) Dimensionality reduction by learning an invariant mapping. In CVPR, Cited by: §2.
  • [9] B. Harwood, V. K. B G, G. Carneiro, I. Reid, and T. Drummond (2017) Smart mining for deep metric learning. In ICCV, Cited by: §1.
  • [10] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick (2019) Momentum contrast for unsupervised visual representation learning. In arXiv:1911.05722, Cited by: §1, §3.2.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §4.4.
  • [12] X. He, Y. Zhou, Z. Zhou, S. Bai, and X. Bai (2018) Triplet-center loss for multi-view 3d object retrieval. In CVPR, Cited by: §1.
  • [13] A. Hermans*, L. Beyer*, and B. Leibe (2017) In defense of the triplet loss for person re-identification. arXiv:1703.07737v4. Cited by: §1.
  • [14] W. Kim, B. Goyal, K. Chawla, J. Lee, and K. Kwon (2018) Attention-based ensemble for deep metric learning. In ECCV, Cited by: §4.1, Table 3, Table 4.
  • [15] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR, Cited by: §4.1.
  • [16] S. Kiran Yelamarthi, S. Krishna Reddy, A. Mishra, and A. Mittal (2018) A zero-shot framework for sketch based image retrieval. In ECCV, Cited by: §1.
  • [17] L. Leal-Taixé, C. Canton-Ferrer, and K. Schindler (2016) Learning by tracking: siamese cnn for robust target association. In CVPR Workshops, Cited by: §1.
  • [18] H. Liu, Y. Tian, Y. Wang, L. Pang, and T. Huang (2016) Deep relative distance learning: tell the difference between similar vehicles. In CVPR, Cited by: §1, §4.2.
  • [19] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang (2016) DeepFashion: powering robust clothes recognition and retrieval with rich annotations. In CVPR, Cited by: §1, §4.2.
  • [20] Y. Movshovitz-Attias, A. Toshev, T. K. Leung, S. Ioffe, and S. Singh (2017) No fuss distance metric learning using proxies. In ICCV, Cited by: §2, Table 3.
  • [21] H. Oh Song, Y. Xiang, S. Jegelka, and S. Savarese (2016) Deep metric learning via lifted structured feature embedding. In CVPR, Cited by: §1, §1, §1, §2, §4.1, §4.2.
  • [22] M. Opitz, G. Waltner, H. Possegger, and H. Bischof (2017) BIER - boosting independent embeddings robustly. In ICCV, Cited by: §4.1, Table 5.
  • [23] M. Opitz, G. Waltner, H. Possegger, and H. Bischof (2018) Deep metric learning with bier: boosting independent embeddings robustly. PAMI. Cited by: §4.4, Table 3, Table 4, Table 5.
  • [24] Q. Qian, L. Shang, B. Sun, and J. Hu (2019) SoftTriple loss: deep metric learning without triplet sampling. ICCV. Cited by: §2, §3.2, Table 3.
  • [25] K. Roth, B. Brattoli, and B. Ommer (2019) MIC: mining interclass characteristics for improved metric learning. In ICCV, Cited by: Cross-Batch Memory for Embedding Learning, §1, §4.4, Table 3, Table 4, Table 5, §5.
  • [26] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet Large Scale Visual Recognition Challenge. IJCV. Cited by: §4.1.
  • [27] A. Sanakoyeu, V. Tschernezki, U. Buchler, and B. Ommer (2019) Divide and conquer the embedding space for metric learning. In CVPR, Cited by: §1, §2, Table 3, Table 4, Table 5.
  • [28] F. Schroff, D. Kalenichenko, and J. Philbin (2015) FaceNet: a unified embedding for face recognition and clustering. In CVPR, Cited by: §1, §1, §2.
  • [29] K. Sohn (2016) Improved deep metric learning with multi-class n-pair loss objective. In NeurIPS, Cited by: §1, §1, §2, §2, §4.1.
  • [30] H. O. Song, S. Jegelka, V. Rathod, and K. Murphy (2017) Deep metric learning via facility location. In CVPR, Cited by: Table 3.
  • [31] Y. Suh, B. Han, W. Kim, and K. M. Lee (2019) Stochastic class-based hard example mining for deep metric learning. In CVPR, Cited by: §1, §2, §3.2, Table 3.
  • [32] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In CVPR, Cited by: §3.2, §4.1.
  • [33] R. Tao, E. Gavves, and A. W. Smeulders (2016) Siamese instance search for tracking. In CVPR, Cited by: §1.
  • [34] E. Ustinova and V. Lempitsky (2016) Learning deep embeddings with histogram loss. In NeurIPS, Cited by: §1, §2.
  • [35] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. (2016) Matching networks for one shot learning. In NeurIPS, Cited by: §2.
  • [36] X. Wang, X. Han, W. Huang, D. Dong, and M. R. Scott (2019) Multi-similarity loss with general pair weighting for deep metric learning. In CVPR, Cited by: Cross-Batch Memory for Embedding Learning, §1, §1, §1, §2, §2, item –, §3.1, §3.3, §4.3, Table 3, Table 4, Table 5, §5.
  • [37] Y. Wen, K. Zhang, Z. Li, and Y. Qiao (2016) A discriminative feature learning approach for deep face recognition. In ECCV, Cited by: §1.
  • [38] P. Wohlhart and V. Lepetit (2015) Learning descriptors for object recognition and 3D pose estimation. In CVPR, Cited by: §1.
  • [39] C. Wu, R. Manmatha, A. J. Smola, and P. Krähenbühl (2017) Sampling matters in deep embedding learning. In ICCV, Cited by: §1, §2, Table 3.
  • [40] Z. Wu, A. A. Efros, and S. Yu (2018) Improving generalization via scalable neighborhood component analysis. In ECCV, Cited by: §2.
  • [41] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin (2018) Unsupervised feature learning via non-parametric instance discrimination. In CVPR, pp. 3733–3742. Cited by: §2.
  • [42] T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang (2017) Joint detection and identification feature learning for person search. In CVPR, Cited by: §2.
  • [43] R. Yu, Z. Dou, S. Bai, Z. Zhang, Y. Xu, and X. Bai (2018) Hard-aware point-to-set deep metric for person re-identification. In ECCV, Cited by: §1.
  • [44] Y. Yuan, K. Yang, and C. Zhang (2017) Hard-aware deeply cascaded embedding. In ICCV, Cited by: Table 3, Table 4.
  • [45] A. Zhai and H. Wu (2019) Classification is a strong baseline for deep metric learning. In BMVC, Cited by: §2.
  • [46] Z. Zhang and V. Saligrama (2016) Zero-shot learning via joint latent similarity embedding. In CVPR, Cited by: §1.
  • [47] Z. Zhong, L. Zheng, Z. Luo, S. Li, and Y. Yang (2019) Invariance matters: exemplar memory for domain adaptive person re-identification. In CVPR, Cited by: §2.