Deep Metric Learning
Mining informative negative instances is of central importance to deep metric learning (DML). However, the hard-mining ability of existing DML methods is intrinsically limited by mini-batch training, where only a mini-batch of instances is accessible at each iteration. In this paper, we identify a "slow drift" phenomenon: the embedding features drift exceptionally slowly even as the model parameters are updated throughout the training process. This suggests that the features of instances computed at preceding iterations closely approximate those extracted by the current model. We propose a cross-batch memory (XBM) mechanism that memorizes the embeddings of past iterations, allowing the model to collect sufficient hard negative pairs across multiple mini-batches, even over the whole dataset. Our XBM can be directly integrated into a general pair-based DML framework. We demonstrate that, without bells and whistles, XBM-augmented DML can boost performance considerably on image retrieval. In particular, with XBM, a simple contrastive loss achieves large R@1 improvements of 12%-22.5% on three large-scale datasets, easily surpassing the most sophisticated state-of-the-art methods by a large margin. Our XBM is conceptually simple, easy to implement in several lines of code, and memory efficient, requiring a negligible 0.2 GB of extra GPU memory.
XBM: Cross-Batch Memory for Embedding Learning
Deep metric learning (DML) aims to learn an embedding space where instances from the same class are encouraged to be closer than those from different classes. As a fundamental problem in computer vision, DML has been applied to various tasks, including image retrieval [38, 12, 7, 37], zero-shot learning [46, 1, 16], visual tracking [17, 33] and person re-identification [43, 13].
A family of DML approaches is pair-based: their objectives can be defined in terms of pair-wise similarities within a mini-batch, such as contrastive loss, triplet loss, lifted-structure loss, N-pair loss, multi-similarity (MS) loss, etc. Moreover, most existing pair-based DML methods can be unified as weighting schemes under the general pair weighting (GPW) framework.
The performance of pair-based methods heavily relies on their capability of mining informative negative pairs. To collect sufficient informative negative pairs from each mini-batch, many efforts have been devoted to improving the sampling scheme, which can be categorized into two main directions: (1) sampling informative mini-batches based on the global data distribution [31, 6, 27, 9]; (2) weighting informative pairs within each individual mini-batch [21, 29, 36, 34, 39].
However, no matter how sophisticated the sampling scheme is, the hard-mining ability is essentially limited by the size of the mini-batch, which determines the number of possible training pairs. A straightforward way to improve the sampling scheme is therefore to enlarge the mini-batch size, which boosts the performance of pair-based DML methods immediately. We demonstrate that the performance of both a basic pair-based approach, contrastive loss, and a recent pair-weighting method, MS loss, improves strikingly as the mini-batch size grows on large-scale datasets (Figure 1, left and middle). This is not surprising, because the number of negative pairs grows quadratically w.r.t. the mini-batch size. However, enlarging the mini-batch is not an ideal solution to the hard-mining problem because of two drawbacks: (1) the mini-batch size is limited by GPU memory and computational cost; (2) a large mini-batch (e.g. 1800 used in ) often requires cross-device synchronization, which is a challenging engineering task. A naive way to collect abundant informative pairs is to compute the features of all instances in the training set before each training iteration, and then search for hard negative pairs over the whole dataset. Obviously, this is extremely time-consuming, especially for a large-scale dataset, but it inspires us to break the limit of mining hard negatives within a single mini-batch.
In this paper, we identify an interesting “slow drift” phenomenon: the embedding of an instance drifts at a relatively slow rate throughout the training process. This suggests that the deep features of a mini-batch computed at past iterations closely approximate those extracted by the current model. Based on the “slow drift” phenomenon, we propose a cross-batch memory (XBM) module to record and update the deep features of recent mini-batches, allowing informative examples to be mined across mini-batches. Our cross-batch memory provides plentiful hard negative pairs by directly connecting each anchor in the current mini-batch with embeddings from recent mini-batches.
Our XBM is conceptually simple, easy to implement and memory efficient. The memory module can be updated with a simple enqueue-dequeue mechanism by leveraging the computation-free features computed at past iterations, with a negligible 0.2 GB of extra GPU memory. More importantly, our XBM can be directly integrated into most existing pair-based methods with just several lines of code, and can boost their performance considerably. We evaluate our memory scheme with various conventional pair-based DML techniques on three widely used large-scale image retrieval datasets: Stanford Online Products (SOP), In-shop Clothes Retrieval (In-shop), and PKU VehicleID (VehicleID). In Figure 1 (middle and right), our approach exhibits excellent robustness and brings consistent performance improvements across all settings: under the same configurations, our memory module obtains extraordinary R@1 improvements (e.g. over 20% for contrastive loss) on all three datasets compared with the corresponding conventional pair-based methods. Furthermore, with our XBM, a simple contrastive loss can easily outperform sophisticated state-of-the-art methods, such as [36, 25, 2], by a large margin.
In parallel to our work, He et al. built a dynamic dictionary as a queue of preceding mini-batches to provide a rich set of negative samples for unsupervised learning (also with a contrastive loss). However, unlike their method, which uses a separate encoding network to compute the features of the current mini-batch, our features are computed more efficiently by taking them directly from the forward pass of the current model with no additional computational cost. More importantly, to address feature drift, He et al. designed a momentum update that slowly progresses the key encoder to ensure consistency between iterations, while we identify the “slow drift” phenomenon, which shows that the features become stable on their own once the early phase of training finishes.
Pair-based DML. Pair-based DML methods can be optimized by computing the pair-wise similarities between instances in the embedding space [8, 21, 28, 34, 29, 36]. Contrastive loss is one of the classic pair-based DML methods, which learns a discriminative metric via Siamese networks. It encourages the deep features of positive pairs to be closer to each other and those of negative pairs to be farther apart than a fixed threshold. Triplet loss requires the similarity of a positive pair to be higher than that of a negative pair (with the same anchor) by a given margin.
Inspired by contrastive loss and triplet loss, a number of pair-based DML algorithms have been developed to weight all pairs in a mini-batch, such as up-weighting informative pairs (e.g. N-pair loss, MS loss) through a log-exp formulation, or sampling negative pairs uniformly w.r.t. pair-wise distance. Generally, pair-based methods can be cast into a unified weighting formulation through the GPW framework.
However, most deep models are trained with SGD, where only a mini-batch of samples is accessible at each iteration, and the size of a mini-batch can be relatively small compared to the whole dataset, especially as the dataset grows larger. Moreover, a large fraction of the pairs becomes less informative as the model learns to embed most trivial pairs correctly. Thus, conventional pair-based DML techniques suffer from a lack of hard negative pairs, which are critical for promoting model training.
To alleviate the aforementioned problems, a number of approaches have been developed to increase the potential information contained in a mini-batch, such as building a class-level hierarchical tree, updating class-level signatures to select hard negative instances, or obtaining samples from an individual cluster. Unlike these approaches, which aim to enrich a mini-batch, our XBM is designed to directly mine hard negative examples across multiple mini-batches.
Proxy-based DML. The other branch of DML methods optimizes the embedding by comparing each sample with proxies, including proxy NCA, NormSoftmax and SoftTriple. In fact, our XBM module can be regarded as a set of proxies to some extent. However, there are two main differences between proxy-based methods and our XBM module: (1) proxies are often optimized along with the model weights, while the embeddings in our memory are directly taken from past mini-batches; (2) proxies represent class-level information, whereas the embeddings in our memory capture information for each instance. Both proxy-based methods and our XBM-augmented pair-based methods are able to capture the global distribution of the whole dataset during training.
Feature Memory Module. Non-parametric memory modules of embeddings have shown their power in various computer vision tasks [35, 42, 40, 41, 47]. For example, an external memory can be used to address the unaffordable computational demand of conventional NCA in large-scale recognition, and to encourage instance-invariance in domain adaptation [47, 41]. In , only positive pairs are optimized, while negatives are ignored. In contrast, our XBM provides a rich set of negative examples for pair-based DML methods, which is more general and makes full use of past embeddings. The key distinction is that existing memory modules either store only the embeddings of the current mini-batch, or maintain the whole dataset [40, 47] with a moving-average update, while our XBM is maintained as a dynamic queue of mini-batches, which is more flexible and applicable to extremely large-scale datasets.
In this section, we first analyze the limitation of existing pair-based DML methods, then we introduce the “slow drift” phenomenon, which provides the underlying evidence supporting our cross-batch mining approach. Finally, we describe our XBM module and integrate it into pair-based DML methods.
Let $\mathcal{X} = \{x_1, x_2, \dots, x_N\}$ denote the training instances, and $y_i$ the corresponding label of $x_i$. The embedding function $f(\cdot;\theta)$ projects a data point $x_i$ onto a $D$-dimensional unit hypersphere, $\mathbf{v}_i = f(x_i;\theta)$. The similarity of a pair is $S_{ij} = \langle \mathbf{v}_i, \mathbf{v}_j \rangle$, the cosine similarity between the embeddings of the $i$-th sample and the $j$-th sample.
To facilitate further analysis, we delve into pair-based DML methods through the GPW framework described in . With GPW, a pair-based loss can be cast into a unified pair-weighting form:

$$\mathcal{L} = \frac{1}{m}\sum_{i=1}^{m}\Big[\sum_{y_j \neq y_i} w_{ij} S_{ij} \;-\; \sum_{y_j = y_i} w_{ij} S_{ij}\Big], \qquad (1)$$

where $m$ is the mini-batch size and $w_{ij}$ is the weight assigned to $S_{ij}$. Eq. (1) shows that any pair-based method is intrinsically a weighting scheme focusing on informative pairs. Here, we list the weighting schemes of contrastive loss, triplet loss and MS loss.
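To make the pair-weighting form concrete, the sketch below evaluates Eq. (1) with a contrastive-style weighting on a toy mini-batch. The function names and the margin value are illustrative choices of ours, not specified by the paper:

```python
import numpy as np

def pair_weighted_loss(embeddings, labels, weight_fn):
    """GPW-style pair-weighted loss sketch (Eq. 1): weighted negative
    similarities minus weighted positive similarities, averaged over
    the mini-batch."""
    # Cosine similarities between L2-normalized embeddings.
    v = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    S = v @ v.T
    m = len(labels)
    loss = 0.0
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            w = weight_fn(S, labels, i, j)
            # Negatives are pushed apart (+), positives pulled together (-).
            loss += w * S[i, j] if labels[i] != labels[j] else -w * S[i, j]
    return loss / m

def contrastive_weight(S, labels, i, j, margin=0.5):
    """Contrastive weighting: negatives count only above a fixed margin;
    all positive pairs are weighted 1."""
    if labels[i] != labels[j]:
        return 1.0 if S[i, j] > margin else 0.0
    return 1.0
```

Any weighting scheme that fits the GPW formulation can be plugged in via `weight_fn`, which is what makes Eq. (1) a unifying view of pair-based losses.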
Contrastive loss. For each negative pair, $w_{ij} = 1$ if $S_{ij} > \lambda$, otherwise $w_{ij} = 0$, where $\lambda$ is a fixed threshold. The weights of all positive pairs are 1.
Triplet loss. For each negative pair, $w_{ij} = |P_{ij}|$, where $P_{ij}$ is the valid positive set sharing the anchor. Formally, $P_{ij} = \{S_{ik} \,|\, y_k = y_i,\ S_{ik} < S_{ij} + \eta\}$, and $\eta$ is the predefined margin in triplet loss. Similarly, we can obtain the triplet weight for a positive pair.
MS loss. Unlike contrastive loss and triplet loss, which only assign integer weight values, MS loss is able to weight the pairs more properly by jointly considering multiple similarities. The MS weight for a negative pair is computed as:

$$w_{ij} = \frac{e^{\beta(S_{ij}-\lambda)}}{1 + \sum_{k \in \mathcal{N}_i} e^{\beta(S_{ik}-\lambda)}}, \qquad (2)$$

where $\beta$ and $\lambda$ are hyper-parameters, and $\mathcal{N}_i$ is the valid negative set of the anchor $x_i$. The MS weights of the positive pairs are defined similarly.
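The soft MS weighting of the negatives of one anchor can be sketched as follows, assuming the log-exp form of Eq. (2); the hyper-parameter values here are illustrative only:

```python
import numpy as np

def ms_negative_weights(S, i, neg_indices, beta=50.0, lam=0.5):
    """MS weights for the negative pairs of anchor i (Eq. 2):
    w_ij = exp(beta*(S_ij - lam)) / (1 + sum_k exp(beta*(S_ik - lam))),
    so each negative is weighted relative to all other valid negatives."""
    e = np.exp(beta * (S[i, neg_indices] - lam))
    return e / (1.0 + e.sum())
```

Note how a harder negative (higher similarity to the anchor) receives a larger relative weight, which is the behavior the unified weighting view predicts.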
In fact, the main path of developing pair-based DML has been to design better weighting mechanisms for pairs within a mini-batch. Under a small mini-batch (e.g. 16 or 32), sophisticated weighting schemes can perform much better (Figure 1, left). However, beyond the weighting scheme, the mini-batch size is also of great importance to DML. Figure 1 (left and middle) shows that the R@1 of many pair-based methods increases considerably with a larger mini-batch size on large-scale benchmarks. Intuitively, the number of negative pairs grows quadratically with the mini-batch size, which naturally provides more informative pairs. Instead of developing yet another sophisticated but highly complicated algorithm to weight the informative pairs, our intuition is to simply collect sufficient informative negative pairs, with which a simple weighting scheme, such as contrastive loss, can easily outperform state-of-the-art weighting approaches. This provides a new path that is straightforward yet more efficient for solving the hard-mining problem in DML.
A straightforward solution to collect more informative negative pairs is to increase the mini-batch size. However, training deep networks with a large mini-batch is limited by GPU memory, and often requires massive data-flow communication between multiple GPUs. To this end, we attempt to achieve the same goal with an alternative approach that has very low GPU memory and minimal computational burden. We propose an XBM module that allows the model to collect informative pairs over multiple past mini-batches, based on the “slow drift” phenomenon described below.
The embeddings of past mini-batches are usually considered out-of-date, since the model parameters change throughout the training process [10, 31, 24]. Such out-of-date features are typically discarded, but we show that they can be an important yet computation-free resource by identifying the “slow drift” phenomenon. We study the drifting speed of the embeddings by measuring the difference between the features of the same instance computed at different training iterations. Formally, the feature drift of an input $x$ at the $t$-th iteration with step $\Delta t$ is defined as:

$$D(x, t; \Delta t) := \|f(x;\theta^{t}) - f(x;\theta^{t-\Delta t})\|_2^2 \qquad (3)$$
We train GoogleNet from scratch with contrastive loss and compute the average feature drift for a set of randomly sampled instances with different steps $\Delta t \in \{10, 100, 1000\}$ (Figure 3). The feature drift is consistently small for a small step, e.g. $\Delta t = 10$ iterations. For larger steps, e.g. 100 and 1000, the features change drastically in the early phase, but become relatively stable within about 3K iterations. Furthermore, when the learning rate decreases, the drift becomes extremely slow. We call this phenomenon “slow drift”: after a certain number of training iterations, the embeddings of instances drift very slowly, resulting in only marginal differences between features computed at different training iterations.
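The drift metric of Eq. (3) is straightforward to measure, given embeddings of the same probe set extracted from two checkpoints; a minimal sketch (the function name is ours):

```python
import numpy as np

def feature_drift(f_t, f_prev):
    """Average feature drift D(x, t; Δt) = ||f(x; θ^t) − f(x; θ^{t−Δt})||²,
    averaged over a probe set. f_t and f_prev are (n, D) arrays of
    embeddings of the same n instances from two checkpoints."""
    return float(np.mean(np.sum((f_t - f_prev) ** 2, axis=1)))
```

Logging this quantity every few hundred iterations is enough to reproduce a curve like Figure 3 for any backbone.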
Furthermore, we show that the “slow drift” phenomenon provides a strict upper bound on the error of the gradients of a pair-based loss. For simplicity, we consider the contrastive loss of a single negative pair, $\mathcal{L} = S_{ij} = \langle \mathbf{v}_i, \mathbf{v}_j \rangle$, where $\mathbf{v}_i$, $\mathbf{v}_j$ are the embeddings under the current model and $\tilde{\mathbf{v}}_j$ is an approximation of $\mathbf{v}_j$.

Lemma 1. Assume $\|\tilde{\mathbf{v}}_j - \mathbf{v}_j\|_2 \le \epsilon$ and $f(\cdot;\theta)$ satisfies the Lipschitz continuity condition; then the error of the gradients related to $\theta$ is

$$\Big\|\frac{\partial \mathcal{L}}{\partial \theta} - \frac{\partial \tilde{\mathcal{L}}}{\partial \theta}\Big\| \le C\epsilon,$$

where $C$ is the Lipschitz constant.
Proof and discussion of Lemma 1 are provided in Supplementary Materials. Empirically, $C$ is often less than 1 for the backbones used in our experiments. Lemma 1 suggests that the error of the gradients is controlled by the error of the embeddings under the Lipschitz assumption. Thus, the “slow drift” phenomenon ensures that mining across mini-batches can provide negative pairs with valid information for pair-based methods.
In addition, we discover that the “slow drift” of embeddings is not a phenomenon unique to DML; it also exists in other conventional tasks, as shown in Supplementary Materials.
We first describe our cross-batch memory (XBM) module, with its initialization and updating mechanism. We then show that our memory module is easy to implement and can be directly integrated into existing pair-based DML frameworks as a plug-and-play module, using just several lines of code (Algorithm 1).
As the feature drift is relatively large during the early epochs, we warm up the neural network with 1k iterations, allowing the model to reach a regime where the embeddings become more stable. We then initialize the memory module by computing the features of a set of randomly sampled training images with the warm-up model. Formally, $\mathcal{M} = \{(\tilde{\mathbf{v}}_i, y_i)\}_{i=1}^{M}$, where $\tilde{\mathbf{v}}_i$ is initialized as the embedding of the $i$-th sample $x_i$, and $M$ is the memory size. We define the memory ratio $R_\mathcal{M} := M/N$, the ratio of the memory size to the training-set size.
We maintain and update our XBM module as a queue: at each iteration, we enqueue the embeddings and labels of the current mini-batch, and dequeue the entities of the earliest mini-batch. Thus our memory module is updated with the embeddings of the current mini-batch directly, without any additional computation. Furthermore, the whole training set can be cached in the memory module, because very limited memory is required for storing the embedding features, e.g. 512-d float vectors. See other update strategies in Supplementary Materials.
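The queue-based memory described above can be sketched as a circular buffer; the class and method names are ours, and a production version would typically keep these tensors on the GPU:

```python
import numpy as np

class XBM:
    """Cross-batch memory sketch: a FIFO queue of (embedding, label)
    entries from recent mini-batches. Sizes here are illustrative."""

    def __init__(self, memory_size, dim):
        self.feats = np.zeros((memory_size, dim), dtype=np.float32)
        self.labels = np.zeros(memory_size, dtype=np.int64)
        self.ptr = 0      # next slot to overwrite
        self.filled = 0   # number of valid entries so far

    def enqueue_dequeue(self, feats, labels):
        """Overwrite the oldest entries with the current mini-batch;
        the circular buffer realizes enqueue and dequeue in one step."""
        n = len(labels)
        idx = (self.ptr + np.arange(n)) % len(self.labels)
        self.feats[idx] = feats
        self.labels[idx] = labels
        self.ptr = (self.ptr + n) % len(self.labels)
        self.filled = min(self.filled + n, len(self.labels))

    def get(self):
        """Return the currently valid embeddings and labels."""
        return self.feats[: self.filled], self.labels[: self.filled]
```

Because the stored features are simply taken from the forward pass of the current model, the update itself is computation-free, matching the description above.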
XBM-augmented pair-based DML. We perform hard negative mining with our XBM on top of pair-based DML. Based on GPW, a pair-based loss can be cast into the unified weighting formulation of pair-wise similarities within a mini-batch in Eq. (1), where a similarity matrix $S \in \mathbb{R}^{m \times m}$ is computed within the mini-batch. To apply our XBM mechanism, we simply compute a cross-batch similarity matrix $\tilde{S} \in \mathbb{R}^{m \times M}$ between the instances of the current mini-batch and the memory bank.
Formally, the memory-augmented pair-based DML can be formulated as:

$$\mathcal{L} = \frac{1}{m}\sum_{i=1}^{m}\Big[\sum_{\tilde{y}_j \neq y_i} w_{ij}\tilde{S}_{ij} \;-\; \sum_{\tilde{y}_j = y_i} w_{ij}\tilde{S}_{ij}\Big], \qquad (4)$$

where $\tilde{S}_{ij} = \langle \mathbf{v}_i, \tilde{\mathbf{v}}_j \rangle$. The memory-augmented pair-based loss in Eq. (4) has the same form as the normal pair-based loss in Eq. (1), computed on the new similarity matrix $\tilde{S}$. Each instance in the current mini-batch is compared with all instances stored in the memory, enabling us to collect sufficient informative pairs for training. The gradients w.r.t. the model parameters $\theta$ can be computed from the gradient of the loss w.r.t. each $\mathbf{v}_i$ through the chain rule:

$$\frac{\partial \mathcal{L}}{\partial \theta} = \sum_{i=1}^{m} \frac{\partial \mathcal{L}}{\partial \mathbf{v}_i} \frac{\partial \mathbf{v}_i}{\partial \theta}.$$

Finally, the model parameters $\theta$ are optimized through stochastic gradient descent. Lemma 1 ensures that the gradient error caused by embedding drift is strictly bounded, which minimizes the side effect on model training.
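The cross-batch loss of Eq. (4) can be sketched for a contrastive-style weighting as follows; the specific positive/negative terms (hinge on negatives, `1 - S` on positives) are an illustrative variant of ours, not the paper's exact formulation:

```python
import numpy as np

def xbm_contrastive_loss(batch_feats, batch_labels, mem_feats, mem_labels,
                         margin=0.5):
    """Cross-batch contrastive loss sketch (Eq. 4): every anchor in the
    mini-batch is compared against every embedding in the memory bank."""
    v = batch_feats / np.linalg.norm(batch_feats, axis=1, keepdims=True)
    vm = mem_feats / np.linalg.norm(mem_feats, axis=1, keepdims=True)
    S = v @ vm.T                                # m x M cross-batch similarities
    pos = batch_labels[:, None] == mem_labels[None, :]
    # Negatives contribute only above the margin (i.e. hard negatives).
    neg_term = np.where(~pos & (S > margin), S - margin, 0.0).sum()
    # Positives are pulled toward similarity 1.
    pos_term = np.where(pos, 1.0 - S, 0.0).sum()
    return (pos_term + neg_term) / len(batch_labels)
```

Note that gradients flow only through the mini-batch embeddings; the memory embeddings are treated as constants, consistent with the queue update above.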
Hard Mining Ability. We investigate the hard-mining ability of our XBM mechanism by studying the number of valid negative pairs produced by our memory module at each iteration. A negative pair with a non-zero gradient is considered valid. The statistical result is illustrated in Figure 4. Throughout the training procedure, our memory module steadily contributes about 1,000 hard negative pairs per iteration, whereas fewer than 10 valid pairs are generated by the original mini-batch mechanism.
Qualitative hard-mining results are shown in Figure 5. Given a bicycle image as an anchor, the mini-batch provides only a limited number of unrelated images, e.g. a roof and a sofa, as negatives. In contrast, our XBM offers both semantically bicycle-related images and other samples, e.g. a wheel and clothes. These results clearly demonstrate that the proposed XBM can provide diverse, related, and even fine-grained samples to construct negative pairs.
Our results confirm that (1) existing pair-based approaches suffer from the problem of lacking informative negative pairs to learn a discriminative model, and (2) our XBM module can significantly strengthen the hard mining ability of existing pair-based DML techniques in a very simple yet efficient manner. See more examples in Supplementary Materials.
We follow the standard settings in [21, 29, 22, 14] for fair comparison. Specifically, we adopt GoogleNet as the default backbone network unless stated otherwise. The weights of the backbone were pre-trained on the ILSVRC 2012-CLS dataset. A 512-d fully-connected layer with $\ell_2$ normalization is added after the global pooling layer. The default embedding dimension is set to 512. For all datasets, the input images are first resized and then cropped to a fixed size, with random crops and random flips as data augmentation during training. For testing, we use only a single center crop to compute the embedding of each instance. In all experiments, we use the Adam optimizer with weight decay and the PK sampler (P categories, K samples per category) to construct mini-batches.
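The PK sampler mentioned above can be sketched as follows; the sampling-with-replacement fallback for classes with fewer than K images is our assumption:

```python
import random
from collections import defaultdict

def pk_sample(labels, P, K, rng=random):
    """PK sampling sketch: draw P categories, then K instances per
    category, to form one mini-batch of P*K dataset indices."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = rng.sample(list(by_class), P)
    batch = []
    for c in classes:
        pool = by_class[c]
        # Sample with replacement when a class has fewer than K images.
        picks = rng.sample(pool, K) if len(pool) >= K else \
                [rng.choice(pool) for _ in range(K)]
        batch.extend(picks)
    return batch
```

Guaranteeing K same-class instances per sampled category is what ensures every anchor in the mini-batch has in-batch positives.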
Our methods are evaluated on three datasets that are widely used for large-scale few-shot image retrieval. Recall@K performance is reported. The training and testing protocols follow the standard setups:
Stanford Online Products (SOP)  contains 120,053 online product images in 22,634 categories. There are only 2 to 10 images for each category. Following , we use 59,551 images (11,318 classes) for training, and 60,502 images (11,316 classes) for testing.
In-shop Clothes Retrieval (In-shop) contains 72,712 clothing images of 7,986 classes. Following , we use 3,997 classes with 25,882 images as the training set. The test set is partitioned into a query set with 14,218 images of 3,985 classes, and a gallery set with 12,612 images of 3,985 classes.
PKU VehicleID (VehicleID) contains 221,736 surveillance images of 26,267 vehicle categories, where 13,134 classes (110,178 images) are used for training. Following the test protocol described in , evaluation is conducted on predefined small, medium and large test sets, which contain 800 classes (7,332 images), 1,600 classes (12,995 images) and 2,400 classes (20,038 images), respectively.
| Method | SOP R@1 | R@10 | R@100 | R@1000 | In-shop R@1 | R@10 | R@20 | R@30 | R@40 | R@50 | VehicleID-S R@1 | R@5 | VehicleID-M R@1 | R@5 | VehicleID-L R@1 | R@5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Contrastive w/ M | 77.8 | 89.8 | 95.4 | 98.5 | 89.1 | 97.3 | 98.1 | 98.4 | 98.7 | 98.8 | 94.1 | 96.2 | 93.1 | 95.5 | 92.5 | 95.5 |
| Triplet w/ M | 74.2 | 87.4 | 94.2 | 98.0 | 82.9 | 95.7 | 96.9 | 97.4 | 97.8 | 98.0 | 93.3 | 95.8 | 92.0 | 95.0 | 91.3 | 94.8 |
| MS w/ M | 76.2 | 89.3 | 95.4 | 98.6 | 87.1 | 97.1 | 98.0 | 98.4 | 98.7 | 98.9 | 94.1 | 96.7 | 93.0 | 95.8 | 92.1 | 95.6 |
We provide an ablation study on the SOP dataset with GoogleNet to verify the effectiveness of the proposed XBM module.
Memory Ratio. The search space of our cross-batch hard mining can be dynamically controlled by the memory ratio. We illustrate the impact of the memory ratio on XBM-augmented contrastive loss on three benchmarks (Figure 1, right). First, our method significantly outperforms the baseline (without memory), with over 20% improvements on all three datasets under various configurations. Second, our method with a mini-batch of 16 achieves better performance than the non-memory counterpart with a mini-batch of 256, e.g. an improvement of 71.7% → 78.2% in R@1, while saving GPU memory considerably.
More importantly, our XBM can boost the contrastive loss substantially even with a small memory ratio (e.g. on In-shop, 52.0% → 79.4% in R@1), and its performance saturates once the memory expands to a moderate size. This makes sense, since even a memory with a small ratio (e.g. 1%) already contains thousands of embeddings, generating sufficient valid negative instances on large-scale datasets, especially fine-grained ones such as In-shop or VehicleID. Therefore, our memory scheme yields consistent and stable performance improvements across a wide range of memory ratios.
Mini-batch Size. Mini-batch size is critical to the performance of many pair-based approaches (Figure 1, left). We further investigate its impact on our memory-augmented pair-based methods (Figure 6). Our method gains 3.2% by increasing the mini-batch size from 16 to 256, while the original contrastive method has a significantly larger improvement of 25.1%. Clearly, with the proposed memory module, the impact of the mini-batch size is largely reduced. This indicates that the effect of the mini-batch size can be strongly compensated by our memory module, which provides a more principled solution to the hard-mining problem in DML.
With General Pair-based DML. Our memory module can be directly applied to the GPW framework. We evaluate it with contrastive loss, triplet loss and MS loss. As shown in Table 1, our memory module improves the original DML approaches significantly and consistently on all benchmarks. Specifically, it remarkably boosts the performance of contrastive loss by 64.0% → 77.8% and MS loss by 69.7% → 76.2%. Furthermore, with its sophisticated sampling and weighting approach, MS loss has a 16.7% R@1 improvement over contrastive loss on the VehicleID Large test set. Such a large gap can be simply closed by our memory module, with a further 5.8% improvement. MS loss has a smaller improvement because it heavily weights extremely hard negatives, which might be outliers, while such harmful influence is weakened by the equal weighting scheme of contrastive loss. See Supplementary Materials for detailed analysis.
The results suggest that (1) both straightforward (e.g. contrastive loss) and carefully designed weighting scheme (e.g. MS loss) can be improved largely by our memory module, and (2) with our memory module, a simple pair-weighting method (e.g. contrastive loss) can easily outperform the state-of-the-art sophisticated methods such as MS loss  by a large margin.
| Method | Time | GPU Mem. | R@1 | Gain |
|---|---|---|---|---|
| Cont. bs. 64 | 2.10 h | 5.12 GB | 63.9 | - |
| Cont. bs. 256 | 4.32 h | +15.7 GB | 71.7 | +7.8 |
| Cont. w/ 1% | 2.48 h | +0.01 GB | 69.8 | +5.9 |
| Cont. w/ 100% | 3.19 h | +0.20 GB | 77.4 | +13.5 |
Memory and Computational Cost. We analyze the memory and computational complexity of our XBM module. On memory cost, the XBM module ($\tilde{\mathbf{v}} \in \mathbb{R}^{M \times D}$) and the affinity matrix ($\tilde{S} \in \mathbb{R}^{m \times M}$) require a negligible 0.2 GB of GPU memory for caching the whole training set (Table 2). On computational complexity, the cost of computing $\tilde{S}$ increases linearly with the memory size $M$. With a GPU implementation, it takes a reasonable 34% of extra training time relative to the forward and backward passes.
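The 0.2 GB figure is easy to verify with back-of-the-envelope arithmetic, assuming float32 embeddings and, illustratively, on the order of 100K cached 512-d vectors (roughly the scale of the training sets above):

```python
def xbm_memory_gb(num_entries, dim=512, bytes_per_float=4):
    """GPU memory needed to cache num_entries embeddings of dimension
    dim as float32, in GB (1 GB = 1024**3 bytes)."""
    return num_entries * dim * bytes_per_float / 1024 ** 3
```

For example, 100,000 entries of 512-d float32 embeddings come to about 0.19 GB, consistent with the reported 0.2 GB.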
It is also worth noting that XBM is not used at inference. It requires only 1 hour of extra training time and 0.2 GB of memory to achieve a surprising 13.5% performance gain on a single GPU. Moreover, our method scales to extremely large datasets, e.g. with 1 billion samples, since an XBM with a small memory ratio can generate a rich set of valid negatives at acceptable cost.
In this section, we compare our XBM-augmented contrastive loss with state-of-the-art DML methods on three image retrieval benchmarks. Even though our method can reach better performance with a larger mini-batch size (Figure 6), we use a mini-batch size of only 64, which fits on a single GPU with ResNet50. Since the backbone architecture and the embedding dimension can affect the recall metric, we list the results of our method with various configurations for fair comparison in Tables 3, 4 and 5. See results on more datasets in Supplementary Materials.
As can be seen, with our XBM module, a contrastive loss surpasses the state-of-the-art methods on all datasets by a large margin. On SOP, our method with R (ResNet50) outperforms the current state-of-the-art MIC by 77.2% → 80.6%. On In-shop, our method with R achieves even higher performance than FastAP with R, and improves over MIC by 88.2% → 91.3%. On VehicleID, our method outperforms existing approaches considerably. For example, on the Large test set, using the same G (GoogleNet), it improves the R@1 of the recent A-BIER by 81.9% → 92.5%. With R, our method surpasses the best previous result, 87% obtained by FastAP with R, reaching 93%.
Figure 7 shows that our memory module promotes to learn a more discriminative encoder. For example, at the first row, our model is aware of the deer under the lamp which is a specific character of the query product, and retrieves the correct images. In addition, we also present some bad cases in the bottom rows, where our retrieved results are visually closer to the query than that of baseline model. See more results in Supplementary Materials.
| Method | Backbone | R@1 | R@10 | R@100 | R@1000 |
|---|---|---|---|---|---|
| Cont. w/ M | G | 77.4 | 89.6 | 95.4 | 98.4 |
| Cont. w/ M | B | 79.5 | 90.8 | 96.1 | 98.7 |
| Cont. w/ M | R | 80.6 | 91.6 | 96.2 | 98.7 |
| Method | Backbone | R@1 | R@10 | R@20 | R@30 | R@40 | R@50 |
|---|---|---|---|---|---|---|---|
| Cont. w/ M | G | 89.4 | 97.5 | 98.3 | 98.6 | 98.7 | 98.9 |
| Cont. w/ M | B | 89.9 | 97.6 | 98.4 | 98.6 | 98.8 | 98.9 |
| Cont. w/ M | R | 91.3 | 97.8 | 98.4 | 98.7 | 99.0 | 99.1 |
| Method | Backbone | Small R@1 | R@5 | Medium R@1 | R@5 | Large R@1 | R@5 |
|---|---|---|---|---|---|---|---|
| Cont. w/ M | G | 94.0 | 96.3 | 93.2 | 95.4 | 92.5 | 95.5 |
| Cont. w/ M | B | 94.6 | 96.9 | 93.4 | 96.0 | 93.0 | 96.1 |
| Cont. w/ M | R | 94.7 | 96.8 | 93.7 | 95.8 | 93.0 | 95.8 |
We have presented a conceptually simple, easy-to-implement, and memory-efficient cross-batch mining mechanism for pair-based DML. In this work, we identify the “slow drift” phenomenon: the embeddings drift exceptionally slowly during the training process. We then propose a cross-batch memory (XBM) module that dynamically updates the embeddings of instances from recent mini-batches, which allows us to collect sufficient hard negative pairs across multiple mini-batches, or even over the whole dataset. Without bells and whistles, the proposed XBM can be directly integrated into a general pair-based DML framework, and significantly improves the performance of several existing pair-based methods on image retrieval. In particular, with our XBM, a basic contrastive loss easily surpasses state-of-the-art methods [36, 25, 2] by a large margin on three large-scale datasets.
This paves a new path for solving hard negative mining, a fundamental problem for various computer vision tasks. Furthermore, we hope the dynamic memory mechanism can be extended to improve a wide variety of machine learning tasks beyond DML, since “slow drift” is a general phenomenon that is not unique to DML.