A fundamental problem in computer vision is how to measure the similarity or distance between a pair of images. With a good distance metric for images, it is possible to perform clustering, retrieval, and classification, as well as to verify whether two images are from the same category. This problem has been studied for decades but has not been fully solved yet.
The emergence of deep image embeddings has greatly advanced the performance of distance-based tasks. With deep convolutional neural networks (CNNs), both low-level and high-level features are learned directly from raw images instead of being hand-crafted by human experts. Recent research in deep image embedding mainly focuses on designing better loss functions [22, 24], sampling policies [5, 30], and proxy-based methods.
Existing deep image embedding approaches generally consider images from one domain as input each time. For example, for fine-grained car image retrieval, car images with different brands and models are collected and a car specialist embedding is trained with all the car images. The resulting car embedding may work well for car images but not for images from a different domain, e.g. birds. For fine-grained bird image retrieval, the same process needs to be repeated.
Modern content-based image search engines need to deal with arbitrary query images uploaded by users. To achieve good retrieval results in practice, several specialist embedding models are maintained. When a query image is detected as belonging to one specialist’s domain, that specialist is used for retrieval. In addition, a default embedding model is needed for query images that do not belong to any of the trained specialists’ domains. Keeping one image embedding model per domain has three drawbacks. First, we must decide which embedding model to use for a given query image. If the car embedding model is used to extract the embedding vector of a bird image, the extracted vector cannot reflect the discriminative information of the bird image. Choosing the correct embedding model for query images is itself a difficult task, and it is impossible to achieve perfect precision or recall over unrestricted query images uploaded by users. Second, multiple embedding vectors need to be computed and stored for each gallery image, which consumes significant storage in large-scale image search. Third, it is impractical to distribute so many embedding models to mobile devices.
In this paper, we aim to train a universal embedding model that provides good performance on multiple domains. The universal embedding model should match the performance of multiple specialists on each specialist’s domain. It is nontrivial to achieve this goal. Simply fusing the training data from different domains and training with existing methods will not obtain performance as good as the specialists. Since some domains overfit much faster than others, it is impossible to choose a stopping point where the universal embedding achieves the best performance on all the domains, as illustrated in Fig. 2. Another problem is that it becomes more difficult to sample effective training pairs or triplets after fusing multiple domains. Images across different domains generally have larger differences than images within the same domain. For the triplet loss, a training triplet is likely to produce no gradient when the negative image and the anchor image come from different domains. These ineffective triplets form the majority of all triplets when fusing multiple domains, which makes it more difficult to see effective triplets during training. One may suspect that data imbalance is also a reason for the performance decrease in Fig. 2, but we do not believe this is the case for two reasons: 1) the proposed method in this paper also suffers from data imbalance but still achieves good performance; and 2) a data-balanced baseline added in the experiments also has the early overfitting issue, as shown in Fig. 2.
To solve the early overfitting issue, we propose to distill the knowledge learned by specialists into a universal embedding. By distilling the knowledge from properly trained specialists, the obtained universal embedding will not overfit on any domain. Existing embedding knowledge distillation methods [16, 20, 32] are based on the assumption that the distances between pairs of images measured by a teacher model are exactly the same as, or proportional to, the distances measured by a student model. This assumption does not hold in some cases when distilling the knowledge from specialists into a universal embedding. For example, we may want to unify a specialist trained on CUB200-2011 with another specialist trained on ImageNet, as shown in Fig. 1. We hope that the resulting universal embedding is roughly the same as the ImageNet specialist for the non-bird sub-space. But for the bird sub-space, we expect the universal embedding to learn how to distinguish fine-grained birds from the CUB200-2011 specialist. The bird images take only a small part of the embedding space of the ImageNet specialist, but a much larger part of the embedding space of the bird specialist. Due to the different bird sub-space sizes, the pairwise distances between bird images measured by the CUB200-2011 specialist need to shrink so that they can fit into the embedding space of the ImageNet specialist. Fig. 2 shows the kernel density estimation of the ratio between the bird image distances measured by the ImageNet specialist and the CUB200-2011 specialist.
To distill distances with the necessary shrinkage, we employ stochastic neighbor embedding (SNE). In SNE, the absolute pairwise distances are transformed into distributions, and the Kullback-Leibler divergence is used to match the distributions of the high-dimensional embedding and the low-dimensional embedding. By doing so, the distance shrinkage is properly handled.
In summary, our contributions are four-fold:
We identify an important but unexplored embedding task: how to train a universal image embedding model to match the performance of several specialists on each specialist’s domain.
We propose to use distillation to avoid the early overfitting of some domains when training the universal embedding model.
We distill the knowledge in embedding models using SNE, which properly handles distance shrinkage.
We validate the effectiveness of the proposed universal embedding method in experiments using different combinations of several public datasets.
2 Related Work
Deep Metric Learning Recent deep metric learning research mainly falls into three directions: loss functions, sampling policies, and learning with proxies. Contrastive loss aims to pull similar sample pairs closer to each other and push dissimilar sample pairs farther away. However, directly minimizing the absolute distances of similar pairs to zero may be too restrictive. Triplet loss was proposed to solve this issue: it enforces the relative distance order between the anchor-positive and anchor-negative pairs. Schroff et al. proposed semi-hard sampling, which selects the closest negative that is still farther from the anchor than the positive. Wu et al. proposed to sample negatives with probability reciprocal to the anchor-negative distance, and showed that this sampling policy leads to a more balanced distribution of anchor-negative distances. Instead of sampling positives and negatives from a large pool of candidates, Movshovitz et al. proposed to use a proxy to represent a class of samples. The training speed and model performance were both improved by introducing proxies.
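As a rough sketch of the semi-hard selection rule described above (our own illustrative code with hypothetical function names, not the implementation of Schroff et al.), the condition is simply d(a, n) > d(a, p), and among the qualifying negatives the closest one is picked:

```python
import math

def semi_hard_negative(anchor, positive, negatives):
    """Semi-hard mining: among negatives farther from the anchor than the
    positive, return the closest one (None if no negative qualifies)."""
    d_ap = math.dist(anchor, positive)
    harder = [n for n in negatives if math.dist(anchor, n) > d_ap]
    return min(harder, key=lambda n: math.dist(anchor, n)) if harder else None
```

Negatives closer than the positive are skipped because they tend to produce noisy gradients; negatives that are far away are uninformative, which is why the closest qualifying one is preferred.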
Existing research on deep metric learning mainly focuses on training a specialist embedding model for one domain of images. We are interested in training a universal embedding model having good performance on multiple domains.
Knowledge Distillation Initially, Bucilua et al. compressed large ensemble models into smaller and faster models. The ensemble was used to label a large unlabeled dataset, and the small neural network was then trained on the ensemble-labeled data. Hinton et al. improved the compression method by introducing a temperature to reduce the effect of large negative logits. The probabilistic knowledge transfer (PKT) method of Passalis and Tefas [21] is very similar to the method proposed in this paper. The main differences between [21] and this paper are that we solve a different task and handle the embedding distance shrinkage problem during distillation.
Recently, several methods [16, 20, 32] tried to apply distillation to image embedding. They first trained a large teacher embedding model using existing methods and then distilled the knowledge learned by the teacher to a small student model. Instead of distilling the learned embedding vectors, the learned distances between embedding vectors are distilled. Different from [16, 20, 32], we aim to train a universal embedding in this paper. SNE  is used for embedding knowledge distillation, which can distill learned distances with shrinkage.
Unifying Models Gao et al. trained a classification model with an extremely large number of classes by distilling the knowledge from a hierarchy of smaller models. Vongkulbhisal et al. introduced a new task called Unifying Heterogeneous Classifiers (UHC), motivated by privacy considerations. The task solved in this paper is similar to UHC in that both aim to unify several models into one; however, this paper unifies image embedding models, while UHC unifies classification models.
3 Embedding Distillation
We learn the universal embedding by distilling the knowledge from the specialist embeddings of different domains. In this section, we introduce the distillation techniques used in the paper. We first briefly review relational knowledge distillation (RKD) [20, 32] and then introduce the knowledge distillation method based on SNE .
3.1 Relational Knowledge Distillation
In RKD, there is a teacher model and a relatively smaller student model, both of which are typically deep neural networks. The teacher model has better performance than a student model trained on the same training data without distillation. During distillation, the student model is trained to mimic the teacher model so that its performance can approach that of the teacher. As a result, the student model obtained by distillation will be better than another student model trained without distillation.
Let $x_i$, $x_j$, and $x_k$ denote three different images, and let $f_t$ and $f_s$ represent the teacher model and the student model, respectively. For simplicity, we also write the embedding vectors of $x_i$ computed by the teacher model and the student model as $t_i = f_t(x_i)$ and $s_i = f_s(x_i)$, respectively. The relational distillation objective is to let the student mimic the learned distance between two embedding vectors from the teacher, which is given by

$$\mathcal{L}_{\mathrm{RKD}} = \sum_{(x_i, x_j)} \ell\left(\frac{d_t(x_i, x_j)}{\mu_t}, \frac{d_s(x_i, x_j)}{\mu_s}\right),$$

where $\ell$ could be the $\ell_1$-distance loss or the Huber loss. $d_t(x_i, x_j) = \lVert t_i - t_j \rVert_2$ and $d_s(x_i, x_j) = \lVert s_i - s_j \rVert_2$ are the pairwise distances between two images measured by the teacher model and the student model, respectively. $\mu_t$ and $\mu_s$ are the mean distances between images in a batch. A basic assumption behind this mean distance normalization is that $d_t$ should be proportional to $d_s$. For angle-wise distillation, similar triangles are constructed, which is also based on the proportional assumption. In summary, existing relational knowledge distillation methods are designed for cases where the absolute distances between a pair of images measured by the teacher model and the student model are the same or proportional.
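For concreteness, the distance branch of RKD can be sketched in a few lines of pure Python (an illustrative sketch with our own function names, not the authors' implementation):

```python
import math

def pdist(embs):
    """All pairwise Euclidean distances within a batch of embeddings."""
    return [math.dist(embs[i], embs[j])
            for i in range(len(embs)) for j in range(i + 1, len(embs))]

def huber(x, delta=1.0):
    """Huber loss on the residual x."""
    return 0.5 * x * x if abs(x) <= delta else delta * (abs(x) - 0.5 * delta)

def rkd_distance_loss(teacher_embs, student_embs):
    """Match mean-normalized pairwise distances of the student to the teacher."""
    dt, ds = pdist(teacher_embs), pdist(student_embs)
    mu_t, mu_s = sum(dt) / len(dt), sum(ds) / len(ds)
    return sum(huber(t / mu_t - s / mu_s) for t, s in zip(dt, ds)) / len(dt)
```

Note that the loss vanishes whenever the student distances are an exact rescaling of the teacher distances, which is precisely the proportional assumption discussed above.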
3.2 Stochastic Neighbor Distillation
As stated in the introduction, when unifying a fine-grained dataset (e.g. CUB200-2011) with a large dataset (e.g. ImageNet), the embedding sub-space occupied by the CUB200-2011 images in the universal embedding will shrink compared with the sub-space they occupy in a CUB200-2011 specialist. The shrinkage is non-uniform, as the distance ratio is not a constant. This breaks the proportional assumption of RKD.
To address this issue, inspired by SNE and PKT, we distill knowledge through distance distributions instead of absolute distances. The proposed method, named stochastic neighbor distillation (SND), is shown in Fig. 3.
We use the SNE objective to train a new embedding model instead of just computing vectors for certain objects. The high-dimensional and low-dimensional vectors in SNE can be viewed as the outputs of the teacher model and the student model, respectively. Considering the image $x_i$ with feature $t_i$, the probability $p_{j|i}$ that it would pick $x_j$ with feature $t_j$ as its neighbor, measured by the teacher model, is

$$p_{j|i} = \frac{\exp\left(-\lVert t_i - t_j \rVert^2 / 2\sigma_i^2\right)}{\sum_{k \neq i} \exp\left(-\lVert t_i - t_k \rVert^2 / 2\sigma_i^2\right)}.$$

$\sigma_i^2$ is the variance of the Gaussian kernel used in the $i$-th distance distribution for the teacher embedding, denoted by $P_i$. We can set the value of $\sigma_i$ by hand or use binary search to determine it so that the perplexity of the distance distribution equals a predefined value. Similarly, we can define the probability $q_{j|i}$ of $x_i$ with feature $s_i$ picking $x_j$ with feature $s_j$ as its neighbor, measured by the student model, as

$$q_{j|i} = \frac{\exp\left(-\lVert s_i - s_j \rVert^2 / 2\hat{\sigma}_i^2\right)}{\sum_{k \neq i} \exp\left(-\lVert s_i - s_k \rVert^2 / 2\hat{\sigma}_i^2\right)},$$

where $\hat{\sigma}_i^2$ is the variance of the Gaussian kernel used in the $i$-th distance distribution for the student embedding, denoted by $Q_i$. The distillation objective is given by

$$\mathcal{L}_{\mathrm{SND}} = \sum_i \mathrm{KL}\left(P_i \,\Vert\, Q_i\right) = \sum_i \sum_{j \neq i} p_{j|i} \log \frac{p_{j|i}}{q_{j|i}}.$$
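The binary search mentioned above works because perplexity grows monotonically with the kernel width. A pure-Python sketch (our own function names, not the paper's implementation) might look like:

```python
import math

def perplexity(sq_dists, sigma):
    """Perplexity 2^H(P_i) of the Gaussian neighbor distribution built from
    the squared distances of point i to all other points."""
    w = [math.exp(-d / (2.0 * sigma ** 2)) for d in sq_dists]
    z = sum(w)
    p = [wi / z for wi in w]
    return 2.0 ** -sum(pi * math.log2(pi) for pi in p if pi > 0.0)

def find_sigma(sq_dists, target_perp, lo=1e-2, hi=1e3, iters=60):
    """Bisection for the kernel width whose perplexity hits target_perp."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if perplexity(sq_dists, mid) < target_perp:
            lo = mid  # distribution too peaked: widen the kernel
        else:
            hi = mid
    return (lo + hi) / 2.0
```

A small width concentrates all probability on the nearest neighbor (perplexity near 1), while a large width flattens the distribution toward uniform (perplexity near the number of neighbors).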
To better understand how this loss behaves, the gradient of the loss with respect to $s_i$ is given in the SNE paper as

$$\frac{\partial \mathcal{L}_{\mathrm{SND}}}{\partial s_i} = 2 \sum_j \left(s_i - s_j\right)\left(p_{j|i} - q_{j|i} + p_{i|j} - q_{i|j}\right).$$

From the gradient, we can see that $s_i$ is either pulled towards or pushed away from $s_j$ depending on whether $x_j$ is observed to be a neighbor of $x_i$ more or less often than expected. SND inherits two nice properties from SNE: 1) the absolute distances between images are turned into probabilities, which properly handles the distance shrinkage; and 2) by tuning the variance of the Gaussian, it is possible to control how many neighbors are considered during the distillation. If small variance values are used, SND preserves the local structure of the data manifold. When the variances are large enough, SNE is equivalent to minimizing the mismatch between squared distances in the two spaces. Therefore, SND can be viewed as a generalization of RKD.
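A minimal pure-Python sketch of the SND loss (our own illustrative code, not the paper's TensorFlow implementation) makes the scale handling concrete: if the student embedding is a shrunken copy of the teacher and the kernel width shrinks accordingly, the neighbor distributions match and the loss is zero, so absolute scale is factored out.

```python
import math

def neighbor_probs(embs, sigma):
    """Row-stochastic matrix p[i][j]: probability that image i picks image j
    as its neighbor under a Gaussian kernel of width sigma."""
    n = len(embs)
    probs = []
    for i in range(n):
        w = [0.0 if j == i else
             math.exp(-math.dist(embs[i], embs[j]) ** 2 / (2.0 * sigma ** 2))
             for j in range(n)]
        z = sum(w)
        probs.append([wi / z for wi in w])
    return probs

def snd_loss(teacher_embs, student_embs, sigma_t=1.0, sigma_s=1.0):
    """Sum over i of KL(P_i || Q_i) between the teacher and student
    neighbor distributions."""
    p = neighbor_probs(teacher_embs, sigma_t)
    q = neighbor_probs(student_embs, sigma_s)
    n = len(p)
    return sum(p[i][j] * math.log(p[i][j] / q[i][j])
               for i in range(n) for j in range(n) if j != i)
```

With mismatched kernel widths the same shrunken student incurs a positive loss, which is what the gradient above then corrects.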
4 Universal Embedding Training
There are two cases to consider: 1) unifying mutually exclusive domains; and 2) unifying a coarse-grained domain with its fine-grained sub-domains, e.g. ImageNet and CUB200-2011. A naive way to train the universal embedding is to fuse the training images from multiple domains together, which also requires determining the label correspondence across domains. Fusing the data and training with single-domain methods cannot obtain a good universal embedding, because some domains overfit much sooner than others, as shown in the introduction.
We propose to distill the knowledge from properly trained specialist embedding models into the universal embedding. Let $D_1, \dots, D_N$ denote the domains that need to be unified, where $N$ is the number of domains. For each domain $D_i$, we first train a specialist embedding model using existing single-domain methods. All the specialist embedding models are teachers to the universal embedding. In SNE, the distance distribution is estimated over the whole dataset, which is computationally infeasible for large datasets like ImageNet. Therefore, we sample a mini-batch and use all the distances inside the mini-batch to approximate the distance distribution.
As mini-batches are used, we must design a sampling policy over the domains. Each specialist embedding model is trained with images from only one domain, so a specialist can only encode the images in that domain; as a result, we are unable to compute distances between images across domains. Based on this fact, we let each mini-batch contain images from one domain only, which we call domain-specific sampling. The frequency of choosing a domain to form a mini-batch is proportional to the number of images in that domain. After determining which domain to use, we follow the convention of choosing the images inside a mini-batch: we randomly select $P$ classes and sample $K$ images for each class to form a mini-batch of size $P \times K$. With domain-specific mini-batch sampling, we minimize the SND loss between one specialist and the universal embedding in each training iteration.
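The two-level sampling described above can be sketched as follows (an illustrative sketch with our own function names; $P$ and $K$ are the class and per-class counts from the text):

```python
import random

def domain_specific_batches(domain_sizes, num_batches, seed=0):
    """Choose, for every mini-batch, a single domain with probability
    proportional to the number of training images in that domain."""
    rng = random.Random(seed)
    domains = list(domain_sizes)
    weights = [domain_sizes[d] for d in domains]
    return [rng.choices(domains, weights=weights)[0] for _ in range(num_batches)]

def sample_batch(images_by_class, num_classes, images_per_class, seed=0):
    """Within the chosen domain, pick P classes and K images per class
    to form a mini-batch of size P * K."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(images_by_class), num_classes)
    return [img for c in classes
            for img in rng.sample(sorted(images_by_class[c]), images_per_class)]
```

Because every batch is single-domain, the SND loss in each iteration only ever compares the universal embedding against one teacher.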
Without explicitly informing the universal embedding model that images from different domains are dissimilar, the universal embedding model may mix different domains together. In the experiment, we show that the proposed universal embedding model does not mix different domains. One possible reason may be that we are using a CNN which is pre-trained for ImageNet classification. The ResNet  features of images from different domains are already well separated. During the embedding training process, the objective is to make similar images have closer embeddings. As a result, the trained deep embedding model has the property of mapping similar images to similar sub-spaces. This property can act as a regularization to avoid mapping dissimilar images to the same embedding.
5 Experiments
We evaluate the proposed method by training a universal embedding model for multiple domains. Each of ImageNet, CUB200-2011, CARS196, In-shop Clothes, Stanford Online Products, and PKU VehicleID can be viewed as a domain. We first describe the experimental settings. Then, we show that the universal embedding does not mix different domains. In the end, the unification results are given.
ImageNet here refers to ILSVRC2012, which is widely used for object recognition. In this paper, we use ImageNet for the image embedding task because of its variety of categories. The training images are used for training and the validation images are used for testing.
CUB200-2011 contains 11,788 images of 200 bird categories. The first 100 categories (5,864 images) are used for training and the remaining 100 categories (5,924 images) are used for testing. The bounding box annotations are not used for training or testing.
CARS196 consists of 16,185 images of 196 classes of cars. The first 98 categories (8,054 images) are used for training and the remaining 98 categories (8,131 images) are used for testing. The bounding box annotations are not used for training or testing.
In-shop Clothes Retrieval (In-shop) contains 52,712 clothing images with large pose and scale variations. We follow the standard split: 3,997 categories (25,882 images) are used for training and 3,985 categories (26,830 images) are used for testing. The testing set is further divided into a gallery set (12,612 images) and a probe set (14,218 images).
Stanford Online Products (SOP) has 120,053 images of 22,634 classes of online products. We follow the standard split, where 11,318 categories (59,551 images) are used for training and the remaining 11,316 categories (60,502 images) are used for testing.
PKU VehicleID (VID) is a large-scale dataset for vehicle re-identification. It contains 221,763 images of 26,267 vehicles captured by surveillance cameras. We follow the standard split: 13,134 vehicles are used for training. There are three test sets of different sizes; we only use the large test set, which contains 20,038 images of 2,400 vehicles.
In all the experiments, we use ResNet50 pre-trained for the ImageNet classification task as the backbone CNN. The training images are resized and then randomly cropped, and random horizontal flipping is used for data augmentation. Central cropping is used for the test images. We add a fully-connected layer to project the 2048-dim ResNet50 output into a 128-dim embedding vector and further normalize the embedding vector to unit length. To train a specialist, we use either Triplet semi-hard or Multi-Similarity. The Adam optimizer is used for training. Since there is no validation set in the previous data split setting, we tune the kernel variance by self-distillation on one dataset and then use the obtained value for the experiments on the other datasets. Recall@K is reported for performance comparison. The proposed method is implemented in TensorFlow, and Nvidia GTX 1080Ti GPUs are used to train the embedding models.
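The embedding head described above can be sketched in plain Python (a dimension-agnostic illustration with a hypothetical function name, not the TensorFlow implementation):

```python
import math

def embedding_head(feature, weights, bias):
    """Project a backbone feature (e.g. the 2048-dim ResNet50 output) to a
    low-dimensional embedding with one fully-connected layer, then
    L2-normalize it to unit length."""
    out = [sum(w * x for w, x in zip(row, feature)) + b
           for row, b in zip(weights, bias)]
    norm = math.sqrt(sum(x * x for x in out))
    return [x / norm for x in out]
```

Unit-length normalization makes Euclidean distance a monotone function of cosine similarity, which keeps pairwise distances bounded during training.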
We design four baseline methods for comparison. In the “Concatenation” baseline, we concatenate the embedding vectors extracted by the trained specialists and then use PCA to project the concatenated embedding into 128 dimensions. The remaining three baselines can be summarized as fusing the training data and then training with single-domain methods. The difference between these three baselines lies in the sampling method: 1) naïve sampling; 2) domain-specific (DS) sampling introduced in Sec. 4; and 3) domain-balanced (BAL) sampling. In naïve sampling, the training images from different datasets are mixed in each training batch. BAL sampling is based on DS sampling, but the datasets are sampled with equal probability regardless of the number of training images in each dataset.
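The “Concatenation” baseline can be sketched with SVD-based PCA as follows (the function name and the use of NumPy are our assumptions; rows across the per-specialist matrices must refer to the same images):

```python
import numpy as np

def concatenation_baseline(specialist_embeddings, out_dim=128):
    """'Concatenation' baseline: stack each specialist's embedding of the same
    image side by side, then PCA-project the concatenation to out_dim."""
    x = np.concatenate(specialist_embeddings, axis=1)  # (n_images, sum of dims)
    x = x - x.mean(axis=0)                             # center before PCA
    _, _, vt = np.linalg.svd(x, full_matrices=False)   # rows of vt = principal axes
    return x @ vt[:out_dim].T                          # (n_images, out_dim)
```

Since singular values are returned in descending order, the projected dimensions are ordered by explained variance.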
5.1 Visualization of the Embedding
In this section, we show that the proposed universal embedding model does not mix different domains together, even though domain-specific sampling is used. Instead, it preserves the separation of different domains while matching the performance of several specialists on each specialist’s domain with a single embedding model. We first visualize both the specialist embedding and the universal embedding with t-SNE and then show the principal component analysis results of the embedding vectors.
t-SNE Visualization In Fig. 4, we show four t-SNE visualizations of the CUB200-2011, CARS196, and In-shop embeddings. The perplexity is set to a fixed value when generating the figures. In Fig. 4(a), we visualize the embedding vectors generated by a pre-trained ResNet50. We can see that the ResNet50 features of the images from different datasets are already well separated. In Fig. 4(b), we visualize the embedding vectors generated by a randomly initialized model: the weights in the fully-connected layer are drawn from a Gaussian distribution, while the ResNet50 layers are pre-trained on ImageNet. It can be observed that the fully-connected layer preserves the relationships of the images. In Fig. 4(c), we train a specialist embedding for each dataset and use the corresponding specialist to extract image embeddings. The three specialists are trained independently, meaning that each specialist is not aware of the other two. We put all the image embeddings from the three datasets together and compute the t-SNE visualization. There are still clear boundaries between the datasets: the specialists project images from different domains into different sub-spaces. In Fig. 4(d), we visualize the embeddings computed by a universal embedding trained with the proposed method. It can be observed that the images from the three datasets are mapped to different sub-spaces by the universal embedding. From these results, we infer that each dataset only takes a certain sub-space of the embedding space, and the whole embedding space is large enough to place all three datasets in different sub-spaces.
In some cases of unifying mutually exclusive domains, the embeddings learned by different specialists may not be as well separated as in Fig. 4(c). Further study is needed to know how well distillation works in such cases, and we leave that to future work. When unifying a coarse-grained domain (ImageNet) with a fine-grained sub-domain (CUB200-2011), the labels in the coarse-grained domain can guide the universal model to separate the sub-domain (bird) images from other (non-bird) images.
Principal component analysis We also visualize the eigenvalues in the PCA of the embeddings of these three datasets computed by three models, i.e. the random model, the specialists, and the universal embedding. From Fig. 5, we can see that most of the eigenvalues in the left two figures are close to zero, meaning that the CUB200-2011 and CARS196 datasets lie in low-dimensional sub-spaces. This may be because there are many bird and car categories in ImageNet: after pre-training, ResNet50 maps bird images and car images into low-dimensional sub-spaces. For In-shop, more eigenvalues are substantially larger than zero, which may be because there are few clothing-related categories in ImageNet. The second observation from Fig. 5 is that the eigenvalues of the embeddings computed by the specialists and the universal embedding are almost the same. This is in accordance with the conclusion from the t-SNE visualization that the whole embedding space is large enough to place all three datasets in different sub-spaces without shrinking anything.
5.2 Unifying Mutually Exclusive Domains
In this section, we train two universal models, one by unifying the CUB200-2011, CARS196, and In-shop specialists, and the other by unifying the In-shop, SOP, and VID specialists.
Evaluation on single domain We first report the performance of the specialist embedding models obtained by Triplet semi-hard and the performance of the universal embeddings on each dataset separately. The results are shown in Table 1. The specialist row shows the performance of the specialist embedding models, each in its specific domain. The “Ours_RKD” and “Ours_SND” rows show the performance of the universal embedding models trained with RKD and SND by distilling the three specialist models. Note that the specialist row shows the performance of three different specialist models in their specific domains, while the “Ours_RKD” and “Ours_SND” rows show the performance of one universal embedding in three different domains. To achieve the performance in the specialist row in a real image search scenario, an additional step of determining which specialist to use for a user-uploaded query image is needed.
That “Ours_RKD” achieves better performance than “Ours_SND” may be due to two reasons. First, as shown in Sec. 5.1, the whole embedding space is large enough to place all three datasets in different areas without any shrinkage. Second, we use fixed variance values instead of binary search. It is interesting to find that the performance of “Ours_RKD” is even slightly better than the specialists. This may be because the unifying process is similar to three independent self-distillations, which can produce a better student model than the teacher model.
Evaluation on fused domains Instead of evaluating each domain separately, we consider a harder problem by fusing the evaluation sets of all the unified domains together to simulate a real-world image search system handling a diversity of domains. To be specific, when computing the recall for CUB200-2011, the CUB200-2011 evaluation set is used as the probe and all the evaluation images from the three domains (CUB200-2011, CARS196, In-shop) are used as the gallery. The In-shop evaluation is slightly different from the other datasets because its evaluation images are already divided into probe and gallery sets. For In-shop, we keep the probe set unchanged and fuse the gallery set with the evaluation images of the other two datasets.
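The fused-gallery metric is ordinary Recall@K computed against the union of all domains' evaluation images; a brute-force sketch (our own function name, for illustration only) is:

```python
import math

def recall_at_k(probes, probe_labels, gallery, gallery_labels, k=1):
    """Recall@K: fraction of probe images whose K nearest gallery images
    contain at least one image of the same class."""
    hits = 0
    for q, ql in zip(probes, probe_labels):
        ranked = sorted(range(len(gallery)),
                        key=lambda j: math.dist(q, gallery[j]))
        hits += any(gallery_labels[j] == ql for j in ranked[:k])
    return hits / len(probes)
```

A specialist that maps out-of-domain gallery images near its own domain will lose recall under this metric, which is exactly the cross-domain confusion being measured.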
The results of unifying CUB200-2011, CARS196 and In-shop specialists are shown in Table 2, and the results of unifying In-shop, SOP, and VID specialists are shown in Table 3. In both tables, we first report the performance of all the specialists under the fused evaluation setting and then provide the performance of embeddings trained with the four baseline methods. Finally, we report the performance of training universal embedding using distillation methods with the specialists.
First, we compare the performance before (Table 1) and after (Table 2) fusing the evaluation sets. The Recall@1 of the In-shop specialist drops when evaluated on the fused datasets: the specialist embedding models make some mistakes on cross-domain images, but the confusion is not serious. For “Ours_RKD” and “Ours_SND”, the performance drop is small, meaning the universal embedding model is able to distinguish images from different domains. The “Concatenation” baseline achieves poor results because two-thirds of the embedding dimensions are out of domain, which distorts the distances. The performance of “Triplet” trained with the fused dataset is much worse than the specialists in their specific domains, which may be because of the early overfitting and ineffective triplet problems. Domain-specific sampling solves the ineffective triplet problem, so “Triplet+DS” is much better than “Triplet”. Comparing “Triplet+DS” and “Triplet+BAL”, we find that the domains with fewer images, i.e. CUB200-2011 and CARS196, show improved results, but the domain with the most images, i.e. In-shop, shows worse results. The results of the distillation methods are comparable to or better than the specialists in their specific domains, showing the effectiveness of training a universal embedding using distillation.
To show that the proposed method can deal with specialists trained with other methods, we train CUB200-2011, CARS196, and In-shop specialists using one of the state-of-the-art methods, Multi-Similarity, and then distill the specialists into a universal embedding. The performance of the unified models is listed in Table 4. The findings are very similar to what we obtain by distilling Triplet semi-hard models.
5.3 Unifying ImageNet with Other Domains
We also try to unify ImageNet with CUB200-2011 or CARS196, which is quite different from the cases in Sec. 5.2. The datasets used in Sec. 5.2 are much smaller than ImageNet and mutually exclusive, whereas ImageNet contains 1,000 categories over a broad range and has a certain overlap with the small datasets. The total number of training images in ImageNet is 218 and 159 times larger than that of CUB200-2011 and CARS196, respectively. If we simply sample mini-batches proportional to the number of training images, it is very rare to see images from CUB200-2011 or CARS196, so we increase the probability of choosing the training images from CUB200-2011 or CARS196 by ten times. The resulting sampling policy can be viewed as a combination of domain-specific and domain-balanced sampling, and it is used for all of “Triplet”, “Ours_RKD”, and “Ours_SND” in this section. Since no distance shrinkage or early overfitting happens on the ImageNet dataset, it is not necessary to use SND for ImageNet; we directly use the triplet loss instead of SND on the ImageNet images during universal embedding training. Because there is category overlap between ImageNet and the small datasets, we do not fuse the evaluation sets, and the naïve sampling baseline cannot be used. The unfused evaluation sets are adequate for evaluating the universal embedding in this section because the ImageNet evaluation set can show whether bird/car images are mixed with other images, and the CUB200-2011/CARS196 evaluation sets can show the fine-grained retrieval performance.
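The combined sampling policy above amounts to size-proportional weights with the small fine-grained domains multiplied by a constant factor; a one-function sketch (hypothetical name, for illustration):

```python
def combined_sampling_weights(domain_sizes, boosted_domains, boost=10):
    """Mini-batch domain weights: proportional to dataset size, with the small
    fine-grained domains boosted by a constant factor (a mix of
    domain-specific and domain-balanced sampling)."""
    return {d: n * (boost if d in boosted_domains else 1)
            for d, n in domain_sizes.items()}
```

Even with a tenfold boost, a domain hundreds of times smaller than ImageNet remains a minority of the batches, so the boost trades a little ImageNet coverage for far more frequent fine-grained updates.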
The results of the unification are reported in Table 5. Compared with the specialists’ performance in Table 1, we find that the ImageNet specialist performs quite badly on CUB200-2011 and CARS196. As the bird and car categories in ImageNet are at a coarser granularity than in CUB200-2011 and CARS196, the ImageNet specialist does not know how to distinguish fine-grained birds or cars. If we train a universal embedding using “Triplet” on the fused training data, the performance on CUB200-2011 and CARS196 is much better than that of the ImageNet specialist. Although twice as much computation is used, the “Concatenation” baseline achieves inferior performance. Again, the performance of the universal embedding trained with distillation is comparable to the performance of the specialists. Different from Sec. 5.2, “Ours_SND” outperforms “Ours_RKD” in Table 5, showing that “Ours_SND” handles distance shrinkage better than “Ours_RKD”.
6 Conclusion
In this paper, we have studied the problem of training a universal image embedding model that performs well on multiple domains. This problem is very important for large-scale image retrieval but has rarely been studied before. Fusing the training data from all the domains and training with single-domain methods cannot solve this problem because some domains overfit early. Instead, we propose to distill the knowledge learned by properly trained specialist models into the desired universal embedding model. When unifying a coarse-grained domain with a fine-grained domain, the learned knowledge (distances between images) cannot be distilled directly because the distances are at different scales. Therefore, we develop a novel embedding knowledge distillation method based on SNE. The experimental results of unifying several combinations of public datasets have shown the effectiveness of the proposed method.
- (2015) TensorFlow: large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
- (2006) Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 535–541.
- (2005) Learning a similarity metric discriminatively, with application to face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 539–546.
- (2009) ImageNet: a large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
- (2019) Deep embedding learning with discriminative sampling policy. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4964–4973.
- (2018) Born again neural networks. arXiv preprint arXiv:1805.04770.
- (2017) Knowledge concentration: learning 100K object classifiers in a single CNN. arXiv preprint arXiv:1711.07607.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (2003) Stochastic neighbor embedding. In Advances in Neural Information Processing Systems, pp. 857–864.
- (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
- (1992) Robust estimation of a location parameter. In Breakthroughs in Statistics, pp. 492–518.
- (2013) 3D object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition, Sydney, Australia.
- (1951) On information and sufficiency. The Annals of Mathematical Statistics 22 (1), pp. 79–86.
- (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
- (2016) Deep relative distance learning: tell the difference between similar vehicles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2167–2175.
- (2019) Knowledge distillation via instance relationship graph. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7096–7104.
- (2016) DeepFashion: powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- (2017) No fuss distance metric learning using proxies. In Proceedings of the IEEE International Conference on Computer Vision, pp. 360–368.
- (2016) Deep metric learning via lifted structured feature embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4004–4012.
- (2019) Relational knowledge distillation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3967–3976.
- (2018) Learning deep representations with probabilistic knowledge transfer. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 268–284.
- (2019) SoftTriple loss: deep metric learning without triplet sampling. arXiv preprint arXiv:1909.05235.
- (2015) FaceNet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823.
- (2016) Improved deep metric learning with multi-class N-pair loss objective. In Advances in Neural Information Processing Systems, pp. 1857–1865.
- (2019) Unifying heterogeneous classifiers with distillation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3175–3184.
- (2011) The Caltech-UCSD Birds-200-2011 dataset.
- (2019) Multi-similarity loss with general pair weighting for deep metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5022–5030.
- (2006) Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, pp. 1473–1480.
- (2009) Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research 10 (Feb), pp. 207–244.
- (2017) Sampling matters in deep embedding learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2840–2848.
- (2003) Distance metric learning with application to clustering with side-information. In Advances in Neural Information Processing Systems, pp. 521–528.
- (2019) Learning metrics from teachers: compact networks for image embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2907–2916.