Hard-sample Guided Hybrid Contrast Learning for Unsupervised Person Re-Identification

09/25/2021 ∙ by Zheng Hu, et al.

Unsupervised person re-identification (Re-ID) is a promising and very challenging research problem in computer vision. Learning robust and discriminative features from unlabeled data is of central importance to Re-ID. Recently, more attention has been paid to unsupervised Re-ID algorithms based on clustered pseudo-labels. However, previous approaches do not fully exploit the information of hard samples, simply using cluster centroids or all instances for contrastive learning. In this paper, we propose a Hard-sample Guided Hybrid Contrast Learning (HHCL) approach that combines a cluster-level loss with an instance-level loss for unsupervised person Re-ID. Our approach applies a cluster centroid contrastive loss to ensure that the network is updated in a more stable way. Meanwhile, the introduction of a hard instance contrastive loss further mines discriminative information. Extensive experiments on two popular large-scale Re-ID benchmarks demonstrate that our HHCL outperforms previous state-of-the-art methods and significantly improves the performance of unsupervised person Re-ID. The code of our work will be available at https://github.com/bupt-ai-cz/HHCL-ReID.







1 Introduction

Person Re-ID aims to identify the same person across different camera views. It has been used extensively in large-scale surveillance systems. Though great progress has been made on supervised person Re-ID tasks, the reliance on extensive manual annotation greatly constrains its application. In contrast, collecting pedestrian images without annotation is much cheaper and easier. Thus, increasing research attention has been drawn to unsupervised person Re-ID, which learns directly from unlabeled data, is more scalable, and has more potential for real-world deployment.

Figure 1: Hard-sample guided hybrid contrast learning. Based on the features saved in the memory bank, we calculate the cluster-level contrastive loss and the hard instance-level contrastive loss, respectively. (a) The cluster centroid guides the optimization of features, making features belonging to the same cluster more compact and strengthening identity similarity. (b) The hard instance contrastive loss compares the input sample with the hard positive instance belonging to the same cluster and the hard negative instances from other clusters, thereby learning to distinguish easily confused samples. (Best viewed in color.)

Existing unsupervised person Re-ID methods can be broadly divided into two categories: unsupervised domain adaptation Re-ID methods and purely unsupervised Re-ID methods. Methods of the first type are based on unsupervised domain adaptation (UDA), where the source domain dataset is fully annotated and the target domain dataset is unlabeled. Most UDA-based methods address the task by learning knowledge from the labeled source domain dataset and transferring it to the unlabeled target domain dataset [32, 1, 8]. The second type is pseudo-label-based fully unsupervised learning, which learns directly from unlabeled data in the target domain and uses representation features to estimate pseudo labels [23, 29, 9]. This setting requires no annotations at all and is more challenging. Existing fully unsupervised Re-ID works mainly aim to exploit pseudo labels from clustering and apply contrastive learning, which has shown excellent performance in unsupervised representation learning [27, 3, 11].

The performance of unsupervised methods relies on feature representation learning. A recent state-of-the-art method [11] uses a memory bank [28] to store all instance features, treats each image as an individual class, and learns the representation by matching features of the same instance across different augmented views. However, in Re-ID datasets each class usually contains more than one positive instance. The SpCL method [9] alleviates this problem by matching an instance with the centroid of its multiple positives. To further ensure that each positive converges to its centroid at a uniform pace, cluster contrast learning [4] updates the memory dictionary and computes the contrastive loss at the cluster level.

Although cluster contrast learning [4] has achieved impressive performance, applying contrastive learning only at the cluster level ignores the relationships between hard instances at the instance level. In fact, previous works in deep metric learning have focused on hard sample mining to place more emphasis on the hard samples inside a class. These methods aim to distinguish samples from different categories and bring samples of the same category closer together. However, they usually adopt a mini-batch-based deep metric loss, such as the hard triplet loss [13] or the multi-similarity loss [25], and such losses utilize only a small portion of the data without considering information from all categories.

To learn discriminative feature representations for Re-ID and to address the inadequate exploitation of hard-sample information, this paper introduces a novel hard-sample mining strategy and proposes a simple and effective method of hard-sample guided hybrid contrast learning for unsupervised Re-ID. In summary, this paper makes the following contributions:

  • We propose a hybrid contrast learning framework for unsupervised person Re-ID which combines both cluster-level contrastive loss and instance-level contrastive loss.

  • We introduce a novel hard instance mining strategy, which is based on an instance memory bank, to explore more discriminative information by selecting global hard samples online for each input instance.

  • Extensive experiments on two popular large-scale Re-ID benchmarks demonstrate that our HHCL outperforms previous state-of-the-art methods and significantly improves the performance of unsupervised person Re-ID.

2 Related Works

2.1 Unsupervised Re-ID

The domain adaptation strategy has been widely used for unsupervised person Re-ID tasks [1, 8]. Transfer-based methods follow the UDA strategy, which either uses a model pre-trained on the labeled source domain dataset as the initialization for the target domain, or uses style transfer to convert labeled images into the target domain. However, the UDA approach becomes very challenging when the categories in the two domains differ substantially: if the domains are not similar enough, it is hard to obtain high-quality pseudo labels, and the resulting label noise can hurt performance.

More recently, researchers have paid more attention to pseudo-label-based methods that do not require source domain data. The pseudo labels can be generated by a pre-trained classifier or by a feature-similarity-based clustering algorithm such as K-means or DBSCAN [6]. The pseudo labels are then used to fine-tune the Re-ID model in a supervised manner. HCT [31] combined hierarchical clustering with a hard-batch triplet loss to improve the quality of pseudo labels. MMCL [22] formulated unsupervised person Re-ID as a multi-label classification task to progressively seek true labels. SpCL [9] adopted a self-paced contrastive learning strategy to form more reliable clusters. CACL [16] designed an asymmetric contrastive learning framework to help a siamese network effectively mine invariance in feature learning.

2.2 Mining Schemes

Sampling is a fundamental operation for reducing bias during model learning. Random sampling is one of the most commonly used approaches, and different sampling methods have been proposed to facilitate the learning of various loss functions. For the person Re-ID task, identity sampling is widely used during training, such as pair-wise sampling for the contrastive loss and semi-hard negative mining for the triplet loss.

Hard sample mining is considered a vital component of many deep metric learning algorithms [28], accelerating network convergence and improving the final discriminative ability of the neural network, because hard samples are more informative for training; training should therefore focus more on hard samples than on easy ones. However, existing hard mining schemes in deep metric learning based on mini-batch training data often suffer from slow convergence, because in each update they employ only one negative or a subset of negatives within the mini-batch, without interacting with the negative classes that were not sampled into the current mini-batch. In this paper, we propose a new strategy that selects global hard samples from a memory bank for each input feature to improve model performance. Our hard mining strategy considers the relationship between each query instance and the clusters of all other pseudo labels, rather than taking into account only the inter-instance relationships with a small fraction of the categories.

Figure 2: Hybrid Contrast Learning Framework. 1) Initialization: a clustering algorithm divides all features extracted from the training set into different clusters as pseudo labels and initializes the instance memory bank and the cluster centroid memory bank; 2) forward propagation: compute the cluster contrastive loss between the input and the cluster centroids, and the hard instance contrastive loss on the hard samples selected by the hard mining strategy; 3) backward propagation: update the encoder model; 4) update the instance memory bank and the cluster centroid memory bank.

3 Preliminaries

Given an unlabeled training set $X = \{x_1, x_2, \ldots, x_N\}$ consisting of $N$ image samples, the goal is to learn $f_\theta$, an encoder parameterized by $\theta$ that extracts features from input images. For inference, this encoder is applied to the gallery set $G$ and the query set $Q$. The gallery set contains the total collection of retrieval images in the database, and the representation of each query image $q$ is used to search the gallery set for the most similar matches according to the Euclidean distance between the query and gallery embeddings, $d(q, g) = \lVert f_\theta(q) - f_\theta(g) \rVert_2$, where a smaller distance implies greater similarity between the images. Thus, feature representations of the same person are supposed to be as close as possible.
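The retrieval step described above can be sketched as follows. This is a minimal numpy sketch, not the authors' implementation; the function and argument names are hypothetical, and the inputs are assumed to be encoder embeddings.

```python
import numpy as np

def retrieve(query_feats, gallery_feats, top_k=5):
    """Rank gallery images for each query by Euclidean distance.

    query_feats: (Q, D) array of query embeddings.
    gallery_feats: (G, D) array of gallery embeddings.
    Returns a (Q, top_k) array of indices of the nearest gallery images.
    """
    # Pairwise squared Euclidean distances via the expansion
    # ||q - g||^2 = ||q||^2 + ||g||^2 - 2 q.g
    q2 = (query_feats ** 2).sum(axis=1, keepdims=True)      # (Q, 1)
    g2 = (gallery_feats ** 2).sum(axis=1)                   # (G,)
    dists = q2 + g2 - 2.0 * query_feats @ gallery_feats.T   # (Q, G)
    # Smaller distance means higher similarity, so sort ascending.
    return np.argsort(dists, axis=1)[:, :top_k]
```

Since squared Euclidean distance is monotonic in Euclidean distance, sorting by it gives the same ranking while avoiding the square root.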

4 Method

4.1 Architecture

Our hybrid contrast learning framework for fully unsupervised Re-ID consists of two main components, the Cluster Centroid Contrastive Loss (CCCL) and the Hard Instance Contrastive Loss (HICL), as shown in Fig. 2.

4.2 Hybrid Contrast Learning

To increase intra-class compactness and inter-class separability, state-of-the-art contrastive learning methods minimize the distance between samples of the same category and maximize the distance between samples of different categories with the InfoNCE loss [21]:

\[ \mathcal{L}_q = -\log \frac{\exp(q \cdot k_{+} / \tau)}{\sum_{i=0}^{K} \exp(q \cdot k_i / \tau)} \tag{1} \]

where $q$ is an encoded query and $k_{+}$ is a positive feature that has the same label as $q$, selected from a set of candidates $\{k_0, k_1, \ldots, k_K\}$. $\tau$ is a temperature hyper-parameter that controls the scale of the similarities.
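The InfoNCE loss of Eq. (1) can be computed for a single query as in the sketch below. This is an illustrative numpy version under the assumption of dot-product similarities; the function name and the default temperature are hypothetical.

```python
import numpy as np

def info_nce(q, candidates, pos_idx, tau=0.05):
    """InfoNCE loss, Eq. (1), for a single encoded query.

    q: (D,) query feature.
    candidates: (K, D) candidate features (one positive plus negatives).
    pos_idx: index of the positive candidate k+.
    tau: temperature controlling the scale of similarities.
    """
    sims = candidates @ q / tau                 # scaled similarities q.k_i / tau
    sims -= sims.max()                          # subtract max for numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()   # softmax over all candidates
    return -np.log(probs[pos_idx])              # negative log-likelihood of the positive
```

The loss is small when the positive candidate is much more similar to the query than every negative, and grows as negatives become competitive.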

Comparing the non-parametric loss functions of different memory-dictionary-based approaches: SSL [17] considers each image as an individual instance and both computes the loss and updates the memory dictionary at the instance level, so all features of the training data need to be stored. To decrease memory usage and take full advantage of clustering outliers, SpCL [9] computes the loss at the cluster level but updates the memory dictionary at the instance level. However, the updating progress for each cluster is then inconsistent due to varying cluster sizes and the randomness of sampling. The ClusterNCE loss [4] both updates the feature vectors and computes the loss at the cluster level. Although ClusterNCE only needs a small storage space holding one feature per cluster, a single feature vector is not enough to represent a cluster: an averaged momentum representation computed from all instances of a cluster may lose the intra-class diversity, while updating the cluster representation with only a single instance feature would introduce more bias because of the noisy pseudo labels generated by unsupervised clustering.

Thus, we propose a new unsupervised Re-ID framework that combines the cluster-level loss with the instance-level loss. The overall loss function of our method is as follows:

\[ \mathcal{L} = \lambda \, \mathcal{L}_{cc} + (1 - \lambda) \, \mathcal{L}_{hi} \tag{2} \]

where $\lambda$ is a balancing factor and we set $\lambda = 0.5$ by default, $\mathcal{L}_{cc}$ is the cluster centroid contrastive loss, and $\mathcal{L}_{hi}$ is the hard instance contrastive loss. In the following, we detail the objective function of Eq. (2).

Cluster Centroid Contrastive Loss Some instance-level memory dictionary techniques, such as [22, 8], which maintain each instance feature of the dataset and update the corresponding memory dictionary with its own instance features in each mini-batch, suffer from inconsistent memory updating [4], since different instances within the same cluster will have different updating states. In every training iteration, due to the unbalanced distribution of cluster sizes, a smaller cluster can have a higher proportion of its instances updated than a larger cluster. Unlike previous instance-level memory dictionaries, we use a cluster-level memory dictionary that keeps one cluster feature per cluster instead of preserving every instance feature. The corresponding memory dictionary is updated regardless of whether the clusters are large or small, ensuring updating consistency of the features within the same cluster:

\[ \mathcal{L}_{cc} = -\log \frac{\exp(q \cdot c_{+} / \tau_c)}{\sum_{k=1}^{K} \exp(q \cdot c_k / \tau_c)} \tag{3} \]

where $K$ is the number of clusters in a training epoch, $c_k$ is the centroid feature of the $k$-th cluster, $c_{+}$ is the centroid of the cluster to which the query $q$ belongs, and $\tau_c$ is a temperature hyper-parameter. Different from the unified contrastive loss, outliers are dropped.

We calculate the cluster centroids and store them in a memory bank for the cluster centroid contrastive loss. We update the cluster memory bank as follows:

\[ c_k \leftarrow \mu \, c_k + (1 - \mu) \, \bar{q}_k \tag{4} \]

where $\bar{q}_k$ is the average of the $k$-th class instance features in the mini-batch and $\mu \in [0, 1]$ is a momentum coefficient.
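The momentum update of Eq. (4) can be sketched as below. This is a minimal numpy illustration, not the released code; the re-normalization of centroids after the update and the default momentum value are assumptions of this sketch.

```python
import numpy as np

def update_cluster_memory(memory, feats, labels, momentum=0.2):
    """Momentum update of the cluster centroid memory bank, as in Eq. (4).

    memory: (K, D) centroid bank, one row per cluster.
    feats: (B, D) mini-batch features.
    labels: (B,) pseudo labels of the mini-batch samples.
    """
    for k in np.unique(labels):
        # Mean of the class-k features in the current mini-batch (q-bar_k).
        batch_mean = feats[labels == k].mean(axis=0)
        # c_k <- mu * c_k + (1 - mu) * q-bar_k
        memory[k] = momentum * memory[k] + (1.0 - momentum) * batch_mean
        # Keep centroids unit-norm so dot products act as cosine similarities
        # (an assumption of this sketch).
        memory[k] /= np.linalg.norm(memory[k])
    return memory
```

Only the clusters present in the mini-batch are touched, which matches the per-iteration update described above.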

4.3 Memory Based Hard Mining Scheme

To further distinguish easily confused sample pairs and explore the inter-instance relationships, we propose a novel hard sample mining strategy based on a memory dictionary. We construct another memory-based dictionary to store instance features, covering all $K$ pseudo identities together with the instance features of each identity. As shown in Fig. 1, unlike traditional hard mining strategies such as the hard triplet loss [13], which is based on a pairwise loss computed from the hardest positive and the hardest negative instances within a mini-batch, our proposed method covers all pseudo-labeled categories and provides $K-1$ hard negative samples for each query. Our hard mining strategy considers the relationship between each query instance and the clusters of all other pseudo labels, rather than taking into account only the inter-instance relationships with a small fraction of the categories.

For each query, we construct $K$ sample pairs, which include one hard positive pair and $K-1$ hard negative pairs. We define the hard instance contrastive loss as follows:

\[ \mathcal{L}_{hi} = -\log \frac{\exp(q \cdot \hat{p} / \tau_h)}{\exp(q \cdot \hat{p} / \tau_h) + \sum_{k \neq +} \exp(q \cdot \hat{n}_k / \tau_h)} \tag{5} \]

where $\tau_h$ is an instance temperature hyper-parameter, $\hat{p}$ is the hard positive instance feature that has the lowest cosine similarity with the query $q$ within the same cluster, and $\hat{n}_k$ is the hard negative instance feature that has the highest cosine similarity with $q$ among the instances belonging to the $k$-th class. They are respectively defined as

\[ \hat{p} = \underset{x \in \mathcal{M}_{+}}{\arg\min} \; \langle q, x \rangle \tag{6} \]

\[ \hat{n}_k = \underset{x \in \mathcal{M}_k}{\arg\max} \; \langle q, x \rangle \tag{7} \]

where $\mathcal{M}_k$ denotes the instance features stored in the memory bank for the $k$-th cluster and $\mathcal{M}_{+}$ those of the query's own cluster. Similarly, to ensure memory updating consistency, all instance features of the identities appearing in the mini-batch are updated in each training iteration. We update the instance memory bank as follows:

\[ x_k^{i} \leftarrow \mu \, x_k^{i} + (1 - \mu) \, q_k^{i} \tag{8} \]

where $x_k^{i}$ is the stored feature of the $i$-th instance of identity $k$ and $q_k^{i}$ is its feature extracted in the current mini-batch.
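The hard-pair selection of Eqs. (6)-(7) can be sketched as follows. This is a minimal numpy sketch under the assumption that the memory features and the query are L2-normalized, so dot products equal cosine similarities; the function and argument names are hypothetical.

```python
import numpy as np

def mine_hard_pairs(q, memory, labels, pos_label):
    """Select the hard positive and hard negatives of Eqs. (6)-(7).

    q: (D,) L2-normalized query feature.
    memory: (N, D) instance memory bank (L2-normalized rows).
    labels: (N,) pseudo labels of the stored instances.
    pos_label: pseudo label of the query's own cluster.
    Returns the hard positive feature and a (K-1, D) array of hard
    negatives, one per other pseudo class.
    """
    sims = memory @ q                                   # cosine similarities to q
    pos_mask = labels == pos_label
    # Hard positive: same cluster, LOWEST similarity to the query (Eq. 6).
    hard_pos = memory[pos_mask][np.argmin(sims[pos_mask])]
    # Hard negative per other class: HIGHEST similarity (Eq. 7).
    hard_negs = []
    for k in np.unique(labels):
        if k == pos_label:
            continue
        neg_mask = labels == k
        hard_negs.append(memory[neg_mask][np.argmax(sims[neg_mask])])
    return hard_pos, np.stack(hard_negs)
```

The returned features form the one positive pair and the $K-1$ negative pairs that enter the hard instance contrastive loss of Eq. (5).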

Data: An unlabeled training set $X$
Input: ImageNet pre-trained model $f_\theta$, the number of epochs $E$, the iterations per epoch $T$, the training batch size $B$
Result: trained model $f_\theta$
for $n = 1$ to $E$ do
      Extract feature embeddings from $X$ by $f_\theta$;
      Cluster the features into $K$ clusters with DBSCAN;
      Initialize the cluster memory bank and the hard instance memory bank with the clustered features;
      for $t = 1$ to $T$ do
            Sample a mini-batch of $B$ images from $X$;
            Forward to extract the features of the samples;
            Compute the total loss in Eq. (2), which combines the cluster centroid contrastive loss Eq. (3) and the hard instance contrastive loss Eq. (5);
            Backward to update the model $f_\theta$;
            Update the cluster memory and the instance memory via Eq. (4) and Eq. (8);
      end for
end for
Algorithm 1 Hard-sample Guided Hybrid Contrast Learning for Unsupervised Re-Identification

5 Experiments

5.1 Data and Metrics

We evaluate our approach on two large-scale benchmark datasets, Market1501 [33] and DukeMTMC-reID [35], which are widely used in real-world person Re-ID tasks.

Market1501 contains 1,501 person identities with 32,668 images captured by 6 cameras in front of the Tsinghua University campus. It provides 12,936 images of 751 identities for training and 19,732 images of 750 identities for testing. All images were cropped by a pedestrian detector, which inevitably introduces some misalignment, missing parts, and false positives.

DukeMTMC-reID consists of a total of 36,411 images of 1,404 identities collected by 8 cameras. The dataset is split by randomly selecting 702 identities as the training set and the remaining 702 identities as the testing set; it contains 16,522 images for training, and 2,228 query images and 17,661 gallery images for testing.

Evaluation Metrics. We follow the standard training/test split and evaluation protocol. For evaluation metrics, we use Rank-k matching accuracy (for k = 1, 5, and 10), which measures whether a correct match of the query image appears in the top-k retrieval list, and the mean Average Precision (mAP), reported together with the Cumulated Matching Characteristics (CMC) [10]. All results reported in this paper are under the single-query setting, and no post-processing technique is applied.
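The Rank-k metric above can be sketched as below. This is a simplified illustration: the standard Re-ID protocol additionally excludes same-camera, same-identity gallery images from the ranking, which is omitted here; the function names are hypothetical.

```python
import numpy as np

def rank_k_accuracy(dist, query_ids, gallery_ids, k=1):
    """Rank-k matching accuracy: fraction of queries whose top-k
    gallery ranking contains at least one image of the same identity.

    dist: (Q, G) query-to-gallery distance matrix.
    query_ids: (Q,) identity labels of the queries.
    gallery_ids: (G,) identity labels of the gallery images.
    """
    order = np.argsort(dist, axis=1)[:, :k]          # top-k gallery indices per query
    hits = [np.any(gallery_ids[order[i]] == query_ids[i])
            for i in range(len(query_ids))]
    return float(np.mean(hits))
```

By construction the metric is monotone in k: Rank-5 is never lower than Rank-1, as seen in the reported tables.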

5.2 Implementation

We adopt ResNet-50 [12] as the backbone of the feature extractor and initialize the model with parameters pre-trained on ImageNet [5]. After layer-4, we remove all sub-module layers and add global average pooling (GAP) followed by a batch normalization layer [14] and an L2-normalization layer, producing 2048-dimensional features. During testing, we take the features of the global average pooling layer to calculate distances. At the beginning of each epoch, we use DBSCAN [6] for clustering to generate pseudo labels. Input images are resized to a fixed resolution. For training images, we perform random horizontal flipping, padding with 10 pixels, random cropping, and random erasing. Each mini-batch contains 256 images of 16 pseudo person identities (16 instances per person). We adopt the Adam optimizer to train the Re-ID model with a weight decay of 5e-4. The initial learning rate is set to 3.5e-4 and is reduced to 1/10 of its previous value every 20 epochs, for a total of 50 epochs. Following the clustering setup of [9], we use DBSCAN with the Jaccard distance [36] computed with k nearest neighbors, where k = 30. For DBSCAN, the maximum distance d between two samples is set to 0.45 and the minimal number of neighbors for a core point is set to 4.
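The per-epoch pseudo-label generation can be sketched with scikit-learn's DBSCAN, using the d = 0.45 and min-samples = 4 settings reported above. The Jaccard distance matrix from k-reciprocal encoding [36] is assumed to be precomputed and passed in; the function name is hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def generate_pseudo_labels(dist, eps=0.45, min_samples=4):
    """Cluster a precomputed pairwise distance matrix with DBSCAN.

    dist: (N, N) precomputed distance matrix (e.g. Jaccard distances
    from k-reciprocal encoding). Returns (N,) pseudo labels; samples
    labeled -1 are outliers, which are dropped from the cluster-level
    loss as described in Sec. 4.2.
    """
    clusterer = DBSCAN(eps=eps, min_samples=min_samples,
                       metric='precomputed')
    return clusterer.fit_predict(dist)
```

Because DBSCAN does not fix the number of clusters, the number of pseudo identities K can change from epoch to epoch, which is why the memory banks are re-initialized at the start of each epoch.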

5.3 Results

5.3.1 Comparison with unsupervised method

Method Reference Market1501 DukeMTMC-reID
mAP R1 R5 R10 mAP R1 R5 R10
Unsupervised Domain Adaptation
ECN [37] CVPR’19 43.0 75.1 87.6 91.6 40.4 63.3 75.8 80.4
MAR[30] CVPR’19 40.0 67.7 81.9 - 48.0 67.1 79.8 -
SSG[7] ICCV’19 58.3 80.0 90.0 92.4 53.4 73.0 80.6 83.2
MMCL [22] CVPR’20 60.4 84.4 92.8 95.0 51.4 72.4 82.9 85.0
JVTC [15] ECCV’20 61.1 83.8 93.0 95.2 56.2 75.0 85.1 88.2
DG-Net++ [40] ECCV’20 61.7 82.1 90.2 92.7 63.8 78.9 87.8 90.4
ECN+ [38] PAMI’20 63.8 84.1 92.8 95.4 54.4 74.0 83.7 87.4
MMT [8] ICLR’20 71.2 87.7 94.9 96.9 65.1 78.0 88.8 92.5
DCML [1] ECCV’20 72.6 87.9 95.0 96.7 63.3 79.1 87.2 89.4
MEB [32] ECCV’20 76.0 89.9 96.0 97.5 66.1 79.6 88.3 92.2
SpCL [9] NeurIPS’20 76.7 90.3 96.2 97.7 68.8 82.9 90.1 92.5
Fully Unsupervised
SSL [17] CVPR’20 37.8 71.7 83.8 87.4 28.6 52.5 63.5 68.9
JVTC [15] ECCV’20 41.8 72.9 84.2 88.7 42.2 67.6 78.0 81.6
MMCL [22] CVPR’20 45.5 80.3 89.4 92.3 40.2 65.2 75.9 80.0
HCT [31] CVPR’20 56.4 80.0 91.6 95.2 50.7 69.6 83.4 87.4
CycAs [26] ECCV’20 64.8 84.8 - - 60.1 77.9 - -
SpCL [9] NeurIPS’20 73.1 88.1 95.1 97.0 65.3 81.2 90.3 92.2
CAP [24] AAAI’21 79.2 91.4 96.3 97.7 67.3 81.1 89.3 91.8
CACL [16] Arxiv’21 80.9 92.7 97.4 98.5 69.6 82.6 91.2 93.8
ICE [2] ICCV’21 82.3 93.8 97.6 98.4 69.9 83.3 91.5 94.1
CCL [4] Arxiv’21 82.6 93.0 97.0 98.1 72.8 85.7 92.0 93.5
HHCL This paper 84.2 93.4 97.7 98.5 73.3 85.1 92.4 94.6
Supervised
PCB [20] ECCV’18 81.6 93.8 97.5 98.5 69.2 83.3 90.5 92.5
OSNet [39] ICCV’19 84.9 94.8 - - 73.5 88.6 - -
DG-Net [34] CVPR’19 86.0 94.8 - - 74.8 86.6 - -
ICE [2] (w/ GT) ICCV’21 86.6 95.1 98.3 98.9 76.5 88.2 94.1 95.7
HHCL (w/ GT) This paper 87.2 94.6 98.5 99.1 80.0 89.8 95.2 96.7
Table 1:

Experimental results of the proposed HHCL and state-of-the-art methods on Market-1501 and DukeMTMC-reID. Note that the best results are bolded.

We compare our proposed method with state-of-the-art Re-ID methods, including: 1) unsupervised domain adaptation methods for person Re-ID (e.g. ECN [37], MAR [30], SSG [7], MMCL [22], JVTC [15], DG-Net++ [40], ECN+ [38], MMT [8], DCML [1], MEB [32], and SpCL [9]); 2) purely unsupervised methods for person Re-ID (SSL [17], MMCL [22], JVTC [15], HCT [31], CycAs [26], SpCL [9], CAP [24], CACL [16], CCL [4], and ICE [2]). The comparison results of the state-of-the-art unsupervised domain adaptation methods and purely unsupervised methods on Market-1501 and DukeMTMC-reID are reported in Tab. 1.

As shown in Tab. 1, our method is competitive with all state-of-the-art methods. On both datasets, our proposed HHCL, without any identity annotation, achieves better performance than all UDA methods, which make use of an additional labeled source dataset. In other words, we not only outperform all unsupervised domain adaptation methods but also achieve the best performance among purely unsupervised methods. Under the fully unsupervised setting, HHCL achieves 84.2% mAP and 93.4% rank-1 accuracy on Market-1501, which is 1.9% higher in mAP than the current state of the art (ICE [2]). On DukeMTMC-reID, our method also achieves a high performance of 73.3%/85.1% in mAP/rank-1. These results indicate that our method is effective for unsupervised person Re-ID learning.

5.3.2 Comparison with supervised method

Our HHCL method can easily be converted into a supervised approach by replacing the pseudo labels with ground-truth labels. We find that, even without ground truth, our unsupervised method is already comparable to some strong supervised methods such as PCB [20] and DG-Net [34], and HHCL achieves even better performance under the supervised setting. This shows that our method benefits further from ground truth, which avoids introducing noisy pseudo labels, and it further demonstrates the effectiveness of our method for the person Re-ID problem, both unsupervised and supervised.

5.4 Ablation Study

λ mAP R1 R5 R10
0 (hard only) 78.5 90.5 96.0 97.4
0.25 82.7 92.9 97.0 98.2
0.5 84.2 93.4 97.7 98.5
0.75 81.7 92.1 97.1 98.2
1.0 (mean only) 80.8 91.3 96.3 97.6
Table 2: Evaluation of the parameter λ on Market1501.

Influence of Hyper-Parameter λ. Tab. 2 reports the experimental results under different values of the hyper-parameter λ. As defined in Eq. (2), λ is a balancing factor between 0 and 1 that weights the cluster-level loss against the instance-level loss. When λ equals 0, the loss function contains only the hard instance contrastive loss. From Fig. 3 we can see that the model then converges very slowly in the early stage of training, and using only hard samples for comparison does not benefit learning generalized features or obtaining better clustering pseudo labels. On the contrary, when λ = 1 and only the cluster-level loss is used, faster convergence can be achieved, but only one feature is retained per cluster, which loses the intra-class diversity and is still not conducive to learning more discriminative features. Combining both kinds of contrastive loss clearly leads to better performance, and with λ = 0.5 we obtain the best result of 84.2% mAP, indicating that our proposed hybrid contrastive learning method has a distinct advantage during training.

Figure 3: Ablation study on Market1501: Result comparisons of different settings in mAP and Rank-1.
Method Market1501
mAP R1 R5 R10
ResNet50 84.2 93.4 97.7 98.5
IBN + GeM 87.8 95.1 98.2 98.8
IBN + GeM + LS 88.2 94.9 98.3 98.9
Method DukeMTMC-reID
mAP R1 R5 R10
ResNet50 73.3 85.1 92.4 94.6
IBN + GeM 76.8 87.9 93.4 94.9
IBN + GeM + LS 77.3 87.7 93.5 95.1
Table 3: Comparison of HHCL with other tricks on the Market1501 and DukeMTMC-reID datasets. 'IBN' denotes that the backbone is IBN-ResNet50; 'GeM' and 'LS' denote the GeM pooling layer and label smoothing, respectively.

Instance-batch normalization (IBN) [18] and Generalized Mean Pooling (GeM) [19] have proven effective in both supervised and UDA-based Re-ID methods. We compare the performance of HHCL under different settings in Tab. 3. The performance of our proposed HHCL can be further improved with an IBN-ResNet50 backbone network and a GeM pooling layer.

6 Conclusion

In this paper, we propose a novel method for fully unsupervised person Re-ID. The new concepts and techniques introduced include a more effective hybrid contrast learning framework and a memory-based hard sample mining scheme. Specifically, our proposed HHCL approach comprehensively considers both cluster-level and instance-level information. To effectively exploit the invariance within and between clusters, HHCL leverages hard samples to guide the network to learn more robust and discriminative features. Extensive experiments on two benchmark datasets demonstrate that HHCL achieves the best results compared with all existing purely unsupervised and UDA-based Re-ID methods.


This work was supported in part by 111 Project of China (B17007), and in part by the National Natural Science Foundation of China (61602011).


  • [1] G. Chen, Y. Lu, J. Lu, and J. Zhou (2020) Deep credible metric learning for unsupervised domain adaptation person re-identification. In ECCV, Cited by: §1, §2.1, §5.3.1, Table 1.
  • [2] H. Chen, B. Lagadec, and F. Brémond (2021) ICE: inter-instance contrastive encoding for unsupervised person re-identification. ArXiv abs/2103.16364. Cited by: §5.3.1, Table 1.
  • [3] T. Chen, S. Kornblith, M. Norouzi, and G. E. Hinton (2020) A simple framework for contrastive learning of visual representations. ArXiv abs/2002.05709. Cited by: §1.
  • [4] Z. Dai, G. Wang, S. Zhu, W. Yuan, and P. Tan (2021) Cluster contrast for unsupervised person re-identification. ArXiv abs/2103.11568. Cited by: §1, §1, §4.2, §4.2, §5.3.1, §5.3.1, Table 1.
  • [5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In CVPR, Cited by: §5.2.
  • [6] M. Ester, H. Kriegel, J. Sander, and X. Xu (1996) A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, Cited by: §2.1, §5.2.
  • [7] Y. Fu, Y. Wei, G. Wang, X. Zhou, H. Shi, and T. S. Huang (2019) Self-similarity grouping: a simple unsupervised cross domain adaptation approach for person re-identification. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 6111–6120. Cited by: §5.3.1, Table 1.
  • [8] Y. Ge, D. Chen, and H. Li (2020) Mutual mean-teaching: pseudo label refinery for unsupervised domain adaptation on person re-identification. ArXiv abs/2001.01526. Cited by: §1, §2.1, §4.2, §5.3.1, Table 1.
  • [9] Y. Ge, D. Chen, F. Zhu, R. Zhao, and H. Li (2020) Self-paced contrastive learning with hybrid memory for domain adaptive object re-id. ArXiv abs/2006.02713. Cited by: §1, §1, §2.1, §4.2, §5.2, §5.3.1, Table 1.
  • [10] D. Gray, S. Brennan, and H. Tao (2007) Evaluating appearance models for recognition, reacquisition, and tracking. Cited by: §5.1.
  • [11] K. He, H. Fan, Y. Wu, S. Xie, and R. B. Girshick (2020) Momentum contrast for unsupervised visual representation learning. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9726–9735. Cited by: §1, §1.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §5.2.
  • [13] A. Hermans, L. Beyer, and B. Leibe (2017) In defense of the triplet loss for person re-identification. ArXiv abs/1703.07737. Cited by: §1, §4.3.
  • [14] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. ArXiv abs/1502.03167. Cited by: §5.2.
  • [15] J. Li and S. Zhang (2020) Joint visual and temporal consistency for unsupervised domain adaptive person re-identification. In ECCV, Cited by: §5.3.1, Table 1.
  • [16] M. Li, C. Li, and J. Guo (2021) Cluster-guided asymmetric contrastive learning for unsupervised person re-identification. ArXiv abs/2106.07846. Cited by: §2.1, §5.3.1, Table 1.
  • [17] Y. Lin, L. Xie, Y. Wu, C. Yan, and Q. Tian (2020) Unsupervised person re-identification via softened similarity learning. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3387–3396. Cited by: §4.2, Table 1.
  • [18] X. Pan, P. Luo, J. Shi, and X. Tang (2018) Two at once: enhancing learning and generalization capacities via ibn-net. In ECCV, Cited by: §5.4.
  • [19] F. Radenović, G. Tolias, and O. Chum (2019) Fine-tuning CNN image retrieval with no human annotation. IEEE Transactions on Pattern Analysis and Machine Intelligence 41, pp. 1655–1668. Cited by: §5.4.
  • [20] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang (2018) Beyond part models: person retrieval with refined part pooling. In ECCV, Cited by: §5.3.2, Table 1.
  • [21] A. van den Oord, Y. Li, and O. Vinyals (2018) Representation learning with contrastive predictive coding. ArXiv abs/1807.03748. Cited by: §4.2.
  • [22] D. Wang and S. Zhang (2020) Unsupervised person re-identification via multi-label classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10981–10990. Cited by: §2.1, §4.2, §5.3.1, Table 1.
  • [23] H. Wang, X. Zhu, T. Xiang, and S. Gong (2016) Towards unsupervised open-set person re-identification. 2016 IEEE International Conference on Image Processing (ICIP), pp. 769–773. Cited by: §1.
  • [24] M. Wang, B. Lai, J. Huang, X. Gong, and X. Hua (2021) Camera-aware proxies for unsupervised person re-identification. In AAAI, Cited by: §5.3.1, Table 1.
  • [25] X. Wang, X. Han, W. Huang, D. Dong, and M. Scott (2019) Multi-similarity loss with general pair weighting for deep metric learning. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5017–5025. Cited by: §1.
  • [26] Z. Wang, J. Zhang, L. Zheng, Y. Liu, Y. Sun, Y. Li, and S. Wang (2020) CycAs: self-supervised cycle association for learning re-identifiable descriptions. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XI 16, pp. 72–88. Cited by: §5.3.1, Table 1.
  • [27] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin (2018) Unsupervised feature learning via non-parametric instance discrimination. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3733–3742. Cited by: §1.
  • [28] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin (2018) Unsupervised feature learning via non-parametric instance-level discrimination. ArXiv abs/1805.01978. Cited by: §1, §2.2.
  • [29] H. Yu, A. Wu, and W. Zheng (2017) Cross-view asymmetric metric learning for unsupervised person re-identification. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 994–1002. Cited by: §1.
  • [30] H. Yu, W. Zheng, A. Wu, X. Guo, S. Gong, and J. Lai (2019) Unsupervised person re-identification by soft multilabel learning. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2143–2152. Cited by: Table 1.
  • [31] K. Zeng, M. Ning, Y. Wang, and Y. Guo (2020) Hierarchical clustering with hard-batch triplet loss for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13657–13665. Cited by: §2.1, §5.3.1, Table 1.
  • [32] Y. Zhai, Q. Ye, S. Lu, M. Jia, R. Ji, and Y. Tian (2020) Multiple expert brainstorming for domain adaptive person re-identification. ArXiv abs/2007.01546. Cited by: §1, §5.3.1, Table 1.
  • [33] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian (2015) Scalable person re-identification: a benchmark. 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1116–1124. Cited by: §5.1.
  • [34] Z. Zheng, X. Yang, Z. Yu, L. Zheng, Y. Yang, and J. Kautz (2019) Joint discriminative and generative learning for person re-identification. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2133–2142. Cited by: §5.3.2, Table 1.
  • [35] Z. Zheng, L. Zheng, and Y. Yang (2017) Unlabeled samples generated by gan improve the person re-identification baseline in vitro. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 3774–3782. Cited by: §5.1.
  • [36] Z. Zhong, L. Zheng, D. Cao, and S. Li (2017) Re-ranking person re-identification with k-reciprocal encoding. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3652–3661. Cited by: §5.2.
  • [37] Z. Zhong, L. Zheng, Z. Luo, S. Li, and Y. Yang (2019) Invariance matters: exemplar memory for domain adaptive person re-identification. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 598–607. Cited by: §5.3.1, Table 1.
  • [38] Z. Zhong, L. Zheng, Z. Luo, S. Li, and Y. Yang (2021) Learning to adapt invariance in memory for person re-identification. IEEE Transactions on Pattern Analysis and Machine Intelligence 43, pp. 2723–2738. Cited by: §5.3.1, Table 1.
  • [39] K. Zhou, Y. Yang, A. Cavallaro, and T. Xiang (2019) Omni-scale feature learning for person re-identification. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3701–3711. Cited by: Table 1.
  • [40] Y. Zou, X. Yang, Z. Yu, B. Kumar, and J. Kautz (2020) Joint disentangling and adaptation for cross-domain person re-identification. ArXiv abs/2007.10315. Cited by: §5.3.1, Table 1.