Person re-identification (ReID) is widely applied in video surveillance and criminal investigation. Person ReID with deep neural networks has progressed rapidly and achieved high performance in recent years [3, 4, 5]. Apart from the many novel and effective ideas that have been proposed, improvements to the baseline model play a key role that should not be ignored. However, few works [6, 7, 5] have focused on designing an effective baseline, and the performance of existing baselines has gradually become obsolete due to the rapid development of person ReID. In the literature, some effective training tricks or refinements appear only briefly in papers or source code. In the present study, we design a strong and effective baseline for person ReID by collecting and evaluating such training tricks.
This study has three motivations. First, we surveyed articles published at ECCV 2018 and CVPR 2018. As shown in Fig. 1, most previous works were built on weak baselines. On Market1501, only two of 23 baselines surpassed 90% rank-1 accuracy, and the rank-1 accuracies of four baselines were even lower than 80%. On DukeMTMC-reID, no baseline surpassed 80% rank-1 accuracy or 65% mean average precision (mAP). Improvements achieved on weak baselines cannot strictly demonstrate the effectiveness of a method. Thus, a strong baseline is crucial for promoting research development.
Second, we discover that some works were unfairly compared with other state-of-the-art methods: their improvements came mainly from training tricks rather than the methods themselves. However, the training tricks were understated in the papers, so readers could overlook them, which exaggerated the apparent effectiveness of the methods. We suggest that reviewers take these tricks into account when assessing academic papers.
Third, the industry prefers simple and effective models over concatenating many local features in the inference stage. In pursuit of high accuracy, researchers in academia often combine several local features or utilize semantic information from pose estimation or segmentation models. Nevertheless, such methods incur extra computation, and large features also greatly reduce the speed of the retrieval process. Thus, we use training tricks to improve the capability of the ReID model and achieve high performance with global features alone.
On the basis of the aforementioned considerations, the motivations of designing a strong baseline are summarized as follows:
For academia, we survey many works published at top conferences and discover that most of them were built on weak baselines. We aim to provide a strong baseline with which researchers can achieve high accuracy in person ReID.
For the community, we aim to provide references to reviewers regarding tricks that will affect the performance of the ReID model. We suggest that reviewers consider these tricks when comparing the performance of different methods.
For the industry, we aim to provide effective tricks for acquiring improved models without extra computational cost.
Many effective training tricks have been presented in papers or open-source projects. We collect such tricks and evaluate each of them on ReID datasets. After numerous experiments, we select six tricks to introduce in this study. We propose a novel bottleneck structure, namely, batch normalization neck (BNNeck). As classification and metric losses are inconsistent in the same embedding space, BNNeck optimizes these two losses in two different embedding spaces. In addition, the person ReID task mainly focuses on ranking performance, such as the cumulative match characteristic (CMC) curve and mAP, but ignores the clustering effect, such as intra-class compactness and inter-class separability. However, the clustering effect is important for some tasks, such as object tracking, which must decide on a distance threshold to separate positive samples from negative ones. A simple approach to this problem is to train the model with center loss. Finally, we add the tricks to a widely used baseline to obtain our modified baseline (with a ResNet50 backbone), which achieves 94.5% rank-1 accuracy and 85.9% mAP on Market1501.
To determine whether these tricks are generally useful, we design extended experiments from three aspects. First, we follow the cross-domain ReID setting in which the models are trained and evaluated on different datasets. Cross-domain experiments can show whether the tricks genuinely strengthen the models or merely suppress overfitting on the training dataset. Second, we evaluate all tricks with different backbones, such as ResNet18, SE-ResNet50, and IBN-Net50. All backbones benefit from our training tricks. Third, we reproduce some state-of-the-art methods on our modified baseline. Experimental results show that our baseline obtains better performance than that reported in published papers. Although our baseline achieves surprising performance, some methods remain effective on it. Thus, our baseline can serve as a strong baseline for the ReID community.
As a supplement, we discover that different works select different image sizes and batch sizes. Therefore, we explore their effects on model performance. The contributions of this study are summarized as follows:
We collect effective training tricks for person ReID. We evaluate the improvements from each trick on two widely used datasets.
We observe the inconsistency between ID loss and triplet loss and propose a novel neck structure, namely, BNNeck.
We observe that the ReID task ignores intra-class compactness and inter-class separability and claim that center loss can compensate for this.
We provide a strong ReID baseline, which achieves 94.5% rank-1 accuracy and 85.9% mAP on Market1501. The results are obtained with the global features provided by a ResNet50 backbone. To the best of our knowledge, this is the best performance acquired by global features in person ReID.
We design extended experiments to demonstrate that our baseline can be a strong baseline for the ReID community.
As a supplement, we evaluate the influence of image size and batch size on the performance of ReID models.
II Related Works
This section focuses on deep learning baselines for person ReID. In addition, the existing approaches that we compare with our strong baseline for deep person ReID are introduced.
II-A Baseline for Deep Person ReID
Recent studies on person ReID mostly focus on building deep convolutional neural networks (CNNs), such as GoogLeNet, ResNet, and DenseNet, to represent the features of person images in an end-to-end learning manner. Zheng et al. [6] proposed ID-discriminative embedding (IDE), which trains the ReID model as an image classification task fine-tuned from ImageNet pre-trained models. Classification loss is also called ID loss in person ReID because IDE is trained with classification loss. However, ID loss requires an extra fully connected (FC) layer to predict the logits of person IDs in the training stage. In the inference stage, this FC layer is removed, and the feature from the last pooling layer is used as the representation vector of the person image.
Different from ID loss, metric loss regards the ReID task as a clustering or ranking problem. The most widely used metric-learning baseline trains the model with triplet loss. A triplet includes three images, i.e., anchor, positive, and negative samples. The anchor and positive samples belong to the same person ID, whereas the negative sample belongs to a different person ID. Triplet loss minimizes the distance from the anchor sample to the positive sample and maximizes the distance from the anchor sample to the negative one. However, triplet loss is greatly influenced by how the triplets are sampled. Inspired by FaceNet, Hermans et al. proposed an online hard example mining variant of triplet loss (TriHard loss). Most current methods are built on the TriHard baseline. Combining ID loss with TriHard loss is also a popular way of acquiring a strong baseline.
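As an illustration of the batch-hard (TriHard) mining just described, the following NumPy sketch selects, for each anchor, the farthest positive and closest negative sample within a batch; the function name and array layout are our own choices, not from any specific paper's code.

```python
import numpy as np

def trihard_loss(features, labels, margin=0.3):
    """Batch-hard (TriHard) triplet loss sketch.

    For each anchor, mine the hardest positive (farthest same-ID sample)
    and hardest negative (closest different-ID sample) inside the batch,
    then apply the margin-based hinge.
    """
    # Pairwise Euclidean distance matrix (B x B).
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)

    same = labels[:, None] == labels[None, :]   # same-ID mask (incl. self)
    losses = []
    for i in range(len(labels)):
        d_pos = dist[i][same[i]].max()          # hardest positive distance
        d_neg = dist[i][~same[i]].min()         # hardest negative distance
        losses.append(max(d_pos - d_neg + margin, 0.0))
    return float(np.mean(losses))
```

When the identities are already well separated, the hinge is inactive and the loss is zero; when all features collapse to one point, the loss degenerates to the margin.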
Apart from designing different losses, some works focus on building an effective baseline model for deep person ReID. One study proposed three good practices for building an effective CNN baseline for person ReID; the most important practice is adding a batch normalization (BN) layer after the global pooling layer. Similar to these models, the baseline uses a global feature for image representation. Sun et al. proposed the part-based convolutional baseline (PCB). Given an input image, PCB outputs a convolutional descriptor consisting of several part-level features. Both baselines have achieved good performance in person ReID.
II-B Some Existing Approaches for Deep Person ReID
On the basis of the aforementioned baselines, many methods have been proposed in the past few years. We divide these works into stripe-based, pose-guided, mask-guided, attention-based, GAN-based, and re-ranking methods.
Stripe-based methods, which divide the image into several stripes and extract local features for each stripe, play an important role in person ReID. Inspired by PCB, typical methods include AlignedReID++, MGN, and SCPNet. Stripe-based local features are effective in boosting the performance of the ReID model. However, they always encounter the problem of pose misalignment.
Pose-guided methods [17, 18, 19, 20] use an extra pose/skeleton estimation model to acquire human pose information. Pose information can precisely align the corresponding parts of two person images. However, an extra model brings additional computation cost, so a trade-off between the performance and speed of the model is important.
Song et al. proposed a mask-guided contrastive attention model that applies binary segmentation masks to learn features separately from the body and background regions. Kalayeh et al. proposed SPReID, which uses human semantic parsing to harness local visual cues for person ReID. Mask-guided models rely heavily on an accurate pedestrian segmentation model.
Attention-based methods [24, 25, 26, 27] involve an attention mechanism to extract additional discriminative features. In comparison with pixel-level masks, an attention region can be regarded as an automatically learned high-level 'mask'. A popular model is the Harmonious Attention CNN (HA-CNN) proposed by Li et al. HA-CNN combines the learning of soft pixel and hard regional attentions with the simultaneous optimization of feature representations. An advantage of attention-based models is that they do not require a segmentation model to acquire mask information.
GAN-based methods [28, 29, 30, 31] address the limited data available for person ReID. Zheng et al. first used a GAN to generate images for enriching ReID datasets, although the GAN model randomly generates unlabeled and unclear images. On this basis, PTGAN and CamStyle were proposed to bridge domain and camera gaps for person ReID, respectively. Qian et al. proposed PNGAN, which obtains a new pedestrian feature by transforming a person into normalized poses; the final feature is obtained by combining the pose-independent features with the original ReID features. With the development of GANs, many GAN-based methods have been proposed to generate high-quality images for supervised and unsupervised person ReID tasks.
Re-ranking methods are post-processing strategies for image retrieval. In general, person ReID simply uses Euclidean or cosine distances in the retrieval stage. Zhong et al. proposed a k-reciprocal encoding method to re-rank the ReID results. Given an image, a k-reciprocal feature is calculated by encoding its k-reciprocal nearest neighbors into a single vector, which is used for re-ranking under the Jaccard distance. The final distance is computed as a combination of the original and Jaccard distances. Shen et al. proposed a deep group-shuffling random walk (DGRW) network for fully utilizing the affinity information between gallery images in the training and testing processes. In the retrieval stage, DGRW can be regarded as a re-ranking method. Re-ranking is a critical step in improving retrieval accuracy.
III Standard Baseline
In this section, a baseline widely used in academia and industry is introduced. For convenience, this baseline is called the standard baseline. Its backbone is ResNet50. In the training stage, the pipeline includes the following steps:
We initialize the ResNet50 with parameters pre-trained on ImageNet and change the dimension of the fully connected layer to N, where N denotes the number of identities in the training dataset.
We randomly sample P identities and K images per person to constitute a training batch; thus, the batch size equals B = P × K. In this study, we set P = 16 and K = 4.
We resize each image to 256 × 128 pixels, pad the resized image with 10 pixels of zero values, and then randomly crop it back into a 256 × 128 rectangular image.
Each image is flipped horizontally with 0.5 probability.
Each image is decoded into 32-bit floating-point raw pixel values in [0, 1]. The RGB channels are then normalized by subtracting 0.485, 0.456, and 0.406 and dividing by 0.229, 0.224, and 0.225, respectively.
The model outputs ReID features f and ID prediction logits p.
The ReID features f are used to calculate the triplet loss, and the ID prediction logits p are used to calculate the cross-entropy loss. The margin of the triplet loss is set to 0.3.
The Adam method is adopted to optimize the model. The initial learning rate is 0.00035 and is decreased by a factor of 0.1 at the 40th and 70th epochs. The model is trained for 120 epochs in total.
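The identity-balanced sampling in the second step above can be sketched as follows. This is a minimal stdlib-only illustration; the function name and the with-replacement fallback for identities with fewer than K images are our own choices.

```python
import random
from collections import defaultdict

def sample_pk_batch(labels, P=16, K=4, rng=None):
    """Sample a P x K training batch: P identities, K images per identity.

    `labels` maps image index -> person ID. Identities with fewer than K
    images are sampled with replacement (a common, assumed fallback).
    """
    rng = rng or random.Random()
    by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        by_id[pid].append(idx)

    pids = rng.sample(sorted(by_id), P)        # P distinct person IDs
    batch = []
    for pid in pids:
        imgs = by_id[pid]
        if len(imgs) >= K:
            batch.extend(rng.sample(imgs, K))  # K distinct images
        else:
            batch.extend(rng.choices(imgs, k=K))
    return batch                               # len(batch) == P * K
```

Each batch thus contains exactly P identities with K samples each, which is what the batch-hard triplet mining later relies on.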
Fig. 2(a) presents the framework of the standard baseline, and additional details are available in our open-source code.
IV Our Strong Baseline and Training Tricks
This section introduces some effective training tricks for person ReID and discusses our proposed BNNeck structure in detail. The intra-class compactness and inter-class separability problem for person ReID is also raised. Most tricks can be added to the standard baseline without changing the model architecture. Fig. 2(b) shows the training strategies and model architecture.
IV-A Warmup Learning Rate
The learning rate has a great effect on the performance of a ReID model. The standard baseline is initially trained with a large, constant learning rate. In prior work, a warmup strategy was applied to bootstrap the network for enhanced performance. In practice, we spend 10 epochs linearly increasing the learning rate from 3.5 × 10⁻⁵ to 3.5 × 10⁻⁴, as shown in Fig. 3. The learning rate is then decayed to 3.5 × 10⁻⁵ and 3.5 × 10⁻⁶ at the 40th and 70th epochs, respectively. The learning rate lr(t) at epoch t is computed as follows:

$$ lr(t) = \begin{cases} 3.5 \times 10^{-5} \times t, & t \le 10 \\ 3.5 \times 10^{-4}, & 10 < t \le 40 \\ 3.5 \times 10^{-5}, & 40 < t \le 70 \\ 3.5 \times 10^{-6}, & 70 < t \le 120 \end{cases} $$
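The warmup schedule can be written as a small function. This sketch mirrors the values stated above (linear ramp over 10 epochs, step decays at epochs 40 and 70); `warmup_lr` is our own helper name.

```python
def warmup_lr(t, base_lr=3.5e-4):
    """Learning rate at epoch t (1-indexed) for the warmup schedule.

    Linear ramp from base_lr/10 to base_lr over the first 10 epochs,
    then decay by 0.1 at the 40th and 70th epochs (120 epochs total).
    """
    if t <= 10:
        return base_lr / 10 * t   # 3.5e-5 at t=1 up to 3.5e-4 at t=10
    if t <= 40:
        return base_lr            # 3.5e-4
    if t <= 70:
        return base_lr / 10       # 3.5e-5
    return base_lr / 100          # 3.5e-6
```

In a training loop, this value would simply be assigned to the optimizer's learning rate at the start of each epoch.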
IV-B Random Erasing Augmentation
In person ReID, persons in images are sometimes occluded by other objects. To address the occlusion problem and improve the generalization capability of ReID models, Zhong et al. proposed a data augmentation approach, namely, random erasing augmentation (REA). In practice, for an image I in a mini-batch, the probability of it undergoing random erasing is p, and the probability of it remaining unchanged is 1 − p. REA randomly selects a rectangular region I_e of size (W_e, H_e) in image I and erases its pixels with random values. Assuming the areas of image I and region I_e are S = W × H and S_e = W_e × H_e, respectively, we denote r_e = S_e / S as the area ratio of the erasing rectangle region. In addition, the aspect ratio of region I_e is randomly initialized between r_1 and r_2. To determine a unique region, REA randomly initializes a point P = (x_e, y_e). If x_e + W_e ≤ W and y_e + H_e ≤ H, then we set the region I_e = (x_e, y_e, W_e, H_e) as the selected rectangle region. Otherwise, we repeat the above process until an appropriate I_e is selected. With the selected erasing region I_e, each pixel in I_e is assigned the mean pixel value of image I.
In this study, we set the hyper-parameters to p = 0.5, 0.02 < r_e < 0.4, r_1 = 0.3, and r_2 = 3.33, respectively. Some examples are shown in Fig. 4.
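A minimal NumPy sketch of REA follows; the parameter names mirror the description above, the retry cap of 100 is our own choice, and, as described, the selected region is filled with the per-channel mean of the image.

```python
import math
import random
import numpy as np

def random_erasing(img, p=0.5, sl=0.02, sh=0.4, r1=0.3, rng=None):
    """Random erasing augmentation (REA) sketch for an HxWxC float image.

    With probability p, pick a rectangle whose area ratio lies in
    [sl, sh] and aspect ratio in [r1, 1/r1], then fill it with the
    per-channel mean of the image.
    """
    rng = rng or random.Random()
    if rng.random() > p:
        return img                               # image kept unchanged
    h, w = img.shape[:2]
    area = h * w
    for _ in range(100):                         # retry until the box fits
        s_e = rng.uniform(sl, sh) * area         # erased area S_e
        r_e = rng.uniform(r1, 1.0 / r1)          # aspect ratio H_e / W_e
        h_e = int(round(math.sqrt(s_e * r_e)))
        w_e = int(round(math.sqrt(s_e / r_e)))
        x_e, y_e = rng.randint(0, w - 1), rng.randint(0, h - 1)
        if x_e + w_e <= w and y_e + h_e <= h:
            img = img.copy()
            img[y_e:y_e + h_e, x_e:x_e + w_e] = img.mean(axis=(0, 1))
            return img
    return img
```

The copy keeps the original array untouched, which matters when the same decoded image is reused across epochs.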
IV-C Label Smoothing
The IDE network is a basic baseline in person ReID. The last layer of IDE, which outputs the ID prediction logits of images, is a fully connected layer with a hidden size equal to the number of persons N. Given an image, we denote y as the truth ID label and p_i as the ID prediction logit of class i. The cross-entropy loss is computed as follows:

$$ L_{ID} = \sum_{i=1}^{N} -q_i \log(p_i), \qquad q_i = \begin{cases} 1, & y = i \\ 0, & y \ne i \end{cases} $$
As the classification category is determined by the person ID, we refer to this loss function as ID loss in this study.
Nevertheless, person ReID can be regarded as a one-shot learning task because the person IDs of the testing set do not appear in the training set, so the ReID model must be prevented from overfitting to the training IDs. Label smoothing (LS) is a widely used method to prevent overfitting in classification tasks. The construction of q_i is changed to:

$$ q_i = \begin{cases} 1 - \frac{N-1}{N}\varepsilon, & y = i \\ \varepsilon / N, & y \ne i \end{cases} $$
where ε is a small constant that encourages the model to be less confident on the training set. In this study, ε is set to 0.1. When the training set is not large, LS can significantly improve model performance.
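The smoothed ID loss can be checked numerically with a small NumPy sketch; the logits here are illustrative, and `p` is obtained by applying softmax to them.

```python
import numpy as np

def id_loss_with_label_smoothing(logits, y, eps=0.1):
    """Cross-entropy (ID loss) with label smoothing.

    The one-hot target q is softened: the true class gets
    1 - (N - 1) * eps / N and every other class gets eps / N, which
    discourages over-confidence on training identities.
    """
    n = logits.shape[0]                   # number of identities N
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # softmax probabilities
    q = np.full(n, eps / n)
    q[y] = 1.0 - (n - 1) * eps / n        # smoothed target distribution
    return float(-(q * np.log(p)).sum())
```

For a confident, correct prediction, the smoothed loss is strictly larger than the plain cross-entropy, which is exactly the regularizing pressure LS applies.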
IV-D Last Stride
A high spatial resolution always enriches feature granularity. Sun et al. removed the last spatial down-sampling operation in the backbone network to increase the size of the feature map. For convenience, we denote the last spatial down-sampling operation in the backbone network as the last stride. The last stride of ResNet50 is 2 by default. When fed an image of size 256 × 128, the backbone of ResNet50 outputs a feature map with a spatial size of 8 × 4. If the last stride is changed from 2 to 1, we obtain a feature map with an increased spatial size of 16 × 8. This manipulation only slightly increases the computation cost and does not involve extra training parameters. However, the increased spatial resolution brings significant improvement.
IV-E BNNeck

Most works combine ID and triplet losses to train ReID models. Fig. 5(a) shows that both losses constrain the same feature in the standard baseline. However, the targets of these two losses are inconsistent in the embedding space.
Fig. 6(a) presents that ID loss constructs several hyperplanes to separate the embedding space into different subspaces, so the features of each class are distributed affinely in different subspaces. Consequently, cosine distance is more suitable than Euclidean distance for a model optimized by ID loss in the inference stage. By contrast, as shown in Fig. 6(b), triplet loss enhances intra-class compactness and inter-class separability in the Euclidean space; however, the inter-class distance is sometimes smaller than the intra-class distance because triplet loss cannot provide a globally optimal constraint. A widely used method is to combine ID and triplet losses to train the model, which allows the model to learn additional discriminative features. Nevertheless, for image pairs in the embedding space, ID loss optimizes cosine distances, whereas triplet loss focuses on Euclidean distances. If we use both losses to optimize a feature space simultaneously, their goals may be inconsistent. During training, a possible symptom is that one loss decreases while the other oscillates or even increases, as shown in Fig. 8. Finally, triplet loss may blur the clear decision surfaces of ID loss, and ID loss may reduce the intra-class compactness of triplet loss, so the feature distribution becomes tadpole shaped. Therefore, directly combining these two losses can boost performance, but it is not the best approach.
Xiong et al. added a BN layer between the feature and ID loss, which is the same as Fig. 10(d). The authors claimed that the BN layer overcomes overfitting and boosts the performance of the IDE baseline. We consider that the BN layer smoothens the feature distribution in the embedding space. For ID loss (Fig. 6(a)), the BN layer enhances intra-class compactness; it improves the performance of ID loss because features close to the affine center lack clear decision surfaces and are difficult to distinguish. Nevertheless, this layer increases the cluster radius of the intra-class features for triplet loss. Thus, the decision surfaces of Figs. 6(e) and (f) are stricter than those of Figs. 6(b) and (c).
To overcome this problem, we design a structure, namely, BNNeck, as shown in Fig. 5(b). BNNeck adds a BN layer after the feature and before the classifier FC layer. The BN and FC layers are initialized with Kaiming initialization. The feature before the BN layer is denoted as f_t. We let f_t pass through the BN layer to acquire the feature f_i. In the training stage, f_t and f_i are used to compute the triplet and ID losses, respectively. Fig. 6(g) shows that f_i not only keeps the compact distribution inherited from f_t but also acquires ID knowledge from ID loss. Affected by the BN layer and ID loss, the distribution of f_i is tadpole shaped. In comparison with Fig. 6(c), f_i has clearer decision surfaces because of the weaker influence of the triplet loss. Additional details are introduced in Section V-D.
In the inference stage, we select f_i to perform the person ReID task, and the cosine distance metric achieves better performance than the Euclidean distance metric. Experimental results in Table I show that BNNeck improves the performance of the ReID model by a large margin.
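A minimal NumPy forward pass through BNNeck illustrates the two feature spaces; batch statistics model training-mode BN, the bias-free classifier is an assumed implementation detail, and the array shapes are purely illustrative.

```python
import numpy as np

def bnneck_forward(f_t, gamma, beta, W, eps=1e-5):
    """BNNeck forward sketch (training mode, batch statistics).

    f_t (B x d): feature before the BN layer  -> supervised by triplet loss.
    f_i = BN(f_t): normalized feature         -> supervised by ID loss via
    the classifier weight matrix W (d x num_ids, no bias term).
    """
    mu = f_t.mean(axis=0)
    var = f_t.var(axis=0)
    f_i = gamma * (f_t - mu) / np.sqrt(var + eps) + beta
    logits = f_i @ W                  # ID prediction logits
    return f_i, logits
```

With unit gamma and zero beta, f_i has (approximately) zero mean and unit variance per dimension, which is the "smoothing" of the feature distribution discussed above.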
IV-F Center Loss
Person ReID is usually regarded as a retrieval/ranking task. The evaluation protocols, i.e., the CMC curve and mAP, are determined by the ranking results but ignore the clustering effect. However, for some ReID applications, such as tracking, an important step is to decide on a distance threshold to separate positive and negative objects. As shown in Fig. 7, two cases can acquire the same ranking results, but probe 2 is easier for the tracking task because of the intra-class compactness of its positive pairs.
Focusing on relative distances, the triplet loss is computed as:

$$ L_{Tri} = [d_p - d_n + \alpha]_+ $$

where d_p and d_n are the feature distances of positive and negative pairs, α is the margin of the triplet loss, and [·]_+ equals max(·, 0). In this study, α is set to 0.3. However, the triplet loss only considers the difference between d_p and d_n and ignores their absolute values. For instance, when d_p = 0.3 and d_n = 0.5, the triplet loss is 0.1; when d_p = 1.3 and d_n = 1.5, the triplet loss is also 0.1. The triplet loss is determined by two randomly sampled person IDs, and ensuring that d_p < d_n holds over the entire training dataset is difficult. In addition, intra-class compactness is ignored.
To compensate for the drawbacks of the triplet loss, we involve center loss in training, which simultaneously learns a center for the deep features of each class and penalizes the distances between the deep features and their corresponding class centers. The center loss function is formulated as follows:

$$ L_C = \frac{1}{2} \sum_{j=1}^{B} \left\| f_{t_j} - c_{y_j} \right\|_2^2 $$

where y_j is the label of the j-th image in a mini-batch, c_{y_j} denotes the y_j-th class center of the deep features, and B is the batch size. This formulation effectively characterizes the intra-class variations, and minimizing the center loss increases intra-class compactness. Our model is trained with three losses in total:

$$ L = L_{ID} + L_{Tri} + \beta L_C $$
where β is the balancing weight of the center loss. In our baseline, β is set to 0.0005.
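The center-loss term can be sketched in NumPy as below; in training, the class centers would be learned jointly with the network, whereas here they are given arrays for illustration.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss sketch: 0.5 * sum_j || f_j - c_{y_j} ||^2.

    `features` (B x d) are the batch features, `labels` (B,) their class
    indices, and `centers` (num_classes x d) the class centers, which in
    practice are updated alongside the network parameters.
    """
    diff = features - centers[labels]     # distance to each sample's center
    return 0.5 * float((diff ** 2).sum())
```

The loss vanishes exactly when every feature coincides with its class center, so minimizing it directly shrinks the intra-class radius.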
V Experiments

V-A Datasets

We evaluate our models on the Market1501 and DukeMTMC-reID datasets because both are widely used and large scale. Following previous works, we use rank-1 accuracy and mAP for evaluation on both datasets.
Market1501 contains 32,217 images of 1,501 labeled persons of six camera views. The training set has 12,936 images from 751 identities, and the testing set has 19,732 images from 750 identities. In testing, 3,368 hand-drawn images from 750 identities are used as queries to retrieve the matching persons in the database. Single-query evaluation is used in this study.
DukeMTMC-reID is a new large-scale person ReID dataset and collects 36,411 images from 1,404 identities of eight camera views. The training set has 16,522 images from 702 identities, and the testing set has 19,889 images from other 702 identities. Single-query evaluation is used in this study.
V-B Influences of Each Trick (Same domain)
[Table I: rank-1 accuracy (r = 1) and mAP of each model on Market1501 and DukeMTMC-reID]
The standard baseline introduced in Section III achieves 87.7% and 79.7% rank-1 accuracy on Market1501 and DukeMTMC-reID, respectively, which is similar to most baselines reported in other papers. The warmup strategy, random erasing augmentation, LS, stride change, BNNeck, and center loss are then added to the training process one by one. The designed BNNeck boosts performance more than the other tricks, especially on DukeMTMC-reID. Finally, with these tricks, the baseline acquires 94.5% rank-1 accuracy and 85.9% mAP on Market1501, and 86.4% rank-1 accuracy and 76.4% mAP on DukeMTMC-reID. In other words, these training tricks boost the performance of the standard baseline by over 10% mAP, while involving only an extra BN layer and no increase in training time.
V-C Influences of Each Trick (Cross domain)
[Table II: cross-domain rank-1 accuracy (r = 1) and mAP of each model]
To explore their effectiveness further, we present the results of cross-domain experiments in Table II. Overall, three tricks, namely, the warmup strategy, LS, and BNNeck, greatly boost the cross-domain performance of ReID models. The stride change and center loss seem to have no influence, whereas REA harms the models in the cross-domain ReID task. When our modified baseline is trained without REA, it achieves 41.4% and 54.3% rank-1 accuracy on Market1501 and DukeMTMC-reID, respectively, surpassing the standard baseline by a large margin. We infer that because REA masks regions of the training images, the model learns more knowledge specific to the source domain and thus performs poorly in the target domain. Finally, our baseline achieves good performance and can be used as a strong baseline for the cross-domain ReID task.
V-D Analysis of BNNeck
V-D1 Different neck structures
[Table III: rank-1 accuracy (r = 1) and mAP of different neck structures, features, and metrics]
To discuss the effectiveness of our BNNeck, we design several different neck structures, as shown in Fig. 10, and analyze the corresponding ablation studies in Table III. Neck3 outperforms Neck1 and Neck2. In addition, BNNeck2 is worse than Neck2, but BNNeck1 is better than Neck1. Our BNNeck achieves the best performance on both benchmarks. In summary, we present the following observations/conclusions: 1) without the BN layer, integrating ID and triplet losses is better than using only one loss; 2) the BN layer is effective for ID loss but invalid for triplet loss; and 3) our BNNeck, which places the triplet loss before the BN layer, is a reasonable neck structure.
V-D2 Inconsistency between ID loss and Triplet loss
To verify that ID and triplet losses are inconsistent in the same feature space, we train models with Neck3, BNNeck3, and our proposed BNNeck. Fig. 10 shows that these three neck structures use ID and triplet losses to optimize the same feature. Fig. 8 presents the training loss curves over 500 iterations. In Figs. 8a and 8d, the triplet loss initially increases and then decays in the loss curves marked by black ovals, showing a clear confrontation between the triplet and ID losses. In comparison with Neck3, BNNeck3 adds a BN layer after the feature f. In Figs. 8b and 8e, the BN layer weakens but does not eliminate the inconsistency. However, for BNNeck in Figs. 8c and 8f, the inconsistency is suppressed, and the triplet loss curves are smooth. In conclusion, the BN layer can weaken the inconsistency between the losses, and separating them into two different feature spaces is important.
V-D3 Visualization of feature distribution
To analyze the distributions of the different features in Fig. 10, we train models on the MNIST dataset. On ReID benchmarks, such a visualization contains considerable noise because the number of person IDs is large and the number of images per person ID is small. By contrast, MNIST has only 10 categories, each with thousands of samples, which makes the feature distribution clear and robust. Fig. 9 shows the results. ID and triplet losses yield two different feature distributions, and when the two losses are integrated in Fig. 9c, the clustered distribution is stretched into a tadpole shape. The distributions in Figs. 9d-9f are more Gaussian than those in Figs. 9a-9c because of the BN effect. Figs. 9g and 9h show that our BNNeck separates the triplet and ID losses into two different feature spaces: the feature distribution of the triplet loss remains clustered, and that of the ID loss has clear decision surfaces similar to Figs. 9a and 9b.
We summarize our conclusions/observations as follows: 1) the feature distributions of ID and triplet losses are affine and clustered, respectively, i.e., they are inconsistent; 2) the feature distribution of ID + triplet loss is tadpole shaped; 3) the BN layer can smoothen/normalize the feature distribution and enhances intra-class compactness for ID loss but reduces it for triplet loss; and 4) we separate the triplet and ID losses into two different and suitable feature spaces.
[Fig. 11 caption: μ, σ, and CV denote the mean value, standard deviation, and coefficient of variation, respectively.]
V-D4 Two feature spaces of BNNeck
Although the results on MNIST in Fig. 9 efficiently support our conclusion, image classification and person ReID are two different tasks. We therefore perform a statistical analysis of the norm distributions of f_t and f_i in BNNeck on the Market1501 dataset. The mean value μ and standard deviation σ of the feature norms are calculated, and the coefficient of variation CV is also presented to analyze the separability of the feature distributions. As shown in Fig. 11, f_t and f_i are distributed differently in the feature space.
f_t is compactly and Gaussian distributed in an annular space because it is directly optimized by the triplet loss. However, we consider f_i to have a tadpole-shaped distribution because ID loss stretches the intra-class distribution. The maximum norm of f_i is 48.70, whereas the minimum is 18.62. The CV of f_t is 0.043, but the CV of f_i reaches 0.98, which demonstrates that f_i is distributed more discretely than f_t. In conclusion, BNNeck provides two different and suitable feature spaces for the triplet loss and ID loss.
V-D5 Metric space for BNNeck
We evaluate the performance of the two different features (f_t and f_i) with the Euclidean and cosine distance metrics. All models in Table IV are trained without center loss. We observe that the cosine distance metric performs better than the Euclidean distance metric for f_i. As ID loss directly constrains the feature following the BN layer, f_i can be clearly separated by several hyperplanes, and the cosine distance measures the angle between feature vectors; thus, the cosine distance metric is more suitable than the Euclidean distance metric for f_i. However, f_t is simultaneously close to the triplet loss and constrained by ID loss, so the two metrics achieve similar performance for f_t.
Overall, BNNeck significantly improves the performance of ReID models. We select f_i with the cosine distance metric to perform retrieval in the inference stage.
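Retrieval with f_i under the cosine metric then reduces to a normalized matrix product; the sketch below uses illustrative query and gallery arrays.

```python
import numpy as np

def cosine_rank(query, gallery):
    """Rank gallery images for each query by cosine similarity.

    L2-normalizing both sides reduces cosine similarity to a dot
    product; each row of the returned index matrix lists gallery
    indices sorted from best to worst match.
    """
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sim = q @ g.T                       # cosine similarity matrix
    return np.argsort(-sim, axis=1)     # descending similarity order
```

In practice, the ranking matrix feeds directly into the CMC and mAP computations.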
[Table IV: rank-1 accuracy (r = 1) and mAP of f_t and f_i with Euclidean and cosine distance metrics]
V-E Analysis of Center loss
We discuss the influence of center loss on intra-class compactness. The average intra-class distance alone cannot fully represent intra-class compactness because it ignores the inter-class distance. For convenience, the average intra-class and inter-class distances are denoted as d_intra and d_inter, respectively. Inspired by prior work, the ratio R = d_intra / d_inter is used to measure the clustering effect of the feature distribution. We set β to different values and evaluate the rank-1 accuracy, mAP, and R of the models. Table V presents the results.
[Table V: rank-1 accuracy (r = 1), mAP, and R for different values of β]
For the feature constrained directly by center loss, R decreases as β increases. With β increasing from 0 to 0.5, R is reduced from 0.407 to 0.311 on Market1501 and from 0.424 to 0.363 on DukeMTMC-reID. Hence, center loss can improve intra-class compactness and inter-class separability, thereby bringing a clearer boundary between positive and negative samples. When β is set to 0.5, the model acquires the best clustering effect but obtains worse rank-1 and mAP accuracies. However, the BN layer destroys this clustering effect: for feature f_i, the value of R is almost unaffected by β. On the basis of these observations, we arrive at the following conclusions: (1) center loss boosts intra-class compactness and inter-class separability; (2) the BN layer can destroy the effect of center loss; and (3) increasing the weight of center loss may reduce ranking performance.
V-F Comparison to Other Baselines
[Table VI: rank-1 accuracy (r = 1) and mAP of different baselines and losses on Market1501 and DukeMTMC-reID]
We compare our strong baseline with other effective baselines, such as IDE, TriNet, AWTL, and PCB. PCB is a part-based baseline for person ReID. Table VI presents the performance of these baselines. The experimental results show that our baseline outperforms IDE, TriNet, and AWTL by a large margin. PCB integrates multi-part features and GP uses effective tricks, and both achieve great performance. However, our baseline surpasses them by over 7.1% mAP on both datasets. To the best of our knowledge, our baseline is the strongest baseline.
V-G Comparison to State-of-the-Arts
[Table VII: comparison with state-of-the-art methods grouped by type (e.g., global-feature methods such as IDE), with rank-1 and mAP on Market1501 and DukeMTMC-reID]
We compare our strong baseline with state-of-the-art methods in Table VII. All methods have been divided into different types. Pyramid achieves surprising performance on both datasets, but it concatenates 21 local features of different scales. When only the global feature is utilized, Pyramid obtains 92.8% rank-1 accuracy and 82.1% mAP on Market1501. Our strong baseline reaches 94.5% rank-1 accuracy and 85.9% mAP on Market1501. BFE obtains performance similar to our strong baseline, but it combines the features of two branches. Among all methods that only use global features, our strong baseline outperforms AWTL by more than 10% mAP on both Market1501 and DukeMTMC-reID. To the best of our knowledge, our baseline achieves the best performance when only global features are used.
V-H Baseline Meets State-of-the-Arts
[Table VIII: state-of-the-art methods reproduced with our baseline (method, reference, losses used), with rank-1 and mAP on Market1501 and DukeMTMC-reID]
We reproduce some popular state-of-the-art methods with our strong baseline. Given that numerous outstanding methods are available, we cannot try all of them and select only several typical models, such as k-reciprocal re-ranking, PCB, AlignedReID++, CamStyle, and MGN. For a fair comparison, we use the same losses reported in each paper to train the models. For instance, AlignedReID++ only uses ID and triplet losses, so we do not use center loss to reproduce it. However, as k-reciprocal re-ranking is a post-processing method applied to global features, three losses are used to improve its performance. Table VIII shows the details and results, wherein the values in parentheses are the results reported by the authors in their papers. In addition, we present the performance of the baselines (with BNNeck) trained with different losses as a reference.
Our baseline boosts the performance of k-reciprocal re-ranking, PCB, AlignedReID++, and CamStyle by a large margin. The mAP of k-reciprocal re-ranking gains +30.6% on Market1501, demonstrating that the performance of the underlying baseline is important for such methods. In addition, our MGN achieves performance similar to the original implementation because its accuracies are already too high to improve, and the original uses a BNNeck-like structure. Integrating multiple part features can reduce the effect of global features and limit the benefit of our baseline for PCB and MGN. However, PCB and MGN still obtain better performance than Baseline2, i.e., part-based methods remain effective on our baseline. By contrast, CamStyle (ours) outperforms the original CamStyle but not Baseline1. Our baseline can serve as a strong baseline for the ReID community because it boosts the performance of some methods, while other methods built on it may turn out to be ineffective. To some extent, our baseline efficiently filters effective methods.
[Table IX: rank-1 and mAP of different backbones on Market1501 and DukeMTMC-reID]
V-I Performance of Different Backbones
All aforementioned models apply ResNet50 as the backbone for clear ablation studies and comparison with other methods.
Models with different backbones, such as ResNet, SeResNet, SeResNeXt, and IBN-Net, are evaluated because the backbone has a great influence on performance. As shown in Table IX, deep and large backbones achieve high performance. For example, ResNet101 outperforms ResNet18 by 2.8% rank-1 and 9.3% mAP on Market1501. In addition, the channel attention of SeNet and the group convolution of ResNeXt enhance performance by a slight margin. IBN-Net50, which replaces some BN layers of ResNet50 with instance normalization (IN) layers, is also effective for our baseline. Specifically, IBN-Net50-a is suitable for the standard ReID task and obtains 95.0% and 90.1% rank-1 accuracies on Market1501 and DukeMTMC-reID, respectively. However, IBN-Net50-b achieves 50.1% rank-1 and 29.8% mAP for the cross-domain task M→D (training on Market1501, testing on DukeMTMC-reID) and 61.7% rank-1 and 32.0% mAP for D→M.
For comparison, IBN-Net50-a achieves 40.0% rank-1 and 25.1% mAP for M→D and 52.9% rank-1 and 25.1% mAP for D→M. In conclusion, IBN-Net50-a and IBN-Net50-b are suitable for the same-domain task and the cross-domain task, respectively.
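The difference between batch normalization and instance normalization, and the channel split used by IBN-a, can be sketched in NumPy. This is our own illustration, not the IBN-Net code: the half/half channel split and the omission of learnable scale/shift parameters are simplifications.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each channel over (batch, H, W): statistics are shared
    # across all images in the batch.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # Normalize each channel of each image separately: this removes
    # per-image appearance (style) statistics, which is why IN helps
    # cross-domain transfer.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def ibn_a(x):
    # IBN-a style layer: instance-norm one half of the channels,
    # batch-norm the other half, then concatenate.
    half = x.shape[1] // 2
    return np.concatenate(
        [instance_norm(x[:, :half]), batch_norm(x[:, half:])], axis=1
    )
```

After `ibn_a`, the IN half has zero mean per image and per channel, while the BN half only has zero mean across the whole batch, which illustrates the style-removal effect discussed above.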
VI Supplementary Experiments
We observe that some previous works were conducted with different batch size numbers or image sizes. In this section, we explore their effects on model performance as a supplement.
VI-A Influence of Batch Size
[Table X: rank-1 and mAP on Market1501 and DukeMTMC-reID for different batch sizes]
The mini-batch of triplet loss includes B = P × K images, where P and K denote the number of different persons and the number of images per person, respectively. A mini-batch can contain at most 128 images on one GPU; thus, we cannot perform experiments with larger values of P or K. We remove center loss to clearly observe the relation between triplet loss and batch size. Table X presents the results. However, the results do not clearly show the effect of B on performance. A slight trend is that a larger batch size is beneficial for model performance. We infer that a large K helps mine hard positive pairs, whereas a large P helps mine hard negative pairs.
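The P × K sampling for triplet loss can be sketched in pure Python. This is a minimal illustration of the sampling scheme, not the authors' data loader; the function name and the with-replacement fallback for identities with fewer than K images are our assumptions.

```python
import random
from collections import defaultdict

def pk_batches(labels, P, K, seed=0):
    """Yield mini-batches of P identities x K images each.

    `labels` maps dataset index -> person ID; each yielded batch is a
    list of P * K dataset indices (a sketch of PK sampling).
    """
    rng = random.Random(seed)
    by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        by_id[pid].append(idx)
    pids = list(by_id)
    rng.shuffle(pids)
    for i in range(0, len(pids) - P + 1, P):
        batch = []
        for pid in pids[i:i + P]:
            pool = by_id[pid]
            # Sample with replacement if an identity has fewer than K images.
            picks = rng.sample(pool, K) if len(pool) >= K else rng.choices(pool, k=K)
            batch.extend(picks)
        yield batch  # len(batch) == P * K
```

Because every batch is guaranteed to contain K images of each of P identities, hard positive pairs (same ID) and hard negative pairs (different IDs) are always available for triplet mining, which is the point of the K/P trade-off discussed above.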
VI-B Influence of Image Size
We feed training images of different sizes and train models without center loss under otherwise identical settings. As shown in Table XI, the four models achieve similar performance on both datasets. In our opinion, image size is not a critical factor for the performance of ReID models.
[Table XI: rank-1 and mAP on Market1501 and DukeMTMC-reID for different image sizes]
VII Conclusions and Outlooks
In this study, we propose a strong baseline for person ReID that adds only an extra BN layer to the standard baseline. Our strong baseline achieves 94.5% rank-1 accuracy and 85.9% mAP on Market1501. To the best of our knowledge, this is the best performance achieved with the global features of a single backbone. We evaluate each trick of our baseline on same-domain and cross-domain ReID tasks. In addition, some state-of-the-art methods can be effectively extended on our baseline. We hope that this work can promote ReID research in academia and industry.
We observe an inconsistency between ID and triplet losses in previous ReID baselines. To address this problem, we propose BNNeck to separate the two losses into two different feature spaces. Extended experiments show that the BN layer enhances intra-class compactness for ID loss and reduces it for triplet loss. Furthermore, ID loss is more suitable for optimizing the feature after the BN layer.
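The BNNeck forward pass described above can be sketched as follows. The symbols f_t (before BN, used by triplet loss) and f_i (after BN, used by ID loss) follow the text; the random classifier weights and the NumPy implementation are stand-ins for illustration, not a trained model.

```python
import numpy as np

class BNNeck:
    """Sketch of the BNNeck head: backbone feature -> BN -> ID classifier."""

    def __init__(self, feat_dim, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.gamma = np.ones(feat_dim)  # BN scale (the BN bias is removed in BNNeck)
        self.W = rng.standard_normal((feat_dim, num_classes)) * 0.01  # ID classifier

    def forward(self, f_t, eps=1e-5):
        # f_t: feature before BN -> supervised by triplet loss (Euclidean space).
        mu, var = f_t.mean(axis=0), f_t.var(axis=0)
        f_i = self.gamma * (f_t - mu) / np.sqrt(var + eps)  # after BN -> ID loss
        logits = f_i @ self.W  # fed to softmax cross-entropy (ID loss)
        return f_t, f_i, logits
```

At inference time, f_i is used as the retrieval feature; since it is normalized, cosine distance is a natural choice, which is consistent with separating the two losses into two feature spaces.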
We emphasize that the evaluation of the ReID task ignores the clustering effect of representation features. However, the clustering effect is important for some ReID applications, such as tracking, wherein an important step is choosing a distance threshold to separate positive and negative objects. A simple way to address this problem is to train the model with center loss. Center loss can boost the clustering effect of features but may reduce the ranking performance of ReID models.
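For reference, the center loss term mentioned here (Wen et al.) penalizes the squared distance between each feature and its class center. The sketch below shows only the loss value; in training, the centers are additionally updated toward the batch features, which is omitted here.

```python
import numpy as np

def center_loss(feats, labels, centers):
    """Center loss: 0.5 * mean squared distance of each feature to its
    class center (a sketch of the term discussed in the text).

    feats:   (B, D) batch features
    labels:  (B,)   integer class IDs
    centers: (C, D) one learnable center per class
    """
    diffs = feats - centers[labels]  # integer-array indexing picks each row's center
    return 0.5 * (diffs ** 2).sum(axis=1).mean()
```

Minimizing this term pulls each class's features toward a single point, which directly improves the clustering ratio R discussed earlier, at the possible cost of ranking accuracy when its weight is large.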
In the future, we will explore additional tricks and effective methods based on this strong baseline. Compared with face recognition, person ReID still has room for further exploration. In addition, some open questions remain, such as why REA reduces the cross-domain performance of our baseline. Points where the conclusion is unclear are worth researching.
This research is supported by the National Natural Science Foundation of China (No. 61633019) and the Science Foundation of Chinese Aerospace Industry (JCKY2018204B053).
-  H. Luo, Y. Gu, X. Liao, S. Lai, and W. Jiang, “Bag of tricks and a strong baseline for deep person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019.
-  Z. Wang, J. Jiang, Y. Yu, and S. Satoh, “Incremental re-identification by cross-direction and cross-ranking adaption,” IEEE Transactions on Multimedia, 2019.
-  H. Luo, W. Jiang, X. Zhang, X. Fan, J. Qian, and C. Zhang, “Alignedreid++: Dynamically matching local information for person re-identification,” Pattern Recognition, 2019. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0031320319302031
-  C. Wang, Q. Zhang, C. Huang, W. Liu, and X. Wang, “Mancs: A multi-task attentional network with curriculum sampling for person re-identification,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 365–381.
-  Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang, “Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline),” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 480–496.
-  Z. Zheng, L. Zheng, and Y. Yang, “A discriminatively learned cnn embedding for person reidentification,” ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 14, no. 1, p. 13, 2018.
-  F. Xiong, Y. Xiao, Z. Cao, K. Gong, Z. Fang, and J. T. Zhou, “Good practices on building effective cnn baseline model for person re-identification,” in Tenth International Conference on Graphics and Image Processing (ICGIP 2018), vol. 11069. International Society for Optics and Photonics, 2019, p. 110690I.
-  E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi, “Performance measures and a data set for multi-target, multi-camera tracking,” in European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking, 2016.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1–9.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
-  G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700–4708.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009, pp. 248–255.
-  H. Liu, J. Feng, M. Qi, J. Jiang, and S. Yan, “End-to-end comparative attention networks for person re-identification,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3492–3506, 2017.
-  F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 815–823.
-  G. Wang, Y. Yuan, X. Chen, J. Li, and X. Zhou, “Learning discriminative features with multiple granularities for person re-identification,” in 2018 ACM Multimedia Conference on Multimedia Conference. ACM, 2018, pp. 274–282.
-  X. Fan, H. Luo, X. Zhang, L. He, C. Zhang, and W. Jiang, “Scpnet: Spatial-channel parallelism network for joint holistic and partial person re-identification,” arXiv preprint arXiv:1810.06996, 2018.
-  H. Zhao, M. Tian, S. Sun, J. Shao, J. Yan, S. Yi, X. Wang, and X. Tang, “Spindle net: Person re-identification with human body region guided feature decomposition and fusion,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1077–1085.
-  L. Wei, S. Zhang, H. Yao, W. Gao, and Q. Tian, “Glad: Global-local-alignment descriptor for pedestrian retrieval,” in Proceedings of the 25th ACM international conference on Multimedia. ACM, 2017, pp. 420–428.
-  M. Saquib Sarfraz, A. Schumann, A. Eberle, and R. Stiefelhagen, “A pose-sensitive embedding for person re-identification with expanded cross neighborhood re-ranking,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
-  L. Zheng, Y. Huang, H. Lu, and Y. Yang, “Pose invariant embedding for deep person re-identification,” IEEE Transactions on Image Processing, 2019.
-  C. Song, Y. Huang, W. Ouyang, and L. Wang, “Mask-guided contrastive attention model for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1179–1188.
-  M. M. Kalayeh, E. Basaran, M. Gökmen, M. E. Kamasak, and M. Shah, “Human semantic parsing for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1062–1071.
-  L. Qi, J. Huo, L. Wang, Y. Shi, and Y. Gao, “Maskreid: A mask based deep ranking neural network for person re-identification,” arXiv preprint arXiv:1804.03864, 2018.
-  J. Si, H. Zhang, C.-G. Li, J. Kuen, X. Kong, A. C. Kot, and G. Wang, “Dual attention matching network for context-aware feature sequence based person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5363–5372.
-  W. Li, X. Zhu, and S. Gong, “Harmonious attention network for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2285–2294.
-  S. Li, S. Bak, P. Carr, and X. Wang, “Diversity regularized spatiotemporal attention for video-based person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 369–378.
-  J. Xu, R. Zhao, F. Zhu, H. Wang, and W. Ouyang, “Attention-aware compositional network for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2119–2128.
-  Z. Zheng, L. Zheng, and Y. Yang, “Unlabeled samples generated by gan improve the person re-identification baseline in vitro,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 3754–3762.
-  L. Wei, S. Zhang, W. Gao, and Q. Tian, “Person transfer gan to bridge domain gap for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 79–88.
-  Z. Zhong, L. Zheng, Z. Zheng, S. Li, and Y. Yang, “Camstyle: A novel data augmentation method for person re-identification,” IEEE Transactions on Image Processing, vol. 28, no. 3, pp. 1176–1190, 2019.
-  X. Qian, Y. Fu, T. Xiang, W. Wang, J. Qiu, Y. Wu, Y.-G. Jiang, and X. Xue, “Pose-normalized image generation for person re-identification,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 650–667.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
-  Z. Zhong, L. Zheng, D. Cao, and S. Li, “Re-ranking person re-identification with k-reciprocal encoding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1318–1327.
-  Y. Shen, H. Li, T. Xiao, S. Yi, D. Chen, and X. Wang, “Deep group-shuffling random walk for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2265–2274.
-  M. Ye, C. Liang, Y. Yu, Z. Wang, Q. Leng, C. Xiao, J. Chen, and R. Hu, “Person reidentification via ranking aggregation of similarity pulling and dissimilarity pushing,” IEEE Transactions on Multimedia, vol. 18, no. 12, pp. 2553–2566, 2016.
-  A. Hermans, L. Beyer, and B. Leibe, “In defense of the triplet loss for person re-identification,” arXiv preprint arXiv:1703.07737, 2017.
-  X. Fan, W. Jiang, H. Luo, and M. Fei, “Spherereid: Deep hypersphere manifold embedding for person re-identification,” Journal of Visual Communication and Image Representation, 2019.
-  Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang, “Random erasing data augmentation,” arXiv preprint arXiv:1708.04896, 2017.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2818–2826.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1026–1034.
-  Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A discriminative feature learning approach for deep face recognition,” in European conference on computer vision. Springer, 2016, pp. 499–515.
-  L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian, “Scalable person re-identification: A benchmark,” in Proceedings of the IEEE International Conference on Computer Vision, 2015.
-  X. Zhang, Z. Fang, Y. Wen, Z. Li, and Y. Qiao, “Range loss for deep face recognition with long-tailed training data,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 5409–5418.
-  E. Ristani and C. Tomasi, “Features for multi-target multi-camera tracking and re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6036–6046.
-  F. Zheng, X. Sun, X. Jiang, X. Guo, Z. Yu, and F. Huang, “A coarse-to-fine pyramidal model for person re-identification via multi-loss dynamic training,” arXiv preprint arXiv:1810.12193, 2018.
-  Z. Dai, M. Chen, S. Zhu, and P. Tan, “Batch feature erasing for person re-identification and beyond,” arXiv preprint arXiv:1811.07130, 2018.
-  Y. Sun, L. Zheng, W. Deng, and S. Wang, “Svdnet for pedestrian retrieval,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 3800–3808.
-  X. Pan, P. Luo, J. Shi, and X. Tang, “Two at once: Enhancing learning and generalization capacities via ibn-net,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 464–479.