AdaptiveReID: Adaptive L2 Regularization in Person Re-Identification

07/15/2020 ∙ by Xingyang Ni, et al. ∙ Tampere Universities

We introduce an adaptive L2 regularization mechanism termed AdaptiveReID, in the setting of person re-identification. In the literature, it is common practice to utilize hand-picked regularization factors which remain constant throughout the training procedure. Unlike existing approaches, the regularization factors in our proposed method are updated adaptively through backpropagation. This is achieved by incorporating trainable scalar variables as the regularization factors, which are further fed into a scaled hard sigmoid function. Extensive experiments on the Market-1501, DukeMTMC-reID and MSMT17 datasets validate the effectiveness of our framework. Most notably, we obtain state-of-the-art performance on MSMT17, which is the largest dataset for person re-identification. Source code will be published at https://github.com/nixingyang/AdaptiveReID.


I Introduction

Person re-identification involves retrieving corresponding samples from a gallery set based on the appearance of a query sample across multiple cameras. It is a challenging task since images may differ significantly due to variations in factors such as illumination, camera angle and human pose. On account of the availability of large-scale datasets [33, 20, 29], remarkable progress has been witnessed in recent studies on person re-identification, e.g., utilizing local feature representations [27, 23], leveraging extra attribute labels [22, 16], improving policies for data augmentation [36, 4], adding a separate re-ranking step [35, 37] and switching to video-based datasets [17, 3].

L2 regularization imposes constraints on the parameters of neural networks by adding penalties to the objective function during optimization. It is a commonly adopted technique which can improve a model's generalization ability. Although some works [26, 9, 19, 15] provide insights on the underlying mechanism of L2 regularization, it remains an understudied topic which has not received sufficient attention. In most of the literature, L2 regularization is taken for granted, and the text dedicated to it is typically shrunk into one sentence, as in [7]. Moreover, existing approaches assign constant values to the regularization factors throughout the training procedure, and such hyperparameters are hand-picked via hyperparameter optimization, which is a tedious and time-consuming process. The primary purpose of this work is to address this bottleneck of conventional L2 regularization and introduce a mechanism which learns the regularization factors and updates their values adaptively.

In this paper, our major contributions are twofold:

  • We introduce an adaptive L2 regularization mechanism, which optimizes each regularization factor adaptively as the training procedure progresses.

  • With the proposed framework, we obtain state-of-the-art performance on MSMT17, which is the largest dataset for person re-identification.

The rest of this paper is organized as follows. Section II reviews important works in person re-identification and L2 regularization. In Section III, we present the essential components of our baseline, alongside the proposed adaptive L2 regularization mechanism. Section IV describes the details of our experiments, including datasets, evaluation metrics and a comprehensive analysis of our proposed method. Finally, Section V concludes the paper.

II Related Work

In this section, we give a brief overview of two research topics, namely, person re-identification and L2 regularization.

II-A Person Re-Identification

Utilizing local feature representations which are specific to certain regions has been shown to be successful. Varior et al. [27] propose a Long Short-Term Memory architecture which models the spatial dependency and thus extracts more discriminative local features. Sun et al. [23] apply a uniform partition strategy which divides the feature maps evenly into individual parts, and the part-informed features are concatenated to form the final descriptor.

Besides, methods based on auxiliary features are advocated, aiming to utilize extra attributes in addition to the identity labels. Su et al. [22] show that learning mid-level human attributes can address the challenge of visual appearance variations. Specifically, an attribute prediction model is trained on an independent dataset which contains the attribute labels. Lin et al. [16] manually annotate attribute labels which contain detailed local descriptions. A multi-task network is proposed to learn an embedding for re-identification and also predict the attribute labels. In addition to the performance improvement in re-identification, such a system can speed up the retrieval process by ten times.

By applying random manipulations on training samples, data augmentation has played an essential role in suppressing the overfitting issue and improving the generalization of models. Zhong et al. [36] introduce an approach which erases the pixel values in a random rectangular region during training. By contrast, Dai et al. [4] suggest dropping the same region for all samples in the same batch. Such a feature-dropping branch strengthens the learned features of local regions.

Adding a separate re-ranking step to refine the initial ranking list can lead to significant improvements. Zhong et al. [35] develop a k-reciprocal encoding method based on the hypothesis that a gallery image is more likely to be a true match if it is similar to the probe in the k-reciprocal nearest neighbours. Zhou et al. [37] rank the predictions with a specified local metric by exploiting negative samples for each online query, rather than implementing a general global metric for all query probes.

Lastly, some works shift the emphasis from image-based to video-based person re-identification. Liu et al. [17] introduce a spatio-temporal body-action model which exploits the periodicity exhibited by a walking person in a video sequence. Alternatively, Dai et al. [3] present a learning approach which unifies two modules: one module extracts the features of consecutive frames, and the other module tackles the poor spatial alignment of moving pedestrians.

II-B L2 Regularization

Van Laarhoven [26] proves that L2 regularization does not regularize properly in the presence of normalization operations, i.e., batch normalization [11] and weight normalization [21]. Instead, L2 regularization affects the scale of the weights, and therefore it has an influence on the effective learning rate.

Similarly, Hoffer et al. [9] investigate how applying weight decay before batch normalization affects the learning dynamics. Combining weight decay and batch normalization constrains the weight norm to a small range of values and leads to a more stable step size for the weight direction. This enables better control over the effective step size through the learning rate.

Later on, Loshchilov and Hutter [19] clarify a long-established misunderstanding that L2 regularization is equivalent to weight decay. The aforementioned statement does not hold when applying adaptive gradient algorithms, e.g., Adam [13]. Furthermore, they suggest decoupling the weight decay from the optimization steps, which leads to the original formulation of weight decay.

Most recently, Lewkowycz and Gur-Ari [15] present an empirical study on the relations among the L2 coefficient, the learning rate, the number of training epochs, and the performance of the model. In a similar manner as learning rate schedules, a manually designed schedule for the L2 parameter is proposed to increase training speed and boost the model's performance.

III Proposed Method

In this section, we first present a minimal setup for person re-identification. Later on, we explain five components that contribute to significant improvements in performance, and we use the resulting method as the baseline in our study. Finally, we discuss the proposed adaptive L2 regularization mechanism.

III-A Minimal Setup

Fig. 1: Structure of an objective module. In the training procedure, two objective functions are applied: triplet loss [8] and categorical cross-entropy loss. In the inference procedure, the feature embeddings before the batch normalization [11] layer are extracted as the representations. Note that blocks in yellow are excluded from the minimal setup.

Backbone: ResNet50 [7], initialized with ImageNet [5] pre-trained weights, is selected as the backbone model. For convenience, it is separated into five individual blocks, i.e., blocks 1-5, as illustrated in Figure 2. Additionally, the stride of the first convolution layer in block 5 is set to 1, rather than the default value of 2. This enlarges the feature maps by a factor of 2 along both the height and width dimensions, while reusing the pre-trained weights and keeping the total number of parameters identical.

Objective module: Figure 1 demonstrates the structure of an objective module that converts the feature maps into learning objectives. A global average pooling layer squeezes the spatial dimensions of the feature maps, and the following batch normalization [11] layer generates the normalized feature vectors. The concluding fully-connected layer does not contain a bias vector, and it produces the predicted probabilities of each unique identity so that the model can be optimized using the categorical cross-entropy loss. In the inference procedure, the feature embeddings before the batch normalization layer are extracted as the representations, and the cosine distance is adopted to measure the distance between two samples.
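As an illustration, the following is a minimal PyTorch sketch of such an objective module under the description above; the class name and arguments are ours, not the authors'.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectiveModule(nn.Module):
    """GAP -> BatchNorm -> bias-free FC, as in Figure 1 (a sketch)."""

    def __init__(self, in_channels: int, num_identities: int):
        super().__init__()
        self.bn = nn.BatchNorm1d(in_channels)
        self.fc = nn.Linear(in_channels, num_identities, bias=False)

    def forward(self, feature_maps: torch.Tensor):
        # Global average pooling squeezes the spatial dimensions.
        embedding = feature_maps.mean(dim=(2, 3))  # pre-BN embedding, used at inference
        logits = self.fc(self.bn(embedding))       # identity logits, used during training
        return embedding, logits

def cosine_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # At inference, samples are compared via cosine distance on the pre-BN embeddings.
    return 1.0 - F.cosine_similarity(a, b, dim=-1)
```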

Overall topology: The topology of the overall model is shown in Figure 2. It is to be observed that the minimal setup only contains the global branch. Given a batch of images, the individual blocks from the backbone model are applied successively, and an objective module is appended at the end.

Fig. 2: Topology of the overall model. Feature embeddings from multiple objective modules are concatenated in the inference procedure. Note that blocks in yellow are excluded from the minimal setup.

Data augmentation: The image is resized to the target resolution using bilinear interpolation. Besides, the image is flipped horizontally at random with probability 0.5. Zero paddings are added to all sides of the image, i.e., the top, bottom, left and right sides, and a random patch with the target resolution is subsequently cropped.
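This pipeline maps naturally onto torchvision transforms; a sketch follows. The target resolution (256×128) and padding size (10 pixels) are illustrative assumptions, since the text does not state them.

```python
from torchvision import transforms

TARGET_RESOLUTION = (256, 128)  # (height, width); an assumed value
PADDING = 10                    # an assumed padding size

augment = transforms.Compose([
    transforms.Resize(TARGET_RESOLUTION,
                      interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.RandomHorizontalFlip(p=0.5),   # flip with probability 0.5
    transforms.Pad(PADDING, fill=0),          # zero padding on all four sides
    transforms.RandomCrop(TARGET_RESOLUTION), # random crop back to target size
])
```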

Learning rate: The learning rate increases linearly from a low value to the pre-defined base learning rate in the early stage of the training procedure, and it is divided by ten once the performance on the validation set plateaus. On the one hand, the warmup strategy suppresses the distorted gradient issue at the beginning [18]. On the other hand, periodically reducing the learning rate boosts the performance even further.
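A sketch of this two-phase policy with standard PyTorch schedulers is shown below; the warmup length, base learning rate and patience are illustrative assumptions rather than the paper's settings.

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder model for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=3.5e-4)  # assumed base rate

# Linear warmup from 1% of the base rate over the first 10 epochs (assumed),
# then divide the rate by 10 once the validation metric plateaus.
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=0.01, total_iters=10)
plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.1, patience=5)

for epoch in range(60):
    # ... train one epoch, then evaluate val_map on the validation set ...
    val_map = 0.0  # placeholder
    if epoch < 10:
        warmup.step()
    else:
        plateau.step(val_map)
```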

Label smoothing: The label smoothing regularization [24] is applied alongside the categorical cross-entropy loss function. Given a sample with ground truth label $y$, the one-hot encoded label $q_i$ equals $1$ only if the index $i$ is the same as the label $y$, and $0$ otherwise. The smoothed label $q'_i$ introduces a hyperparameter $\varepsilon$ and is calculated as:

$$q'_i = (1 - \varepsilon)\, q_i + \frac{\varepsilon}{N}, \qquad (1)$$

where $N$ denotes the number of unique identities.
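Equation (1) is straightforward to implement; a short sketch follows, where $\varepsilon = 0.1$ is an assumed value.

```python
import torch

def smoothed_labels(targets: torch.Tensor, num_classes: int,
                    eps: float = 0.1) -> torch.Tensor:
    """Smoothed one-hot labels per Equation (1); eps = 0.1 is an assumed value."""
    one_hot = torch.nn.functional.one_hot(targets, num_classes).float()
    return (1.0 - eps) * one_hot + eps / num_classes

# Example: three samples, five identities. Each row sums to 1; the true class
# receives 1 - eps + eps/N, every other class receives eps/N.
q = smoothed_labels(torch.tensor([0, 2, 4]), num_classes=5)
```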

III-B Baseline

Triplet loss: As highlighted in Figure 1, the triplet loss [8] is applied to the feature embeddings before the batch normalization layer. It mines moderately hard triplets instead of all possible combinations of triplets, given that using all possible triplets may destabilize the training procedure. Considering that multiple loss functions are present, the weighting coefficient of each loss function is set to 1 for simplicity.
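For concreteness, a batch-hard variant in the spirit of [8] is sketched below; the margin value is an assumption, and the exact mining rule may differ from the authors' implementation.

```python
import torch

def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    """Batch-hard triplet loss in the spirit of [8]; margin 0.3 is assumed."""
    dist = torch.cdist(embeddings, embeddings, p=2)    # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # positive mask (incl. self)
    # Hardest positive: farthest sample sharing the anchor's identity.
    hardest_pos = (dist * same.float()).max(dim=1).values
    # Hardest negative: closest sample with a different identity.
    masked = dist + same.float() * 1e9                 # exclude positives and self
    hardest_neg = masked.min(dim=1).values
    return torch.relu(hardest_pos - hardest_neg + margin).mean()
```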

Regional branches: In addition to the global branch, two regional branches are integrated into the model. Figure 2 illustrates the diagram of those regional branches. Firstly, block 5 from the backbone model is replicated, and it is not shared with the global branch. Secondly, we adopt the uniform partition scheme as in [23]: the slicing layer explicitly divides the feature maps into two horizontal stripes. Lastly, dimensionality reduction is performed on each stripe using a convolutional layer, and separate objective modules are appended afterwards. In the inference procedure, the feature embeddings from multiple objective modules are concatenated.
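A minimal sketch of such a regional branch is given below; the input and reduced channel widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RegionalBranch(nn.Module):
    """Split feature maps into two horizontal stripes and reduce each with a
    1x1 convolution (a sketch; channel widths are assumed)."""

    def __init__(self, in_channels: int = 2048, reduced: int = 256):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(in_channels, reduced, kernel_size=1) for _ in range(2)])

    def forward(self, feature_maps: torch.Tensor):
        stripes = torch.chunk(feature_maps, chunks=2, dim=2)  # split along height
        # Each reduced stripe would feed its own objective module (Figure 2).
        return [conv(s) for conv, s in zip(self.reduce, stripes)]
```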

Random erasing: In addition to random horizontal flipping, random erasing [36] is utilized in data augmentation. During training, it erases an area of the original images to improve the robustness of the model, especially for occlusion cases.
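Random erasing is available out of the box in torchvision; in the sketch below, the probability and scale follow torchvision's defaults rather than the paper's settings.

```python
from torchvision import transforms

# Random erasing appended to the augmentation pipeline; it operates on
# tensors, hence the preceding ToTensor.
augment_with_erasing = transforms.Compose([
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.33), value=0),
])
```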

Clipping: The clipping layer is inserted between the global average pooling layer and the batch normalization layer in Figure 1. It performs element-wise value clipping so that the values in its output are contained in a closed interval. The clipping layer works in a similar manner as the ReLU-n units [14], and it relieves optimization difficulties in the succeeding triplet loss [8].
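Such a layer reduces to an element-wise clamp; in the sketch below, the interval bounds are illustrative assumptions (a ReLU-6-style range).

```python
import torch
import torch.nn as nn

class Clipping(nn.Module):
    """Element-wise value clipping to a closed interval, akin to ReLU-n [14].
    The bounds below are assumptions for illustration."""

    def __init__(self, low: float = 0.0, high: float = 6.0):
        super().__init__()
        self.low, self.high = low, high

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.clamp(x, self.low, self.high)
```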

L2 regularization: Conventional L2 regularization is applied to all trainable parameters, i.e., the regularization factors remain constant throughout the training procedure. Additionally, those regularization factors need to be hand-picked via hyperparameter optimization.

III-C Adaptive L2 Regularization

A neural network consists of a set of distinct parameters,

$$\Theta = \{\theta_1, \theta_2, \ldots, \theta_n\}, \qquad (2)$$

with $\Theta$ containing all trainable parameters. Each $\theta_i$ is an array which could be a vector, a matrix or a 3rd-order tensor. For example, the kernel and bias terms in a fully-connected layer are a matrix and a vector, respectively.

Conventional L2 regularization imposes an additional penalty term on the objective function, which can be formulated as follows:

$$\tilde{L} = L + \lambda \sum_{i=1}^{n} \|\theta_i\|_2^2, \qquad (3)$$

where $L$ and $\tilde{L}$ denote the original and updated objective functions, respectively. In our case (see Figures 1 and 2), $L$ is a weighted sum of the triplet loss [8] and categorical cross-entropy loss functions. In addition, $\|\theta_i\|_2^2$ refers to the square of the L2 norm¹ of $\theta_i$, and the constant coefficient $\lambda$ defines the regularization strength.

¹ We define $\|\theta_i\|_2^2$ to denote the sum of squares of all elements also when $\theta_i$ is a matrix or a 3rd-order tensor.

One may wish to add penalties in a different way, e.g., applying lighter regularization in the early layers but stronger regularization in the last ones. Thus, it is possible to generalize even further, i.e., defining a unique coefficient for each $\theta_i$:

$$\tilde{L} = L + \sum_{i=1}^{n} \lambda_i \|\theta_i\|_2^2, \qquad (4)$$

where each parameter $\theta_i$ is associated with an individual regularization factor $\lambda_i$.

Obviously, it is infeasible to manually fine-tune those regularization factors $\lambda_i$ one by one, since $n$ is in the order of 100 for models built on the ResNet50 backbone. Therefore, we treat them as any other learnable parameters and find suitable values from the data itself.

To make the aforementioned regularization factors adaptive, a straightforward extension is obtained by replacing the pre-defined constants with scalar variables which are trainable through backpropagation. After this modification, Equation 4 remains unchanged while $\lambda_i \in \mathbb{R}$ becomes trainable. However, such an approach without any constraints on $\lambda_i$ will fail. Namely, setting negative values for $\lambda_i$ allows naively increasing $\|\theta_i\|_2^2$ so that $\tilde{L}$ decreases sharply. In other words, the regularization penalties would become dominant in the optimization process. Thus the model collapses and does not learn useful feature embeddings.

To address the collapse problem, we apply the hard sigmoid function, which assures that the regularization factors always take non-negative values. The hard sigmoid function is defined as

$$\mathrm{hardsigmoid}(x) = \max\left(0,\ \min\left(1,\ \frac{x + \delta}{2\delta}\right)\right), \qquad (5)$$

where $\delta$ is a positive constant which controls the width of the linear region. In our experiments, we use a fixed value of $\delta$, but any other positive value can be used as well.
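Expressed in code, the hard sigmoid of Equation (5) is a one-liner; $\delta = 3.0$ below is an assumed value, since the constant is not recoverable from the text.

```python
import torch

def hard_sigmoid(x: torch.Tensor, delta: float = 3.0) -> torch.Tensor:
    """Piecewise-linear sigmoid per Equation (5); delta = 3.0 is an assumed value.
    Saturates at 0 for x <= -delta and at 1 for x >= delta."""
    return torch.clamp((x + delta) / (2.0 * delta), min=0.0, max=1.0)
```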

The regularization factor $\lambda_i$ is obtained by applying the hard sigmoid on the raw parameter as

$$\lambda_i = \mathrm{hardsigmoid}(w_i), \qquad (6)$$

where the $w_i$ ($i = 1, 2, \ldots, n$) are the trainable scalar variables. Furthermore, we introduce a hyperparameter $\Omega$ which represents the amplitude. Hence, we get

$$\lambda_i = \Omega \cdot \mathrm{hardsigmoid}(w_i). \qquad (7)$$

The amplitude $\Omega$ offers the flexibility of avoiding excessively large regularization factors which could deteriorate the training procedure. Combining Equations (4) and (7) gives

$$\tilde{L} = L + \Omega \sum_{i=1}^{n} \mathrm{hardsigmoid}(w_i)\, \|\theta_i\|_2^2. \qquad (8)$$
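Putting Equations (5)-(8) together, the penalty term can be computed as in the sketch below; this follows the notation above and is not the authors' implementation.

```python
import torch

def hard_sigmoid(x: torch.Tensor, delta: float = 3.0) -> torch.Tensor:
    # Assumed form of Equation (5); clamps to [0, 1].
    return torch.clamp((x + delta) / (2.0 * delta), min=0.0, max=1.0)

def adaptive_l2_penalty(params: list, raw_factors: torch.Tensor,
                        amplitude: float) -> torch.Tensor:
    """Penalty term of Equation (8): one trainable raw factor w_i per
    parameter array theta_i (a sketch under the notation above)."""
    penalty = torch.zeros((), device=raw_factors.device)
    for theta, w in zip(params, raw_factors):
        lam = amplitude * hard_sigmoid(w)             # Equation (7)
        penalty = penalty + lam * theta.pow(2).sum()  # lambda_i * ||theta_i||_2^2
    return penalty

# Usage sketch: the raw factors are optimized jointly with the model, e.g.
#   raw_factors = torch.nn.Parameter(torch.zeros(num_parameter_arrays))
#   loss = task_loss + adaptive_l2_penalty(list(model.parameters()),
#                                          raw_factors, amplitude)
```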

IV Experiments

In this section, we describe the datasets and evaluation metrics, and provide a comprehensive analysis of our proposed method.

IV-A Datasets

Dataset Market-1501 DukeMTMC-reID MSMT17
Train Samples 12,936 16,522 32,621
Train Identities 751 702 1,041
Test Query Samples 3,368 2,228 11,659
Test Gallery Samples 15,913 17,661 82,161
Test Identities 751 1,110 3,060
Cameras 6 8 15
TABLE I: Comparison of three person re-identification datasets, namely, Market-1501 [33], DukeMTMC-reID [20] and MSMT17 [29].

We conduct experiments on three person re-identification datasets, namely, Market-1501 [33], DukeMTMC-reID [20] and MSMT17 [29]. Table I makes a comparison of those datasets. The MSMT17 dataset outshines the other two due to its large scale.

The Market-1501 dataset is collected with six different cameras in total. It contains 32,217 images from 1,501 pedestrians, and at least two cameras capture each pedestrian. The training set includes 751 pedestrians with 12,936 images, while the test set consists of the remaining images from 750 pedestrians and one distractor class.

The DukeMTMC-reID dataset includes 1,404 pedestrians that appear in at least two cameras and 408 pedestrians that appear only in one camera. The training and test sets contain 16,522 and 19,889 images, respectively. The query and gallery samples in the test set are randomly split.

The MSMT17 dataset is the largest person re-identification dataset which is publicly available, as of July 2020. It contains 126,441 images from 4,101 pedestrians, while 3 indoor cameras and 12 outdoor cameras are employed. In particular, the test set has approximately three times as many samples as the training set. Such a setting motivates the research community to leverage a limited number of training samples, since data annotation is costly.

IV-B Evaluation Metrics

Following the practices in [33], two evaluation metrics are applied to measure the performance, i.e., mean Average Precision (mAP), and Cumulative Matching Characteristic (CMC) rank-k accuracy. The metrics take the distance matrix between query and gallery samples, in conjunction with the ground truth identities and camera IDs as input arguments. Gallery samples are discarded if they have been taken from the same camera as the query sample. As a result, greater emphasis is laid on the performance in the cross-camera setting.

Since the query samples may have multiple ground truth matches in the gallery set, mAP is preferable to rank-k accuracy, for the reason that mAP considers both precision and recall.
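For reference, a compact implementation of the average precision for a single query, with junk samples (e.g., same-camera matches) discarded as described, might look as follows; the helper names are ours, and this is one common definition of AP rather than the exact evaluation script.

```python
import numpy as np

def average_precision(dist: np.ndarray, good: np.ndarray,
                      junk: np.ndarray) -> float:
    """AP for one query.

    dist: distances from the query to every gallery sample.
    good: boolean mask of true cross-camera matches.
    junk: boolean mask of gallery samples to discard (e.g., same camera).
    """
    order = np.argsort(dist)         # gallery indices, ascending distance
    keep = ~junk[order]              # drop junk samples from the ranking
    hits = good[order][keep]         # boolean hit list in rank order
    if hits.sum() == 0:
        return 0.0
    ranks = np.nonzero(hits)[0] + 1  # 1-based rank of each hit
    precision_at_hits = np.cumsum(hits)[hits] / ranks
    return float(precision_at_hits.mean())

# mAP is the mean of average_precision over all queries; CMC rank-k checks
# whether at least one hit occurs within the top k kept gallery samples.
```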

Method Venue Backbone Market-1501 DukeMTMC-reID MSMT17
mAP R1 mAP R1 mAP R1
Annotators [31] arXiv 2017 - - 93.5 - - - -
PCB [23] ECCV 2018 ResNet50 81.6 93.8 69.2 83.3 - -
IANet [10] CVPR 2019 ResNet50 83.1 94.4 73.4 87.1 46.8 75.5
AANet [25] CVPR 2019 ResNet50 82.5 93.9 72.6 86.4 - -
CAMA [30] CVPR 2019 ResNet50 84.5 94.7 72.9 85.8 - -
DGNet [34] CVPR 2019 ResNet50 86.0 94.8 74.8 86.6 52.3 77.2
OSNet [38] ICCV 2019 OSNet 84.9 94.8 73.5 88.6 52.9 78.7
MHN [1] ICCV 2019 ResNet50 85.0 95.1 77.2 89.1 - -
BDB [4] ICCV 2019 ResNet50 86.7 95.3 76.0 89.0 - -
BAT-net [6] ICCV 2019 GoogLeNet 87.4 95.1 77.3 87.7 56.8 79.5
SNR [12] CVPR 2020 ResNet50 84.7 94.4 72.9 84.4 - -
HOReID [28] CVPR 2020 ResNet50 84.9 94.2 75.6 86.9 - -
RGA-SC [32] CVPR 2020 ResNet50 88.4 96.1 - - 57.5 80.3
SCSN [2] CVPR 2020 ResNet50 88.5 95.7 79.0 91.0 58.5 83.8
Baseline (Ours) - ResNet50 87.2 94.6 78.9 88.0 57.7 79.1
AdaptiveReID (Ours) - ResNet50 88.3 95.3 79.9 88.9 59.4 79.6
AdaptiveReID (Ours) - ResNet101 88.6 94.8 80.6 89.2 61.9 81.3
AdaptiveReID (Ours) - ResNet152 88.9 95.6 81.0 90.2 62.2 81.7
AdaptiveReID (Ours)† - ResNet152 94.4 96.0 90.7 92.2 76.7 84.9
TABLE II: Performance comparisons among the baseline, AdaptiveReID and existing approaches. The mAP score on MSMT17 is the most reliable indicator of performance. R1: rank-1 accuracy. -: not available. †: re-ranking [35] is applied.
Method Market-1501 DukeMTMC-reID MSMT17
mAP R1 mAP R1 mAP R1
Minimal Setup 28.3 60.0 28.7 49.9 11.4 34.0
+ Triplet Loss 79.9 92.0 68.8 82.2 44.0 70.9
+ Regional Branches 81.3 93.3 71.2 84.2 47.9 74.2
+ Random Erasing 85.8 94.4 76.6 87.0 54.1 77.0
+ Clipping 86.8 94.3 78.1 87.6 56.5 78.4
+ L2 regularization 87.2 94.6 78.9 88.0 57.7 79.1
TABLE III: Ablation study of baseline using the ResNet50 backbone.
R1: rank-1 accuracy.

IV-C Ablation Study of Baseline

The baseline differs from the minimal setup in five aspects, as discussed in Section III-B. Table III presents an ablation study to demonstrate how each component contributes to the performance on person re-identification. On the one hand, the triplet loss [8] brings the most significant improvements on all three datasets. The boost is due to the fact that the triplet loss is applied to the feature embeddings which are retrieved in the inference procedure (see Figure 1). Since the triplet loss directly optimizes the model in a manner comparable to similarity search, it closes the gap between the training and inference procedures. On the other hand, the other four components bring moderate improvements. It is conceivable that the model reaches better generalization even with hand-picked L2 regularization factors which remain constant throughout the training procedure.

IV-D Comparisons with Existing Approaches

Table II shows performance comparisons among the baseline, AdaptiveReID and existing approaches.

Firstly, all methods listed in Table II have surpassed the best-performing human annotators [31] on the Market-1501 dataset. In light of the scale of the Market-1501 and DukeMTMC-reID datasets (see Table I), these two small-scale datasets might have become saturated, and more emphasis should be put on the MSMT17 dataset. Since mAP is preferable to rank-k accuracy, the mAP score on MSMT17 is the most reliable indicator of performance.

Secondly, our AdaptiveReID models are trained with the proposed adaptive L2 regularization mechanism, with the amplitude $\Omega$ in Equation 8 set to a fixed value for all experiments. On the one hand, the AdaptiveReID method achieves decent improvements over the baseline, especially on MSMT17, where the mAP score increases by 1.7 (see Table II). On the other hand, among methods which utilize the ResNet50 backbone, AdaptiveReID obtains state-of-the-art performance on DukeMTMC-reID and MSMT17, and comes very close to the state of the art on Market-1501.

Last but not least, deeper backbones (i.e., ResNet101 and ResNet152) further improve the performance, at the cost of extra computation. With the re-ranking [35] method, which exploits the test data in the inference procedure, new milestones have been accomplished: the mAP scores on Market-1501, DukeMTMC-reID and MSMT17 stand at 94.4, 90.7 and 76.7, respectively.

IV-E Quantitative Analysis of Regularization Factors

Fig. 3: The median value of regularization factors in each category, with respect to the number of iterations.

Depending on the associated distinct parameter (see Equation 2), the regularization factors can be classified into five categories: conv_kernel, conv_bias, bn_gamma, bn_beta and dense_kernel, where conv, bn and dense denote the convolutional, batch normalization and fully-connected layers, respectively. In the following, we examine the regularization factors for a model trained on MSMT17 using the ResNet50 backbone.

Figure 3 visualizes the median value of the regularization factors in each category, with respect to the number of iterations. Note that the learning rate gets reduced twice during the training procedure. While conv_kernel, bn_gamma and bn_beta behave similarly, conv_bias remains constant throughout the training procedure and dense_kernel drops to 0 in the early stage.

Figure 4 demonstrates a histogram of the regularization factors in the last epoch, i.e., after the training procedure completes. The interval [0, Ω] is divided evenly into five buckets. For regularization factors from the same category, the values can differ significantly; e.g., the regularization factors from conv_bias spread across both the lowest and the highest buckets. To be specific, consider the regularization factors from conv_bias in the two Reduction blocks (see Figure 2). If omitting the effects of the Clipping layer in Figure 1, those convolutional layers are followed by batch normalization layers, which intrinsically cancel out the bias terms of the aforementioned convolutional layers. Consequently, such regularization factors are free to converge to distinctive values, since the corresponding bias terms have no effect on the learned representation. In summary, this phenomenon reflects the superiority of our proposed method, in which each regularization factor is optimized separately.

Fig. 4: Histogram of regularization factors in the last epoch.

IV-F Qualitative Analysis of Predictions

Fig. 5: Selected query samples with corresponding top 5 matches from the gallery set. Images with orange, green and red borders are query samples, correct matches and erroneous matches, respectively.

Figure 5 illustrates selected query samples with their corresponding top 5 matches from the gallery set. Although query samples and erroneous matches may have similar appearances, minor differences can be observed upon careful inspection, e.g., the dissimilarity between backpacks. Furthermore, our models can retrieve correct matches even in the presence of large illumination changes, e.g., the two examples from the MSMT17 dataset.

V Conclusion

In this work, we revisit L2 regularization in neural networks and propose an adaptive mechanism named AdaptiveReID. Differentiated from existing approaches, which employ hand-picked regularization factors that remain constant, our proposed method optimizes those regularization factors adaptively through backpropagation. More specifically, we apply a scaled hard sigmoid function to trainable scalar variables and use the outputs as the regularization factors. Extensive experiments validate the effectiveness of our framework, and we obtain state-of-the-art performance on MSMT17, which is the largest person re-identification dataset.

References

  • [1] B. Chen, W. Deng, and J. Hu (2019) Mixed high-order attention network for person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 371–381.
  • [2] X. Chen, C. Fu, Y. Zhao, F. Zheng, J. Song, R. Ji, and Y. Yang (2020) Salience-guided cascaded suppression network for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3300–3310.
  • [3] J. Dai, P. Zhang, D. Wang, H. Lu, and H. Wang (2018) Video person re-identification by temporal residual learning. IEEE Transactions on Image Processing 28 (3), pp. 1366–1377.
  • [4] Z. Dai, M. Chen, X. Gu, S. Zhu, and P. Tan (2019) Batch DropBlock network for person re-identification and beyond. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3691–3701.
  • [5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
  • [6] P. Fang, J. Zhou, S. K. Roy, L. Petersson, and M. Harandi (2019) Bilinear attention networks for person retrieval. In Proceedings of the IEEE International Conference on Computer Vision, pp. 8030–8039.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [8] A. Hermans, L. Beyer, and B. Leibe (2017) In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737.
  • [9] E. Hoffer, R. Banner, I. Golan, and D. Soudry (2018) Norm matters: efficient and accurate normalization schemes in deep networks. In Advances in Neural Information Processing Systems, pp. 2160–2170.
  • [10] R. Hou, B. Ma, H. Chang, X. Gu, S. Shan, and X. Chen (2019) Interaction-and-aggregation network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9317–9326.
  • [11] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
  • [12] X. Jin, C. Lan, W. Zeng, Z. Chen, and L. Zhang (2020) Style normalization and restitution for generalizable person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3143–3152.
  • [13] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [14] A. Krizhevsky and G. Hinton (2010) Convolutional deep belief networks on CIFAR-10. Unpublished manuscript 40 (7), pp. 1–9.
  • [15] A. Lewkowycz and G. Gur-Ari (2020) On the training dynamics of deep networks with L2 regularization. arXiv preprint arXiv:2006.08643.
  • [16] Y. Lin, L. Zheng, Z. Zheng, Y. Wu, Z. Hu, C. Yan, and Y. Yang (2019) Improving person re-identification by attribute and identity learning. Pattern Recognition.
  • [17] K. Liu, B. Ma, W. Zhang, and R. Huang (2015) A spatio-temporal appearance representation for video-based pedestrian re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3810–3818.
  • [18] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han (2019) On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265.
  • [19] I. Loshchilov and F. Hutter (2018) Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
  • [20] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi (2016) Performance measures and a data set for multi-target, multi-camera tracking. In European Conference on Computer Vision, pp. 17–35.
  • [21] T. Salimans and D. P. Kingma (2016) Weight normalization: a simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–909.
  • [22] C. Su, S. Zhang, J. Xing, W. Gao, and Q. Tian (2016) Deep attributes driven multi-camera person re-identification. In European Conference on Computer Vision, pp. 475–491.
  • [23] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang (2018) Beyond part models: person retrieval with refined part pooling (and a strong convolutional baseline). In Proceedings of the European Conference on Computer Vision, pp. 480–496.
  • [24] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826.
  • [25] C. Tay, S. Roy, and K. Yap (2019) AANet: attribute attention network for person re-identifications. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7134–7143.
  • [26] T. Van Laarhoven (2017) L2 regularization versus batch and weight normalization. arXiv preprint arXiv:1706.05350.
  • [27] R. R. Varior, B. Shuai, J. Lu, D. Xu, and G. Wang (2016) A Siamese long short-term memory architecture for human re-identification. In European Conference on Computer Vision, pp. 135–153.
  • [28] G. Wang, S. Yang, H. Liu, Z. Wang, Y. Yang, S. Wang, G. Yu, E. Zhou, and J. Sun (2020) High-order information matters: learning relation and topology for occluded person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6449–6458.
  • [29] L. Wei, S. Zhang, W. Gao, and Q. Tian (2018) Person transfer GAN to bridge domain gap for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 79–88.
  • [30] W. Yang, H. Huang, Z. Zhang, X. Chen, K. Huang, and S. Zhang (2019) Towards rich feature discovery with class activation maps augmentation for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1389–1398.
  • [31] X. Zhang, H. Luo, X. Fan, W. Xiang, Y. Sun, Q. Xiao, W. Jiang, C. Zhang, and J. Sun (2017) AlignedReID: surpassing human-level performance in person re-identification. arXiv preprint arXiv:1711.08184.
  • [32] Z. Zhang, C. Lan, W. Zeng, X. Jin, and Z. Chen (2020) Relation-aware global attention for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3186–3195.
  • [33] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian (2015) Scalable person re-identification: a benchmark. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1116–1124.
  • [34] Z. Zheng, X. Yang, Z. Yu, L. Zheng, Y. Yang, and J. Kautz (2019) Joint discriminative and generative learning for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2138–2147.
  • [35] Z. Zhong, L. Zheng, D. Cao, and S. Li (2017) Re-ranking person re-identification with k-reciprocal encoding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1318–1327.
  • [36] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang (2017) Random erasing data augmentation. arXiv preprint arXiv:1708.04896.
  • [37] J. Zhou, P. Yu, W. Tang, and Y. Wu (2017) Efficient online local metric adaptation via negative samples for person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2420–2428.
  • [38] K. Zhou, Y. Yang, A. Cavallaro, and T. Xiang (2019) Omni-scale feature learning for person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3702–3712.