Adversarial Multi-scale Feature Learning for Person Re-identification

12/28/2020, by Xinglu Wang et al., Zhejiang University

Person re-identification (Person ReID) is an important topic in intelligent surveillance and computer vision. It aims to accurately measure visual similarities between person images to determine whether two images correspond to the same person. The key to accurate similarity measurement is learning discriminative features that not only capture clues at different spatial scales, but also support joint inference across scales, with the ability to assess the reliability and ID-relatedness of each clue. To achieve these goals, we propose to improve Person ReID performance from two perspectives: 1) multi-scale feature learning (MSFL), which consists of cross-scale information propagation (CSIP) and multi-scale feature fusion (MSFF), to dynamically fuse features across different scales; 2) a multi-scale gradient regularizer (MSGR), to emphasize ID-related factors and suppress irrelevant factors in an adversarial manner. Combining MSFL and MSGR, our method achieves state-of-the-art performance on four commonly used Person ReID datasets with negligible test-time computation overhead.


I Introduction

Given a probe image, Person ReID aims to identify the person of interest in a large gallery image database collected from different cameras. Person ReID is conducted by estimating the visual similarities between persons; the gallery images are ranked in descending order of similarity as the re-identification result. Such a task has extensive applications in intelligent surveillance. For instance, it can be used to search for criminal suspects or missing persons in a large surveillance camera network efficiently and effectively.

Fig. 1: Several retrieval results for a converged ReID model. Each row represents one visual case. The first column (P) is the probe image. Columns A–J show the retrieved gallery images, where green boxes denote true positives and red boxes denote false negatives. From these visual cases, we observe various challenges in the Person ReID task that can be summarized into two aspects: 1) scale variance of clues; 2) reliability of each clue. To name specific examples: 1) detection misalignment (Row 1, Column D) causes large scale and shift variance; 2) a detector false alarm (Row 2, Column F) produces a partial human body, which serves as a distractor in the test set; 3) extremely confusing identity pairs (Row 2, Column A) require scrutiny of multi-scale clues, including not only global color attributes but also tiny details like bags and shoes; 4) global appearance like color and body structure helps, but is influenced by camera parameters and human pose, e.g., the color distortion caused by camera settings shown in Row 3; 5) facial attributes like wearing glasses are useful in Row 2; however, the face becomes blurred or partially occluded in Row 3, where its importance should be reduced; 6) local parts of different scales are not always reliable, e.g., misalignment causes missing shoes (Row 2, Column C), a bicycle suddenly appears in the background (Row 4, Column E), and the appearance of a bag changes with pose (Row 2, Column E).

As illustrated in Fig. 1, Person ReID faces many challenges, which can be summarized into two aspects: 1) the scale of the clues that distinguish persons varies dramatically; 2) none of the clues is absolutely reliable for distinguishing persons. To give specific examples: 1) detection misalignment (Row 1, Column D) causes large scale and shift variance; 2) a detector false alarm (Row 2, Column F) produces a partial human body, which serves as a distractor in the test set; 3) the existence of extremely similar persons (Row 2, Column A) requires careful scrutiny of multi-scale clues, including not only global color attributes but also tiny details like bags and shoes, and intelligent inference upon them; 4) global appearance like color and body structure helps, but is influenced by camera parameters and human pose, e.g., the color distortion caused by camera settings in Row 3; 5) facial attributes like wearing glasses are useful in Row 2; however, the face becomes blurred or partially occluded in Row 3, and the importance of the face should be reduced in that circumstance; 6) local parts of different scales are not always reliable, e.g., misalignment causes missing shoes (Row 2, Column C), a bicycle emerges in the background (Row 4, Column E), and the appearance of a bag changes with pose (Row 2, Column E).

We argue that all these challenges can be handled when the model is: 1) capable of discovering global-scale and multiple local-scale clues for distinguishing one person from another; 2) able to evaluate the reliability and ID-relatedness of each clue, and finally combine the clues into a discriminative feature.

To achieve the goals mentioned above, we propose two improvements: 1) multi-scale feature learning (MSFL), a model structure with the capacity to exploit different scales and their combinations and to intelligently fuse them; 2) to fully realize this potential, we guide the model with a multi-scale gradient regularizer (MSGR), which implicitly considers all possible perturbations at the beginning of training and gradually emphasizes adversarial perturbations as the model approaches convergence. In this way, the model converges to a better optimum, learns to ignore irrelevant factors, and focuses on the combination of informative ID-related factors.

II Related Work

II-A Pyramid Structure

Due to the success of deep learning, CNNs have emerged as general-purpose feature extractors for a wide range of visual recognition tasks. However, the features from a single convolution layer are insufficient for many tasks that require multi-scale clues for inference. The importance of multi-scale feature learning and the design of pyramid network structures has been recognized recently, and several works [2, 12, 18, 40, 41] investigate the effectiveness of exploiting features from different convolution layers within a CNN. For example, Hariharan et al. [12] considered the feature maps from all convolution layers, allowing finer-grained resolution for localization tasks. Long et al. [18] combined finer-level and higher-level semantic features from different convolution layers for better segmentation. Xie et al. [40] proposed a holistically-nested framework where side outputs are added after lower convolution layers to provide deep supervision for edge detection. Cai et al. [2] concatenated the activation maps from multiple convolution layers to model the interaction of part features for fine-grained recognition.

However, simply concatenating multi-scale features may fail to capture the importance, reliability, level of semantic abstraction, and ID-relatedness of features at different scales, yielding an inferior model. Admittedly, the concatenation of hierarchical features incorporates information at different spatial resolutions. However, it also introduces large semantic gaps caused by different depths. The high-resolution, low-semantic maps may harm the representational capacity for the overall task. For this reason, the Single Shot Detector (SSD) [17] forgoes reusing low-level features and instead builds the pyramid starting from high up in the network (e.g., conv4_3 of VGG nets [31]) before adding several new layers. However, the SSD-style pyramid misses the opportunity to reuse the higher-resolution maps of the feature hierarchy.

To create a feature pyramid that has strong semantics at all scales, many novel neural network architectures have been proposed, including U-Net [25] and SharpMask [21] for segmentation, Recombinator networks [14] for face detection, and Stacked Hourglass networks [20] for keypoint estimation. Ghiasi et al. [9] present a Laplacian pyramid representation for FCNs that progressively refines segmentation. These methods adopt architectures with pyramidal shapes and exploit lateral/skip connections that associate low-level feature maps with high-semantic features. Different from their work, we further improve the representational power of each feature according to its pyramid level.

II-B Adversarial Learning

The idea of adversarial learning comes from adversarial training [10], where a mixture of normal and adversarially generated examples is used during training, in the hope of increasing robustness against adversarial examples. However, as suggested by [26], injecting adversarial noise in such an anisotropic, brute-force way harms model performance on the original validation set, since adversarially generated examples are not naturally interpretable images.

Adversarial learning originated from regularization techniques that reduce overfitting, where an implicit or explicit player turns against the optimization goal. For example, Bengio et al. [1] and Gulcehre et al. [11] add noise to the ReLU and Sigmoid activation functions respectively. Szegedy et al. [36] propose the label-smoothing regularization technique to minimize the distance between the model distribution and the uniform distribution.

The metric learning community has recently started to explore this learning paradigm. For example, deep adversarial metric learning [8] simultaneously learns to generate synthetic hard negatives from observed easy negative samples and to discriminate the feature embedding in an adversarial manner. Adversarial metric learning [6] simultaneously trains a hard negative generator and the feature embedding in an adversarial manner. Energy confusion regularization [4] seeks to confuse the learned model by enlarging the intra-class variance of all positive samples. All of these designs use an adversarial target but optimize it in a one-step, alternating-update fashion. Different from existing work, we design a differentiable gradient regularizer that implicitly generates multiple adversarial perturbations at the same time and is jointly optimized with the original target.

III Proposed Method

Generally, representation learning tasks aim at learning a compact representation from which downstream tasks benefit. As a special case of representation learning, the training pipeline of Person ReID is likewise divided into a feature extraction stage and a loss computation stage. In this section, we present our modifications to the current pipeline, equipping the model with the ability to extract global-scale and multiple local-scale features in the first stage and guiding it to evaluate the reliability and ID-relatedness of each feature in the second stage.

III-A Multi-scale Feature Learning

Prevailing Person ReID methods use one-size-fits-all high-level embeddings from a deep convolutional network for all cases. Specifically, the coarse-resolution, high-semantic embeddings from the last layer are used to retrieve images from the gallery database. This can limit accuracy on difficult examples caused by variations in resolution, scale, pose, and viewpoint, by detector misalignment, and by extremely similar identities, as shown in Fig. 1. As analyzed in Sec. I, it is important to empower the model to extract features at all scales.

To achieve this goal, we would ideally reason jointly across multiple scales of semantic abstraction. However, simply concatenating high- and low-level embeddings suffers from severe performance degradation, as verified by [15]. There are two reasons why naively combining low-level features hurts performance: 1) Early layers lack semantically abstract features. In a neural network, high-resolution low-level features (shape and color) usually lack representational power, while high-semantic high-level features (objects and their parts) tend to lose fine spatial detail. Because early-layer features lack representational power, attaching them prematurely to the final feature will likely yield unsatisfactorily high error rates. 2) Direct deep supervision on low-level features alters the internal representation and makes it premature. Modern neural networks abstract semantic concepts level by level; intermediate supervision pushes early features to be optimized for short-term objectives rather than for the final layers. This might improve the discriminability of shallow features, but it collapses information required to generate high-quality features in later layers. The effect becomes more pronounced when the first classifier is attached to an earlier layer, as shown in [15].

To remedy this issue, we present a novel Person ReID architecture that effectively solves the two problems by 1) creating a feature pyramid with strong semantics at all scales, which naturally leverages the pyramidal shape of a CNN's feature hierarchy; and 2) inserting additional blocks according to each feature's abstraction level, postponing the abstraction of shallow features and mitigating the effect of deep supervision. An overview of the proposed network architecture is shown in Fig. 2. The architecture first extracts the feature hierarchy and encodes the image into a global context embedding via a bottom-up path, then gradually propagates information across different scales via a top-down pathway and lateral connections, and finally combines features of different scales via multi-scale feature fusion (MSFF). Two main considerations drive this design:

Cross-scale information propagation (CSIP). The clues that discriminate persons are imbalanced across the features of different scales and isolated from each other. To overcome these problems, we combine low- and high-level features and allow information to communicate and rebalance across scales via a top-down pathway and lateral connections.

The first step is extracting the feature hierarchy. For convenience of description, we define a stage of a network as the combination of layers that produce feature maps of the same size. We choose the output of the last layer of each stage as our reference set of feature maps, which we enrich to create the feature pyramid. This choice is natural since the deepest layer of each stage should have the strongest features. Specifically, for a ResNet backbone [13], we use the feature activations of each stage's last residual block (conv1, conv2, conv3, conv4), denoted as $\{C_1, C_2, C_3, C_4\}$, with strides of $\{4, 8, 16, 32\}$ pixels with respect to the input image. We do not include conv1 in the pyramid due to its large memory footprint.

Then, the top-down pathway upsamples the high-level, strong-semantic features. These features are enhanced with features from the bottom-up pathway via lateral connections. Each lateral connection merges feature maps of the same spatial size from the bottom-up pathway and the top-down pathway. The bottom-up feature map has lower-level semantics, but its activations are of higher resolution and more accurately localized.

In detail, we use nearest-neighbor upsampling to increase the spatial resolution by a factor of 2, and append a $3\times3$ convolution to reduce the aliasing effect of upsampling. The lateral connection consists of a $1\times1$ convolutional layer that normalizes the channel dimension to 512. The outputs of the lateral connection and the upsampling module are merged by element-wise addition. The final set of feature maps is called $\{P_2, P_3, P_4\}$, corresponding to $\{C_2, C_3, C_4\}$, which are respectively of the same spatial sizes.

The result is a feature pyramid that has rich semantics at all levels, where low-level details are conditionally decoded according to the global context propagated from the high-semantic embeddings.
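To make the pathway concrete, the following is a minimal PyTorch sketch of CSIP under the stated design (1x1 lateral convolutions to 512 channels, nearest-neighbor 2x upsampling, element-wise addition, and a 3x3 smoothing convolution); the class name, argument names, and the ResNet50 channel widths are illustrative assumptions rather than the paper's exact implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class CSIP(nn.Module):
    """Sketch of cross-scale information propagation (top-down + lateral)."""

    def __init__(self, in_channels=(512, 1024, 2048), out_channels=512):
        super().__init__()
        # 1x1 convs normalize every scale to a common channel width (512).
        self.laterals = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
        )
        # 3x3 convs reduce the aliasing effect of upsampling.
        self.smooths = nn.ModuleList(
            [nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
             for _ in in_channels]
        )

    def forward(self, c2, c3, c4):
        # Top-down: start from the strongest semantics, upsample by 2x, and
        # merge with the lateral output by element-wise addition.
        p4 = self.laterals[2](c4)
        p3 = self.laterals[1](c3) + F.interpolate(p4, scale_factor=2, mode="nearest")
        p2 = self.laterals[0](c2) + F.interpolate(p3, scale_factor=2, mode="nearest")
        return self.smooths[0](p2), self.smooths[1](p3), self.smooths[2](p4)
```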

Multi-scale feature fusion (MSFF). Although low-level embeddings contain rich information about shape, color, and texture, they are not abstract enough to discriminate different persons. Thus, we stack additional bottleneck blocks onto the shallow levels, refine the feature representations, and finally integrate the information of different scales by concatenation. Specifically, we further abstract $\{P_2, P_3, P_4\}$ with 3, 2, and 1 bottleneck blocks respectively; the resulting feature set is $\{F_2, F_3, F_4\}$.
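A sketch of MSFF in the same vein: each pyramid level is refined by a different number of residual bottleneck blocks (3, 2, and 1, as stated above), then globally pooled and concatenated. The bottleneck internals and channel widths here are assumptions for illustration.

```python
import torch
import torch.nn as nn

def bottleneck(channels):
    # 1x1 -> 3x3 -> 1x1 bottleneck; the exact internals are an assumption.
    mid = channels // 4
    return nn.Sequential(
        nn.Conv2d(channels, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
        nn.Conv2d(mid, mid, 3, padding=1, bias=False), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
        nn.Conv2d(mid, channels, 1, bias=False), nn.BatchNorm2d(channels),
    )

class MSFF(nn.Module):
    """Sketch of multi-scale feature fusion over (P2, P3, P4)."""

    def __init__(self, channels=512, num_blocks=(3, 2, 1)):
        super().__init__()
        # Deeper refinement stacks for shallower, less abstract levels.
        self.stacks = nn.ModuleList(
            [nn.ModuleList([bottleneck(channels) for _ in range(n)]) for n in num_blocks]
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, p2, p3, p4):
        fused = []
        for p, stack in zip((p2, p3, p4), self.stacks):
            f = p
            for block in stack:
                f = torch.relu(f + block(f))  # residual refinement
            fused.append(self.pool(f).flatten(1))
        # Concatenate F2..F4 into one multi-scale embedding.
        return torch.cat(fused, dim=1)
```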

Fig. 2: Overview of the proposed method.

III-B Multi-scale Gradient Regularizer

While MSFL ensures that clues across different scales are extracted and considered jointly, it is equally important that some clues be permitted to be partially altered or even missing. To achieve the latter without increasing the computation burden at test time, we introduce a differentiable multi-scale gradient regularizer (MSGR), applied only in the training stage, which helps the model extract a more ID-related and discriminative embedding.

Ross et al. [26] demonstrate that input-gradient regularization helps a model be robust to adversarial attacks while remaining more naturally interpretable. Inspired by [26], we expect MSGR applied to low-level feature maps to interpretably alter ID-related color/context information, and MSGR applied to high-level feature maps to interpretably change ID-related semantic part information. Although the experiments show that the actual effect is more complicated and entangled, we describe the method from simple to complex.

We first describe the single-scale gradient regularizer, which is naturally derived from a worst-case perturbation and ultimately becomes a regularizer in the final loss function. Denote $x$ as an input image and $\theta$ as the model parameters. For simplicity, we write the final loss as $L(x;\theta)$, which encapsulates the feature extraction process. The idea can be formalized as follows. Instead of solving $\min_\theta L(x;\theta)$, we would ideally solve the following problem to build a model robust against any perturbation $\delta$. The perturbation is observed to be random in the early stage of training and to become ID-related as the model converges:

$$\min_{\theta}\ \max_{\|\delta\|_p \le \epsilon}\ L(x+\delta;\theta) \tag{1}$$

The norm constraint in Eq. 1 implies that we only require the model to be robust against sufficiently small perturbations (that do not completely change the ID). In general, this problem is difficult to solve explicitly due to the non-convex nature of $L$. We propose to solve it via a first-order Taylor expansion at the point $x$. The inner problem then becomes

$$\max_{\|\delta\|_p \le \epsilon}\ L(x;\theta) + \delta^{\top}\nabla_x L(x;\theta) \tag{2}$$

This problem is linear in $\delta$ and hence convex. We can obtain a closed-form solution by the method of Lagrange multipliers; see Appendix A for details. This yields

$$\delta^{*} = \epsilon\,\operatorname{sign}(\nabla_x L)\,\frac{|\nabla_x L|^{\,q-1}}{\|\nabla_x L\|_q^{\,q-1}} \tag{3}$$

where $q$ is the dual exponent of $p$, i.e., $1/p + 1/q = 1$, and $|\cdot|$, $\operatorname{sign}(\cdot)$, and the exponentiation act element-wise.

Substituting the optimal $\delta^{*}$ back into the original optimization problem of Eq. 1, we can see that the influence of the perturbation can be formulated as a regularization term:

$$\min_{\theta}\ L(x;\theta) + \epsilon\,\|\nabla_x L(x;\theta)\|_q \tag{4}$$

There are multiple ways to implement this regularization: 1) imitate adversarial training by decomposing each training step into two stages, first calculating the adversarial perturbation by Eq. 3, then feeding the perturbed input and performing ordinary gradient descent on $\theta$; or 2) summarize the influence of the perturbation into the regularization term of Eq. 4 and let the gradient of $\|\nabla_x L\|_q$ flow back to $\theta$. Empirically, option 2) is superior to option 1), which is also consistent with the history of adversarial attack and defense, i.e., adversarial training methods usually harm model performance while gradient regularizers help the model converge to a flatter local optimum. We therefore optimize the regularizer directly, leveraging the automatic differentiation provided by modern deep learning frameworks. For the case $p=\infty$, the induced $\ell_1$ term penalizes the gradient anisotropically in every coordinate; for $p=1$, the induced $\ell_\infty$ term penalizes the gradient in only one direction. Empirically, $p=q=2$ is more appropriate than both cases above.
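As a concrete sketch of option 2), the penalty of Eq. 4 can be implemented with double backpropagation; this is a minimal PyTorch illustration with $q=2$, assuming a classification-style `criterion` (the function and argument names are ours, not the paper's):

```python
import torch

def grad_regularized_loss(model, criterion, images, labels, eps=1e-2):
    """Single-scale input-gradient regularizer (Eq. 4), a sketch.

    Penalizes the l2 norm of the loss gradient w.r.t. the input, using
    double backpropagation (create_graph=True) so the penalty itself is
    differentiable w.r.t. the model parameters.
    """
    images = images.requires_grad_(True)
    loss = criterion(model(images), labels)
    (grad,) = torch.autograd.grad(loss, images, create_graph=True)
    # l2 (q = 2) penalty on the input gradient, per sample, then averaged.
    penalty = grad.flatten(1).norm(p=2, dim=1).mean()
    return loss + eps * penalty
```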

Finally, for the multi-scale gradient regularizer, denote $\theta$ as the model parameters, $\{z_s\}_{s=1}^{S}$ as the features hierarchically extracted by an $S$-stage neural network (with $z_0 = x$), and $L$ as the loss function. For MSFL based on a ResNet backbone, there are $S=4$ scales of features. Instead of solving $\min_\theta L$, we would ideally solve the following problem to build a model robust against small perturbations on the features of any scale, including perturbations of texture, color, and semantically abstract attributes such as pose, gender, race, and clothing style:

$$\min_{\theta}\ \max_{\|\delta_s\|_p \le \epsilon_s,\; s=0,\dots,S}\ L\big(\{z_s + \delta_s\}_{s=0}^{S};\ \theta\big) \tag{5}$$

Since the nested hierarchical function can be approximated by a first-order Taylor expansion level by level, the inner problem simplifies to

$$\max_{\|\delta_s\|_p \le \epsilon_s}\ L + \sum_{s=0}^{S} \delta_s^{\top}\,\nabla_{z_s} L \tag{6}$$

Applying the chain rule, $\nabla_{z_s} L$ simplifies to $\big(\partial z_S/\partial z_s\big)^{\top}\nabla_{z_S} L$. Analogously to the single-scale case, the loss with the multi-scale gradient regularizer is

$$\min_{\theta}\ L(x;\theta) + \sum_{s=0}^{S} \epsilon_s\,\|\nabla_{z_s} L\|_q \tag{7}$$
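Extending the previous sketch to Eq. 7 only requires taking gradients with respect to the selected intermediate feature maps. In the sketch below, `backbone.stage_outputs` is a hypothetical helper that returns the per-stage features kept in the autograd graph; the stage indices and `head` are likewise placeholders.

```python
import torch

def msgr_loss(backbone, head, criterion, images, labels, eps=1e-2, stages=(1, 2)):
    """Sketch of the multi-scale gradient regularizer (Eq. 7), q = 2."""
    feats = backbone.stage_outputs(images)   # [z_1, ..., z_S], hypothetical helper
    loss = criterion(head(feats[-1]), labels)
    picked = [feats[s] for s in stages]      # features of the regularized stages
    grads = torch.autograd.grad(loss, picked, create_graph=True)
    # One l2 gradient penalty per selected stage, summed into the loss.
    penalty = sum(g.flatten(1).norm(p=2, dim=1).mean() for g in grads)
    return loss + eps * penalty
```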

IV Experiments

IV-A Implementation Details

We follow the settings of the widely used open-source implementation of [46]. To ensure a fair comparison, we do not apply tricks such as Last Stride, Label Smoothing [36], and LR Warmup mentioned in [19]. It is worth mentioning that although Luo et al. [19] achieve astonishing performance with a ResNet50 backbone [13], the dimension of their feature embedding is 2048, and the Rank-1 accuracy on Market1501 drops noticeably if the dimension is reduced. During the training stage, our implementation details include:

1) We initialize ResNet50 with parameters pretrained on ImageNet. To reduce the feature dimension to 512, the same as [46], we append BN-FC-ReLU layers after the global average pooling of ResNet50, and set the output dimension of the classifier to the number of identities in the training dataset.
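A sketch of this reduction head (the BN-FC-ReLU ordering follows the text; the 2048 input width is ResNet50's, and the 751 identities shown are Market1501's training IDs, used here only as an illustrative default):

```python
import torch.nn as nn

class EmbeddingHead(nn.Module):
    """Sketch of the BN-FC-ReLU reduction head after global average pooling."""

    def __init__(self, in_dim=2048, embed_dim=512, num_classes=751):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.BatchNorm1d(in_dim),
            nn.Linear(in_dim, embed_dim),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, pooled):                  # pooled: (batch, in_dim)
        embedding = self.reduce(pooled)          # used for retrieval at test time
        logits = self.classifier(embedding)      # used by the ID loss in training
        return embedding, logits
```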

2) To constitute a training batch, $P$ identities and $K$ images per person are sampled randomly, so the batch size equals $P \times K$. This approach has shown very good performance in similarity-based ranking and avoids generating a combinatorial number of exemplar pairs. During a training epoch, each identity is selected for its batch in turn, and the remaining $P-1$ batch identities are sampled at random. We fix $P$ and $K$ throughout training.
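A minimal sketch of such $P \times K$ sampling; the defaults $P=16$ and $K=4$ are common choices in the literature, not necessarily the values used here.

```python
import random
from collections import defaultdict

def pk_batches(labels, P=16, K=4, seed=0):
    """Yield batches of dataset indices: K images for each of P identities.

    `labels` is the per-image identity list; identities with fewer than K
    images are sampled with replacement."""
    rng = random.Random(seed)
    by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        by_id[pid].append(idx)
    ids = list(by_id)
    rng.shuffle(ids)
    for start in range(0, len(ids) - P + 1, P):
        batch = []
        for pid in ids[start:start + P]:
            pool = by_id[pid]
            picks = rng.sample(pool, K) if len(pool) >= K else rng.choices(pool, k=K)
            batch.extend(picks)
        yield batch
```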

3) For image preprocessing and augmentation, we resize each image to a fixed size, randomly crop it into a rectangular image, and flip it horizontally with probability 0.5. Following the ImageNet convention, each image is first divided by 255 to normalize it to the $[0,1]$ range, then mean-subtracted and divided by the standard deviation, where the statistics are calculated from the natural images in ImageNet.
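A torchvision sketch of this pipeline; the 256x128 size and the padding amount are assumptions based on common ReID practice, while the mean/std values are the standard ImageNet statistics mentioned above.

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((256, 128)),          # assumed input size
    transforms.Pad(10),                     # assumed padding before cropping
    transforms.RandomCrop((256, 128)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),                  # divides by 255, i.e. maps to [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
```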

4) Adam is adopted to optimize the model. The initial learning rate is 0.00035 and decreases by a factor of 0.1 at the 40th and 70th epochs, for 120 training epochs in total. We train the model on two NVIDIA TITAN XP GPUs with PyTorch as the platform, using float16 training and cross-GPU synchronized BatchNorm as default strategies.
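The stated schedule maps directly onto PyTorch's `MultiStepLR`; in this sketch, the ResNet50 stand-in and the commented-out training step are placeholders.

```python
import torch
from torchvision import models

model = models.resnet50(num_classes=751)  # backbone stand-in; 751 = Market1501 train IDs
optimizer = torch.optim.Adam(model.parameters(), lr=3.5e-4)
# Decay by 0.1 at epochs 40 and 70, as stated above; 120 epochs in total.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40, 70], gamma=0.1)

for epoch in range(120):
    # train_one_epoch(model, optimizer)  # training loop omitted in this sketch
    scheduler.step()
```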

IV-B Dataset and Protocol

We focus on four widely used large datasets: CUHK03 [16], Market1501 [43], DukeMTMC-ReID [44], and MSMT17 [38].

CUHK03 [16] contains 13,164 images of 1,360 identities. It provides bounding boxes detected by deformable part models (DPMs) as well as manually labeled ones. We adopt the new training/testing protocol proposed by Zhong et al. [45], which is similar to that of Market-1501. The new protocol splits the dataset into a training set of 767 identities and a testing set of 700 identities. In testing, one image from each camera is randomly selected as the query for each identity, and the rest of the images construct the gallery set. Compared to the traditional protocol, which splits the dataset into a training set of 1,160 identities and a testing set of 100 identities, the new protocol has two advantages: 1) for each identity, there are multiple ground truths in the gallery, which is more consistent with practical application scenarios; 2) evenly dividing the dataset into training and testing sets once avoids repeating training and testing multiple times.

Market1501 [43] contains 32,668 images of 1,501 labeled persons across six camera views. There are 751 identities in the training set and 750 identities in the testing set. The original study on this dataset also uses mAP as the metric to evaluate the overall ranking quality of the predicted rank list.

DukeMTMC-ReID [44] is a subset of DukeMTMC [24] and contains 36,411 images of 1,812 identities captured by eight high-resolution cameras. The pedestrian images are cropped by hand-drawn bounding boxes. It consists of 16,522 training images of 702 identities, 2,228 query images, and 17,661 gallery images of the other 702 identities.

MSMT17 [38] is currently the largest Person ReID dataset, containing 126,441 images of 4,101 identities from 15 cameras. It is composed of a training set with 32,621 bounding boxes of 1,041 identities and a test set with 93,820 bounding boxes of 3,060 identities. From the test set, 11,659 images are used as queries and the other 82,161 bounding boxes are used as gallery images. This challenging dataset has more complex scenes and backgrounds, e.g., both indoor and outdoor scenes, than the others.

To evaluate the performance of the proposed methods and compare with other ReID methods, we report two common evaluation metrics on the above four benchmarks, following the common settings [43, 38]: the cumulative matching characteristics (CMC) at rank-1 and the mean average precision (mAP).
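For reference, a compact NumPy sketch of both metrics under the standard Market-1501 evaluation convention (gallery entries sharing the query's identity and camera are excluded); all arguments are NumPy arrays and the function name is ours.

```python
import numpy as np

def evaluate(dist, q_pids, g_pids, q_camids, g_camids):
    """CMC rank-1 and mAP. dist: (num_query, num_gallery) distance matrix."""
    ranks, aps = [], []
    order = np.argsort(dist, axis=1)                 # ascending distance
    for i in range(len(q_pids)):
        # Drop gallery images with the query's identity AND camera.
        keep = ~((g_pids[order[i]] == q_pids[i]) &
                 (g_camids[order[i]] == q_camids[i]))
        matches = (g_pids[order[i]][keep] == q_pids[i]).astype(np.float32)
        if not matches.any():
            continue                                  # query has no valid match
        ranks.append(float(matches.argmax() == 0))    # CMC at rank-1
        hits = matches.cumsum()
        precision = hits / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    return np.mean(ranks), np.mean(aps)
```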

IV-C Visualization of Learned Features

To understand how MSFL+MSGR helps learn discriminative features, we visualize the activation values of high-level feature maps to investigate which semantic parts the network focuses on. Following [46], the activation maps are computed as the sum of the absolute-valued feature maps along the channel dimension, followed by spatial normalization.
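This visualization rule amounts to a channel-wise reduction plus per-image normalization; a sketch for a batch of feature maps:

```python
import torch

def activation_map(feat):
    """Sum of absolute feature values over channels, spatially normalized
    to [0, 1] per image, as described above. feat: (B, C, H, W)."""
    amap = feat.abs().sum(dim=1)                            # (B, H, W)
    lo = amap.flatten(1).min(dim=1).values[:, None, None]
    hi = amap.flatten(1).max(dim=1).values[:, None, None]
    return (amap - lo) / (hi - lo + 1e-12)
```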

Fig. 3 compares the activation maps of the ResNet50 baseline and the model with MSFL+MSGR. MSFL+MSGR clearly captures the local discriminative patterns of a person, such as clothing logos, bags (Column 3), shoes (Column 9), and a reliably clear frontal face (Column 4), to distinguish one person from another. In contrast, the baseline model over-concentrates on less informative regions like the background.

Fig. 3: The first row contains the original images. The second and third rows contain the activation maps of the ResNet50 baseline and MSFL+MSGR respectively. These images indicate that MSFL+MSGR detects subtle details like bags (Column 3), shoes (Column 9), and a reliably clear frontal face (Column 4), helping to discriminate visually similar persons.

To understand how MSFL collects multi-scale discriminative clues, we visualize the features in the hierarchy. As shown in Fig. 4, due to its lack of representational power, the shallowest feature $F_2$ may over-concentrate on parts or background with salient color (Row 1 Column 2 and Row 4 Column 2) or on edges and context without semantic meaning. With the help of the global context from the top-down path, $F_3$ focuses on discriminative person parts. Meanwhile, $F_4$ attends to multiple discriminative parts, which is helpful for robust ReID.

Fig. 4: Each row contains one visual case. The first column is the original image; the second to last columns correspond to the activation maps of $\{F_2, F_3, F_4\}$ of MSFL+MSGR respectively. Due to its lack of representational power, $F_2$ may over-concentrate on parts or background with salient color (Row 1 Column 2 and Row 4 Column 2) or on edges and context without semantic meaning. With the help of the global context from the top-down path, $F_3$ focuses on discriminative person parts. Meanwhile, $F_4$ attends to multiple discriminative parts, which is helpful for robust ReID.

To conclude, the qualitative results demonstrate that our multi-scale design, aggregation method, and regularization enable the model to identify subtle differences between visually similar persons, a vital requirement for accurate ReID.

IV-D Ablation Study

We conduct an ablation study on the Market1501 dataset. Comparing (a) with (b) in Tab. I, we verify that directly fusing features from the deep stages can slightly improve performance; however, directly incorporating the shallow features (Tab. I(c)) hurts performance. The reasons are discussed in Sec. III-A, i.e., low-level features lack semantic abstraction, and direct deep supervision on low-level features alters the internal representation and makes it premature.

On the contrary, CSIP (Tab. I(e)) allows the shallow features to communicate with the global context and be conditionally decoded, improving rank-1 from 90.4% to 91.5%. CSIP consists of two complementary modules, i.e., the top-down pathway and the lateral connections. Lacking the top-down pathway hurts performance (Tab. I(a) and (c)), and lacking the lateral connections brings no improvement (Tab. I(a) and (d)).

Meanwhile, MSFF (Tab. I(f)) empowers the shallow features to abstract their detailed information with more complex transformations, further improving rank-1 to 92.3%.

Methods lateral top-down MSFF rank-1 mAP
(a) ResNet50 90.4 78.3
(b) direct fuse $C_3$ and $C_4$ 90.7 78.6
(c) direct fuse from $C_2$ to $C_4$ 87.1 75.9
(d) fuse from $C_2$ to $C_4$, w/o lateral 90.3 78.3
(e) fuse from $C_2$ to $C_4$, with lateral 91.5 79.8
(f) fuse from $F_2$ to $F_4$ 92.3 81.9
TABLE I: Ablation study on multi-scale feature learning (MSFL)

Our MSFL design modifies the network architecture with negligible computation overhead. The Last Stride=1 trick [35] (Tab. II(b)), widely used in the Person ReID community, increases the FLOPs from 2.6G to 4.1G. This trick removes the last spatial downsampling operation after stage 3 in the backbone network to increase the size of the feature map, in the hope of improving performance through higher spatial resolution. We do not need this trick, since MSFL keeps a comparable number of parameters and FLOPs while being superior to the Last Stride=1 trick.

Methods params (M) flops (G) rank-1 mAP
(a) ResNet50 baseline 24.56 2.6 90.4 78.3
(b) ResNet50, Last Stride=1 24.56 4.1 91.3 79.8
(c) fuse from $F_2$ to $F_4$ 27 5.9 92.3 81.9
TABLE II: Complexity of MSFL

As for MSGR, we first determine that the $\ell_2$-norm is the most beneficial choice for the regularization term (Tab. III(a)–(d)), which matches our hypothesis that a model with smooth gradients and few extreme values extracts balanced ID-related information and is thus more robust and accurate. The improvement from the gradient regularizer is stable (Tab. III(c), (e), and (f)) and not sensitive to the choice of $\epsilon$ over a long range; thus we fix $\epsilon = 10^{-2}$ in the later experiments. Meanwhile, by regularizing the loss with respect to the outputs of intermediate stages, the multi-scale gradient regularizer further improves performance. The largest improvement happens when the regularizer is applied to the input and early stages, possibly because regularizing the input gradient implicitly constrains the gradients at intermediate stages through the chain rule. However, applying MSGR to intermediate outputs still improves performance, since different stages are responsible for different levels of abstraction. In the later experiments, we adopt MSGR on the outputs of stages 1 and 2 by default.

Methods norm $\epsilon$ rank-1 mAP
(a) r50 baseline / / 90.4 78.3
(b) 1 1e-2 85.1 70.1
(c) 2 1e-2 91.7 80.1
(d) $\infty$ 1e-2 88.2 72.2
(e) 2 1e-1 90.7 79.4
(f) 2 1e-3 91.2 80.0
(g) 2 1e-2 92.1 80.4
(h) 2 1e-2 90.2 78.3
(i) 2 1e-2 92.5 80.3
(j) 2 1e-2 92.7 80.1
(k) 2 1e-2 92.5 80.1
(l) 2 1e-2 92.4 79.8
TABLE III: Ablation study on the multi-scale gradient regularizer (MSGR). Rows (g)–(l) apply the regularizer to different sets of stage outputs.

IV-E Comparison with the State-of-the-art Methods

Market1501 Detected CUHK03 DukeReID MSMT17
Rank-1 mAP Rank-1 mAP Rank-1 mAP Rank-1 mAP
PAN 82.2 63.3 36.3 34 71.6 51.5
SVDNet [34] 82.3 62.1 57.1 54.2 76.7 56.8
PDC [32] 84.1 63.4 58 29.7
HAP2S [42] 84.6 69.4 75.9 60.6
DPFL [7] 88.6 72.6 40.7 37 79.2 60.6
DaRe[15] 86.4 69.3 55.1 51.3 74.5 56.3
DaRe+RE 89 76 63.3 59 80.2 64.5
PNGAN [22] 89.4 72.6 73.6 53.2
GLAD [39] 89.9 73.9 61.4 34
KPM [29] 90.1 75.3 80.3 63.2
MLFN [3] 90 74.3 52.8 47.8 81 62.8
DuATM [30] 91.4 76.6 81.8 64.6
Bilinear [33] 91.7 79.6 84.4 69.3
G2G [27] 92.7 82.5 80.7 66.4
DeepCRF [5] 93.5 81.6 84.9 69.5
PCB+RPP [35] 93.8 81.6 63.7 57.5 83.3 69.2
SGGNN [28] 92.3 82.8 81.1 68.2
Auto-ReID+RE [23] 95.4 94.2 73.3 69.3 91.4 89.2 78.2 52.5
Mancs [37] 93.1 82.3 65.5 60.5 84.9 71.8
OSNet [46] 93.6 81 57.1 54.2 84.7 68.6 71 43.5
OSNet+RE 94.8 84.9 72.3 67.8 88.6 73.5 78.7 52.9
ResNet50 90.4 78.3 63.4 58.3 82.9 70.6 6.7 38.9
+CSIP 91.5 79.3 65.6 60.4 84.1 74.5 69.3 40.2
+MSFF 92.3 81.9 65.1 61.2 85.7 75.4 70.1 40.8
+MSGR 93.7 83.6 67.7 63.3 86.8 76.9 71.3 45.3
+RE 94.4 89.5 71.3 68.6 89.6 89 78.4 51.7
TABLE IV: Comparison with state-of-the-art results on four large benchmarks. The CMC scores (%) at rank-1 and mAP are listed. To show the effectiveness of each component, we gradually stack them onto the ResNet50 baseline.

On the Market1501 dataset (Tab. IV), by gradually stacking each component onto the ResNet50 model, we improve rank-1 from 90.4% to 93.7%, outperforming most state-of-the-art methods, and reach 94.4% when applying re-ranking. It is worth mentioning that we build our model upon a strong baseline and improve performance steadily without exhaustive hyper-parameter tuning.

On the CUHK03 dataset (Tab. IV), due to misalignment, the performance on detected CUHK03 is worse than on labeled CUHK03. Compared to Market1501, the improvement brought by MSFF is relatively small, but further applying MSGR improves rank-1 on labeled CUHK03 to 71.6%. We presume that the training data scarcity of the new CUHK03 protocol makes the MSFF block difficult to optimize, while MSGR is an appropriate regularizer that smooths the loss landscape and reduces the optimization difficulty.

On DukeMTMC-ReID (Tab. IV), the effect of each component is similar to that on Market1501. MSFL+MSGR achieves a rank-1 of 86.8%, superior to most state-of-the-art models and even competitive with Auto-ReID, which applies NAS to automatically discover the best architecture for the ReID task. On MSMT17 (Tab. IV), despite the complex scenes and diverse backgrounds, MSFL+MSGR improves rank-1 to 71.3%.

V Conclusion

Person ReID faces many challenges, which we summarize into two aspects, i.e., the dramatic scale variance of clues and the unreliability of each clue for distinguishing persons. To address these problems, this paper proposes a model with the capacity to discover global-scale and multiple local-scale clues, and introduces adversarial learning, via the multi-scale gradient regularizer, to encourage the model to mine subtle clues while determining the ID-relatedness of each clue.

It is worth mentioning that the architecture and regularizer designed from our experience may not be optimal. In future work, we may apply AutoML ideas to discover superior model architectures and loss functions tailored for the Person ReID task.

Acknowledgment

This work was partially supported by

Appendix A Solving for $\delta^{*}$ in Eq. 2

We need to solve for $\delta$ in Eq. 2, which is equivalent to

$$\max_{\|\delta\|_p \le \epsilon}\ \delta^{\top} g, \qquad g := \nabla_x L(x;\theta) \tag{8}$$

The optimum is achieved when $\|\delta\|_p = \epsilon$; otherwise, we could increase the norm of $\delta$ and thereby increase the objective value. Thus, we are set to solve

$$\max_{\|\delta\|_p = \epsilon}\ \delta^{\top} g \tag{9}$$

Introducing a Lagrange multiplier $\lambda$, we have

$$\mathcal{L}(\delta,\lambda) = \delta^{\top} g + \lambda\Big(\epsilon^p - \sum\nolimits_i |\delta_i|^p\Big) \tag{10}$$

$$\frac{\partial\mathcal{L}}{\partial\delta_i} = g_i - \lambda p\,|\delta_i|^{p-1}\operatorname{sign}(\delta_i) = 0 \tag{11}$$

$$|\delta_i| = \Big(\frac{|g_i|}{\lambda p}\Big)^{\frac{1}{p-1}} = \Big(\frac{|g_i|}{\lambda p}\Big)^{q-1} \tag{12}$$

$$\operatorname{sign}(\delta_i) = \operatorname{sign}(g_i) \tag{13}$$

Summing the $p$-th power of each element of the vector on both sides and using the constraint $\sum_i |\delta_i|^p = \epsilon^p$, we have

$$\sum_i |\delta_i|^p = (\lambda p)^{-q}\,\|g\|_q^{q} = \epsilon^p \tag{14}$$

$$(\lambda p)^{-(q-1)} = \epsilon\,\|g\|_q^{-(q-1)} \tag{15}$$

Combining Eq. 12 and Eq. 15, it is easy to see that

$$\delta_i^{*} = \epsilon\,\operatorname{sign}(g_i)\,\frac{|g_i|^{\,q-1}}{\|g\|_q^{\,q-1}} \tag{16}$$

which agrees with Eq. 3.

References

  • [1] Y. Bengio, N. Léonard, and A. Courville (2013) Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432. Cited by: §II-B.
  • [2] S. Cai, W. Zuo, and L. Zhang (2017) Higher-order integration of hierarchical convolutional activations for fine-grained visual categorization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 511–520. Cited by: §II-A.
  • [3] X. Chang, T. M. Hospedales, and T. Xiang (2018) Multi-level factorisation net for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2109–2118. Cited by: TABLE IV.
  • [4] B. Chen and W. Deng (2019) Energy confused adversarial metric learning for zero-shot image retrieval and clustering. arXiv preprint arXiv:1901.07169. Cited by: §II-B.
  • [5] D. Chen, D. Xu, H. Li, N. Sebe, and X. Wang (2018) Group consistent similarity learning via deep crf for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8649–8658. Cited by: TABLE IV.
  • [6] S. Chen, C. Gong, J. Yang, X. Li, Y. Wei, and J. Li (2018) Adversarial metric learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. Cited by: §II-B.
  • [7] Y. Chen, X. Zhu, and S. Gong (2017) Person re-identification by deep learning multi-scale representations. In Proceedings of the IEEE international conference on computer vision, pp. 2590–2600. Cited by: TABLE IV.
  • [8] Y. Duan, W. Zheng, X. Lin, J. Lu, and J. Zhou (2018) Deep adversarial metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2780–2789. Cited by: §II-B.
  • [9] G. Ghiasi and C. C. Fowlkes (2016) Laplacian pyramid reconstruction and refinement for semantic segmentation. In European Conference on Computer Vision, pp. 519–534. Cited by: §II-A.
  • [10] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §II-B.
  • [11] C. Gulcehre, M. Moczulski, M. Denil, and Y. Bengio (2016) Noisy activation functions. In International Conference on Machine Learning, pp. 3059–3068. Cited by: §II-B.
  • [12] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik (2015) Hypercolumns for object segmentation and fine-grained localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 447–456. Cited by: §II-A.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §III-A, §IV-A.
  • [14] S. Honari, J. Yosinski, P. Vincent, and C. Pal (2016) Recombinator networks: learning coarse-to-fine feature aggregation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5743–5752. Cited by: §II-A.
  • [15] G. Huang, D. Chen, T. Li, F. Wu, L. van der Maaten, and K. Q. Weinberger (2017) Multi-scale dense networks for resource efficient image classification. arXiv preprint arXiv:1703.09844. Cited by: §III-A, TABLE IV.
  • [16] W. Li, R. Zhao, T. Xiao, and X. Wang (2014) DeepReID: deep filter pairing neural network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 152–159. Cited by: §IV-B, §IV-B.
  • [17] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) SSD: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §II-A.
  • [18] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431–3440. Cited by: §II-A.
  • [19] H. Luo, Y. Gu, X. Liao, S. Lai, and W. Jiang (2019) Bag of tricks and a strong baseline for deep person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0. Cited by: §IV-A.
  • [20] A. Newell, K. Yang, and J. Deng (2016) Stacked hourglass networks for human pose estimation. In European conference on computer vision, pp. 483–499. Cited by: §II-A.
  • [21] P. O. Pinheiro, T. Lin, R. Collobert, and P. Dollár (2016) Learning to refine object segments. In European Conference on Computer Vision, pp. 75–91. Cited by: §II-A.
  • [22] X. Qian, Y. Fu, T. Xiang, W. Wang, J. Qiu, Y. Wu, Y. Jiang, and X. Xue (2018) Pose-normalized image generation for person re-identification. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 650–667. Cited by: TABLE IV.
  • [23] R. Quan, X. Dong, Y. Wu, L. Zhu, and Y. Yang (2019) Auto-ReID: searching for a part-aware convnet for person re-identification. arXiv preprint arXiv:1903.09776. Cited by: TABLE IV.
  • [24] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi (2016) Performance measures and a data set for multi-target, multi-camera tracking. In European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking, Cited by: §IV-B.
  • [25] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §II-A.
  • [26] A. S. Ross and F. Doshi-Velez (2018) Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Thirty-second AAAI conference on artificial intelligence, Cited by: §II-B, §III-B.
  • [27] Y. Shen, H. Li, T. Xiao, S. Yi, D. Chen, and X. Wang (2018) Deep group-shuffling random walk for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2265–2274. Cited by: TABLE IV.
  • [28] Y. Shen, H. Li, S. Yi, D. Chen, and X. Wang (2018) Person re-identification with deep similarity-guided graph neural network. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 486–504. Cited by: TABLE IV.
  • [29] Y. Shen, T. Xiao, H. Li, S. Yi, and X. Wang (2018) End-to-end deep kronecker-product matching for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6886–6895. Cited by: TABLE IV.
  • [30] J. Si, H. Zhang, C. Li, J. Kuen, X. Kong, A. C. Kot, and G. Wang (2018) Dual attention matching network for context-aware feature sequence based person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5363–5372. Cited by: TABLE IV.
  • [31] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §II-A.
  • [32] C. Su, J. Li, S. Zhang, J. Xing, W. Gao, and Q. Tian (2017) Pose-driven deep convolutional model for person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3960–3969. Cited by: TABLE IV.
  • [33] Y. Suh, J. Wang, S. Tang, T. Mei, and K. Mu Lee (2018) Part-aligned bilinear representations for person re-identification. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 402–419. Cited by: TABLE IV.
  • [34] Y. Sun, L. Zheng, W. Deng, and S. Wang (2017) Svdnet for pedestrian retrieval. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3800–3808. Cited by: TABLE IV.
  • [35] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang (2018) Beyond part models: person retrieval with refined part pooling (and a strong convolutional baseline). In Proceedings of the European Conference on Computer Vision (ECCV), pp. 480–496. Cited by: §IV-D, TABLE IV.
  • [36] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826. Cited by: §II-B, §IV-A.
  • [37] C. Wang, Q. Zhang, C. Huang, W. Liu, and X. Wang (2018) Mancs: a multi-task attentional network with curriculum sampling for person re-identification. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 365–381. Cited by: TABLE IV.
  • [38] L. Wei, S. Zhang, W. Gao, and Q. Tian (2018) Person transfer gan to bridge domain gap for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 79–88. Cited by: §IV-B, §IV-B, §IV-B.
  • [39] L. Wei, S. Zhang, H. Yao, W. Gao, and Q. Tian (2017) GLAD: global-local-alignment descriptor for pedestrian retrieval. arXiv preprint arXiv:1709.04329. Cited by: TABLE IV.
  • [40] S. Xie and Z. Tu (2015) Holistically-nested edge detection. In Proceedings of the IEEE international conference on computer vision, pp. 1395–1403. Cited by: §II-A.
  • [41] C. You, Q. Yang, L. Gjesteby, G. Li, S. Ju, Z. Zhang, Z. Zhao, Y. Zhang, W. Cong, G. Wang, et al. (2018) Structurally-sensitive multi-scale deep neural network for low-dose ct denoising. IEEE Access 6, pp. 41839–41855. Cited by: §II-A.
  • [42] R. Yu, Z. Dou, S. Bai, Z. Zhang, Y. Xu, and X. Bai (2018) Hard-aware point-to-set deep metric for person re-identification. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 188–204. Cited by: TABLE IV.
  • [43] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian (2015) Scalable person re-identification: a benchmark. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1116–1124. Cited by: §IV-B, §IV-B, §IV-B.
  • [44] Z. Zheng, L. Zheng, and Y. Yang (2017) Unlabeled samples generated by gan improve the person re-identification baseline in vitro. arXiv preprint arXiv:1701.07717. Cited by: §IV-B, §IV-B.
  • [45] Z. Zhong, L. Zheng, D. Cao, and S. Li (2017) Re-ranking person re-identification with k-reciprocal encoding. arXiv preprint arXiv:1701.08398. Cited by: §IV-B.
  • [46] K. Zhou, Y. Yang, A. Cavallaro, and T. Xiang (2019) Omni-scale feature learning for person re-identification. arXiv preprint arXiv:1905.00953. Cited by: §IV-A, §IV-A, §IV-C, TABLE IV.