Multi-Scale Cascading Network with Compact Feature Learning for RGB-Infrared Person Re-Identification

12/12/2020 · Can Zhang, et al. · Huawei Technologies Co., Ltd.; Peking University

RGB-Infrared person re-identification (RGB-IR Re-ID) aims to match persons across heterogeneous images captured by visible and thermal cameras, which is of great significance for surveillance systems under poor lighting conditions. Facing complex variances that include both conventional single-modality and additional inter-modality discrepancies, most existing RGB-IR Re-ID methods impose constraints at the image level, the feature level, or a hybrid of both. Although hybrid constraints perform better, they are usually implemented with heavy network architectures. In fact, previous efforts contribute more as pioneering works in the new cross-modal Re-ID area while leaving large space for improvement. This can mainly be attributed to: (1) the lack of abundant person image pairs from different modalities for training, and (2) the scarcity of salient modality-invariant features, especially in coarse representations, for effective matching. To address these issues, a novel Multi-Scale Part-Aware Cascading framework (MSPAC) is formulated by aggregating multi-scale fine-grained features from part to global in a cascading manner, which results in a unified representation containing rich and enhanced semantic features. Furthermore, a marginal exponential center (MeCen) loss is introduced to jointly eliminate mixed variances from intra- and inter-modal examples. Cross-modality correlations can thus be efficiently explored on salient features for distinctive modality-invariant feature learning. Extensive experiments demonstrate that the proposed method outperforms the state of the art by a large margin.


I Introduction

Person re-identification (Person Re-ID) has witnessed significant progress in the computer vision community [23]. Early works [7, 24] mostly rely on designing handcrafted feature descriptors. Recently, deep convolutional neural networks (CNNs) have been adopted to enhance visual representations and have dominated person Re-ID benchmarks. In addition to inherent challenges such as variances in poses and viewpoints, RGB-based Re-ID relies heavily on good illumination, which greatly degrades its performance in real scenarios with complex environments, especially at nighttime.

In order to capture valid appearance information in dark environments, infrared cameras are deployed as a complement to visible cameras. In this paper, we focus on a recently proposed task, cross-modality RGB-IR Re-ID, which aims to retrieve query persons in the visible (thermal) modality from galleries in the other thermal (visible) modality. Some examples of RGB and thermal images are shown in Fig. 1, captured by visible cameras in the daytime and infrared cameras at night respectively. As observed, thermal images can provide more invariant person appearances even under poor illumination conditions. Since most surveillance cameras can switch between RGB mode and thermal mode automatically [19], RGB-IR Re-ID research has practical significance.

Fig. 1: Examples of person images selected from public datasets. RGB images (upper) are captured in the daytime by visible cameras, while thermal images (bottom) are captured at night by infrared cameras. Image pairs in each column are of the same identity but different modalities.

Despite previous efforts devoted to RGB-IR Re-ID, it remains challenging and leaves large space for further exploration. The challenges mainly lie in two folds. Firstly, the over-fitting problem results from limited training data. Imposing constraints specifically on modality-related variances usually requires abundant training image pairs from different modalities, whereas available datasets are not sufficient to support the optimization of a network expected to narrow the significant gap between modalities. Carefully designed modality-related metrics, such as the bi-directional top-ranking (BDTR) loss [22] and its derivative eBDTR [21], thus struggle to bring obvious improvement. Secondly, a large modality gap exists due to the inherently homogeneous and heterogeneous variances among RGB and thermal images. Homogeneous variances (intra-modality variances) refer to discrepancies within the same modality, such as different poses and viewpoints, while inter-modality variances are caused by heterogeneous data, as RGB and thermal images have different spectrum patterns. It is worth noting that inter-modality variances are always more significant than intra-modality variances, which makes the cross-modality matching task much more challenging. Existing works impose constraints for discrepancy elimination at the image level [19, 8], the feature level [20, 2, 21, 4], or a hybrid of the two [18]. However, these methods fail to explore sufficient and distinctive modality-invariant information based simply on raw global representations.

To this end, this paper proposes a Multi-Scale Part-Aware Cascading framework (MSPAC) to fully exploit sharable information across modalities, and a marginal exponential center (MeCen) loss to explore cross-modality correlations. More specifically, MSPAC is built upon the standard attention mechanism to enhance regions of interest; in order to leverage discriminative and fine-grained features, we first decompose global features into multi-scale parts and apply an attention module to each single-scale part. To achieve this, attention learning is decomposed into several stages, each corresponding to a partition scale for enhancement. In order to extract deeper cross-modality clues, MSPAC unifies the feature representations by aggregating part attentions from small to large scale in a cascading manner. Moreover, based on the unified features, MeCen forces the network to learn modality-invariant correlations that enable images of the same identity to cluster compactly in feature space regardless of modality. In this case, no additional modality-related constraints are required in the RGB-IR Re-ID model, which helps alleviate the problem of an insufficient number of cross-modality image pairs.

The main contributions of this paper are three-fold:

(1) Proposing a multi-scale part-aware mechanism to enhance the discrimination ability of fine-grained part features in both the channel and spatial dimensions.

(2) Constructing a hierarchical part aggregation architecture that yields a unified representation by integrating spatial structured information into concatenated local salient features in a cascading fashion.

(3) Introducing a novel MeCen loss to model cross-modality correlations by imposing strong constraints on significant variances, which shows superior clustering ability.

II Cross-Modality Correlation via Multi-Scale Part-Aware Cascading Framework

Problem definition. Let X^{rgb} and X^{ir} denote the RGB image set and the thermal image set respectively, with images of height H and width W. Each image x^{rgb} ∈ X^{rgb} or x^{ir} ∈ X^{ir} has a corresponding label y ∈ {1, …, N}, where N represents the total number of person identities. A common strategy for deep feature learning is to map images x^{rgb} and x^{ir} into feature embeddings f^{rgb} = φ(x^{rgb}; θ) and f^{ir} = φ(x^{ir}; θ), where φ(·; θ) denotes the transformation function of the deep network with parameters θ. Given a query image x^{rgb} (in RGB modality) or x^{ir} (in thermal modality) and a gallery set correspondingly constructed from images x^{ir} (in thermal modality) or x^{rgb} (in RGB modality), the goal of cross-modal Re-ID is to retrieve persons with the same identity as the query from the gallery set. Feature representations f^{rgb} and f^{ir} are projected into a common feature space R^d by the feature embedding module, where d is the dimension of the feature vectors.

Fig. 2: The overall framework of our cross-modality person Re-ID model. There are three components: (1) Input image batches. Each identity in the training batch has the same number of RGB images and thermal images randomly selected from datasets. (2) Feature encoding. Two heterogeneous branches (blue and green) and a shared feature embedding module are applied for projecting images into common feature space. (3) Metric learning. Marginal exponential center loss is integrated with identity loss for eliminating complex variances. Geometric figures with different colors and shapes represent differences of identity and modality respectively.

II-A Overall Structure

The overall structure of the proposed RGB-IR Re-ID model, as illustrated in Fig. 2, mainly consists of two modules:

Feature encoder. The two-branch architecture adopts ResNet-50 [5] as the backbone, following the modifications made in [14]. Person images x^{rgb} and x^{ir} are fed into two heterogeneous branches respectively for deep feature encoding. To eliminate modality-specific noise, a multi-scale part-aware attention module is designed to incorporate structured semantic information and local salient information into a unified global representation (a code sketch of the encoder follows the module descriptions).

Metric learning. A novel loss, MeCen, is proposed to model cross-modality correlations. Owing to its superior clustering ability, features of the same identity are pulled close to their cluster center in the feature space, which eliminates intra-class variances.
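To make the encoder structure concrete, the following PyTorch sketch outlines one plausible reading of the two-branch design: a separate ResNet-50 stream per modality with the last down-sampling stride removed (as in [14]) and a shared linear embedding on top. The embedding dimension and the choice of which layers are modality-specific are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoBranchEncoder(nn.Module):
    """Sketch of the two-branch feature encoder: one ResNet-50 stream per
    modality plus a shared linear embedding projecting both modalities into a
    common feature space. Which layers are modality-specific is an assumption."""

    def __init__(self, embed_dim=512):
        super().__init__()

        def backbone():
            net = models.resnet50(weights=None)
            # Remove the last down-sampling (stride 1 in the final stage), as in [14].
            net.layer4[0].conv2.stride = (1, 1)
            net.layer4[0].downsample[0].stride = (1, 1)
            return nn.Sequential(*list(net.children())[:-2])  # keep only the conv stages

        self.rgb_branch = backbone()   # heterogeneous branch for RGB images
        self.ir_branch = backbone()    # heterogeneous branch for thermal images
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.embed = nn.Linear(2048, embed_dim)  # shared feature embedding module

    def forward(self, x, modality="rgb"):
        feat_map = self.rgb_branch(x) if modality == "rgb" else self.ir_branch(x)
        embedding = self.embed(self.pool(feat_map).flatten(1))
        return feat_map, embedding  # conv maps feed MSPAC; embeddings feed metric learning
```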

II-B Multi-Scale Part-Aware Cascading Framework in the Feature Encoder

The Multi-Scale Part-Aware Cascading framework (MSPAC) is described in detail in this section, aiming to obtain unified feature representations encoded with both spatial structured information and fine-grained information of the person.

Given a fixed partition scale s, as shown in Fig. 3, global feature maps from either branch of the backbone are first divided into n_s horizontal stripes. Formally, when the size of the global representation is denoted by C × H × W, the height of part features at scale s is fixed to ⌈H / n_s⌉, where ⌈·⌉ represents the ceiling operation. Suppose there are S partition scales with different numbers of stripes; a diversity of granularities can then be obtained. To handle such multiple feature representations, an intuitive approach is to concatenate all the feature vectors into a mixed representation or to arrange them separately, as in the multi-branch architecture of [17], but these strategies suffer from either limited performance or a complex network architecture. In comparison, MSPAC jointly integrates locally enhanced features into a unified global representation with good discrimination ability in a more simplified manner. The framework is decomposed into two parts for detailed discussion.
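As a small illustration of the partition step, the sketch below splits a backbone feature map into horizontal stripes whose height is the ceiling of H divided by the number of parts; the scale set {1, 3, 6} follows the experiments, while the feature-map size is illustrative only.

```python
import math
import torch

def horizontal_partition(feat_map, num_parts):
    """Split a feature map of shape (N, C, H, W) into `num_parts` horizontal
    stripes, each of height ceil(H / num_parts)."""
    part_h = math.ceil(feat_map.size(2) / num_parts)
    return [feat_map[:, :, i * part_h:(i + 1) * part_h, :] for i in range(num_parts)]

# Example with the partition scales {1, 3, 6} used in the experiments.
feats = torch.randn(8, 2048, 24, 8)
multi_scale_parts = {n: horizontal_partition(feats, n) for n in (1, 3, 6)}
```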

A. Attention Mechanism with Two-branch Structure

The attention mechanism is composed of two branches, as shown in Fig. 3: the trunk branch carries the original part features from the two-branch backbone network, while the attention branch contains a spatial attention module and a channel attention module. Spatial attention mainly guides the network on where to focus, and channel attention mainly decides what to focus on. Therefore, in order to deal with large intrinsic variances among cross-modal features, both attention structures are adopted to inherit the merits of each strategy. Moreover, as the combined attention architecture is computationally efficient, it can be adopted as a plug-and-play module.

Given feature maps F with size C × H × W, two different descriptors are generated by average-pooling and max-pooling operations respectively, denoted as F_{avg}^{c} and F_{max}^{c}. With only the channel dimension retained, F_{avg}^{c} and F_{max}^{c} are further embedded by a shared multi-layer fully connected network and fused into a unified representation by element-wise summation. In brief, the channel attention process is formulated as:

M_c(F) = \sigma( W_1(W_0(F_{avg}^{c})) + W_1(W_0(F_{max}^{c})) )    (1)

where \sigma denotes the sigmoid function, and W_0 and W_1 are the learnable parameters of the shared fully connected layers.
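A minimal PyTorch implementation of the channel attention described by Eq. (1) could look as follows; the reduction ratio of the shared MLP is an assumption, since the paper does not state it here.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention of Eq. (1): average- and max-pooled channel descriptors
    pass through a shared two-layer MLP and are fused by element-wise summation."""

    def __init__(self, channels, reduction=16):  # reduction ratio is an assumption
        super().__init__()
        self.mlp = nn.Sequential(                # shared network with parameters W0, W1
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                        # x: (N, C, H, W)
        avg_desc = x.mean(dim=(2, 3))            # F_avg^c, shape (N, C)
        max_desc = x.amax(dim=(2, 3))            # F_max^c, shape (N, C)
        attn = self.sigmoid(self.mlp(avg_desc) + self.mlp(max_desc))
        return attn.view(x.size(0), -1, 1, 1)    # M_c(F), broadcastable over H and W
```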

The spatial attention module, which concentrates on distinct regions of the body, is introduced as a complementary attention form to the channel attention module above. Based on the two types of pooling operations, feature maps F_{avg}^{s} and F_{max}^{s} with size 1 × H × W are generated and concatenated along the channel axis. The combined feature maps are encoded into a 2D spatial attention map by a convolutional layer. The detailed computation is as follows:

M_s(F) = \sigma( f^{conv}([F_{avg}^{s}; F_{max}^{s}]) )    (2)

where [·;·] denotes the concatenation operation and f^{conv} denotes the convolutional layer with its learnable parameters.
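Correspondingly, a minimal sketch of the spatial attention of Eq. (2); the 7×7 kernel size of the convolutional layer is an assumption.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial attention of Eq. (2): channel-wise average- and max-pooled maps
    are concatenated and encoded into a 2D attention map by a conv layer."""

    def __init__(self, kernel_size=7):           # kernel size is an assumption
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                        # x: (N, C, H, W)
        avg_map = x.mean(dim=1, keepdim=True)    # F_avg^s, shape (N, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)    # F_max^s, shape (N, 1, H, W)
        return self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))  # M_s(F)
```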

The attention features generated from the channel and spatial attention modules are written as:

F' = M_c(F) ⊗ F,    F'' = M_s(F') ⊗ F'    (3)

where ⊗ denotes element-wise multiplication.

In our attention learning architecture, the original features are incorporated into the part attention maps by a residual connection to supervise attention learning. In this way, the enhanced feature representations are more robust to unnecessary noise and fatal modifications. Formally, the raw features F from the trunk branch and the enhanced features F'' from the attention branch are combined as follows:

F_{out} = F + F''    (4)
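Putting the pieces together, one possible reading of Eqs. (3)–(4) is the part-level attention unit below, which reuses the ChannelAttention and SpatialAttention modules sketched above; the sequential channel-then-spatial ordering and the purely additive residual are assumptions.

```python
import torch.nn as nn

class PartAttentionUnit(nn.Module):
    """Two-branch attention unit for a single part feature (Eqs. 3-4): the
    attention branch applies channel then spatial attention, and the trunk
    branch adds the raw part feature back as a residual."""

    def __init__(self, channels):
        super().__init__()
        self.channel_attn = ChannelAttention(channels)
        self.spatial_attn = SpatialAttention()

    def forward(self, x):
        refined = self.channel_attn(x) * x              # F'  = M_c(F) * F
        refined = self.spatial_attn(refined) * refined  # F'' = M_s(F') * F'
        return x + refined                              # Eq. 4: residual from the trunk branch
```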

B. Part Feature Aggregation in Cascading Framework

Considering that global features lack fine-grained information while local representations have better discrimination ability, we partition global features into multi-scale parts for deep fine-grained information extraction. In this way, clues easily ignored in global representations can be fully exploited and enhanced. However, the reliability of part features is easily affected by occlusion and pose variances, because the integrity of structured information is inevitably damaged by the rigid partition strategy. To address these problems, a cascading feature aggregation framework is introduced to unify salient information from multiple parts into global representations. In the cascading framework, attention learning endows modality-invariant feature representations with more complete and richer semantic information. Three main components can be summarized: fine-grained feature partition, hierarchical part aggregation, and global representation unification. Each component is detailed as follows:

Fig. 3: The structure of MSPAC. Upper part: Multi-scale part feature aggregation in cascading framework. Original feature maps are split horizontally into multi-scale parts. In each stage, original features with corresponding partition size are incorporated into attention module by residual connection, denoted as arrow lines. Arrow thickness represents partition size. Bottom part: Attention mechanism with two-branch structure. Attention branch consists of both spatial attention and channel attention where average-pooling and max-pooling operations are included.

Fine-grained feature partition. Suppose the global representation corresponds to the coarsest partition scale with n_s = 1; as the scale increases, more diverse spatial semantic information becomes available from multiple granularities of features. To arrange the multiple part features, we denote them as follows:

P = { p_i^s | i = 1, …, n_s;  s = 1, …, S }    (5)

where p_i^s is the i-th part feature at scale s with size C × ⌈H / n_s⌉ × W, the total number of multi-scale parts is Σ_{s=1}^{S} n_s, n_s is the number of parts at scale s, and S is the total number of partition scales.

Hierarchical part aggregation. In our cascading framework, diverse granularities of features are incorporated with additional global structured body information and aggregated from part to global to form unified representations. Part features at a small scale are combined with their neighbors to form larger representations, and attention modules in the coarser stage are then adopted to further exploit local semantic information. The formulation is written as:

p̃_i^s = p_i^s + A^s( [ p̃_j^{s+1} ; p̃_{j+1}^{s+1} ] ),    (j, j+1) → i    (6)

where p_i^s represents the i-th original part feature introduced by the residual connection and p̃_i^s is the corresponding enhanced part feature at scale s, [·;·] denotes the concatenation operation along the height dimension, A^s denotes the attention module of stage s, and the arrow represents the corresponding relation between local features in different stages, i.e., in Eq. 6 it indicates that the i-th part feature at scale s is obtained from the j-th part and its neighboring (j+1)-th part of the previous (finer) stage.

Global representation unification. In the last stage, all the enhanced part features are concatenated to form the final global representation. As the noise introduced by the repeated splitting and combining process can gradually accumulate along the optimization chain, the initial global features from the backbone are introduced by a residual connection to alleviate the adverse influence of bluntly stacking parts vertically. This residual learning mechanism provides valid spatial supervision that makes the separate salient information in the concatenated features more complete and structural. The final composite global representation can be formulated as Eq. 6 with n_s = 1.
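The cascading aggregation can be sketched as below for the partition scales {6, 3, 1}, reusing horizontal_partition and PartAttentionUnit from the earlier sketches. How neighbouring stripes are grouped and exactly where the residual from the raw coarser-scale stripe enters are assumptions based on the description of Eq. (6).

```python
import torch
import torch.nn as nn

class CascadingAggregation(nn.Module):
    """Cascading part aggregation over partition scales (6, 3, 1): enhanced
    stripes from the finer stage are concatenated along the height axis, the
    raw stripe of the coarser stage is added as a residual, and the
    coarser-stage attention unit re-enhances the result (a reading of Eq. 6)."""

    def __init__(self, channels, scales=(6, 3, 1)):
        super().__init__()
        self.scales = scales
        self.attn = nn.ModuleDict({str(s): PartAttentionUnit(channels) for s in scales})

    def forward(self, feat_map):
        # Finest stage: enhance every stripe independently.
        finest = self.scales[0]
        parts = [self.attn[str(finest)](p) for p in horizontal_partition(feat_map, finest)]
        # Coarser stages: merge neighbouring enhanced stripes, add the raw
        # stripe of the current scale, then apply the current-stage attention.
        for s in self.scales[1:]:
            raw_parts = horizontal_partition(feat_map, s)
            group = len(parts) // s
            merged = [torch.cat(parts[i * group:(i + 1) * group], dim=2) for i in range(s)]
            parts = [self.attn[str(s)](raw + m) for raw, m in zip(raw_parts, merged)]
        return parts[0]  # unified global representation (scale 1)
```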

II-C Modeling Cross-Modality Correlation

As mentioned before, the challenges of RGB-IR Re-ID mainly lie in significant cross-modality variances due to the heterogeneous image forms. In order to deal with complex discrepancies across modalities, a novel marginal exponential center (MeCen) loss is presented to supervise modality-invariant feature learning and reduce intra-class variances.

In the cross-modality task, hard positive examples, i.e., images of the same identity with large appearance discrepancies, mainly comprise images from different modalities. By contrast, images within the same modality compose the easy positive example set due to their smaller discrepancies. Therefore, the proposed loss has two key points: reducing acceptable variances among easy examples and imposing strong exponential constraints on hard positive examples. The formula can be written as:

L_{MeCen} = \sum_{i=1}^{N_b} \big( \exp( \max( \| f_i - \hat{c}_{y_i} \|_2^2 - m, 0 ) ) - 1 \big)    (7)

where f_i is the unified feature of the i-th image in a batch of size N_b, \hat{c}_{y_i} denotes the normalized center feature vector of class y_i, and m is the margin threshold.

To eliminate the accumulation of small intra-class contributions, a margin threshold m is introduced: feature points whose distances to their clustering centers are less than the margin m contribute nothing to the loss, so hard positive examples dominate the loss calculation. Furthermore, the clustering ability is strengthened by imposing exponential restriction power on hard examples. Note that, compared with the triplet-center loss [6], it is also necessary to deal with modality-related variances among intra-class features in addition to class-related variances. As modality-related variances are mostly contained in hard positive examples, the key to improving the RGB-IR Re-ID model is closely related to how constraints are imposed on such images.
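Under the reconstruction of Eq. (7) given above, a hedged PyTorch sketch of the MeCen loss is shown below; the margin default and the use of squared Euclidean distance are assumptions, and the exact formulation in the original paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeCenLoss(nn.Module):
    """Hedged sketch of the marginal exponential center (MeCen) loss: features
    closer to their normalized class center than the margin contribute nothing,
    while harder examples are penalized exponentially."""

    def __init__(self, num_classes, feat_dim, margin=0.3):  # margin value is an assumption
        super().__init__()
        self.margin = margin
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        centers = F.normalize(self.centers[labels], dim=1)   # normalized center vectors
        dist = (feats - centers).pow(2).sum(dim=1)           # squared distance to own center
        return (torch.exp(torch.clamp(dist - self.margin, min=0.0)) - 1.0).mean()
```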

In our cross-modality Re-ID model, the MeCen loss is introduced to impose strong constraints on intra-class variances, while the identity loss is mainly used to deal with inter-class variances as a complement. The joint optimization can be defined in a multi-loss form as follows:

L = L_{id} + \lambda L_{MeCen}    (8)

where L_{id} and L_{MeCen} denote the identity loss and the MeCen loss respectively, and \lambda is the hyper-parameter controlling the contribution of the MeCen loss.
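The joint objective of Eq. (8) then amounts to a weighted sum of a cross-entropy identity loss and the MeCen term, reusing the MeCenLoss sketch above; the 395 training identities follow SYSU-MM01, while the embedding dimension and the weight value are placeholders rather than the paper's tuned setting.

```python
import torch.nn as nn

# Joint objective of Eq. (8): identity (cross-entropy) loss plus the weighted MeCen loss.
id_criterion = nn.CrossEntropyLoss()
mecen_criterion = MeCenLoss(num_classes=395, feat_dim=512)  # placeholder dimensions
lambda_mecen = 0.5  # assumed value; the paper tunes this hyper-parameter

def total_loss(logits, embeddings, labels):
    return id_criterion(logits, labels) + lambda_mecen * mecen_criterion(embeddings, labels)
```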

III Experiments and Discussions

Extensive experiments are conducted on public datasets to evaluate the performance of the proposed method for cross-modality person Re-ID, which aim to answer the following questions:

RQ1: How does the MSPAC-MeCen model perform compared with traditional methods and state-of-the-art deep learning methods?

RQ2: How do multiple components of the MSPAC-MeCen model contribute to the performance of the overall framework?

RQ3: How do hyper-parameters introduced by the loss function affect the Re-ID model performance?

III-A Experimental Setting

A. Datasets

RegDB [11] is a small dataset collected by a dual-camera system with both visible and infrared cameras. There are 412 persons in total, each with 10 RGB and 10 thermal images under various gaits. SYSU-MM01 [19] contains 491 persons captured by six cameras, including 2 infrared cameras and 4 visible cameras. The training set contains 22,258 RGB images and 11,909 thermal images of 395 identities, while the remaining 96 identities are divided into a query set of 3,803 thermal images and a gallery set of 301 RGB images for testing. This dataset is collected in both indoor and outdoor environments. The all-search mode uses all 4 RGB cameras for the gallery set and the 2 infrared cameras for the query set, while the indoor-search mode excludes the two outdoor RGB cameras from the gallery set. Note that our experiments are conducted under the single-shot setting with both modes for evaluation.

B. Evaluation Protocol

Two standard metrics, the cumulative matching characteristics (CMC) and mean average precision (mAP), are adopted to evaluate performance. Detailed CMC rank-k (k = 1, 10, 20) accuracies are listed for comparison with the existing state of the art.
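For reference, a minimal computation of the two metrics over a query-gallery distance matrix could look as follows; it assumes a single gallery split, at least one correct match per query, and no camera-based filtering, so it is a simplification of the official SYSU-MM01 protocol.

```python
import numpy as np

def cmc_map(dist, query_ids, gallery_ids, topk=20):
    """Compute CMC rank-k accuracy and mAP from a (num_query, num_gallery)
    distance matrix. Simplified: single gallery split, no camera filtering."""
    query_ids, gallery_ids = np.asarray(query_ids), np.asarray(gallery_ids)
    cmc, aps = np.zeros(topk), []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                                  # ascending distance
        matches = (gallery_ids[order] == query_ids[i]).astype(np.float32)
        first_hit = int(np.argmax(matches))                          # rank of first correct match
        if first_hit < topk:
            cmc[first_hit:] += 1
        hits = np.cumsum(matches)
        precision_at_hits = hits * matches / (np.arange(len(matches)) + 1.0)
        aps.append(precision_at_hits.sum() / matches.sum())
    return cmc / dist.shape[0], float(np.mean(aps))
```

Here cmc[0], cmc[9] and cmc[19] correspond to the reported r=1, r=10 and r=20 accuracies.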

For traditional Re-ID models, two commonly used handcrafted feature descriptors, HOG [3] and LOMO [10], are combined with distance metrics for evaluation, including KISSME [9], LFDA [12], CCA [13], and CRAFT [1]. The deep learning baselines include: TONE [20], Two-stream [19], One-stream [19], Zero-Padding [19], HCML [20] (a two-stage algorithm based on the TONE structure), BDTR [22], cmGAN [2], DRL [18], eBDTR [21] (a derivative of the BDTR model), MSR [4], and AlignGAN [16].

III-B Implementation Details

Experiment settings. Our network follows the parameter settings in [22]. The SGD optimizer is applied with a weight decay of 5e-4 and a momentum term. The initial learning rate is set to 0.01 and decays by a factor of 0.1 every 10 epochs.
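In PyTorch these settings correspond roughly to the snippet below, applied to the encoder sketched in Sec. II-A; the momentum value is not given in the extracted text, so 0.9 is used purely as a common default.

```python
import torch

# SGD with weight decay 5e-4, initial learning rate 0.01, decayed by 0.1 every 10 epochs.
model = TwoBranchEncoder()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,  # momentum assumed
                            weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
```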

Batch sampling strategy. As the mini-batch sampling strategy of [22] cannot well satisfy the clustering prerequisites of the MeCen loss, this paper revises the training strategy to include more sample images for each person in a single batch. Specifically, a fixed number of person identities is randomly chosen in each batch, and each identity contributes the same number of RGB images and thermal images; the batch size is the product of the number of identities and the number of images sampled per identity.
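A sketch of such an identity-balanced, modality-balanced batch sampler is given below; the numbers of identities and images per modality are placeholders, since the exact values are not recoverable from the text, and the sampler assumes a dataset indexed by (modality, index) pairs.

```python
import random
from collections import defaultdict
from torch.utils.data import Sampler

class CrossModalityIdentitySampler(Sampler):
    """Each batch draws `num_ids` identities and, for each, the same number of
    RGB and thermal images. Yields lists of (modality, index) pairs."""

    def __init__(self, rgb_labels, ir_labels, num_ids=8, imgs_per_modality=4):
        self.rgb_index, self.ir_index = defaultdict(list), defaultdict(list)
        for i, pid in enumerate(rgb_labels):
            self.rgb_index[pid].append(("rgb", i))
        for i, pid in enumerate(ir_labels):
            self.ir_index[pid].append(("ir", i))
        self.pids = sorted(set(self.rgb_index) & set(self.ir_index))
        self.num_ids, self.k = num_ids, imgs_per_modality

    def __iter__(self):
        random.shuffle(self.pids)
        for start in range(0, len(self.pids) - self.num_ids + 1, self.num_ids):
            batch = []
            for pid in self.pids[start:start + self.num_ids]:
                batch += random.choices(self.rgb_index[pid], k=self.k)  # with replacement
                batch += random.choices(self.ir_index[pid], k=self.k)
            yield batch

    def __len__(self):
        return len(self.pids) // self.num_ids
```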

III-C Comparison with State-of-the-Art Methods (RQ1)

Dataset: SYSU-MM01. Columns (left to right): All-search single-shot (r=1, r=10, r=20, mAP), All-search multi-shot (r=1, r=10, r=20, mAP), Indoor-search single-shot (r=1, r=10, r=20, mAP), Indoor-search multi-shot (r=1, r=10, r=20, mAP). Rows give the feature/metric combination or the method.
HOG + KISSME 2.12 16.21 29.13 3.53 2.79 18.23 31.25 1.96 3.11 25.47 46.47 7.43 4.10 29.32 50.59 3.61
HOG + LFDA 2.33 18.58 33.38 4.35 3.82 20.48 35.84 2.20 2.44 24.13 45.50 6.87 3.42 25.27 45.11 3.19
HOG + CCA 2.74 18.91 32.51 4.28 3.25 21.82 36.51 2.04 4.38 29.96 50.43 8.70 4.62 34.22 56.28 3.87
HOG + CRAFT 2.59 17.93 31.50 4.24 3.58 22.90 38.59 2.06 3.03 24.07 42.89 7.07 4.16 27.75 47.16 3.17
LOMO + KISSME 2.23 18.95 32.67 4.05 2.65 20.36 34.78 2.45 3.83 31.09 52.86 8.94 4.46 34.35 58.43 4.93
LOMO + LFDA 2.89 21.11 35.36 4.81 3.86 24.01 40.54 2.61 4.81 32.16 52.50 9.56 6.27 36.29 58.11 5.15
LOMO + CCA 2.42 18.22 32.45 4.19 2.63 19.68 34.82 2.15 4.11 30.60 52.54 8.83 4.86 34.40 57.30 4.47
LOMO + CRAFT 2.34 18.70 32.93 4.22 3.03 21.70 37.05 2.13 3.89 27.55 48.16 8.37 2.45 20.20 38.15 2.69
TONE + HCML 14.32 53.16 69.17 16.16 - - - - 20.82 68.86 84.46 26.38 - - - -
Two-stream 11.65 47.99 65.50 12.85 16.33 58.35 74.46 8.03 15.60 61.18 81.02 21.49 22.49 72.22 88.61 13.92
One-stream 12.04 49.68 66.74 13.67 16.26 58.14 75.05 8.59 16.94 63.55 82.10 22.95 22.62 71.74 87.82 15.04
Zero-Padding 14.80 54.12 71.33 15.95 19.13 61.40 78.41 10.89 20.58 68.38 85.79 26.92 24.43 75.86 91.32 18.64
BDTR 27.32 66.96 81.07 27.32 - - - - 31.92 77.18 89.28 41.86 - - - -
cmGAN 26.97 67.51 80.56 27.80 31.49 72.74 85.01 22.27 31.63 77.23 89.18 42.19 37.00 80.94 92.11 32.76
DRL 28.90 70.60 82.40 29.20 - - - - - - - - - - - -
eBDTR 27.82 67.34 81.34 28.42 - - - - 32.46 77.42 89.62 42.46 - - - -
MSR 37.35 83.40 93.34 38.11 43.86 86.94 95.68 30.48 39.64 89.29 97.66 50.88 46.56 93.57 98.80 40.08
AlignGAN 42.4 85.0 93.7 40.7 51.5 89.4 95.7 33.9 45.9 87.6 94.4 54.3 57.1 92.7 97.4 45.3
MSPAC-MeCen 46.62 87.59 95.77 47.26 47.57 87.64 96.11 38.53 51.63 93.48 98.82 61.54 52.81 94.16 99.37 47.09
TABLE I: Overall performance comparison with different methods on SYSU-MM01 dataset under all-search and indoor-search mode in terms of rank-k accuracies and mAP (%).

This section compares the proposed MSPAC-MeCen with the baselines introduced above, including both traditional and deep learning methods. Table I and Table II report the overall performance on the SYSU-MM01 and RegDB datasets in terms of rank-k (k = 1, 10, 20) accuracy and mAP. From the comparison results, the following observations can be derived:

(1) Compared with traditional methods based on handcrafted features, whose highest mAP value fails even to reach 10%, MSPAC-MeCen demonstrates the clearly superior feature learning ability of deep neural networks.

(2) Among all end-to-end deep learning methods, MSPAC-MeCen still achieves appealing performance. This further verifies that the superiority of the proposed method results from its better ability to obtain distinctive features and construct modality-invariant correlations. Moreover, our model needs no additional modality-related constraints.

(3) For both the all-search and indoor-search modes in Table I, the MSPAC-MeCen model achieves the best single-shot performance with remarkable improvements, which indicates that our model can effectively eliminate the influence of background noise.

Methods r=1 r=10 r=20 mAP
LOMO 0.85 2.47 4.10 2.28
HOG 13.49 33.22 43.66 10.31
Two-stream 12.43 30.36 40.96 13.42
One-stream 13.11 32.98 42.51 14.02
Zero-Padding 17.75 34.21 44.35 18.90
TONE+HCML 24.44 47.53 56.78 20.80
BDTR 33.47 58.42 67.52 31.83
eBDTR 34.62 58.96 68.72 33.46
MSR 48.43 70.32 79.95 48.67
AlignGAN 57.9 - - 53.6
MSPAC-MeCen 49.61 72.28 80.63 53.64
TABLE II: Overall performance comparison with different methods on RegDB dataset in terms of rank-k accuracies and mAP (%).

III-D Ablation Study and Discussions (RQ2)

The individual components of MSPAC-MeCen, including the cascading aggregation mechanism and the MeCen loss, are further investigated on both datasets in this section. Note that these experiments are evaluated under the challenging all-search mode with the single-shot setting.

A. Effectiveness of Pyramid Part-aware Attention Mechanism

Methods r=1 r=5 r=10 r=20 mAP
Baseline+gid 27.52 43.54 53.45 63.88 29.44
MSPAC+gid 35.58 49.22 58.74 67.91 36.89
Baseline+pid 39.08 52.77 62.48 72.14 42.31
MSPAC+pid 49.61 62.77 72.28 80.63 53.64
TABLE III: Effectiveness of MSPAC in the proposed cross-modality model on RegDB dataset.
Methods r=1 r=5 r=10 r=20 mAP
Baseline+gid 27.14 58.77 72.60 84.35 29.22
MSPAC+gid 38.86 68.16 79.78 89.93 40.69
Baseline+pid 37.97 73.28 84.33 92.30 41.08
MSPAC+pid 41.41 70.87 84.09 93.66 41.98
TABLE IV: Effectiveness of MSPAC in the proposed cross-modality model on SYSU-MM01 dataset.
Methods r=1 r=10 r=20 mAP
MSPAC w/o CH 42.64 86.41 93.24 44.57
MSPAC w/o SP 40.63 86.62 95.11 43.97
Channel w/o MP 42.41 87.09 96.29 44.56
Channel w/o AP 42.23 84.07 94.16 43.40
MSPAC-Comb 46.62 87.59 95.77 47.26
TABLE V: Comparison of different attention modules. We evaluate the combining methods of spatial attention and channel attention branches, where 3 variants of pooling strategies are further verified, i.e., average pooling, max pooling, and the joint pooling form.

To validate this, MSPAC is removed to form the baseline for comparison on both datasets in Table III and Table IV. As max pooling (MP), average pooling (AP), spatial attention (SP) and channel attention (CH) in MSPAC are all adopted as a joint module, a deeper analysis of the advantages of this combination is given in Table V. The following conclusions can be drawn:

(1) Compared with the baseline, MSPAC achieves much better performance under both part-based (pid) and global-based (gid) constraints. MSPAC with global constraints (gid) reaches performance similar to the part-based baseline, especially on the SYSU-MM01 dataset. The unified global representations from MSPAC are thus verified to have superior discrimination ability.

(2) Feature representations in MSPAC are verified to be discriminative as there is little performance difference between global- and part-level constraints.

(3) The model usually shows better performance on the RegDB dataset due to its limited diversity of camera views and background noise.

B. Effectiveness of Hierarchical Part Aggregation Architecture

In order to demonstrate the effect of the hierarchical strategy employed in MSPAC, we compare different configurations and show the results in Table VI. Conventional aggregation usually combines multiple single-scale parts into a unified representation, where the number of body parts leads to different performance, while MSPAC integrates multi-scale body parts in a stage-wise fusion manner. We use three partition scales, {1, 3, 6}, for experimental verification.

Experiments show that our hierarchical fusion yields better performance, as the joint representation benefits from learning multi-scale part-aware features. Besides, a larger part number can lead to better results, e.g., the partition strategy with P=6 brings more fine-grained information about person appearance, which makes it superior to the cases of P=3 or P=1. This observation gives a more reasonable insight into adopting the hierarchical aggregation strategy to integrate both coarse and fine levels of granularity into a unified representation.

Strategy Part number r=1 r=10 r=20 mAP
Normal {1} 40.94 84.22 92.74 44.86
{3} 40.52 84.28 93.61 42.84
{6} 42.89 84.91 93.29 44.31
Hierarchical {1,3} 45.31 84.70 92.43 44.59
{3,6} 33.63 81.28 92.32 36.65
{1,3,6} 46.62 87.59 95.77 47.26
TABLE VI: Effectiveness of hierarchical part aggregation architecture in the proposed cross-modality model on the SYSU-MM01 dataset.

C. Effectiveness of Marginal Exponential Center Loss

Methods r=1 r=5 r=10 r=20 mAP
MSPAC+gid 38.86 68.16 79.78 89.93 40.69
+center 40.36 72.44 83.09 91.22 41.40
+margin 42.44 72.36 82.80 90.98 43.19
+exp 46.62 77.20 87.59 95.77 47.26
TABLE VII: Effectiveness of marginal exponential center loss in the proposed cross-modality model on the SYSU-MM01 dataset.

The MeCen loss introduces two modifications: a margin value for eliminating the accumulation influence and an exponential form for strengthening the restriction power on hard examples. Table VII shows experimental results on the SYSU-MM01 dataset evaluating the effectiveness of each modification.

From the observations, incorporating the center loss (+center) into the MSPAC model brings only slight improvements, while further introducing the margin (+margin) and then replacing the loss with the exponential form (+exp) improves results by a significant margin on both evaluation metrics. Therefore, the MeCen loss achieves significant improvements in eliminating intra-class variances owing to its effective clustering strategy.

D. Visualization Results

The proposed MSPAC-MeCen model is designed to have two main characteristics: robust modality correlations between heterogeneous feature representations and superior clustering ability for eliminating intra-class discrepancies. Visualization results are therefore shown in this section to explicitly demonstrate the effectiveness of the proposed method.

Fig. 4: Visualization of attention scores as heat maps. (a) Example persons, each with six images in the RGB (upper) and infrared (bottom) modalities. Attention maps in the middle and right columns are the results of the Baseline+gid model and the MSPAC-MeCen model respectively. (b) Positive and hard negative examples of the anchor person. Intra-person refers to positive examples of the same identity (upper), and inter-person refers to hard negative examples with similar appearances but different identities (bottom). These attention maps are obtained by the MSPAC-MeCen model.

Fig. 4 shows the attention maps of some example images obtained by means of CAM [25], where more salient visual clues are assigned higher attention scores.

For comparison, RGB and thermal images of the same person are selected in Fig. 4(a). It can be observed that MSPAC concentrates more on common discriminative regions across modalities, such as the head and leg regions, while the baseline method also attends to confusing information such as objects in the background. Moreover, more complete semantic information is captured by MSPAC, especially under the supervision of the MeCen loss.

In addition, to explicitly evaluate the clustering ability of the MeCen loss, 20 persons are randomly selected from the SYSU-MM01 dataset and their feature vectors are projected into a 2D feature space using t-SNE [15]. Compared with the baseline in Fig. 5(a), Fig. 5(b) shows that the distance between modalities in the proposed MSPAC-MeCen model is greatly reduced and feature points are clustered more compactly. This demonstrates the superior ability of our model in eliminating modality-related variances.
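The projection can be reproduced with a few lines of scikit-learn and matplotlib, for instance as sketched below, where identities are colour-coded and modalities are distinguished by marker shape.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, person_ids, modalities):
    """Project learned embeddings to 2D with t-SNE, colour-coding identities and
    using marker shape to distinguish RGB from thermal samples."""
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(np.asarray(features))
    person_ids, modalities = np.asarray(person_ids), np.asarray(modalities)
    for mod, marker in (("rgb", "o"), ("ir", "^")):
        mask = modalities == mod
        plt.scatter(emb[mask, 0], emb[mask, 1], c=person_ids[mask],
                    cmap="tab20", marker=marker, s=12)
    plt.axis("off")
    plt.show()
```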

Fig. 5: Visualization of clustering performance of MeCen in comparison with the baseline on the SYSU-MM01 dataset. (a) Baseline + gid. (b) The proposed model (MSPAC-MeCen). Only 20 person identities are randomly selected for illustration.

III-E Impact of Hyper-parameters (RQ3)

Two additional hyper-parameters introduced in the MSPAC-MeCen model, the margin threshold m and the weight λ, are evaluated in detail to give insight into how to set their values. (Experimental results are provided in the Supplementary Materials due to the page limit.)

IV Conclusions

For the cross-modality Re-ID task, this paper proposes a novel multi-scale part-aware cascading framework (MSPAC) for aggregating sharable features across heterogeneous images and a MeCen loss for modeling cross-modality correlations. The core of MSPAC is to apply attention to multi-scale body parts for salient feature enhancement and then incorporate fine-grained features into coarser partition scales step by step in a cascading framework. The MeCen loss greatly benefits the elimination of intra-class variances and modality-related discrepancies by imposing strong constraints on hard examples, yielding better cross-modality correlations. Extensive experiments on public datasets demonstrate the superiority of the proposed model. Future work involves dynamic weight learning techniques for better mutual optimization. Besides, we intend to further investigate effective data augmentation for enlarging the datasets.

V Acknowledgment

This work is supported by National Natural Science Foundation of China (No.U1613209), National Natural Science Foundation of Shenzhen (No.JCYJ20190808182209321).

References

  • [1] Y. C. Chen, X. Zhu, W. S. Zheng, and J. H. Lai (2017) Person re-identification by camera correlation aware feature augmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (2), pp. 392–408. External Links: Document, ISSN 01628828 Cited by: §III-A.
  • [2] P. Dai, R. Ji, H. Wang, Q. Wu, and Y. Huang (2018) Cross-modality person re-identification with generative adversarial training. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 677–683. External Links: Document, ISBN 9780999241127, ISSN 10450823 Cited by: §I, §III-A.
  • [3] N. Dalal and B. Triggs (2005) Histograms of oriented gradients for human detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 886–893. External Links: Document, ISBN 0769523722 Cited by: §III-A.
  • [4] Z. Feng, J. Lai, and X. Xie (2020) Learning modality-specific representations for visible-infrared person re-identification. IEEE Transactions on Image Processing 29, pp. 579–590. Cited by: §I, §III-A.
  • [5] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 770–778. External Links: Document, 1512.03385, ISBN 9781467388504, ISSN 10636919 Cited by: §II-A.
  • [6] X. He, Y. Zhou, Z. Zhou, S. Bai, and X. Bai (2018) Triplet-center loss for multi-view 3d object retrieval. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1945–1954. External Links: Document, 1803.06189, ISBN 9781538664209, ISSN 10636919 Cited by: §II-C.
  • [7] S. Iodice and A. Petrosino (2015) Salient feature based graph matching for person re-identification. Pattern Recognition 48 (4), pp. 1074–1085. External Links: Document, ISSN 00313203 Cited by: §I.
  • [8] J. K. Kang, T. M. Hoang, and K. R. Park (2019) Person re-identification between visible and thermal camera images based on deep residual CNN using single input. IEEE Access 7, pp. 57972–57984. External Links: Document, ISSN 21693536 Cited by: §I.
  • [9] M. Kostinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof (2012) Large scale metric learning from equivalence constraints. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2288–2295. External Links: Document, ISBN 9781467312264, ISSN 10636919 Cited by: §III-A.
  • [10] S. Liao, Y. Hu, X. Zhu, and S. Z. Li (2015) Person re-identification by local maximal occurrence representation and metric learning. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 2197–2206. Cited by: §III-A.
  • [11] D. T. Nguyen, H. G. Hong, K. W. Kim, and K. R. Park (2017) Person recognition system based on a combination of body images from visible light and thermal cameras. Sensors (Switzerland) 17 (3), pp. 605. External Links: Document, ISSN 14248220 Cited by: §III-A.
  • [12] S. Pedagadi, J. Orwell, S. Velastin, and B. Boghossian (2013) Local fisher discriminant analysis for pedestrian re-identification. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 3318–3325. External Links: Document, ISSN 10636919 Cited by: §III-A.
  • [13] N. Rasiwasia, J. Costa Pereira, E. Coviello, G. Doyle, G. R.G. Lanckriet, R. Levy, and N. Vasconcelos (2010) A new approach to cross-modal multimedia retrieval. In Proceedings of the ACM international conference on Multimedia, pp. 251–260. External Links: Document, ISBN 9781605589336 Cited by: §III-A.
  • [14] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang (2018) Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline). In Proceedings of the European Conference on Computer Vision (ECCV), pp. 480–496. External Links: Document, 1711.09349, ISBN 9783030012243, ISSN 16113349 Cited by: §II-A.
  • [15] L. Van Der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9 (Nov), pp. 2579–2625. External Links: ISSN 15324435 Cited by: §III-D.
  • [16] G. Wang, T. Zhang, J. Cheng, S. Liu, Y. Yang, and Z. Hou (2019) Rgb-infrared cross-modality person re-identification via joint pixel and feature alignment. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3622–3631. External Links: Document, ISBN 9781728148038, ISSN 15505499 Cited by: §III-A.
  • [17] G. Wang, Y. Yuan, X. Chen, J. Li, and X. Zhou (2018) Learning discriminative features with multiple granularities for person re-identification. In Proceedings of the ACM Multimedia Conference, pp. 274–282. External Links: Document, 1804.01438, ISBN 9781450356657 Cited by: §II-B.
  • [18] Z. Wang, Z. Wang, Y. Zheng, Y. Y. Chuang, and S. Satoh (2019) Learning to reduce dual-level discrepancy for infrared-visible person re-identification. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 618–626. External Links: Document, ISBN 9781728132938, ISSN 10636919 Cited by: §I, §III-A.
  • [19] A. Wu, W. S. Zheng, H. X. Yu, S. Gong, and J. Lai (2017) Rgb-infrared cross-modality person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5380–5389. Cited by: §I, §I, §III-A, §III-A.
  • [20] M. Ye, X. Lan, J. Li, and P. C. Yuen (2018) Hierarchical discriminative learning for visible thermal person re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 7501–7508. External Links: ISBN 9781577358008 Cited by: §I, §III-A.
  • [21] M. Ye, X. Lan, Z. Wang, and P. C. Yuen (2020) Bi-directional center-constrained top-ranking for visible thermal person re-identification. IEEE Transactions on Information Forensics and Security 15, pp. 407–419. External Links: Document, ISSN 15566021 Cited by: §I, §III-A.
  • [22] M. Ye, Z. Wang, X. Lan, and P. C. Yuen (2018) Visible thermal person re-identification via dual-constrained top-ranking. In Proceedings of the IEEE International Conference on Computer Vision International Joint Conference on Artificial Intelligence, pp. 1092–1099. External Links: Document, ISBN 9780999241127, ISSN 10450823 Cited by: §I, §III-A, §III-B, §III-B.
  • [23] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian (2015) Scalable person re-identification: A benchmark. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1116–1124. External Links: Document, ISBN 9781467383912, ISSN 15505499 Cited by: §I.
  • [24] W. S. Zheng, S. Gong, and T. Xiang (2013) Reidentification by relative distance comparison. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (3), pp. 653–668. External Links: Document, ISSN 01628828 Cited by: §I.
  • [25] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2921–2929. External Links: Document, 1512.04150, ISBN 9781467388504, ISSN 10636919 Cited by: §III-D.