Person re-identification (ReID) aims to retrieve images of the same person as a given query from a large gallery. It is a key technology for intelligent video analysis such as cross-camera pedestrian tracking. Many works focus on feature learning [hermans2017defense, suh2018part] and metric learning [chen2015mirror, liao2015person] on the RGB modality. These methods have achieved great success, especially with recent deep learning techniques [sun2018beyond]. However, their dependency on bright lighting environments limits their applicability in complex real-world scenarios. The performance of these methods degrades dramatically in dark environments where most cameras cannot work well [wu2017rgb]. Hence, other kinds of visual sensors such as infrared cameras are now widely used as a complement to RGB cameras to overcome these difficulties, yielding growing research interest in RGB-Infrared cross-modality person ReID (cm-ReID).
Compared with the conventional ReID task, the major difficulty of cm-ReID is the modality discrepancy resulting from the intrinsically distinct imaging processes of different cameras. Some discriminative cues, such as colors in RGB images, are missing in infrared images. Previous methods for overcoming the modality discrepancy fall into two major categories: modality-shared feature learning and modality-specific feature compensation. Shared feature learning aims to embed images of either modality into a common feature space [wu2017rgb, ye2018hierarchical, ye2018visible]. The specific information of each modality, such as the colors of RGB images and the thermal patterns of infrared images, is eliminated as redundant [dai2018cross]. However, specific information like color plays an important role in conventional ReID. With shared cues only, the upper bound of the discrimination ability of the feature representation is limited. Consequently, modality-specific feature compensation methods try to make up the missing specific information of one modality from the other. Dual-level Discrepancy Reduction Learning (DRL) [wang2019learning] is a typical work that generates multi-spectral images to compensate for the missing specific information using a generative adversarial network (GAN) [goodfellow2014generative]. However, a person captured in the infrared modality may wear clothes of various colors in the RGB space, so there can be multiple reasonable results for image generation. It is hard to decide which one is the correct target to generate for re-identification without memorizing the limited gallery set.
In this paper, we tackle the above limitations by proposing a novel cross-modality shared-specific feature transfer algorithm (termed cm-SSFT), which exploits both the modality-shared information and the modality-specific characteristics to boost re-identification performance. It models the affinities between intra-modality and inter-modality samples and utilizes them to propagate information. Every sample receives information from its inter-modality and intra-modality near neighbors and meanwhile shares its own information with them. This scheme compensates for the missing specific information and enhances the robustness of the shared features, thus improving the overall representation ability. A comparison with shared feature learning methods is shown in Figure 1. Our method can exploit the specific information that is unavailable to traditional shared feature learning. Since our method depends on the affinity modeling of near neighbors, the compensation process also avoids the target-choice ambiguity of generative methods. Experiments show that the proposed algorithm significantly outperforms the state of the art by 22.5% and 19.3% mAP, as well as 19.2% and 14.4% Rank-1 accuracy, on the two most popular benchmark datasets, SYSU-MM01 and RegDB, respectively.
The main contributions of our work are as follows:
We propose an end-to-end cross-modality shared-specific feature transfer (cm-SSFT) algorithm to utilize both the modality shared and specific information, achieving the state-of-the-art cross-modality person ReID performance.
We put forward a feature transfer method by modeling the inter-modality and intra-modality affinity to propagate information among and across modalities according to near neighbors, which can effectively utilize the shared and specific information of each sample.
We provide a novel complementary learning method to extract discriminative and complementary shared and specific features of each modality, respectively, which can further enhance the effectiveness of the cm-SSFT.
2 Related Work
Person ReID. Person ReID [zheng2016person] aims to search for images of a target person in a large gallery set given a query image. Recent works are mainly based on deep learning for more discriminative features [fang2019bilinear, hou2019interaction, xia2019second, zhou2019discriminative]. Some of them treat it as a partial feature learning task and pay much attention to more powerful network structures that better discover, align, and depict the body parts [guo2019beyond, sun2019perceive, sun2018beyond, liu2015spatio]. Other methods are based on metric learning and focus on proper loss functions, such as the contrastive loss [rama16siamese], the triplet loss [hermans2017defense], and the quadruplet loss [chen2017beyond]. Both kinds of methods try to exclude unrelated cues, such as pose, viewpoint, and illumination changes, from the features and the metric space. Recent disentanglement-based methods extend this direction further by splitting each sample into identity-related and identity-unrelated features, obtaining purer representations without redundant cues [ham2019learning, zheng2019joint].
The aforementioned methods process each sample independently, ignoring the connections between person images. Recent self-attention [luo2018spectral] and graph-based methods [bai2017scalable, shen2018deep, shen2018person, wu2019unsupervised] try to model the relationships between sample pairs. Luo et al. proposed a spectral feature transformation method to fuse features between different identities [luo2018spectral]. Shen et al. proposed a similarity-guided graph neural network [shen2018person] and deep group-shuffling random walk [shen2018deep] to fuse the residual features of different samples for more robust representations. Liu et al. utilized the structure of near neighbors to tackle unsupervised person ReID [liu2017stepwise].
Cross-modality matching. Cross-modality matching aims to match samples from different modalities, as in cross-modality retrieval [gu2018look, he2019new, lee2018stacked, liu2019mtfh] and cross-modality tracking [zhu2018fanet]. Cross-modality retrieval has been widely studied for heterogeneous face recognition [he2018wasserstein] and text-to-image retrieval [gu2018look, he2019new, lee2018stacked, li2018self, liu2019mtfh]. [he2018wasserstein] proposed a two-stream deep invariant feature representation learning method for heterogeneous face recognition.
Cross-modality person ReID. Cross-modality person ReID aims to match queries of one modality against a gallery set of another modality [wang2019beyond], such as text-image ReID [li2017person, niu2019improving, sarafianos2019adversarial], RGB-Depth ReID [hafner2018cross, wu2017robust] and RGB-Infrared (RGB-IR) ReID [dai2018cross, feng2019learning, hao2019hsme, kang2019person, kniaz2018thermalgan, lin2019hpiln, wang2019rgb, wang2019learning, wu2017rgb, ye2018hierarchical, ye2018visible, zhang2019attend]. Wu et al. built SYSU-MM01, the largest dataset for RGB-IR person ReID evaluation [wu2017rgb]. Ye et al. proposed a two-stream model with a bi-directional top-ranking loss for shared feature embedding [ye2018hierarchical, ye2018visible]. To make the shared features purer, Dai et al. introduced a generative adversarial training method for shared feature learning [dai2018cross]. These methods concentrate only on shared feature learning and ignore the potential value of specific features. Accordingly, some other works try to utilize modality-specific features and focus on cross-modality GANs. Kniaz et al. proposed ThermalGAN to translate RGB images to IR images and extract features in the IR domain [kniaz2018thermalgan]. Wang et al. put forward dual-level discrepancy reduction learning based on a bi-directional cycle GAN to reduce the gap between modalities [wang2019learning]. More recently, Wang et al. [wang2019rgb] constructed a novel GAN model with joint pixel-level and feature-level constraints, which achieved state-of-the-art performance. However, it is hard to decide which of the multiple reasonable generation targets is the correct one for re-identification.
3 Cross-Modality Shared-Specific Feature Transfer
The framework of the proposed cross-modality shared-specific feature transfer algorithm (cm-SSFT) is shown in Figure 2. Input images are first fed into the two-stream feature extractor to obtain the shared and specific features. The shared-specific transfer network (SSTN) then models the intra-modality and inter-modality affinities and propagates the shared and specific features across modalities to compensate for the missing specific information and enhance the shared features. To obtain discriminative and complementary shared and specific features, two project adversarial and reconstruction blocks and one modality-adaptation module are added to the feature extractor. The overall algorithm is trained in an end-to-end manner.
To better illustrate how the proposed algorithm works, we distinguish the RGB modality, the infrared modality, and the shared space with $R$, $I$, and $S$ in superscript. We use $H$ and $P$ to denote sHared and sPecific features, respectively.
3.1 Two-stream feature extractor
As shown in Figure 2, the two-stream feature extractor includes the modality-shared stream (blue blocks) and the modality-specific streams (green blocks for RGB and yellow blocks for IR). Each input image $x_i^m$ ($m \in \{R, I\}$) passes through the convolutional layers and the feature blocks to generate its shared feature $h_i^m$ and specific feature $p_i^m$:
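With this notation, the extraction step can be written compactly. The operator names $E_S$ (shared stream) and $E_m$ (modality-specific stream) below are our own labels for the two streams, not symbols from the paper:

```latex
h_i^m = E_S\!\left(x_i^m\right), \qquad
p_i^m = E_m\!\left(x_i^m\right), \qquad m \in \{R, I\}
```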
To make sure that both kinds of features are discriminative, we add a classification loss on each kind of feature:
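A standard form of such a loss, sketched here under the assumption that it is the usual cross-entropy over identities applied to both feature kinds, is:

```latex
L_{cls} = -\frac{1}{N} \sum_{i=1}^{N}
\Big( \log \hat{y}\big(h_i\big) + \log \hat{y}\big(p_i\big) \Big)
```

where $\hat{y}(\cdot)$ denotes the classifier's predicted probability for the ground-truth identity of the $i$-th sample.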
$\hat{y}_i$ is the predicted probability that the input image $x_i$ belongs to its ground-truth class. The classification loss ensures that the features can distinguish the identities of the inputs. Besides, we add a single-modality triplet loss [hermans2017defense] on the specific features and a cross-modality triplet loss [dai2018cross, ye2018visible] on the shared features for better discriminability:
where the first and second margins correspond to the single-modality and the cross-modality triplet losses, respectively; the three indices in each triplet denote the anchor, a positive sample of the anchor, and a negative sample of the anchor.
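As a minimal sketch of the triplet objective described above (a hinge on L2 distances; the function name and default margin are illustrative, not from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge-style triplet loss on L2 distances: encourages the
    anchor-positive distance to be at least `margin` smaller than
    the anchor-negative distance."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

The single-modality variant samples all three features from one modality's specific features, while the cross-modality variant draws the positive and negative from the opposite modality's shared features.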
3.2 Shared-Specific Transfer Network
The two-stream network extracts the shared and specific features of each modality. For a unified feature representation, we pad and denote the features of each sample with a three-segment format, [RGB-specific; shared; Infrared-specific], as follows:
$\mathbf{0}$ denotes the zero-padding vector, which means that samples of the RGB modality have no specific features of the infrared modality, and vice versa; $[\,\cdot\,;\,\cdot\,]$ means concatenation in the column dimension. For cross-modality retrieval, we need to transfer the specific features from one modality to the other to compensate for these zero-padding vectors. Motivated by the graph convolutional network (GCN), we utilize near neighbors to propagate information while maintaining the context structure of the overall sample space. The proposed shared-specific transfer network can make up the missing specific features and enhance the robustness of the overall representation jointly. As shown in Figure 2, SSTN first models the affinity of samples according to the two kinds of features. It then propagates both intra-modality and inter-modality information through the affinity model. Finally, the feature learning stage guides the optimization of the whole process with classification and triplet losses.
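A minimal sketch of this three-segment padding, assuming NumPy vectors; the function name and the explicit dimension argument for the absent modality's segment are illustrative:

```python
import numpy as np

def pad_features(spec, shared, modality, d_spec_other):
    """Build the three-segment representation
    [RGB-specific; shared; IR-specific], zero-padding the
    specific segment of the absent modality."""
    zeros = np.zeros(d_spec_other)
    if modality == "RGB":
        return np.concatenate([spec, shared, zeros])
    else:  # infrared
        return np.concatenate([zeros, shared, spec])
```

After padding, features of both modalities live in one space of dimension d_RGB-specific + d_shared + d_IR-specific, so they can be stacked into a single matrix for propagation.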
Affinity modeling. We use the shared and specific features to model the pair-wise affinity: the specific features for the intra-modality affinity and the shared features for the inter-modality affinity, as follows:
where $A^{mm}_{ij}$ is the intra-modality affinity between the $i$-th and the $j$-th samples, both belonging to modality $m$, and $A^{RI}_{ij}$ is the inter-modality affinity. $d(\cdot,\cdot)$ is the normalized Euclidean distance metric function:
The intra-modality and inter-modality similarities represent the relation of each sample to the others of both the same and the different modality. We define the final affinity matrix $A$ as:
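One consistent way to assemble the affinity construction described above, writing the row-wise top-$k$ near-neighbor selection as $\mathcal{T}_k$ (our notation, reconstructed from the surrounding text rather than taken from the paper), is:

```latex
A =
\begin{bmatrix}
\mathcal{T}_k\!\left(A^{RR}\right) & \mathcal{T}_k\!\left(A^{RI}\right)\\
\mathcal{T}_k\!\left(A^{IR}\right) & \mathcal{T}_k\!\left(A^{II}\right)
\end{bmatrix},
\qquad
A^{mm}_{ij} = d\!\left(p_i^m, p_j^m\right), \quad
A^{RI}_{ij} = d\!\left(h_i^R, h_j^I\right)
```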
where $\mathcal{T}_k$ is the near-neighbor selection function: it keeps the top-$k$ values in each row of a matrix and sets the others to zero.
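The selection function can be sketched in a few lines of NumPy (the function name is ours):

```python
import numpy as np

def topk_keep(mat, k):
    """Keep the k largest entries in each row of `mat`; zero the rest.
    This sparsifies the affinity matrix so that only near neighbors
    contribute during propagation."""
    out = np.zeros_like(mat)
    # indices of the k largest values in each row (unordered)
    idx = np.argpartition(mat, -k, axis=1)[:, -k:]
    rows = np.arange(mat.shape[0])[:, None]
    out[rows, idx] = mat[rows, idx]
    return out
```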
Shared and specific information propagation. The affinity matrix $A$ encodes the similarities across samples, and SSTN uses it to propagate features. Before propagation, the features of the RGB and infrared modalities are concatenated in the row dimension, so that each row stores the feature of one sample:
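The propagation step that follows can then be sketched as one affinity-weighted averaging pass; the row normalization is an assumption here, chosen so that each output stays on the scale of the input features:

```python
import numpy as np

def propagate(A, X):
    """One GCN-style propagation step: each sample's new feature is the
    affinity-weighted average of its (intra- and inter-modality)
    neighbors' features. A is the (n, n) sparsified affinity matrix,
    X is the (n, d) stacked RGB-then-infrared feature matrix."""
    row_sum = A.sum(axis=1, keepdims=True)
    row_sum[row_sum == 0] = 1.0  # guard against isolated samples
    return (A / row_sum) @ X
```

Because each sample keeps a nonzero affinity to itself and to neighbors of both modalities, the averaged row fills in the zero-padded specific segment with its cross-modality neighbors' specific features.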