Learning to Align Multi-Camera Domain for Unsupervised Video Person Re-Identification

09/29/2019 · by Youngeun Kim, et al. · KAIST Department of Mathematical Sciences

Most video person re-identification (re-ID) methods are based on supervised learning, which requires laborious cross-camera ID labeling. Because of this limitation, it is difficult to increase the number of cameras when constructing a large camera network. In this paper, we address the person ID labeling issue by presenting novel deep representation learning that requires no ID information across multiple cameras. Specifically, our method consists of both inter- and intra-camera feature learning techniques. We maximize feature distances between people within a camera. At the same time, considering each camera as a different domain, we apply domain adversarial learning across multiple camera views to minimize camera domain discrepancy. To further enhance our approach, we propose person part-level adaptation to effectively perform multi-camera domain invariant feature learning at different spatial regions. We carry out comprehensive experiments on four public re-ID datasets (i.e., PRID-2011, iLIDS-VID, MARS, and Market-1501). Our method outperforms state-of-the-art methods by a large margin of about 20% in rank-1 accuracy on the large-scale MARS dataset.




1 Introduction

Person re-identification (re-ID) [50, 24, 41, 7, 39, 29, 46, 32] aims to match images of a person-of-interest across multiple distinct camera views. Video-based re-ID [10, 37, 22, 52, 44, 13, 40] has recently been studied extensively in video surveillance systems for public safety. Among re-ID works, supervised learning approaches have led to substantial performance advances. However, annotating person IDs across multiple cameras entails a significant labor cost and is difficult to generalize to real-world situations, since the labeling cost grows proportionally with the number of cameras. Therefore, a line of work [28, 36, 8, 20, 42, 43, 41] focuses on unsupervised re-ID approaches that learn ID-discriminative feature representations without cross-camera ID labeling.

Existing unsupervised re-ID approaches can be divided into two groups. The first group [7, 28, 35] trains a model on an additional labeled source dataset and transfers the knowledge to unlabeled target camera domains. However, these methods adapt poorly to target camera domains that have a large discrepancy with the source domain, and they also require additional labeling cost for the source dataset. The other group [24, 42, 41, 21, 43] iteratively updates a training set by comparing feature distances between samples from different camera domains. These methods suffer from uncertainty in selecting positive/negative samples, since there is no ID information between samples in different cameras. Even though the aforementioned works show promising performance in few-camera systems [37, 14], this inherent imperfection and uncertainty induce performance degradation with a large number of cameras [48, 49]. Compared to previous works, we directly pull together ID feature representations across all cameras.

Figure 1: Our motivation is to maximize the inter-ID distance within a camera (different IDs in a row) and minimize camera domain discrepancy (different camera views in a column).
Figure 2: Illustration of the whole pipeline of our approach. (a) We diversify ID embedding features within a camera. (b) However, some IDs (purple boxes) could be misclassified in the embedding feature space due to camera domain discrepancy. (c) To address this problem, we propose multi-domain feature invariant learning, which reduces the feature distance between the same person ID across all cameras. (d) A consideration with both maximizing inter-ID distance within a camera and minimizing camera domain discrepancy (i.e., learning to align multi-camera domain) helps a feature extractor to generate discriminative ID features.

In this paper, we propose a simple yet effective approach, named LAM: Learning to Align Multi-Camera Domain, for unsupervised video re-ID. Toward this goal, we formulate the video re-ID task as a multi-camera domain invariant feature learning problem across multiple cameras. Ideally, the embedding features belonging to the same person ID have to be close together regardless of the camera domain (see Fig. 1). To achieve this, we enforce the features in the learned embedding space to maximize inter-ID distance within a camera and minimize camera domain discrepancy, which are described as follows:

To maximize the inter-ID distance within a camera, we adopt a multi-branch classification scheme [1, 6, 21]. Each branch improves the discriminative power of the ID features within its camera. In addition, we propose an effective batch composition technique for multi-domain feature learning, named Cross-view Batch Normalization (CBN).

To minimize the camera domain discrepancy, we propose Multi-camera Domain Invariant Feature Learning (MDIFL). Specifically, we apply the concept of domain adversarial learning [17, 30], which obtains domain-invariant features by generating features that confuse a domain discriminator. With this concept, the network can align the feature distributions across all camera domains. In our work, we propose two versions of LAM according to the configuration of the camera domain discriminator. The first version is designed with multiple camera discriminators, on the basis of conventional cross-domain (i.e., source and target domains) adaptation methods; here, we construct a cross-domain discriminator for every pair of domains. The second is a memory-efficient version, which compresses the multiple domain discriminators into a single multi-domain camera discriminator. Depending on the camera network system, one can choose between the two versions, which exhibit a performance-memory trade-off.

Furthermore, we present Person Part-level Adaptation (PPA) for effective feature alignment with MDIFL. Our motivation is that each body part is located in a similar area in the re-ID images [39, 29, 46, 32]. For example, head, torso, and legs usually appear on the top, middle, and bottom part of images, respectively. Therefore, we divide the person feature map into a few parts and align the feature distributions of the same body parts across multi-camera domains. Consequently, the multi-camera domain invariant feature learning more effectively minimizes inter-camera ID variation by aligning part-level distributions.

To sum up, our main contributions are as follows: 1) We propose a novel learning scheme for unsupervised video person re-identification, named Learning to Align Multi-camera domains (LAM), which aims to maximize the inter-ID distance within a camera and minimize the camera domain discrepancy. 2) We suggest Multi-camera Domain Invariant Feature Learning (MDIFL), which gathers the embedding features belonging to the same class regardless of camera domain. 3) We also present Person Part-level Adaptation (PPA) to effectively minimize the discrepancy among multiple camera domains by aligning part-level distributions. 4) We conduct extensive experiments on three public video-based person re-ID datasets (PRID-2011 [14], iLIDS-VID [37], and MARS [48]) and one image-based person re-ID dataset (Market-1501 [49]). Our experiments show that the effect of MDIFL grows proportionally as the number of cameras increases, making it applicable to real-world large camera networks. The two proposed versions of our method (i.e., the memory-efficient and the full LAM) improve state-of-the-art video re-ID performance on the large-scale MARS dataset by 11% and 20% in rank-1 accuracy and by 13% and 23% in mAP, respectively.

2 Related Work

2.1 Unsupervised Person Re-ID

Person re-ID has recently attracted attention due to the importance of video surveillance systems. Most existing re-ID works focus on supervised learning [10, 52, 22, 37, 44, 13, 40, 50], which demands a large labeling effort and thus motivates the study of unsupervised re-ID [28, 36, 8, 20, 42, 21, 43, 24, 41]. For unsupervised re-ID, one group of works proposes style-transfer methods [38, 5, 51], which generate image pairs between source and target domains and train a network with existing supervised learning techniques. This image-generation approach requires labeled source data to train an image translator and is inherently limited by the imperfection of image translation. In addition, a line of work [24, 42, 41, 43] proposes iterative update schemes. These methods are mainly based on comparing feature distances between samples obtained from different camera domains, which introduces uncertainty in selecting positive/negative samples. Although the above methods improve performance on few-camera datasets (PRID-2011 [14] and iLIDS-VID [37]), they do not guarantee success on a multiple-camera system (MARS [48]). Compared to previous works, our method shows a significant performance gain as the number of cameras increases, which is appropriate for real-world video surveillance.

2.2 Domain Adaptation

Domain adaptation [17, 30] handles the domain discrepancy between labeled source data and unlabeled target data. Recently, domain adaptation has been actively studied due to data scarcity in classification [2, 25, 31], segmentation [3, 15, 16, 34], and detection [4, 18]. Before the deep learning era, a number of methods [11, 26] attempted to decrease the domain discrepancy with hand-crafted features. With the success of deep learning, Maximum Mean Discrepancy (MMD) minimization [2, 25] and domain adversarial learning [9, 34] have achieved impressive performance on domain adaptation tasks. In this paper, we apply the concept of domain adversarial learning to unsupervised video re-ID. We regard each camera as a different domain without labeled source data, which differs from the conventional domain adaptation scenario.

3 Methodology

Our goal is to train a network to extract an ID feature that is invariant to camera domains. The proposed approach is a combination of two ideas as follows. 1) We maximize inter-ID distance within a camera. For accurate ID classification, the network extracts features that have a large distance between different IDs. However, as shown in Fig. 2, ID features could be misclassified due to a domain discrepancy from different cameras. 2) To solve this problem, we minimize camera domain discrepancy with multi-domain adversarial learning. After training, the network extracts features from given query/gallery tracklets, and the features are compared by the Euclidean distance for ranking. Our approach can be applied not only to person re-identification but also to various multi-domain tasks.

3.1 Problem Formulation

Video person re-identification (re-ID) [10, 37] is the task of searching for a given probe person in gallery video sets. Since each camera has different camera parameters and viewpoints, we treat each camera as a different domain. The problem can be formulated as follows. We have C camera domains, each of which contains a different number of tracklets T of N people. Let {(T^i_j, y^i_j)} represent the tracklet-label pairs for network training, where T^i_j is the j-th tracklet (i.e., a set of images) from the i-th camera and y^i_j is its ID label.

Figure 3: Illustration of Single-view Tracklet Pseudo Labeling (STPL). The same color (e.g., red or blue) represents the same person. Note that one person could have different IDs across multiple cameras.

3.2 Single-view Tracklet Pseudo Labeling

In video surveillance systems, the tracking result of one person can be divided into several tracklets (i.e., sequences of detected images), mostly generated by challenging situations (e.g., occlusion) during the tracking process. In a single camera view, the separated tracklets of a person can be easily associated with spatial and temporal information [21], which is based on three observations: 1) one person cannot be in another place at the same time; 2) within a single camera domain, the appearance of people does not change significantly; 3) multi-object tracking in video surveillance systems achieves high performance [33] due to the fixed field-of-view and non-sudden person movement. Based on these observations, we assign pseudo-labels to every tracklet in each camera. Note that the same person can be labeled with different IDs in other cameras, since the labeling process is performed independently for each camera in an unsupervised manner with no ID information across multiple cameras. For example, as shown in Fig. 3, the tracklet fragments of the same person are combined within a single camera but assigned different IDs in other cameras. Formally, for labels y^p_j and y^q_k from different camera domains p and q, y^p_j = y^q_k does not guarantee that the two tracklets belong to the same person. We show the robustness of our method even in the wild situation (i.e., with tracklet fragments of one person within a camera), which is discussed in Section 4.2.
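The per-camera association above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tracklet representation (time span plus a last-seen horizontal position) and the thresholds `max_gap` and `max_dist` are hypothetical simplifications of the spatio-temporal association the paper describes.

```python
def stpl_label(tracklets, max_gap=50, max_dist=30.0):
    """Assign pseudo IDs to the tracklets of ONE camera (STPL sketch).

    tracklets: list of (t_start, t_end, exit_x) tuples sorted by t_start,
    where exit_x is the person's last horizontal position (hypothetical).
    Tracklets overlapping in time must be different people (observation 1);
    a tracklet re-uses an earlier ID only when the temporal gap is small and
    its entry point is near the earlier exit point (observations 2 and 3)."""
    labels, next_id = [], 0
    for i, (start, end, x) in enumerate(tracklets):
        assigned = None
        for j in range(i):
            prev_start, prev_end, prev_x = tracklets[j]
            if 0 < start - prev_end <= max_gap and abs(x - prev_x) <= max_dist:
                assigned = labels[j]  # fragment of the same person
                break
        if assigned is None:
            assigned, next_id = next_id, next_id + 1
        labels.append(assigned)
    return labels
```

Because the procedure runs independently per camera, the same person generally receives different pseudo IDs in different cameras, exactly as Fig. 3 illustrates.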

3.3 Maximizing Inter-ID Distance within a Camera

Multi-camera Branch Classification.    Even though there is no ID information across multiple cameras, the network can still learn somewhat ID-discriminative feature representations. To this end, we adopt multi-branch classification [1, 6, 21]. In this work, we construct a common feature extractor network f with parameters θ and add C branch classifiers c_1, …, c_C, one per camera, each followed by a softmax layer. The learning for maximizing inter-ID distance within a camera is formulated as a supervised learning problem with pseudo-labels:

min_θ Σ_i Σ_j ℓ(c_i(f(T^i_j; θ)), y^i_j).     (1)

Since we use cross-entropy for the loss function ℓ, we can reformulate Eq. (1) as the loss function for an input tracklet T^i_j, which maximizes inter-ID distance within a camera:

L_ID(T^i_j) = −log p_{y^i_j}(T^i_j),     (2)

where p_k denotes the k-th element of the softmax layer of the corresponding branch classifier. Note that we omit the network parameters θ for brevity.
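As a concrete sketch of Eq. (2), the per-camera branch loss reduces to a standard softmax cross-entropy evaluated only on the branch belonging to the tracklet's camera. The function names below are illustrative, not the paper's code.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def branch_id_loss(branch_logits, camera_id, pseudo_label):
    """Cross-entropy on the classifier branch of the tracklet's own camera.

    branch_logits: {camera_id: [one logit per within-camera pseudo ID]}.
    Only the branch of `camera_id` contributes, so this tracklet's gradient
    never touches the classifiers of other cameras."""
    probs = softmax(branch_logits[camera_id])
    return -math.log(probs[pseudo_label])
```

Minimizing this loss pushes same-camera pseudo IDs apart in the shared embedding space, since each branch must separate all pseudo IDs of its own camera.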

Cross-view Batch Normalization.    Given the common feature extraction network f shared across C camera domains, there are several alternative ways to compose a mini-batch. We compare a naïve approach with the proposed Cross-view Batch Normalization (CBN).

Naïve approach. Each mini-batch consists of tracklets sampled from a single camera domain. When a mini-batch passes through the entire network, the common network f and the classifier c_i of that camera domain are optimized by stochastic gradient descent (SGD).

Cross-view Batch Normalization. A mini-batch comprises a uniform number of tracklets sampled from all C camera domains. Therefore, we optimize f and all classifiers c_1, …, c_C at once.

Both methods have similar ID distributions in a mini-batch under random sample selection. However, the naïve mini-batch composition does not consider the characteristics of multiple camera domains. In particular, Li et al. [23] introduced adaptive batch normalization (AdaBN) across multiple domains and showed that the parameter statistics of a BN layer are highly correlated with the domain composition of a mini-batch. Inspired by this work, we construct each mini-batch from multiple camera domains so that the BN layers learn the statistics of all domains. Our CBN improves the performance on the MARS dataset by 7% in rank-1 accuracy and 4% in mAP.

Figure 4: Illustration of two versions of the cross-domain discriminator: multiple camera domain discriminators (left) and a single camera domain discriminator for memory efficiency (right).
Figure 5: (a) The motivation of Person Part-level Adaptation (PPA). Compared to head (purple) and leg (red) parts, a body part (green) shows a large appearance variation. (b) Based on the motivation, we add the spatial average pooling layer in front of the camera domain discriminator for part-level adversarial learning.
Figure 6: Illustration of the network architecture. First, we sample a mini-batch via CBN and extract ID features through ResNet-50. Then, the multi-branch classifiers and the MDIFL module take the ID features as an input. While training the network, the multi-branch classifiers help the feature extractor to maximize ID distance within a camera, and the MDIFL module minimizes camera domain discrepancy.

3.4 Minimizing Camera Domain Discrepancy

Multi-camera Invariant Learning.    The key contribution of this paper is an effective Multi-camera Domain Invariant Feature Learning (MDIFL) method, which enforces the feature extractor to generate similar ID features across all camera domains. We apply the concept of domain adversarial learning to implement MDIFL while considering a characteristic of person re-ID: video surveillance systems contain more than two camera domains, so the conventional cross-domain adaptation approach is hard to apply directly. To this end, we design a camera domain discriminator D_{p,q} between every pair of domains p and q, where p ≠ q. Note that D_{p,q} and D_{q,p} represent the same camera domain discriminator. For each pair, we add the gradient reversal layer [9] between f and D_{p,q}, which flips the back-propagated gradient, to align different camera domain features in the same embedding feature space. Since a mini-batch contains samples from all camera domains by CBN, we can train all discriminators at the same time. Therefore, we formulate an adversarial loss function for an input tracklet T^i_j as follows:

L_adv(T^i_j) = −Σ_{q ≠ i} log D_{i,q}(f(T^i_j))[i],     (3)

where D_{i,q}(·)[i] denotes the probability that the discriminator assigns the feature to camera domain i. However, constructing cross-domain discriminators for C cameras has a memory complexity of O(C^2), which limits increasing the number of cameras in a video surveillance system. For example, as shown in Fig. 4, the large-scale MARS dataset containing 6 cameras needs 15 discriminators (6·5/2 = 15) for training domain-invariant features. To implement MDIFL efficiently, our camera discriminator takes the ID features of multiple domains with shared parameters, so the set of cross-domain discriminators can be compressed into a single multi-domain discriminator D. Our experiments show that designing a multi-domain discriminator involves a performance-memory trade-off. The objective function for MDIFL with a single discriminator is formulated as follows:

L_MDIFL(T^i_j) = −log D(f(T^i_j))[i],     (4)

where D(·)[i] denotes the i-th element of the softmax layer of the discriminator. The proposed loss leads the extractor to generate inter-camera invariant ID features.
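The mechanics of this adversarial objective can be sketched without an autodiff framework: the gradient reversal layer (GRL) is an identity in the forward pass and multiplies gradients by −λ in the backward pass, so minimizing the discriminator's cross-entropy simultaneously pushes the extractor toward camera-confusing features. This is a generic GRL sketch under those assumptions, not the paper's code.

```python
import math

class GradReverse:
    """Gradient Reversal Layer [9]: identity forward, -lambda * grad backward."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return list(x)  # pass features through unchanged

    def backward(self, grad_output):
        # Flip the sign (and scale) of incoming gradients, so the feature
        # extractor ascends the discriminator loss while D descends it.
        return [-self.lam * g for g in grad_output]

def mdifl_loss(domain_logits, camera_id):
    """Single-discriminator MDIFL loss sketch: negative log-probability of
    the true camera domain under the discriminator's softmax."""
    m = max(domain_logits)
    exps = [math.exp(v - m) for v in domain_logits]
    return -math.log(exps[camera_id] / sum(exps))
```

When the discriminator cannot do better than a uniform guess over the C camera domains, the loss saturates at log C, the point at which the extracted features carry no camera information.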


Methods PRID-2011 (2 cameras) iLIDS-VID (2 cameras) MARS (6 cameras)
Rank 1 5 10 20 1 5 10 20 1 5 10 20 mAP
Salience [47] 25.8 43.6 52.6 62.0 10.2 24.8 35.5 52.9 - - - - -
GRDL [20] 41.6 76.4 84.6 89.9 21.7 42.9 56.2 71.6 19.3 33.2 41.6 46.5 9.6
SMP [24] 80.8 96.0 98.3 99.3 33.2 55.7 64.4 72.5 23.9 35.8 - 44.9 10.5
DGM+IDE [42] 56.4 81.3 88.0 89.9 36.2 62.8 73.6 82.7 36.8 54.0 61.6 68.5 21.3
RACE [41] 50.6 79.4 84.8 91.8 19.3 39.3 53.3 68.7 43.2 57.1 62.1 67.6 24.5
TAUDL [21] 49.4 78.7 - 98.9 26.7 51.3 - 82.0 43.8 59.9 - 72.8 29.1
LAM (Ours, single-tracklet sampling) 62.3 86.6 92.2 97.9 34.4 55.4 67.1 83.2 51.4 67.0 76.3 82.4 38.2
LAM (Ours, memory-efficient) - - - - - - - - 55.0 72.7 79.1 84.3 42.0
LAM (Ours, full) - - - - - - - - 63.3 80.8 85.7 89.8 51.7
Supervised [45] 85.2 97.1 98.9 99.6 60.2 84.7 91.7 95.2 71.2 85.7 91.8 94.3 -


Table 1: Performance evaluation on the unsupervised video re-ID datasets PRID-2011 (2 cameras), iLIDS-VID (2 cameras), and MARS (6 cameras). We denote the original (multi-discriminator) and memory-efficient versions of LAM as LAM (full) and LAM (memory-efficient), respectively (see Fig. 4). Note that LAM (single-tracklet sampling) randomly selects one tracklet for each person in one camera, following previous works [21, 24]. Best and second-best results are indicated in red and blue, respectively.

Person Part-level Adaptation for MDIFL.    We further improve MDIFL with Person Part-level Adaptation (PPA). Previous studies on part-based person re-ID [39, 29, 46, 32] suggest that each body part (e.g., head, torso, and legs) has a different feature distribution across camera domains. However, conventional domain adversarial learning methods are generally designed for image classification tasks and struggle to align the feature distribution of each part. Therefore, image-level adaptation may hinder the alignment of person-part distributions across multiple camera domains.

With PPA, we take advantage of information about the spatial location of human body parts. We are motivated by the fact that human body parts appear at similar locations in the video frames, as shown in Fig. 5. For example, the head, torso, and legs usually appear in the top, middle, and bottom parts of an image, respectively. To this end, we divide the feature map obtained from the feature extractor into P regions. We denote by S_p the set of features located in the p-th grid region, where p ∈ {1, …, P}. After that, we average the elements of S_p for every p (i.e., the feature map is forwarded through a spatial average pooling layer), and represent the average-pooled feature as g_p. With part-level MDIFL, we can effectively align the ID feature distributions across multiple camera domains as follows:

L_PPA(T^i_j) = −(1/P) Σ_{p=1}^{P} log D(g_p)[i],     (5)

where g_p is the average-pooled feature of the p-th region. The proposed MDIFL improves the performance on the MARS dataset by 10% in rank-1 accuracy and 8% in mAP. Furthermore, PPA further increases the performance by 2% in rank-1 accuracy and 4% in mAP.
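The spatial average pooling step of PPA can be sketched as follows. For simplicity, the sketch splits the feature map into P horizontal stripes, a common part-based convention [29, 32]; the exact grid layout used in the paper (with P = 28) may differ.

```python
def part_pool(feature_map, P):
    """Split an H x W feature map (nested lists, each cell a D-dim vector)
    into P horizontal stripes and average-pool each stripe into one D-dim
    vector g_p, which is then fed to the camera domain discriminator."""
    H = len(feature_map)
    assert H % P == 0, "H must be divisible by P"
    stripe = H // P
    parts = []
    for p in range(P):
        cells = [c for row in feature_map[p * stripe:(p + 1) * stripe] for c in row]
        dim = len(cells[0])
        parts.append([sum(c[d] for c in cells) / len(cells) for d in range(dim)])
    return parts
```

Each pooled vector g_p summarizes one body region, so the adversarial loss can align head-with-head and legs-with-legs distributions across cameras instead of mixing all regions into one image-level statistic.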

3.5 Network Architecture and Training

Baseline Network.    We illustrate our network in Fig. 6. For a fair comparison with previous methods, we adopt the ResNet-50 [12] model pre-trained on ImageNet as our ID feature extractor f. Every multi-camera branch classifier consists of a single fully-connected layer followed by a softmax layer. Features from f pass through the GRL [9] (i.e., the sign of the gradients is flipped during back-propagation) and are fed into the camera domain discriminator.

Camera Domain Discriminator.    To consider the part feature distributions, we adopt fully-convolutional layers, which maintain the spatial information. Specifically, the discriminator network consists of 3 convolutional layers with a 3×3 kernel, a stride of 1, and a zero-padding of 1, so the spatial resolution is preserved; the last layer outputs C channels, where C is the number of camera domains. We use Leaky ReLU parameterized by 0.2 as the activation function.
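The fully-convolutional design matters because part-level alignment needs the spatial grid to survive the discriminator. A quick sanity check under the stated hyperparameters (stride 1, zero-padding 1, and an assumed 3×3 kernel, the only common size that preserves resolution with those settings):

```python
def conv_out_size(n, kernel=3, stride=1, padding=1):
    """Standard convolution output-size formula for one spatial dimension."""
    return (n + 2 * padding - kernel) // stride + 1

def leaky_relu(x, slope=0.2):
    """Discriminator activation with negative slope 0.2."""
    return x if x >= 0.0 else slope * x
```

Stacking three such layers leaves an H × W feature map at H × W, so every output cell of the discriminator still corresponds to a person-part region, which is exactly what PPA exploits.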

Network Training.    We train the feature extractor f, the multi-branch classifiers c_1, …, c_C, and the camera domain discriminator by minimizing a joint loss as follows:

L = L_ID + λ L_adv,     (6)

where L_adv is the adversarial loss of Eq. (3), (4), or (5) depending on the discriminator configuration, and λ is a hyperparameter for balancing the two loss functions. We set λ to 1 in our experiments. Also, we set P to 28 in Eq. (5).

To train the network, we utilize the Adam optimizer [19] with weight decay. We use a learning rate of 0.00035 and decay it by a factor of 0.1 every 200 training steps. All training images are resized to a fixed resolution. The whole pipeline is implemented using the PyTorch framework [27] on 2 NVIDIA Titan Xp GPUs.
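The optimization schedule above amounts to a simple step decay combined with the joint objective of Eq. (6) at λ = 1; a sketch with illustrative function names:

```python
def learning_rate(step, base=3.5e-4, factor=0.1, every=200):
    """Step decay: multiply the base rate by `factor` every `every` steps."""
    return base * factor ** (step // every)

def joint_loss(l_id, l_adv, lam=1.0):
    """Eq. (6) sketch: classification loss plus weighted adversarial loss."""
    return l_id + lam * l_adv
```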


Methods Rank-1 Rank-5 Rank-10 Rank-20 mAP
Baseline 35.5 54.8 63.0 70.3 26.5
Baseline + CBN 42.9 63.6 71.3 78.8 30.6
Baseline + CBN + MDIFL (single discriminator) 52.9 69.6 76.7 83.5 38.5
Baseline + CBN + MDIFL (single) + PPA (LAM, memory-efficient) 55.0 72.7 79.1 84.3 42.0
Baseline + CBN + MDIFL (multiple) + PPA (LAM, full) 63.3 80.8 85.7 89.8 51.7


Table 2: Effect of Cross-view Batch Normalization (CBN), Multi-camera Domain Invariant Feature Learning with a single or multiple discriminators (MDIFL), and Person Part-level Adaptation (PPA) on the MARS dataset.

4 Experiments

Datasets. To show the generality of our method, we report performance on three public video re-ID datasets: PRID-2011 [14], iLIDS-VID [37], and the large-scale MARS [48]. People in the PRID-2011 dataset are captured by two disjoint cameras with a large domain gap. It contains 385 and 749 person video tracks in camera A and camera B, respectively. Following the previous protocol, 178 people (i.e., 89 people each for training and testing) with no fewer than 27 frames are used for evaluation. The iLIDS-VID dataset is collected from two non-overlapping cameras located in an airport hall. 300 person video tracks are sampled in each camera (i.e., 150 people each for training and testing). MARS is a large-scale dataset with 6 non-overlapping cameras. It contains 1,261 person IDs, 625 for training and 636 for testing. Since MARS has multiple tracklets per ID in a camera, we associate the multiple tracklets into a single tracklet via pseudo-labeling based on STPL. Furthermore, to show the robustness and generality of our method, we also experiment on the image-based Market-1501 dataset [49]. The Market-1501 dataset contains 32,668 images of 1,501 person IDs obtained from 6 non-overlapping camera views: 12,936 images of 751 people for training and 19,732 images of 750 people for testing.

Evaluation Metric. We use CMC scores for evaluating all datasets. Additionally, we compute mean Average Precision (mAP) for 6-camera MARS and image-based Market-1501 datasets.

4.1 Comparison with State-of-the-art Methods

We compare the full and memory-efficient versions of LAM with state-of-the-art methods, including Salience [47], GRDL [20], SMP [24], DGM+IDE [42], RACE [41], and TAUDL [21], on the PRID-2011 [14], iLIDS-VID [37], and large-scale MARS [48] datasets in Table 1. The results show that: (1) For the few-camera datasets (PRID-2011 and iLIDS-VID), our approach achieves performance comparable to state-of-the-art methods. (2) For the multiple-camera dataset (MARS), our method achieves state-of-the-art performance by a large margin. Specifically, our full LAM exceeds recent methods (e.g., TAUDL [21] and RACE [41]) on the MARS dataset by 19.5% (63.3 vs. 43.8) in rank-1 accuracy and 22.6% (51.7 vs. 29.1) in mAP. Our memory-efficient LAM also outperforms these methods, by 11.2% (55.0 vs. 43.8) in rank-1 accuracy and 12.9% (42.0 vs. 29.1) in mAP. Recall that one can choose between the two versions depending on the camera network system; our experiments show that the memory-efficient version is easy to scale to a video surveillance system with a large number of cameras. Furthermore, we experiment on the MARS dataset using the previous tracklet sampling approach [21, 24], which randomly selects one tracklet for each person in one camera (see Table 1). This variant also achieves much higher performance than state-of-the-art methods, meaning that the embedding features belonging to the same class gather well regardless of camera domain through our multi-camera domain invariant feature learning, despite the use of the previous pseudo-labeling scheme. (3) The other methods are mainly based on comparing feature distances between samples from different camera domains. These methods show promising performance in a few-camera system but not in a large camera network, due to the uncertainty of computing distances between different camera domains. Our LAM addresses this problem by adopting a direct domain alignment approach with adversarial learning, which is applicable to a large camera system.

Figure 7: Analysis of the number of cameras used for network training. We use the MARS dataset for training and evaluation, and adopt LAM for this comparison.

4.2 Analysis of the Proposed Method

Ablation Study.    In Table 2, we show the effect of each component: 1) Cross-view Batch Normalization (CBN), 2) Multi-camera Domain Invariant Feature Learning with a single camera domain discriminator, 3) Multi-camera Domain Invariant Feature Learning with multiple camera domain discriminators, and 4) Person Part-level Adaptation (PPA). Here, the baseline is the ResNet-50 [12] model with multi-camera branch classifiers. The proposed CBN, MDIFL (single discriminator), and PPA improve the rank-1 accuracy of LAM by 7.4%, 10.0%, and 2.1%, respectively. Moreover, MDIFL with multiple discriminators combined with PPA significantly increases the rank-1 accuracy by 20.4% (63.3 vs. 42.9), which demonstrates the effect of multi-domain invariant feature learning.

The Effect of Increasing Camera Number.    Figure 7 shows the performance gain from applying MDIFL+PPA as the number of cameras used in training increases. To evaluate each trained model, we use the same 6 cameras at test time. The experimental results show that the performance gain grows as the network is trained with more camera domains, because the camera discriminator learns the relationships across all camera domains. The result also implies that our LAM has a significant advantage in a large video surveillance system.

Analysis of a Multi-Domain Discriminator.    To further validate the effect of MDIFL, we partially align the feature distributions between manually selected camera domains. Specifically, we align the ID features across 2- or 3-camera subsets of domains using multiple multi-domain discriminators. Table 3 shows that as more camera domains are aligned during network training, the rank-1 accuracy improves from 42.9 to 55.0. The experimental results show that: 1) there is a domain discrepancy between camera domains, which decreases the re-ID performance of the network; and 2) our MDIFL reduces this camera-domain discrepancy with domain adversarial learning.


Methods R1 R5 R10 R20 mAP
w/o MDIFL 42.9 63.6 71.3 78.8 30.6
2-domains × 3 45.4 66.6 73.5 79.4 33.3
3-domains × 2 51.5 70.5 77.6 83.3 38.1
6-domains × 1 55.0 72.7 79.1 84.3 42.0


Table 3: Performance comparison between partial-domain alignment and our approach (LAM) on the MARS dataset. N-domains × M denotes M multi-domain discriminators, each of which classifies N camera domains. Note that w/o MDIFL and 6-domains × 1 correspond to Baseline + CBN and LAM in Table 2, respectively.


ID Fragment Rate (%) R1 R5 R20 mAP
w/o MDIFL + PPA 0 42.9 63.6 78.8 30.6
10 39.5 58.9 74.3 26.8
30 34.4 53.2 69.5 23.6
50 31.2 52.3 67.1 22.8
w/ MDIFL + PPA 0 55.0 72.7 84.3 42.0
10 52.3 72.0 83.5 39.9
30 51.5 71.5 83.0 39.2
50 50.2 70.6 82.4 38.5


Table 4: Model robustness analysis of ID fragment rates on the MARS dataset.

Model Robustness Analysis.    For the MARS dataset, which contains multiple tracklets per person, we evaluate the robustness of our approach. In Table 4, we randomly select person IDs and divide the tracklet of each selected person into two tracklets with different IDs, varying the percentage of selected IDs. We compare the robustness of the model with and without MDIFL and PPA. As the ID fragment rate increases from 0% to 50%, rank-1 accuracy decreases by 11.7% (42.9 vs. 31.2) without MDIFL + PPA and by 4.8% (55.0 vs. 50.2) with them. The experimental results demonstrate that using only multi-branch classifiers is vulnerable to ID fragmentation, since fragmentation hinders the network from extracting similar feature representations of the same ID. In contrast, MDIFL and PPA leverage only camera information for training, which reduces the distance between same-ID features while aligning multiple domains. Consequently, our approach with MDIFL + PPA improves the model robustness to tracklet fragmentation in the wild.
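The fragmentation protocol behind Table 4 can be simulated as below. Splitting each selected ID at the midpoint of its occurrence list is a hypothetical simplification of the paper's random division; function and variable names are illustrative.

```python
import random

def fragment_ids(labels, rate, rng=None):
    """For a fraction `rate` of the person IDs (those with at least two
    tracklets), relabel the second half of that ID's tracklets with a
    brand-new ID, simulating a track broken by occlusion."""
    rng = rng or random.Random(0)
    ids = sorted({l for l in labels if labels.count(l) >= 2})
    chosen = sorted(rng.sample(ids, round(rate * len(ids))))
    out, next_id = list(labels), max(labels) + 1
    for pid in chosen:
        occ = [i for i, l in enumerate(labels) if l == pid]
        for i in occ[len(occ) // 2:]:  # second half becomes a new pseudo ID
            out[i] = next_id
        next_id += 1
    return out
```

Under this corruption, the per-camera classifiers must treat every fragment as a separate identity, whereas the camera-level adversarial terms only depend on which camera a tracklet came from and are therefore unaffected, consistent with the robustness gap in Table 4.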


Methods R1 R5 R10 mAP
PTGAN [38] 38.6 - 66.1 -
PUL [7] 44.7 59.1 65.6 20.1
CAMEL [43] 54.5 - - 26.3
SPGAN [5] 58.1 76.0 82.7 26.9
TJ-AIDL [35] 58.2 74.8 81.1 26.5
HHL [51] 62.2 78.8 84.0 31.4
TAUDL [21] 63.7 - - 41.2
LAM (Ours) 66.1 81.4 86.7 46.8


Table 5: Performance evaluation on the Market-1501 dataset.

Apply on Image-based Re-ID.    To show the generality of our method, we compare LAM (the memory-efficient version) with 7 state-of-the-art methods, including 3 style transfer-based methods (i.e., PTGAN [38], SPGAN [5], HHL [51]) and 4 domain alignment methods that adopt a labeled source dataset or metric learning between different camera domains (i.e., TAUDL [21], PUL [7], CAMEL [43], TJ-AIDL [35]), on the Market-1501 dataset [49]. The proposed method also achieves state-of-the-art performance on the image-based re-ID task, which shows the effectiveness of multi-domain alignment with adversarial learning.

5 Conclusion

In this paper, we have proposed Learning to Align Multi-Camera Domain (LAM), a multi-camera domain feature learning approach for unsupervised person re-identification. Our method consists of both inter- and intra-camera feature learning techniques. To this end, we adopt a multi-branch classification scheme to maximize feature distances between different people within a camera. At the same time, we apply domain adversarial learning across multiple camera views to minimize camera domain discrepancy. To further enhance Multi-camera Domain Invariant Feature Learning (MDIFL), we also propose Person Part-level Adaptation (PPA), which takes advantage of the spatial information of human body parts. We carry out comprehensive experiments on several public datasets (i.e., PRID-2011, iLIDS-VID, MARS, and Market-1501) and show the superiority of LAM. Our future work will extend the proposed multi-camera domain invariant feature learning to other multi-domain tasks.


  • [1] R. K. Ando and T. Zhang (2005) A framework for learning predictive structures from multiple tasks and unlabeled data. In JMLR, Cited by: §1, §3.3.
  • [2] F. M. Cariucci, L. Porzi, B. Caputo, E. Ricci, and S. R. Bulò (2017) Autodial: automatic domain alignment layers. In ICCV, Cited by: §2.2.
  • [3] Y. H. Chen, W. Y. Chen, Y. T. Chen, B. C. Tsai, Y. C. Frank Wang, and M. Sun (2017) No more discrimination: cross city adaptation of road scene segmenters. In ICCV, Cited by: §2.2.
  • [4] Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. Van Gool (2018) Domain adaptive faster r-cnn for object detection in the wild. In CVPR, Cited by: §2.2.
  • [5] W. Deng, L. Zheng, Q. Ye, G. Kang, Y. Yang, and J. Jiao (2018) Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In CVPR, Cited by: §2.1, §4.2, Table 5.
  • [6] T. Evgeniou and M. Pontil (2004) Regularized multi-task learning. In ACM SIGKDD, Cited by: §1, §3.3.
  • [7] H. Fan, L. Zheng, C. Yan, and Y. Yang (2018) Unsupervised person re-identification: clustering and fine-tuning. ACM Transactions on Multimedia Computing, Communications, and Applications, Cited by: §1, §1, §4.2, Table 5.
  • [8] M. Farenzena, L. Bazzani, A. Perina, and M. Cristani (2010) Person re-identification by symmetry-driven accumulation of local feature. In CVPR, Cited by: §1, §2.1.
  • [9] Y. Ganin and V. Lempitsky (2015) Unsupervised domain adaptation by backpropagation. In ICML, Cited by: §2.2, §3.4, §3.5.
  • [10] N. Gheissari, T. B. Sebastian, and R. Hartley (2006) Person reidentification using spatiotemporal appearance. In CVPR, Cited by: §1, §2.1, §3.1.
  • [11] B. Gong, Y. Shi, F. Sha, and K. Grauman (2012) Geodesic flow kernel for unsupervised domain adaptation.. In CVPR, Cited by: §2.2.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §3.5, §4.2.
  • [13] L. He, J. Liang, H. Li, and Z. Sun (2018) Deep spatial feature reconstruction for partial person re-identification: alignment-free approach. In CVPR, Cited by: §1, §2.1.
  • [14] M. Hirzer, C. Beleznai, P. M. Roth, and H. Bischof (2011) Person re-identification by descriptive and discriminative classification. In SCIA, Cited by: §1, §1, §2.1, §4.1, §4.
  • [15] J. Hoffman, E. Tzeng, T. Park, J. Y. Zhu, P. Isola, K. Saenko, A. Efros, and T. Darrell (2018) Cycada: cycle-consistent adversarial domain adaptation. In ICML, Cited by: §2.2.
  • [16] W. Hong, Z. Wang, M. Yang, and J. Yuan (2018) Conditional generative adversarial network for structured domain adaptation. In CVPR, Cited by: §2.2.
  • [17] J. Huang, A. Gretton, K. Borgwardt, B. Schölkopf, and A. J. Smola (2007) Correcting sample selection bias by unlabeled data. In NIPS, Cited by: §1, §2.2.
  • [18] N. Inoue, R. Furuta, T. Yamasaki, and K. Aizawa (2018) Cross-domain weakly-supervised object detection through progressive domain adaptation. In CVPR, Cited by: §2.2.
  • [19] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980, Cited by: §3.5.
  • [20] E. Kodirov, T. Xiang, Z. Fu, and S. Gong (2016) Person re-identification by unsupervised graph learning. In ECCV, Cited by: §1, §2.1, Table 1, §4.1.
  • [21] M. Li, X. Zhu, and S. Gong (2018) Unsupervised person re-identification by deep learning tracklet association. In ECCV, Cited by: §1, §1, §2.1, §3.2, §3.3, Table 1, §4.1, §4.2, Table 5.
  • [22] S. Li, S. Bak, P. Carr, and X. Wang (2018) Diversity regularized spatiotemporal attention for video-based person re-identification. In CVPR, Cited by: §1, §2.1.
  • [23] Y. Li, N. Wang, J. Shi, X. Hou, and J. Liu (2018) Adaptive batch normalization for practical domain adaptation. Pattern Recognition 80, pp. 109-117. Cited by: §3.3.
  • [24] Z. Liu, D. Wang, and H. Lu (2017) Stepwise metric promotion for unsupervised video person re-identification. In ICCV, Cited by: §1, §1, §2.1, Table 1, §4.1.
  • [25] M. Long, Y. Cao, J. Wang, and M. I. Jordan (2015) Learning transferable features with deep adaptation networks. In ICML, Cited by: §2.2.
  • [26] M. Long, G. Ding, J. Wang, J. Sun, Y. Guo, and P. S. Yu (2013) Transfer sparse coding for robust image representation. In CVPR, Cited by: §2.2.
  • [27] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §3.5.
  • [28] P. Peng, T. Xiang, Y. Wang, M. Pontil, S. Gong, T. Huang, and Y. Tian (2016) Unsupervised cross-dataset transfer learning for person re-identification. In ICCV, Cited by: §1, §1, §2.1.
  • [29] C. Su, J. Li, S. Zhang, J. Xing, W. Gao, and Q. Tian (2017) Pose-driven deep convolutional model for person re-identification. In ICCV, Cited by: §1, §1, §3.4.
  • [30] M. Sugiyama, M. Krauledat, and K. R. Müller (2007) Covariate shift adaptation by importance weighted cross validation. In JMLR, Cited by: §1, §2.2.
  • [31] B. Sun and K. Saenko (2016) Deep coral: correlation alignment for deep domain adaptation. In ECCV Workshops, Cited by: §2.2.
  • [32] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang (2018) Beyond part models: person retrieval with refined part pooling (and a strong convolutional baseline). In ECCV, Cited by: §1, §1, §3.4.
  • [33] C. Tomasi (2018) Features for multi-target multi-camera tracking and re-identification. In CVPR, Cited by: §3.2.
  • [34] Y. H. Tsai, W. C. Hung, S. Schulter, K. Sohn, M. H. Yang, and M. Chandraker (2018) Learning to adapt structured output space for semantic segmentation. In CVPR, Cited by: §2.2.
  • [35] J. Wang, X. Zhu, S. Gong, and W. Li (2018) Transferable joint attribute-identity deep learning for unsupervised person re-identification. In CVPR, Cited by: §1, §4.2, Table 5.
  • [36] J. Wang, X. Zhu, S. Gong, and W. Li (2018) Transferable joint attribute-identity deep learning for unsupervised person re-identification. In CVPR, Cited by: §1, §2.1.
  • [37] T. Wang, S. Gong, X. Zhu, and S. Wang (2014) Person re-identification by video ranking. In ECCV, Cited by: §1, §1, §1, §2.1, §3.1, §4.1, §4.
  • [38] L. Wei, S. Zhang, W. Gao, and Q. Tian (2018) Person transfer GAN to bridge domain gap for person re-identification. In CVPR, Cited by: §2.1, §4.2, Table 5.
  • [39] L. Wei, S. Zhang, H. Yao, W. Gao, and Q. Tian (2017) GLAD: global-local-alignment descriptor for pedestrian retrieval. In ACM MM, Cited by: §1, §1, §3.4.
  • [40] S. Xu, Y. Cheng, K. Gu, Y. Yang, S. Chang, and P. Zhou (2017) Jointly attentive spatial-temporal pooling networks for video-based person re-identification. In ICCV, Cited by: §1, §2.1.
  • [41] M. Ye, X. Lan, and P. C. Yuen (2018) Robust anchor embedding for unsupervised video person re-identification in the wild. In ECCV, Cited by: §1, §1, §2.1, Table 1, §4.1.
  • [42] M. Ye, A. J. Ma, L. Zheng, J. Li, and P. C. Yuen (2017) Dynamic label graph matching for unsupervised video re-identification. In ICCV, Cited by: §1, §1, §2.1, Table 1, §4.1.
  • [43] H. X. Yu, A. Wu, and W. S. Zheng (2017) Cross-view asymmetric metric learning for unsupervised person re-identification. In ICCV, Cited by: §1, §1, §2.1, §4.2, Table 5.
  • [44] J. Zhang, N. Wang, and L. Zhang (2018) Multi-shot pedestrian re-identification via sequential decision making. In CVPR, Cited by: §1, §2.1.
  • [45] J. Zhang, N. Wang, and L. Zhang (2018) Multi-shot pedestrian re-identification via sequential decision making. In CVPR, Cited by: Table 1.
  • [46] L. Zhao, X. Li, Y. Zhuang, and J. Wang (2017) Deeply-learned part-aligned representations for person re-identification. In ICCV, Cited by: §1, §1, §3.4.
  • [47] R. Zhao, W. Ouyang, and X. Wang (2013) Unsupervised salience learning for person re-identification. In CVPR, Cited by: Table 1, §4.1.
  • [48] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian (2016) MARS: a video benchmark for large-scale person re-identification. In ECCV, Cited by: §1, §1, §2.1, §4.1, §4.
  • [49] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian (2015) Scalable person re-identification: a benchmark. In ICCV, Cited by: §1, §1, §4.2, §4.
  • [50] Z. Zhong, L. Zheng, Z. Zheng, S. Li, and Y. Yang (2018) Camera style adaptation for person re-identification. In CVPR, Cited by: §1, §2.1.
  • [51] Z. Zhong, L. Zheng, S. Li, and Y. Yang (2018) Generalizing a person retrieval model hetero- and homogeneously. In ECCV, Cited by: §2.1, §4.2, Table 5.
  • [52] J. Zhou, B. Su, and Y. Wu (2018) Easy identification from better constraints: multi-shot person re-identification from reference constraints. In CVPR, Cited by: §1, §2.1.