Spatial-Aware GAN for Unsupervised Person Re-identification

11/26/2019 ∙ by Fangneng Zhan, et al. ∙ Nanyang Technological University

The recent person re-identification research has achieved great success by learning from a large number of labeled person images. On the other hand, the learned models often experience significant performance drops when applied to images collected in a different environment. Unsupervised domain adaptation (UDA) has been investigated to mitigate this constraint, but most existing systems adapt images at pixel level only and ignore obvious discrepancies at spatial level. This paper presents an innovative UDA-based person re-identification network that is capable of adapting images at both spatial and pixel levels simultaneously. A novel disentangled cycle-consistency loss is designed which guides the learning of spatial-level and pixel-level adaptation in a collaborative manner. In addition, a novel multi-modal mechanism is incorporated which is capable of generating images of different geometry views and augmenting training images effectively. Extensive experiments over a number of public datasets show that the proposed UDA network achieves superior person re-identification performance as compared with the state-of-the-art.


1 Introduction

Person Re-Identification (ReID) aims at retrieving images of the same person from an image set collected with different cameras. It has been attracting increasing interest in both academia and industry in recent years thanks to its importance in surveillance and public security. Existing person ReID systems trained with a large number of labeled images have achieved very high accuracy, but they usually experience a dramatic performance drop when applied to images collected in a different environment due to domain shift and bias. Similar to many other tasks, this has become a critical issue for person ReID, which must handle images collected under different lighting, camera parameters, viewpoints, etc. when deployed in various environments.

Figure 1: Discrepancy exists at both pixel level (in image colors, image styles, etc.) and spatial level (in viewpoint, contextual structures, etc.) across domains. Our proposed Spatial-Aware GAN can adapt images in both spaces simultaneously (DukeMTMC and PRID are two public person ReID datasets that were collected under different lighting and viewpoints).

One typical strategy to tackle domain shift and bias is Unsupervised Domain Adaptation (UDA), which aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain. With the advances of Deep Neural Networks (DNNs) and Generative Adversarial Networks (GANs) [9], UDA has been applied to various image-to-image translation problems [53] as well as person ReID [35, 5]. On the other hand, UDA-based person ReID still faces various constraints and its performance lags far behind fully supervised models. One major reason is that existing systems focus largely on pixel-level adaptation but ignore spatial discrepancies across domains. This can be seen clearly in Fig. 1, where images in the source and target domains differ not only in colors and styles but also in viewpoints and contextual structures. Ignoring geometric discrepancies leaves the gap at spatial level unfilled, which further impairs the usefulness of the adapted images, especially when the geometric discrepancy is large. In addition, such geometric discrepancy acts as interference that also affects the adaptation of image colors and styles at pixel level.

In this work, we propose an innovative Spatial-Aware Generative Adversarial Network (SA-GAN) to address the challenge of simultaneous image adaptation at both spatial and pixel levels. The proposed SA-GAN embeds spatial transformer modules into a cycle structure and achieves collaborative training at both spatial and pixel levels concurrently. The spatial transformer module incorporates a novel multi-modal mapping strategy for generating multiple images of different viewpoints, as well as a spatial discriminator for improving its stability and convergence during training. In addition, a novel disentangled cycle-consistency loss is designed for concurrent and smooth training at spatial and pixel levels in a collaborative manner.

The contributions of this work are threefold. First, it designs an innovative UDA-based person ReID network that is capable of adapting images at both spatial and pixel levels simultaneously. Second, it designs a multi-modal mapping mechanism with a spatial discriminator that performs adversarial training at spatial level and is capable of mapping each source-domain image to multiple images with target-domain geometric characteristics. Third, it designs a novel disentangled cycle loss that balances the cycle-consistency for optimal training at spatial and pixel levels concurrently.

2 Related Works

2.1 Person Re-identification

Quite a number of person ReID systems have been reported in recent years which design different networks and architectures for optimal person ReID [39, 36]. [39] proposes a “Siamese” network for joint learning of color features, texture features and a metric. [36] proposes an end-to-end network for simultaneous learning of high-level features and a corresponding similarity metric. Some approaches treat person ReID as a classification problem given the availability of person labels [46, 38, 47]. [46] proposes the ID-discriminative embedding (IDE) to train the re-ID model as a classification task. [37] presents a Feature Fusion Net (FFN) for pedestrian image representation by utilizing color histogram features and texture features. [30] adopts Singular Vector Decomposition (SVD) and proposes to optimize the deep representation learning with a restraint and relaxation iteration (RRI) training scheme. [22] introduces a data augmentation method for re-identification based on changing the image background. [49] introduces Random Erasing as a new data augmentation method for training convolutional neural networks. [52] proposes a Pseudo Positive Regularization (PPR) method to enrich the diversity of the training data. Though the aforementioned networks achieve very high person ReID accuracy, they require a large amount of labeled images and thus scale poorly to new environments and domains.

In recent years, unsupervised domain adaptation (UDA) has been studied to mitigate the constraints of image labeling and annotation. [47] exploits GANs to generate unlabeled samples and achieves improved person ReID performance. [51] introduces camera style (CamStyle) transfer as a data augmentation approach to smooth the camera style disparity. [35] proposes a Person Transfer GAN (PTGAN) to bridge the domain gap. [5] proposes a similarity-preserving GAN (SPGAN) that preserves the self-similarity in translation and the dissimilarity across the translated domains. [16] performs unsupervised domain adaptation by leveraging information across datasets and deriving domain-invariant features for ReID purposes. The aforementioned UDA-based systems work largely at pixel level but ignore spatial discrepancies across domains. The proposed SA-GAN instead adapts images in both spaces simultaneously and achieves superior person ReID performance, as described in the ensuing sections.

2.2 Domain Adaptation

Domain adaptation was introduced to minimize the discrepancy across domains so as to obtain domain-invariant features [26, 32]. One typical approach is to minimize the distance in the feature space between the source and target domains. [28] performs feature adaptation by minimizing the correlation distance, and [29] extends it to deep architectures. [20, 21] instead strive to minimize the Maximum Mean Discrepancy (MMD) and Joint MMD distances across domains. [4] explicitly models domain-specific features to encourage networks to learn domain-invariant features. [7, 33, 34] further improve the feature adaptation by introducing adversarial objectives.

Another typical approach is to minimize pixel-level distances by directly converting the style of the source domain to the style of the target domain. This is mostly accomplished by using GANs, which have achieved great success in image generation [23, 1], image composition [17, 43, 41] and image-to-image translation [53, 13, 42]. For domain adaptation, [31] presents a domain transfer network that transforms a source image to a target image by enforcing consistency in the embedding space. [27] instead uses an L1 reconstruction loss to force the generated target images to be similar to their original source images. [3] uses a content similarity loss to ensure the similarity between the generated target image and the original source image. [6] instead learns the transformations in the pixel and latent spaces simultaneously. More recently, CycleGAN [53] and similar methods [40, 14, 11] produce compelling image translation results at high resolution by using a cycle-consistency loss.

Figure 2: The structure of the proposed Spatial-Aware GAN (SA-GAN): The parts within the dashed lines are the two spatial transformer modules $T_X$ and $T_Y$, each of which consists of a parameter localization network and a spatial transformation. $G_X$, $G_Y$ and $D_X$, $D_Y$, $D_S$ denote the generators and discriminators, respectively. $X$ and $Y$ denote the two image domains (the binary masks are determined by a U-Net), where ($X_S$, $Y_S$) and ($X'$, $Y'$) denote the two-domain images after the proposed spatial-level and further pixel-level adaptation, respectively.

3 Proposed Method

We design an innovative spatial-aware generative adversarial network (SA-GAN) for multi-modal domain adaptation at spatial and pixel levels simultaneously. Fig. 2 shows the detailed network structure and training strategy, which are described in the following subsections.

3.1 Network Structure

The proposed SA-GAN consists of two spatial transformer modules (STMs) $T_X$ and $T_Y$, two generators $G_X$ and $G_Y$, and three discriminators $D_X$, $D_Y$ and $D_S$, as illustrated in Fig. 2. It is designed in a cyclic structure, aiming to achieve the adaptation from the source domain $X$ to the target domain $Y$ at both pixel and spatial levels simultaneously.

As illustrated in Fig. 2, the spatial-level adaptation is largely achieved by the STMs ($T_X$ and $T_Y$) and the spatial discriminator $D_S$. The STMs apply spatial transformations to the input images to adapt their geometry to that of the other domain, producing the spatially adapted images $X_S$ and $Y_S$. These images are then adapted at pixel level to have an appearance similar to the other domain, marked as $X'$ and $Y'$. The pixel-level adaptation is achieved by $G_X$, $G_Y$, $D_X$ and $D_Y$, where $D_Y$ discriminates the adapted images $X'$ from real images in domain $Y$, while $D_X$ discriminates the adapted images $Y'$ from real images in domain $X$. The two discriminators work together to improve the learning of the generators $G_X$ and $G_Y$ at pixel level.

We also introduce a Siamese loss through a Siamese Network (SiaNet) to preserve the semantics of the adapted images. The SiaNet takes a pair of images as input; the objective is to minimize the distance between positive pairs (an image and its adapted counterpart) and to enlarge the distance between negative pairs (adapted images paired with images of the other domain).

A binary human mask is concatenated with the original image as the input of the model; it is also used to weight the identity loss (Eq. (5) in Section 3.3). We pre-train a U-Net [25] offline on the Look into Person (LIP) dataset [8] to produce the binary human masks, and apply it at inference time to predict the foreground of person ReID images.
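As a minimal illustration (not the authors' exact pipeline), the input construction could look as follows, where `mask_net` stands for the pre-trained U-Net and is an assumed callable:

```python
import torch

def build_masked_input(image, mask_net, threshold=0.5):
    """Concatenate a binary human mask with the RGB image.

    image:    (B, 3, H, W) tensor in [0, 1]
    mask_net: any module mapping (B, 3, H, W) -> (B, 1, H, W) foreground scores,
              e.g. a U-Net pre-trained on the LIP dataset (assumed available).
    Returns the (B, 4, H, W) model input together with the binary mask, which is
    reused later to weight the identity loss (Eq. (5)).
    """
    with torch.no_grad():
        mask = (torch.sigmoid(mask_net(image)) > threshold).float()
    return torch.cat([image, mask], dim=1), mask
```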

It should be noted that image adaptation does not perform well by directly concatenating spatial-level adaptation GANs (e.g., ST-GAN [17]) and pixel-level adaptation GANs (e.g., CycleGAN [53]). The major reason is that images in the source and target domains have discrepancies at both spatial and pixel levels; by directly concatenating these two types of GANs, the two kinds of discrepancies interfere with each other. Our SA-GAN instead coordinates the learning at spatial and pixel levels concurrently, where better adaptation at spatial level drives the model to learn better adaptation at pixel level and vice versa. This cooperative and concurrent learning between spatial and pixel levels also drives the model to converge stably and efficiently.

Figure 3: Cycle-consistency loss decomposition. The part in purple denotes Path 1, in which $T_Y$ recovers the image at spatial level. The part in orange denotes Path 2, in which the inverse of the predicted transformation $H_X$ is used to recover the image. The part in gray denotes the Siamese loss.

3.2 Spatial Level Adaptation

The proposed SA-GAN incorporates a novel multi-modal strategy for spatial-level adaptation. Specifically, each STM ($T_X$ or $T_Y$) consists of a localization network and a transformer. In $T_X$ (or $T_Y$) for the mapping $X \rightarrow Y$ (or $Y \rightarrow X$) at spatial level, a spatial code sampled from a normal distribution is concatenated with the input image as the input of the localization network, which predicts the spatial transformation parameters; the transformation can be affine, homography or thin plate spline [2]. By sampling multiple spatial codes, the localization network predicts multiple different spatial transformations, which produces multiple adapted images with geometric characteristics similar to those of the target domain.

The adversarial losses from $D_X$ and $D_Y$ drive the learning process at spatial and pixel levels, but they are sometimes insufficient for optimal learning and may even lead to convergence problems due to the high flexibility of the spatial transformer modules and the entangled learning in both spaces. We therefore introduce a spatial discriminator $D_S$ into SA-GAN for optimal learning at spatial level. Instead of distinguishing images, $D_S$ discriminates the predicted spatial transformation $H_X$ and the inverse spatial transformation $H_Y^{-1}$. As $T_X$ and $T_Y$ map in reverse directions, the inverse of the transformation predicted by $T_Y$ should be similar to the transformation predicted by $T_X$. By distinguishing the two transformation matrices, $D_S$ imposes an extra constraint on the learning of the spatial transformer modules $T_X$ and $T_Y$, which greatly improves the effectiveness and stability of the learning process at spatial level.
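The following PyTorch sketch illustrates one plausible way to realize such an STM and its spatial discriminator; the layer sizes, the additive offsets around the identity homography, and the grid-sampling warp are assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HomographySTM(nn.Module):
    """Sketch of a spatial transformer module driven by a random spatial code.

    The localization network sees the input (image + mask channels) concatenated
    with the spatial code broadcast to feature maps, and regresses 8 offsets
    around the identity homography. Different codes give different warps.
    """

    def __init__(self, in_ch=4, code_dim=8):
        super().__init__()
        self.code_dim = code_dim
        self.loc = nn.Sequential(
            nn.Conv2d(in_ch + code_dim, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 8),
        )
        nn.init.zeros_(self.loc[-1].weight)   # start from the identity warp
        nn.init.zeros_(self.loc[-1].bias)

    def forward(self, x, code=None):
        b, _, h, w = x.shape
        if code is None:                      # multi-modal: sample a spatial code
            code = torch.randn(b, self.code_dim, device=x.device)
        code_map = code.view(b, -1, 1, 1).expand(-1, -1, h, w)
        offsets = self.loc(torch.cat([x, code_map], dim=1))
        # 3x3 sampling homography = identity + predicted offsets (last entry fixed)
        delta = torch.cat([offsets, offsets.new_zeros(b, 1)], dim=1).view(b, 3, 3)
        H = torch.eye(3, device=x.device).expand(b, 3, 3) + delta
        return self.warp(x, H), H

    @staticmethod
    def warp(x, H):
        """Warp a batch of images with a batch of 3x3 homographies via grid sampling."""
        b, _, h, w = x.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device), indexing="ij")
        grid = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(1, -1, 3)
        pts = grid.expand(b, -1, -1) @ H.transpose(1, 2)     # apply homography
        pts = pts[..., :2] / pts[..., 2:].clamp(min=1e-6)    # perspective divide
        return F.grid_sample(x, pts.reshape(b, h, w, 2), align_corners=True)


class SpatialDiscriminator(nn.Module):
    """Discriminates flattened 3x3 transformations (e.g. H_X vs. the inverse of H_Y)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, H):
        return self.net(H.reshape(H.size(0), 9))
```

Sampling a new `code` in each forward pass is what makes the mapping multi-modal: different codes yield different homographies and hence differently warped views of the same source image.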

3.3 Training Loss

The proposed SA-GAN is designed in a cycle structure, where ($T_X$, $G_X$) and ($T_Y$, $G_Y$) deal with the mappings $X \rightarrow Y$ and $Y \rightarrow X$, respectively. We adopt the Wasserstein GAN [1] objective to learn the mapping $X \rightarrow Y$, and the loss functions are formulated as follows:

$$\mathcal{L}_{adv} = \mathbb{E}_{y \sim Y}\big[D_Y(y)\big] - \mathbb{E}_{x \sim X}\big[D_Y\big(G_X(T_X(x))\big)\big] + \mathbb{E}_{y \sim Y}\big[D_S(H_Y^{-1})\big] - \mathbb{E}_{x \sim X}\big[D_S(H_X)\big] \qquad (1)$$

where $T_X(x)$ denotes the spatially transformed source image, and $H_X$ and $H_Y$ denote the transformations predicted by the STMs $T_X$ and $T_Y$, respectively; the discriminators $D_Y$ and $D_S$ maximize this objective while $G_X$ and $T_X$ minimize it.

It is a challenging task to predict the spatial transformations accurately in the cycle process, as a very small error in the predicted spatial transformation parameters can lead to a large overall cycle-consistency loss that overwhelms the pixel-level cycle-consistency loss. To solve this problem, we design a disentangled cycle-consistency loss for smooth and stable image adaptation at both spatial and pixel levels. Specifically, we disentangle the overall cycle-consistency loss into a spatial-level cycle-consistency loss (SCL) and a pixel-level cycle-consistency loss (PCL), and use them for image adaptation at spatial and pixel levels, respectively.

As shown in Fig. 3, the input image $x$ is first adapted at spatial level by the spatial transformation $H_X$ and then at pixel level by $G_X$. There are two paths for the calculation of the cycle-consistency loss. In the first path, $T_Y$ predicts a spatial transformation $H_Y$ (using the same spatial code as $T_X$) to recover the image at spatial level, as highlighted in purple. In the second path, the output of $G_X$ is transformed by the inverse spatial transformation $H_X^{-1}$ and then adapted by $G_Y$ to obtain an image (highlighted in orange) that is perfectly recovered at spatial level.

Methods DukeMTMC to Market-1501 Market-1501 to DukeMTMC
R-1 R-10 mAP R-1 R-10 mAP
Original 36.8 59.4 14.3 25.4 46.2 11.2
CycleGAN [53] 40.5 62.7 16.5 32.4 53.4 16.1
CycleGAN* [53] 42.9 64.2 16.7 33.7 53.9 16.8
TJ-AIDL [15] 58.2 81.1 26.5 44.3 65.0 23.0
PT-GAN [35] 38.6 66.1 - 40.2 - 21.8
SPGAN [5] 51.5 76.8 22.8 41.1 63.0 22.3
MMFA [18] 56.7 - - 45.3 66.3 24.7
Camstyle [51] 58.8 84.3 27.4 48.4 68.9 25.1
HHL [50] 62.2 84.0 31.4 46.9 66.7 27.2
SA-GAN 59.3 82.7 27.9 46.8 67.1 26.9
SA-GAN [M=10] 63.1 84.5 30.7 49.5 68.2 27.9
Table 1: Unsupervised person ReID performance of the proposed SA-GAN and state-of-the-art methods: The adaptation is from DukeMTMC to Market-1501 (left columns) and from Market-1501 to DukeMTMC (right columns). ‘Original’ refers to the baseline model that is trained on the source dataset and evaluated on the target dataset without adaptation.
Methods Market-1501 to PRID
R-1 R-10 mAP
Original 7.8 20.3 3.1
CycleGAN [53] 14.5 31.7 6.5
CycleGAN* [53] 14.9 32.2 6.7
TJ-AIDL [15] 26.8 - -
PT-GAN [35] 32.6 70.3 20.7
MMFA [18] 35.1 - -
SA-GAN 37.2 76.1 22.9
SA-GAN [M=10] 41.7 80.7 25.3
Table 2: Unsupervised person ReID performance of the proposed SA-GAN and state-of-the-art methods: The source dataset is Market-1501 and the target dataset is PRID. ‘Original’ refers to the baseline model that is trained on the source dataset and evaluated on the target dataset without adaptation.

As the spatial transformation is fully recovered in Path 2, there is only a pixel-level difference between the recovered image and the input $x$. Let $W_{H}(\cdot)$ denote the warping operation with transformation $H$, so that $T_X(x) = W_{H_X}(x)$. We therefore formulate the PCL of the mapping $X \rightarrow Y \rightarrow X$ as follows:

$$\mathcal{L}_{PCL} = \mathbb{E}_{x \sim X}\Big[\big\| G_Y\big(W_{H_X^{-1}}(G_X(W_{H_X}(x)))\big) - x \big\|_1\Big] \qquad (2)$$

For the spatial-level cycle-consistency, the transformation $H_Y$ as predicted by $T_Y$ should be similar to the inverse of $H_X$. We therefore formulate the SCL as follows:

$$\mathcal{L}_{SCL} = \mathbb{E}\Big[\big\| H_Y - H_X^{-1} \big\|_1\Big] \qquad (3)$$

The overall cycle-consistency loss can thus be formulated as follows:

$$\mathcal{L}_{cyc} = \mathcal{L}_{PCL} + \lambda_{s}\,\mathcal{L}_{SCL} \qquad (4)$$

where $\lambda_{s}$ is the weight of $\mathcal{L}_{SCL}$.
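Under the notation above, a minimal sketch of this disentangled cycle-consistency loss for the X→Y→X direction could look as follows; the helper names (`warp`, `G_xy`, `G_yx`) and the use of L1 distances for both terms are assumptions:

```python
import torch
import torch.nn.functional as F

def disentangled_cycle_loss(x, H_x, H_y, G_xy, G_yx, warp, lam=1.0):
    """Disentangled cycle-consistency loss sketch (X -> Y -> X direction).

    x    : (B, C, H, W) source images
    H_x  : (B, 3, 3) transformation predicted by the X-side STM
    H_y  : (B, 3, 3) transformation predicted by the Y-side STM (same spatial code)
    G_xy : generator adapting spatially warped X images to Y style
    G_yx : generator adapting them back to X style
    warp : warping function, e.g. HomographySTM.warp from the earlier sketch
    lam  : weight of the spatial-level term (a hyper-parameter)
    """
    y_fake = G_xy(warp(x, H_x))                    # spatial + pixel adaptation

    # Pixel-level cycle consistency (PCL): undo the warp exactly with the
    # inverse transformation, so only pixel-level differences remain.
    x_rec = G_yx(warp(y_fake, torch.inverse(H_x)))
    pcl = F.l1_loss(x_rec, x)

    # Spatial-level cycle consistency (SCL): the reverse STM should predict a
    # transformation close to the inverse of the forward one.
    scl = F.l1_loss(H_y, torch.inverse(H_x))

    return pcl + lam * scl
```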

We also introduce a mask identity loss to ensure that the translated image preserves the features of the human region:

$$\mathcal{L}_{ide} = \mathbb{E}_{x \sim X}\Big[\big\| M_x \odot \big(G_X(T_X(x)) - T_X(x)\big) \big\|_1\Big] \qquad (5)$$

where $M_x$ denotes the binary human mask and $\odot$ denotes element-wise multiplication.
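A minimal sketch of this masked identity term, with the L1 form and function names as assumptions:

```python
import torch

def mask_identity_loss(adapted, original, mask):
    """Penalize changes inside the human foreground region only.

    adapted, original: (B, 3, H, W) tensors (generator output vs. its input)
    mask:              (B, 1, H, W) binary human mask (1 = person)
    """
    return torch.mean(torch.abs(mask * (adapted - original)))
```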

The Siamese loss is calculated through a Siamese Network (SiaNet) as shown in Fig. 3, and the loss function can be defined as below:

$$\mathcal{L}_{sia}(i, x_1, x_2) = i\, d(x_1, x_2)^2 + (1 - i)\big[\max\big(0,\ m - d(x_1, x_2)\big)\big]^2 \qquad (6)$$

where $i$ indicates a negative ($i=0$) or positive ($i=1$) pair, $d(\cdot, \cdot)$ denotes the Euclidean distance between the two input embedding vectors, and $m$ is the margin that defines the separability in the embedding space.
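This is the standard contrastive formulation; a minimal sketch, with the margin value and function names as assumptions:

```python
import torch
import torch.nn.functional as F

def siamese_contrastive_loss(emb1, emb2, is_positive, margin=2.0):
    """Contrastive loss between two SiaNet embeddings, following Eq. (6).

    emb1, emb2 : (B, D) embedding vectors of an image pair
    is_positive: (B,) float tensor, 1 for positive pairs and 0 for negative pairs
    margin     : separability margin in the embedding space (assumed value)
    """
    d = F.pairwise_distance(emb1, emb2)            # Euclidean distance
    pos_term = is_positive * d.pow(2)
    neg_term = (1 - is_positive) * F.relu(margin - d).pow(2)
    return (pos_term + neg_term).mean()
```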

The overall loss for $T_X$ and $G_X$ can thus be formulated as follows:

$$\mathcal{L}(T_X, G_X) = \mathcal{L}_{adv} + \lambda_{1}\,\mathcal{L}_{cyc} + \lambda_{2}\,\mathcal{L}_{ide} + \lambda_{3}\,\mathcal{L}_{sia} \qquad (7)$$

where $\lambda_1$, $\lambda_2$ and $\lambda_3$ denote the weights of the cycle-consistency loss, identity loss and Siamese loss, respectively. A similar loss can be formulated for $T_Y$ and $G_Y$.

4 Experiments

4.1 Datasets

We evaluate the proposed network on three popular public person ReID datasets, described below, to demonstrate its effectiveness.

Market-1501 [45] contains 32,668 labeled images of 1,501 identities collected from 6 camera views. The dataset is split into two fixed parts: 12,936 images of 751 identities for training and 19,732 images of 750 identities for testing. The training set has 17.2 images per identity on average. In testing, 3,368 hand-drawn bounding boxes from 750 identities are used as queries to retrieve the matching persons in the database. Single-query evaluation is used.

DukeMTMC [48] contains 1,812 identities and 36,411 bounding boxes. 16,522 bounding boxes of 702 identities are used for training, and the remaining identities are used for testing. DukeMTMC is also denoted as Duke for short.

PRID [10] contains 934 identities from two camera views A and B. There are 385 identities in View A and 749 identities in View B, but only 200 identities appear in both views.

4.2 Implementation

We use the ID-discriminative Embedding (IDE) [46] as the baseline model, which is trained with the strategy described in [46]. ResNet-50 [24] is used as the backbone, and we only change the output dimension of the last fully-connected layer to the number of training identities in the corresponding dataset. The SGD solver is used to train the re-ID model with a batch size of 128. The learning rate starts at 0.01 and is divided by 10 after 25 epochs, for a total of 50 epochs. In testing, we extract the 2,048-dim output of the Pool-5 layer as the image descriptor and use the Euclidean distance to compute the similarity between images.
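A minimal sketch of this baseline configuration, assuming a recent torchvision ResNet-50 as a stand-in for the IDE backbone; the momentum value is an assumption, and the identity count (751, from the Market-1501 training split) is illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

def build_ide_baseline(num_identities):
    """IDE-style baseline: ResNet-50 whose last FC is resized to the ID count."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_identities)
    return model

model = build_ide_baseline(num_identities=751)       # e.g. Market-1501 training IDs

# SGD with batch size 128; the learning rate starts at 0.01 and is divided by 10
# after 25 of the 50 epochs (momentum is an assumed value, not given in the paper).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25, gamma=0.1)

# At test time, the 2,048-dim pooled feature (everything before the FC layer)
# serves as the image descriptor, compared with the Euclidean distance.
feature_extractor = nn.Sequential(*list(model.children())[:-1])
```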

As the spatial-level discrepancy is mainly introduced by different camera views, we adopt the homography transformation in the spatial transformer module, which can map between images of different viewpoints. The images in DukeMTMC and Market-1501 are captured with multiple cameras including low and high perspectives. We separate their training images into two sets according to the camera perspective and perform the adaptation of the two sets separately.
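A minimal sketch of this two-set split, assuming filenames expose the camera index as in the standard Market-1501 naming (e.g. `0002_c1s1_000451_03.jpg`) and using a hypothetical camera-to-perspective assignment:

```python
import re
from collections import defaultdict

# Hypothetical camera-to-perspective assignment; the actual grouping depends on
# how each camera is mounted in the dataset.
CAMERA_PERSPECTIVE = {1: "low", 2: "low", 3: "high", 4: "high", 5: "low", 6: "high"}

def split_by_perspective(image_paths):
    """Group training images into low/high perspective sets by camera index."""
    groups = defaultdict(list)
    for path in image_paths:
        match = re.search(r"_c(\d+)", path)        # e.g. '_c3' -> camera 3
        if match:
            cam = int(match.group(1))
            groups[CAMERA_PERSPECTIVE.get(cam, "low")].append(path)
    return groups["low"], groups["high"]
```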

Figure 4: Image adaptation from DukeMTMC (with a low viewpoint) to PRID CamB (with a high viewpoint) by SA-GAN and state-of-the-art methods: SA-GAN1, SA-GAN2, and SA-GAN3 refer to three adaptation results (with three different spatial codes) for each of the source images in the first column. The Source column shows the images to be adapted by the listed methods and the Target column shows images randomly sampled from the target dataset (not a paired one-to-one adaptation).

4.3 Experimental Results

Methods PRID-CamA PRID-CamB
R-1 R-5 R-10 mAP R-1 R-5 R-10 mAP
Original 3.1 7.8 11.9 4.7 2.6 5.5 8.8 3.9
SA-GAN (WP) 10.3 17.1 20.0 8.5 9.0 20.4 24.5 9.5
SA-GAN (WD) 20.1 35.6 41.7 15.7 20.6 41.4 46.3 16.2
SA-GAN (WS) 32.8 64.5 71.1 20.9 31.4 63.2 69.3 19.7
SA-GAN [M=1] 34.4 66.7 73.3 21.1 32.7 65.4 71.7 20.3
SA-GAN [M=10] 35.3 67.8 74.4 21.5 33.7 66.3 72.6 20.8
Table 3: Ablation study of the proposed SA-GAN for adaptation from DukeMTMC to PRID: PRID-CamA means using the CamA images as queries and CamB images as gallery, and vice versa. WS, WP, and WD denote ‘without spatial-level adaptation’, ‘without pixel-level adaptation’, and ‘without disentangled cycle loss’, respectively. IDE [46] is used as the baseline model.

DukeMTMC to Market-1501: To demonstrate that the proposed unsupervised person ReID method works well under a general scenario, we perform an experiment for adaptation from DukeMTMC to Market-1501. Table 1 shows the experimental results, where ’Original’ refers to a baseline model that is trained on DukeMTMC and directly tested on Market-1501. As Table 1 shows, direct transfer reaches a R-1 accuracy of 36.8% and an mAP of 14.3%. Applying CycleGAN for pixel-level adaptation from DukeMTMC to Market-1501 brings a 3.7% improvement in R-1 accuracy. We also benchmark our network against several state-of-the-art unsupervised methods including TJ-AIDL [15], PTGAN [35], SPGAN [5] and MMFA [18]. As Table 1 shows, these unsupervised methods improve person ReID performance greatly, achieving a best R-1 accuracy of 58.8% and a best mAP of 31.4%. The proposed SA-GAN (without multi-modal transformation) achieves a R-1 accuracy of 58.3%, which is 0.5% lower than the state-of-the-art, and a further improvement of 0.9% is observed when the proposed multi-modal transformation is included (M=10 spatial codes are used). Note that SA-GAN [M=10] still has a lower mAP than HHL [50], as HHL adopts a stronger baseline that reaches an mAP of 16.9% while our baseline only reaches 14.3%.

Market-1501 to DukeMTMC: Table 1 also shows the experimental results for adaptation from Market-1501 to DukeMTMC. As Table 1 shows, direct transfer reaches a R-1 accuracy of 25.4% and an mAP of 11.2%. A clear 7% improvement in R-1 accuracy is observed when applying CycleGAN for pixel-level adaptation from Market-1501 to DukeMTMC. Compared with the other methods in the table, the proposed SA-GAN [M=10] achieves the best R-1 and mAP scores, which are 1.1% and 0.7% higher than the state-of-the-art, respectively.

Market-1501 to PRID: We also evaluate the proposed network for adaptation from Market-1501 to PRID, as images in the two datasets exhibit clear discrepancies at both spatial and pixel levels. The evaluation protocol on PRID follows the same single-shot experiments as described in [44]. Table 2 shows the experimental results. The ‘Original’ in Table 2 refers to a baseline model that is trained on Market-1501 and directly tested on PRID. In addition, we also apply CycleGAN [53] for image-to-image translation from Market-1501 to PRID. As Table 2 shows, the CycleGAN-translated images outperform the baseline model ‘Original’ by 6.7% in R-1 accuracy and 3.4% in mAP, almost doubling the R-1 and mAP scores, due to the significant pixel-level discrepancy between Market-1501 and PRID images. On the other hand, CycleGAN still scores a fairly low R-1 accuracy because it tends to over-translate images, which leads to the loss of person ID information. By including an identity loss into CycleGAN to preserve more identity information, denoted by CycleGAN*, the R-1 and R-10 scores are improved by 0.4% and 0.5%, respectively.

We also benchmark our network against three state-of-the-art unsupervised person ReID networks: the Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL) network [15], PT-GAN [35] and the Multi-task Mid-level Feature Alignment (MMFA) network [18]. As Table 2 shows, all three state-of-the-art networks clearly outperform ‘Original’ and CycleGAN. TJ-AIDL [15] improves the R-1 score by 19% by learning attribute-semantic and identity-discriminative features simultaneously. PT-GAN introduces human masks to preserve the person ID information during adaptation, which helps improve R-1, R-10 and mAP by 24.8%, 50% and 17.6%, respectively, as compared with the baseline ‘Original’. MMFA [18] reaches a R-1 score of 35.1% by assuming that the source and target datasets share the same set of mid-level semantic attributes.

Our SA-GAN (without multi-modal transformation) outperforms MMFA by 2.1% in R-1 accuracy, largely due to its concurrent adaptation at both pixel and spatial levels. In addition, the R-1 accuracy is further improved when ten spatial codes are used in image adaptation, as denoted by SA-GAN [M=10] in Table 2.

Qualitative Results: Fig. 4 shows image adaptation results from DukeMTMC to PRID CamB by the proposed SA-GAN and several state-of-the-art methods, where SA-GAN1, SA-GAN2 and SA-GAN3 denote three adaptation results produced by SA-GAN with three random spatial codes. For the adaptation from DukeMTMC to PRID CamB, there is not only pixel-level discrepancy but also spatial-level discrepancy, because the source images are captured from a low viewpoint while the target images are captured from a high viewpoint. As Fig. 4 shows, the proposed SA-GAN achieves image adaptation at both spatial and pixel levels simultaneously. Specifically, it is capable of transforming a source image to multiple target images of different spatial layouts, as illustrated by SA-GAN1, SA-GAN2 and SA-GAN3. Additionally, the adapted images have the same style as the target-domain images without losing person identity, and the viewpoints are adapted properly as well. As a comparison, UNIT [19], MUNIT [12] and CycleGAN [53] can only perform pixel-level adaptation, and the person ID information is impaired greatly. PTGAN [35], SPGAN [5] and Camstyle [51] are capable of preserving certain person ID information, but their adaptation is still restricted to the pixel level and the adapted images remain quite different from the PRID images due to spatial-level discrepancies.

Figure 5: Experimental analysis of the parameter M of the multi-modal adaptation for the domain adaptation from Market-1501 to PRID: The horizontal axis denotes the value of M and the vertical axis denotes the R-1 score.

4.4 Ablation Study

We perform an ablation study to examine the contribution of different technical components within the proposed SA-GAN. The ablation study is conducted using DukeMTMC as the source set and PRID as the target set. Six models are trained as shown in Table 3, including: 1) ‘Original’, which is trained on DukeMTMC (baseline); 2) SA-GAN (WP), which is trained using the output of the STMs without pixel-level adaptation (to validate the spatial-level adaptation within the proposed SA-GAN); 3) SA-GAN (WD), which uses a normal cycle-consistency loss instead of the proposed disentangled cycle-consistency loss; 4) SA-GAN (WS), which is trained without the proposed spatial-level adaptation; 5) SA-GAN [M=1], which does not include the proposed multi-modal transformation; and 6) SA-GAN [M=10], which produces 10 adapted images by using 10 random spatial codes.

As Table 3 shows, ‘Original’ obtains 3.1% and 2.6% R-1 scores on PRID-CamA and PRID-CamB due to the large domain shift from DukeMTMC to PRID. SA-GAN (WP) and SA-GAN (WS) both clearly outperform ‘Original’, demonstrating the contributions of the proposed spatial-level and pixel-level image adaptation, respectively. In addition, SA-GAN [M=1] clearly outperforms both SA-GAN (WP) and SA-GAN (WS), which shows that the proposed pixel-level and spatial-level adaptation are complementary. Further, SA-GAN [M=10] clearly outperforms SA-GAN [M=1], demonstrating the effectiveness of the proposed multi-modal transformation. Note that SA-GAN (WD) performs much worse than SA-GAN [M=1], which clearly shows the importance of the proposed disentangled cycle loss in concurrent adaptation at pixel and spatial levels.

We provide a detailed analysis of the parameter M in Fig. 5. As shown in the figure, the R-1 score keeps rising as M increases at the early stage. When M is larger than 10, further increases of M bring only minor improvement.

5 Conclusions

This paper presents a spatial-aware generative adversarial network (SA-GAN) that achieves domain adaptation at both spatial and pixel levels simultaneously. A multi-modal transformation module is designed that can generate multiple images of different geometric views from one source image. A novel disentangled cycle-consistency loss is designed that clearly improves both adaptation stability and adaptation performance. The proposed SA-GAN clearly improves person ReID performance and can be easily extended to other tasks.

References

  • [1] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein generative adversarial networks. PMLR. Cited by: §2.2, §3.3.
  • [2] F. L. Bookstein (1989) Principal warps: thin-plate splines and the decomposition of deformations. TPAMI 11 (6). Cited by: §3.2.
  • [3] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan (2017) Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR, Cited by: §2.2.
  • [4] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan (2016) Domain separation networks. In NIPS, Cited by: §2.2.
  • [5] W. Deng, L. Zheng, Q. Ye, G. Kang, Y. Yang, and J. Jiao (2018) Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In CVPR, Cited by: §1, §2.1, Table 1, §4.3, §4.3.
  • [6] J. Donahue, P. Krähenbühl, and T. Darrell (2017) Adversarial feature learning. In ICLR, Cited by: §2.2.
  • [7] Y. Ganin and V. Lempitsky (2015) Unsupervised domain adaptation by backpropagation. In ICML, pp. 325–333. Cited by: §2.2.
  • [8] K. Gong, X. Liang, D. Zhang, X. Shen, and L. Lin (2017) Look into person: self-supervised structure-sensitive learning and a new benchmark for human parsing. In CVPR, Cited by: §3.1.
  • [9] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial networks. In NIPS, pp. 2672–2680. Cited by: §1.
  • [10] M. Hirzer, C. Beleznai, P. M. Roth, and H. Bischof (2011) Person re-identification by descriptive and discriminative classification. In SCIA, Cited by: §4.1.
  • [11] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, A. A. Efros, and T. Darrell (2018) CyCADA: cycle-consistent adversarial domain adaptation. In ICML, Cited by: §2.2.
  • [12] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In ECCV, Cited by: §4.3.
  • [13] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In CVPR, Cited by: §2.2.
  • [14] T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim (2017) Learning to discover cross-domain relations with generative adversarial networks. In ICML, Cited by: §2.2.
  • [15] W. Li (2018) Transferable joint attribute-identity deep learning for unsupervised person re-identification. In CVPR, Cited by: Table 1, Table 2, §4.3, §4.3.
  • [16] Y. Li, F. Yang, Y. Liu, Y. Yeh, X. Du, and Y. F. Wang (2018) Adaptation and re-identification network: an unsupervised deep transfer learning approach to person re-identification. In CVPR workshop, Cited by: §2.1.
  • [17] C. Lin, E. Yumer, O. Wang, E. Shechtman, and S. Lucey (2018) ST-gan: spatial transformer generative adversarial networks for image compositing. In CVPR, Cited by: §2.2, §3.1.
  • [18] S. Lin, H. Li, C. Li, and A. C. Kot (2018) Multi-task mid-level feature alignment network for unsupervised cross-dataset person re-identification. In BMVC, Cited by: Table 1, Table 2, §4.3, §4.3.
  • [19] M. Liu, T. Breuel, and J. Kautz (2017) Unsupervised image-to-image translation networks. In NIPS, Cited by: §4.3.
  • [20] M. Long, Y. Cao, J. Wang, and M. I. Jordan (2015) Learning transferable features with deep adaptation networks. In ICML, Cited by: §2.2.
  • [21] M. Long, H. Zhu, J. Wang, and M. I. Jordan (2017) Deep transfer learning with joint adaptation networks. In ICML, Cited by: §2.2.
  • [22] N. McLaughlin, J. M. D. Rincon, and P. Miller (2015) Data augmentation for reducing dataset bias in person reidentification. In AVSS, Cited by: §2.1.
  • [23] A. Radford, L. Metz, and S. Chintala (2016) Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, Cited by: §2.2.
  • [24] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §4.2.
  • [25] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In MICCAI, Cited by: §3.1.
  • [26] K. Saenko, B. Kulis, M. Fritz, and T. Darrell (2010) Adapting visual category models to new domains. In ECCV, pp. 325–333. Cited by: §2.2.
  • [27] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb (2017) Learning from simulated and unsupervised images through adversarial training. In CVPR, Cited by: §2.2.
  • [28] B. Sun, J. Feng, and K. Saenko (2016) Return of frustratingly easy domain adaptation. In AAAI, Cited by: §2.2.
  • [29] B. Sun and K. Saenko (2016) Deep coral: correlation alignment for deep domain adaptation. In ICCV workshop, Cited by: §2.2.
  • [30] Y. Sun, L. Zheng, W. Deng, and S. Wang (2017) Svdnet for pedestrian retrieval. In ICCV, Cited by: §2.1.
  • [31] Y. Taigman, A. Polyak, and L. Wolf (2017) Unsupervised cross-domain image generation. In ICLR, Cited by: §2.2.
  • [32] A. Torralba and A. A. Efros (2011) Unbiased look at dataset bias. In CVPR, Cited by: §2.2.
  • [33] E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko (2015) Simultaneous deep transfer across domains and tasks. In ICCV, Cited by: §2.2.
  • [34] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell (2017) Adversarial discriminative domain adaptation. In CVPR, Cited by: §2.2.
  • [35] L. Wei, S. Zhang, W. Gao, and Q. Tian (2018) Person transfer gan to bridge domain gap for person re-identification. In CVPR, Cited by: §1, §2.1, Table 1, Table 2, §4.3, §4.3, §4.3.
  • [36] L. Wu, C. Shen, and A. van den Hengel (2016) Personnet: person re-identification with deep convolutional neural networks. arXiv:1601.07255. Cited by: §2.1.
  • [37] S. Wu, Y.-C. Chen, X. Li, A.-C. Wu, J.-J. You, and W.-S. Zheng (2016) An enhanced deep feature representation for person re-identification. In WACV, Cited by: §2.1.
  • [38] Y. Wu, Y. Lin, X. Dong, Y. Yan, W. Ouyang, and Y. Yang (2018) Exploit the unknown gradually: one-shot video-based person re-identification by stepwise learning. In CVPR, Cited by: §2.1.
  • [39] D. Yi, Z. Lei, S. Liao, and S. Z. Li (2014) Deep metric learning for person re-identification. In ICPR, Cited by: §2.1.
  • [40] Z. Yi, H. Zhang, P. Tan, and M. Gong (2017) DualGAN: unsupervised dual learning for image-to-image translation. In ICCV, Cited by: §2.2.
  • [41] F. Zhan, J. Huang, and S. Lu (2019) Adaptive composition gan towards realistic image synthesis. arXiv preprint arXiv:1905.04693. Cited by: §2.2.
  • [42] F. Zhan, C. Xue, and S. Lu (2019) GA-dan: geometry-aware domain adaptation network for scene text detection and recognition. In ICCV, Cited by: §2.2.
  • [43] F. Zhan, H. Zhu, and S. Lu (2019) Spatial fusion gan for image synthesis. In CVPR, Cited by: §2.2.
  • [44] L. Zhang, T. Xiang, and S. Gong (2016) Learning a discriminative null space for person re-identification. In CVPR, Cited by: §4.3.
  • [45] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian (2015) Scalable person re-identification: a benchmark. In ICCV, Cited by: §4.1.
  • [46] L. Zheng, Y. Yang, and A. G. Hauptmann (2016) Person reidentification: past, present and future. arXiv:1610.02984. Cited by: §2.1, §4.2, Table 3.
  • [47] L. Zheng, H. Zhang, S. Sun, M. Chandraker, and Q. Tian (2017) Person re-identification in the wild. In CVPR, Cited by: §2.1, §2.1.
  • [48] Z. Zheng, L. Zheng, and Y. Yang (2017) Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In ICCV, Cited by: §4.1.
  • [49] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang (2017) Random erasing data augmentation. arXiv:1708.04896. Cited by: §2.1.
  • [50] Z. Zhong, L. Zheng, S. Li, and Y. Yang (2018) Generalizing a person retrieval model hetero- and homogeneously. In ECCV, Cited by: Table 1, §4.3.
  • [51] Z. Zhong, L. Zheng, Z. Zheng, S. Li, and Y. Yang (2018) Camera style adaptation for person re-identification. In CVPR, Cited by: §2.1, Table 1, §4.3.
  • [52] F. Zhu, X. Kong, H. Fu, and Q. Tian (2017) Pseudo-positive regularization for deep person re-identification. Multimedia Systems. Cited by: §2.1.
  • [53] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, Cited by: §1, §2.2, §3.1, Table 1, Table 2, §4.3, §4.3.