Semantically Selective Augmentation for Deep Compact Person Re-Identification

06/11/2018 · Víctor Ponce-López, et al.

We present a deep person re-identification approach that combines semantically selective, deep data augmentation with clustering-based network compression to generate high performance, light and fast inference networks. In particular, we propose to augment limited training data via sampling from a deep convolutional generative adversarial network (DCGAN), whose discriminator is constrained by a semantic classifier to explicitly control the domain specificity of the generation process. Thereby, we encode information in the classifier network which can be utilized to steer adversarial synthesis, and which fuels our CondenseNet ID-network training. We provide a quantitative and qualitative analysis of the approach and its variants on a number of datasets, obtaining results that outperform the state-of-the-art on the LIMA dataset for long-term monitoring in indoor living spaces.


1 Introduction

Person re-identification (Re-ID) across cameras with disjoint fields of view, given unobserved intervals and varying appearance (e.g. change in clothing), remains a challenging subdomain of computer vision. The task is particularly demanding whenever facial biometrics [Yu et al.(2016)Yu, Meng, Zuo, and Hauptmann] are not explicitly applicable, be that due to very low resolution [Haghighat and Abdel-Mottaleb(2017)] or non-frontal shots. Deep learning approaches have recently been customized to move the domain of person Re-ID forward [Barbosa et al.(2018)Barbosa, Cristani, Caputo, Rognhaugen, and Theoharis], with potential impact on a wide range of applications, for example CCTV surveillance [Filković et al.(2016)Filković, Kalafatić, and Hrkać] and e-health applications for living and working environments [Sadri(2011)]. Yet obtaining cross-referenced ground truth over the long term [McConville et al.(2018)McConville, Byrne, Craddock, Piechocki, Pope, and Santos-Rodriguez, Twomey et al.(2016)Twomey, Diethe, Kull, Song, Camplani, Hannuna, Fafoutis, Zhu, Woznowski, Flach, and Craddock], realising deployment of inexpensive inference platforms, and establishing visual identities from strongly limited data all remain fundamental challenges. In particular, the dependency of most deep learning paradigms on vast training data pools, and the high computational requirements of heavy inference networks, pose significant challenges in many person Re-ID settings.

In this paper, we introduce an approach for producing high performance, light and fast deep Re-ID inference networks for persons, built from limited training data and not explicitly dependent on face identification. To achieve this, we propose an interplay of three recent deep learning technologies as depicted in Figure 1: deep convolutional generative adversarial networks (DCGANs) [Radford et al.(2015)Radford, Metz, and Chintala] as class-specific sample generators (in blue); face detectors [Simon et al.(2017)Simon, Joo, Matthews, and Sheikh] used as semantic guarantors to steer synthesis (in green); and a clustering-based CondenseNet [Huang et al.(2017a)Huang, Liu, van der Maaten, and Weinberger] as a compressor (in red). We show that the proposed face-selective adversarial synthesis generates new, semantically selective and meaningful artificial images that can improve subsequent training of compressive ID networks. Whilst the training cost of our approach can be significant due to the adversarial networks’ slow and complicated convergence process [Goodfellow et al.(2014)Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio], the parameter count of our final CondenseNets is approximately one order of magnitude smaller than those of other state-of-the-art systems, such as ResNet50 [Zheng et al.(2017)Zheng, Zheng, and Yang]. We provide a quantitative and qualitative analysis of different adversarial synthesis paradigms for our approach, obtaining results that outperform the highest-achieving published work on the LIMA dataset [Layne et al.(2017)Layne, Hannuna, Camplani, Hall, Hospedales, Xiang, Mirmehdi, and Damen] for long-term monitoring in indoor living environments. First, we provide a brief overview of works related to the proposed approach.

Figure 1: Framework Overview. Visual deep learning pipeline at the core of our approach: inputs (dark gray) are semantically filtered via a face detector (green) to enhance adversarial augmentation via DCGANs (blue). Original and synthetic data are combined to train a compressed CondenseNet (red), producing a light and fast ID-inference network.

2 Related Work

Technologies applicable to person Re-ID form a large and long-standing research area with considerable history and specific associated challenges [Zheng et al.(2016)Zheng, Yang, and Hauptmann]. Whilst low-resolution face recognition [Haghighat and Abdel-Mottaleb(2017)], gait and behaviour analysis [Takemura et al.(2018)Takemura, Makihara, Muramatsu, Echigo, and Yagi], as well as full-person, appearance-based recognition [Zheng et al.(2016)Zheng, Yang, and Hauptmann] all offer routes to performing ‘in-effect’ person ID or Re-ID, for this brief review we focus on particular technical aspects, i.e. looking specifically at recent augmentation and deep learning approaches for appearance-based methods.

Augmentation - Despite improvements in methods for high-quality, high-volume ground truth acquisition [McConville et al.(2018)McConville, Byrne, Craddock, Piechocki, Pope, and Santos-Rodriguez, Pham et al.(2017)Pham, Le, and Dao], input data augmentation [Perez and Wang(2017)] remains a key strategy to support generalisation in deep network training. It exposes networks to otherwise inaccessible pattern configurations to back-propagate against which, if realistic and relevant, improve the generalisation potential of the training procedure. The use of synthetic data in the training set presents several advantages, such as reducing the effort of labeling images and generating customizable, domain-specific data. It has been noted that combining synthetic and measured input often shows improved performance over using synthetic images only [Shrivastava et al.(2017)Shrivastava, Pfister, Tuzel, Susskind, Wang, and Webb]. Recent examples of non-augmented, innovative approaches in the person Re-ID domain include feature selection strategies [Hasan and Babaguchi(2016), Khan and Brèmond(2017)], anthropometric profiling [Bondi et al.(2017)Bondi, Pala, Seidenari, Berretti, and Bimbo] using depth cameras, and multi-modal tracking [Pham et al.(2017)Pham, Le, and Dao], amongst many others. Augmentation has long been used in Re-ID scenarios too; for instance, in [Barbosa et al.(2018)Barbosa, Cristani, Caputo, Rognhaugen, and Theoharis] the authors consider the structural aspects of the human body by exploiting mere RGB data to fully generate semi-realistic synthetic data as inputs to train neural networks, obtaining promising results for person Re-ID. Image augmentation techniques have also demonstrated their effectiveness in improving the discriminative ability of learned CNN embeddings for person Re-ID, especially on large-scale datasets [Zheng et al.(2017)Zheng, Zheng, and Yang, Barbosa et al.(2018)Barbosa, Cristani, Caputo, Rognhaugen, and Theoharis, Chen et al.(2017)Chen, Zhu, and Gong]. Recently, the learning and construction of the modelling space itself, used for augmentation, has been realised in deep adversarial learning architectures.

Adversarial Synthesis - Generative Adversarial Networks (GANs) [Goodfellow et al.(2014)Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio] in particular have been widely and successfully applied to deliver augmentation, mainly building on their ability to construct a latent space that underpins the training data and to sample from it to produce new training information. DCGANs [Radford et al.(2015)Radford, Metz, and Chintala] pair the GAN concept with compact convolutional operations to synthesise visual content more efficiently. The DCGAN’s ability to organise the relationship between a latent space and the actual image space associated with the GAN input has been shown in a wide variety of applications, including face and pose analysis [Radford et al.(2015)Radford, Metz, and Chintala, Ma et al.(2017)Ma, Jia, Sun, Schiele, Tuytelaars, and Van Gool]. In these and other domains, latent spaces have been constructed that can convincingly model and parameterise object attributes such as scale, rotation, and position from unsupervised models, and hence dramatically reduce the amount of data needed for conditional generative modelling of complex image distributions.

Compression and Framework - Given ever-growing computational requirements for very-deep inference networks, recent research into network compression and optimisation has produced a number of approaches capable of compactly capturing network functionality. Some examples include ShuffleNet [Zhang et al.(2017)Zhang, Zhou, Lin, and Sun], MobileNet [Howard et al.(2017)Howard, Zhu, Chen, Kalenichenko, Wang, Weyand, Andreetto, and Adam], and CondenseNet [Huang et al.(2017a)Huang, Liu, van der Maaten, and Weinberger], which have proven to be effective even when operating on small devices where computational resources are limited.

Here, we combine semantic data selection for data steering, adversarial synthesis for training space expansion, and CondenseNet compression to sparsify the built Re-ID classifier representation. Our solution operates on single images during inference and performs the Re-ID step in a one-shot paradigm (whilst results are competitive in this setting, discovering and matching segments during inference [Layne et al.(2017)Layne, Hannuna, Camplani, Hall, Hospedales, Xiang, Mirmehdi, and Damen, Liu et al.(2018)Liu, Ma, Wang, and Wang, Zhou et al.(2018)Zhou, Wang, Meng, Xin, Li, Gong, and Zheng, Wu et al.(2018)Wu, Wang, Li, and Gao, Ponce-López et al.(2015)Ponce-López, Escalante, Escalera, and Baró] is not used and could potentially further improve performance). The following section details the components and functionality of the proposed pipeline.

3 Methodology and Framework Overview

Figure 1 illustrates our methodology pipeline, which follows a generative-discriminative paradigm: (a) training sets of image patches are produced by a person detector, where each image patch set is associated either with a known person identity label or with an ‘unknown’ identity label. (b) An image augmentation component then expands on this dataset. This component consists of (c) a facial filter network based on multi-view bootstrapping and OpenPose [Simon et al.(2017)Simon, Joo, Matthews, and Sheikh]; and (d) DCGAN [Radford et al.(2015)Radford, Metz, and Chintala] processes, whose discriminator networks are constrained by the semantic selector to control domain specificity. These generator-discriminator pairs are employed to train generator networks that synthesise unseen samples associated with identity labels, and the trained generators are then used to produce large sets of samples. We focus on two types of scenario: (1) a setup where we synthesise content for each identity class individually, and (2) one where only a single ‘unlabeled person’ generator is produced using all classes as input, with the aim of generating generic identity content rather than individual-specific imagery. Sampled output from the generators is (e) unified with the original frame sets and labels, forming the input data for (f) training a Re-ID CondenseNet that learns to map sample image patches to ID score vectors over all identity classes. This yields a sparse inference network that is implicitly compressed in order to support lightweight inference and deployment via a single network. Each component is now considered in detail.
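
To make the data flow concrete, the following Python sketch mirrors the pipeline stages (a)-(f) at a high level. All helper names (face_filter, train_generator, build_training_set) are hypothetical stand-ins rather than the authors' released code; the real components are the OpenPose-based filter, the DCGANs, and the CondenseNet described in the following subsections.

```python
# Minimal sketch of the Figure 1 pipeline (hypothetical helper names, not the
# authors' released code): person patches are face-filtered, a DCGAN generator
# is trained on the filtered patches, its samples are merged with the original
# data, and the union is used to train the compact Re-ID classifier.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sample:
    image: object   # an HxWx3 RGB patch produced by the person detector
    label: int      # a known identity label, or -1 for the 'unknown' class

def build_training_set(originals: List[Sample],
                       face_filter: Callable[[object], bool],
                       train_generator: Callable[[List[Sample]], Callable[[], object]],
                       n_synth: int,
                       synth_label: int = -1) -> List[Sample]:
    """Augment the original patches with DCGAN samples whose training input was
    restricted, via the semantic selector, to face-containing patches."""
    face_patches = [s for s in originals if face_filter(s.image)]
    sample_g = train_generator(face_patches)           # returns a sampler G(z)
    synthetic = [Sample(sample_g(), synth_label) for _ in range(n_synth)]
    return originals + synthetic

# Trivial stand-ins, just to show the data flow end to end:
if __name__ == "__main__":
    originals = [Sample(image=None, label=k % 3) for k in range(10)]
    augmented = build_training_set(
        originals,
        face_filter=lambda img: True,                  # stand-in for the face filter
        train_generator=lambda data: (lambda: None),   # stand-in for DCGAN training
        n_synth=5,
    )
    print(len(augmented))  # 15 = 10 originals + 5 synthetic samples
```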

3.1 Adversarial Synthesis of Training Information

Adversarial Network Setup - We utilise the generic adversarial training process of DCGANs [Radford et al.(2015)Radford, Metz, and Chintala] and its suggested network design in order to construct a de-convolutional generative function per synthesised label class that, after training, can produce new images by sampling from a sparse latent space. Depending on the experiment, a single ‘generic person’ network may be built instead, utilising all classes. As in all adversarial setups, each generative network is paired with a discriminative network. The latter maps from images to an ‘is synthetic’ score reflecting network support for a sample being generated. Essentially, the discriminative networks then learn to differentiate generator-produced patches (high score) from original patches (low score). However, we add to this classic dual-network setup [Ma et al.(2017)Ma, Jia, Sun, Schiele, Tuytelaars, and Van Gool] a third, externally trained classifier that filters, and thereby controls, the input to the discriminator; in our case this means restricting input to those samples in which the presence of a face can be established (we also modify the initial layer of the DCGAN to deal with a temporal gap of the specified number of frames; https://github.com/vponcelo/DCGAN-tensorflow).
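
For reference, a PyTorch sketch of the standard DCGAN generator/discriminator design suggested in [Radford et al.(2015)Radford, Metz, and Chintala] is given below, assuming 64x64 RGB patches; the authors' TensorFlow implementation (linked above) differs in detail, and the latent dimensionality and filter counts here are the usual defaults rather than values taken from the paper.

```python
# Sketch of a standard DCGAN generator/discriminator pair for 64x64 RGB patches
# (usual default sizes; not the exact configuration used in the paper).
import torch.nn as nn

def make_generator(nz: int = 100, ngf: int = 64, nc: int = 3) -> nn.Module:
    """Maps a latent vector z (nz x 1 x 1) to a 3 x 64 x 64 image in [-1, 1]."""
    return nn.Sequential(
        nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
        nn.BatchNorm2d(ngf * 8), nn.ReLU(True),          # 4 x 4
        nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
        nn.BatchNorm2d(ngf * 4), nn.ReLU(True),          # 8 x 8
        nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
        nn.BatchNorm2d(ngf * 2), nn.ReLU(True),          # 16 x 16
        nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
        nn.BatchNorm2d(ngf), nn.ReLU(True),              # 32 x 32
        nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
        nn.Tanh(),                                       # 64 x 64 output
    )

def make_discriminator(nc: int = 3, ndf: int = 64) -> nn.Module:
    """Maps a 3 x 64 x 64 image to a single score in (0, 1); here we read it
    as an 'is synthetic' score, following the convention in the text."""
    return nn.Sequential(
        nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
        nn.LeakyReLU(0.2, inplace=True),                             # 32 x 32
        nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
        nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),    # 16 x 16
        nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
        nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),    # 8 x 8
        nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
        nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),    # 4 x 4
        nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
        nn.Sigmoid(),                                    # 1 x 1 score
    )
```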

Facial Filtering - We use the face keypoint detector from OpenPose [Simon et al.(2017)Simon, Joo, Matthews, and Sheikh] as the filter network to semantically constrain the input to the discriminators. This method applies multi-view bootstrapping to face detection: it maps from images to facial keypoint detections, each with an associated detection confidence. If at least one such keypoint can be established then face detection is defined as successful, and the filter output is formally assigned to reflect either the absence or the presence of a face.
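
A minimal sketch of this binary decision follows; detect_face_keypoints is a hypothetical wrapper around the OpenPose face keypoint detector (the actual OpenPose API differs), and the confidence threshold is an assumption, since the text only requires that at least one keypoint be established.

```python
# Sketch of the facial filter S(x): 1 if a face is detected, 0 otherwise.
# detect_face_keypoints(image) is assumed to return a list of
# (x, y, confidence) keypoint triples from the OpenPose face detector.
from typing import Callable, List, Tuple

Keypoint = Tuple[float, float, float]

def face_present(image,
                 detect_face_keypoints: Callable[[object], List[Keypoint]],
                 min_confidence: float = 0.1) -> int:
    """Return 1 if at least one facial keypoint is detected with sufficient
    confidence (assumed threshold), and 0 otherwise."""
    keypoints = detect_face_keypoints(image)
    return int(any(conf >= min_confidence for (_, _, conf) in keypoints))
```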

Figure 2: DCGAN Training. (a) Development of the loss while training the generic generator-discriminator pair as a base network using all classes; (b) reduced loss and fast convergence when re-training to obtain a new pair for a given identity set (‘Unknown’ identity); and (c) re-training to obtain a new pair with the semantic controller. Discriminator losses are shown for the pre-trained and re-trained networks, densely for original samples (blue) and sparsely, every 100 iterations, for generated samples (red).

Training Process - All networks then engage in an adversarial training process utilising Adam [Kingma and Ba(2014)] to optimise the generator and discriminator networks according to the discussion in [Radford et al.(2015)Radford, Metz, and Chintala], whilst enforcing the domain semantics via the semantic selector. The following detailed process describes this training regime: (1) each discriminator is optimised towards minimising the negative log-likelihood based on the relevant inputs that pass the facial filter, i.e. on original samples that are found to contain faces. (2) Network optimisation then switches to back-propagating errors into the entire generator-discriminator network, where the latent code is sampled from a randomly initialised Gaussian to generate synthetic content. Consider that whilst the generator weights are adjusted to minimise the negative log-likelihood, encouraging generated samples to receive lower ‘is synthetic’ scores, the discriminator weights are adjusted to maximise it, prompting generated samples to receive higher scores. DCGAN training then proceeds by alternating between (1) and (2) until acceptable convergence.
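
The alternating regime can be summarised in a short PyTorch sketch (the authors' implementation is in TensorFlow, linked in Section 3.1; helper names and hyper-parameters such as the learning rate are assumptions). The discriminator target convention follows the text: originals score low, generated samples score high, and the generator is pushed to lower the score of its own samples.

```python
# Sketch of the alternating DCGAN training steps (1) and (2) described above.
# face_mask(batch) is assumed to return a boolean mask over the batch marking
# patches in which the semantic selector S establishes a face.
import torch
import torch.nn as nn

def train_dcgan(G, D, real_loader, face_mask, nz=100, epochs=25,
                lr=2e-4, betas=(0.5, 0.999), device="cpu"):
    bce = nn.BCELoss()   # negative log-likelihood on the 'is synthetic' score
    opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=betas)
    opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=betas)
    for _ in range(epochs):
        for real in real_loader:                       # batches of original patches
            real = real[face_mask(real)].to(device)    # semantic selector S
            if real.size(0) == 0:
                continue                               # no faces in this batch
            b = real.size(0)
            # (1) Discriminator step: originals -> low score, samples -> high score.
            z = torch.randn(b, nz, 1, 1, device=device)
            fake = G(z).detach()
            d_loss = bce(D(real).view(-1), torch.zeros(b, device=device)) + \
                     bce(D(fake).view(-1), torch.ones(b, device=device))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            # (2) Generator step: push D's 'is synthetic' score for G(z) towards 0.
            z = torch.randn(b, nz, 1, 1, device=device)
            g_loss = bce(D(G(z)).view(-1), torch.zeros(b, device=device))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G
```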

Intuition behind Semantically Selective Adversarial Training - Consider that training proceeds by, for instance, optimising an identity-specific generator to produce synthetic images of a kind that its discriminator cannot differentiate from the face-containing samples of that identity, with the facial filter acting as a semantic guarantor. Concurrently, the discriminator is trained to differentiate images produced by the generator from the original face-containing samples. These two processes are antagonistic: they cannot both perfectly achieve their optimisation target, and instead will approach a Nash equilibrium in a successful training run [Radford et al.(2015)Radford, Metz, and Chintala]. As a result, the properties of the original face-containing samples and of those produced by the generator will move towards convergence, without the generator being restricted to the original sample set. This aims at constructively generalising the information content captured within the original input. The same rationale applies to images produced by the generic ‘person’ generator, again with the facial filter as the semantic guarantor for face content, but this time aiming at the synthesis of generic person imagery rather than individual-specific content. Note that this generic network, once trained, can thus also serve as a suitable ‘pre-trained’ basis network for optimising individual-specific generators faster (see Figure 2).

3.2 Re-ID Network Training and Compression

Once the synthesis networks are trained, we sample their output and combine it with all original training images (withholding 15% per class for testing) to train the Re-ID classifier as a CondenseNet [Huang et al.(2017a)Huang, Liu, van der Maaten, and Weinberger], optimised via standard stochastic gradient descent with Nesterov momentum. Structurally, the network maps fixed-size RGB input tensors to a score vector over all identity classes. We perform 120 epochs of training on all layers, where layer-internal grouping is applied to the dense layers in order to actively structure network pathways by means of clustering [Huang et al.(2017a)Huang, Liu, van der Maaten, and Weinberger]. This principle has been proven effective in DenseNets [Huang et al.(2017b)Huang, Liu, van der Maaten, and Weinberger], ShuffleNets [Zhang et al.(2017)Zhang, Zhou, Lin, and Sun], and MobileNets [Howard et al.(2017)Howard, Zhu, Chen, Kalenichenko, Wang, Weyand, Andreetto, and Adam]. However, CondenseNets extend this approach by introducing a compression mechanism that removes low-impact connections by discarding unused weights. As a consequence, the approach produces an ID inference network (https://github.com/vponcelo/CondenseNet/) which is implicitly compressed and supports lightweight deployment.
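
A sketch of this training configuration is given below; the model argument stands in for the CondenseNet (reference implementation linked above), and hyper-parameters beyond the stated 120 epochs of SGD with Nesterov momentum (learning rate, weight decay, cosine schedule) are assumptions borrowed from common CondenseNet practice.

```python
# Sketch of the Re-ID classifier training loop: 120 epochs of SGD with
# Nesterov momentum on the union of original and synthesised patches.
import torch
import torch.nn as nn

def train_reid(model, train_loader, epochs=120, lr=0.1,
               momentum=0.9, weight_decay=1e-4, device="cpu"):
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum,
                          weight_decay=weight_decay, nesterov=True)
    # Assumed schedule (cosine annealing is the CondenseNet default).
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:     # originals + DCGAN samples
            images, labels = images.to(device), labels.to(device)
            logits = model(images)              # score vector over identity classes
            loss = criterion(logits, labels)
            opt.zero_grad(); loss.backward(); opt.step()
        sched.step()
    return model
```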

Figure 3: DCGAN Synthesis Examples. Samples generated by the generic generator with (b) or without (a) the semantic controller. (c) First row: examples of generated images without the semantic controller; second row: with the semantic controller; third row: original samples. Columns in (c) correspond, from left to right, to the ‘unknown’ identity and the individual identities, respectively.

3.3 Datasets

DukeMTMC-reID - First we confirm the viability of a GAN-driven CondenseNet application in a traditional Re-ID setting (e.g. larger cardinality of identities, outdoor scenes) via the DukeMTMC-reID [Ristani et al.(2016)Ristani, Solera, Zou, Cucchiara, and Tomasi] dataset, which is a subset of a multi-target, multi-camera pedestrian data corpus. It contains eight 85-minute high-resolution videos with pedestrian bounding boxes, covering identities that appear in more than two cameras as well as distractor identities that appear in only one (more details about the evaluation protocol at https://github.com/layumi/DukeMTMC-reID_evaluation).

Market1501 - We also use the large-scale person Re-ID dataset Market1501 [Zheng et al.(2015)Zheng, Shen, Tian, Wang, Wang, and Tian], collected from 6 cameras and covering a large number of identities across separate testing and training image sets, with bounding boxes generated by a deformable part model (DPM) [Felzenszwalb et al.(2010)Felzenszwalb, Girshick, McAllester, and Ramanan].

LIMA - The Long term Identity aware Multi-target multi-camerA tracking dataset [Layne et al.(2017)Layne, Hannuna, Camplani, Hall, Hospedales, Xiang, Mirmehdi, and Damen] provides our main test bed for the approach. In contrast to the previous datasets, image resolution is high enough in this dataset to effectively apply face detection as a semantic steer. LIMA contains a large set of identity-tagged bounding box images gathered over 13 independent sessions, where bounding boxes are estimated based on OpenNI NiTE operating on RGB-D and are grouped into time-stamped, local tracklets. The dataset covers a small set of individuals filmed in various indoor environments, plus an additional ‘unknown’ class containing either background noise or multiple people in the same bounding box. Note that the LIMA dataset is acquired over a significant time period capturing actual people present in a home (e.g. residents and ‘guests’). This makes the dataset interesting as a test bed for long-term analysis, where people’s appearance varies significantly, including changes in clothing (see Figure 4). In our experiments, we implement a leave-one-session-out approach for cross-validation in order to probe how well performance generalises to different acquisition days.
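
The cross-validation protocol can be sketched as follows (hypothetical data layout: each sample is assumed to carry the id of the acquisition session it came from).

```python
# Sketch of the leave-one-session-out protocol used on LIMA: each of the 13
# sessions is held out in turn as the test set.
def leave_one_session_out(samples, session_of):
    """Yield (held_out_session, train, test) splits so that performance is
    probed on acquisition days never seen during training."""
    sessions = sorted({session_of(s) for s in samples})
    for held_out in sessions:
        train = [s for s in samples if session_of(s) != held_out]
        test = [s for s in samples if session_of(s) == held_out]
        yield held_out, train, test
```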

4 Experiments and Results

We now perform an extensive system analysis by applying the proposed pipeline mainly to the LIMA dataset. We define as the LIMA baseline the best so-far reported micro precision on the dataset, achieved by the hybrid M2&ME approach of [Layne et al.(2017)Layne, Hannuna, Camplani, Hall, Hospedales, Xiang, Mirmehdi, and Damen], that is, tracking by recognition-enhanced constrained clustering with multiple enrolment. This approach assigns identities to frames, and the accuracy of picking the correct identity as the top-ranking estimate is reported. Against this, we evaluate performance metrics for our approach judging either the performance over all ground-truth labels, including the ‘unknown content’ class (ALL), or only over known-identity ground truth (p-ID). We use two metrics: (i) prec@1 as the rank-one precision, that is the accuracy of selecting the correct identity for test frames according to the highest class score produced by the final Re-ID CondenseNet; and (ii) mAP as mean Average Precision over all considered classes. Table 1 provides an overview of the results.
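
For clarity, the two reported metrics can be computed as in the NumPy sketch below (array shapes are assumptions): prec@1 checks whether the highest-scoring class matches the ground truth, and mAP averages per-class average precision over the considered class subset (all classes for ALL, known identities only for p-ID).

```python
# Sketch of the evaluation metrics: rank-one precision and mean Average
# Precision over a chosen subset of classes.
import numpy as np

def prec_at_1(scores: np.ndarray, labels: np.ndarray) -> float:
    """scores: (N, K) class scores from the Re-ID network; labels: (N,)."""
    return float(np.mean(np.argmax(scores, axis=1) == labels))

def mean_average_precision(scores: np.ndarray, labels: np.ndarray,
                           classes) -> float:
    """Mean of per-class average precision over the given classes."""
    aps = []
    for k in classes:
        order = np.argsort(-scores[:, k])          # rank frames by class-k score
        relevant = (labels[order] == k).astype(float)
        if relevant.sum() == 0:
            continue                               # class absent in this split
        precision = np.cumsum(relevant) / (np.arange(len(relevant)) + 1)
        aps.append(float(np.sum(precision * relevant) / relevant.sum()))
    return float(np.mean(aps))
```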

Figure 4: Correct Detections under Changed Appearance. Examples of two different individual identities at different instances of the same test session (faces have been blurred for privacy reasons)
Figure 5: Failure Cases. Examples of misidentifications of the ‘unknown’ class and individual identities in challenging positions and without semantic control (faces have been blurred for privacy reasons)

Deep CondenseNet without Augmentation (original data only) - The baseline (Table 1, row 1) is first compared to results obtained when training the CondenseNet on original data only (Table 1, row 2). This deep compressed network outperforms the baseline ALL prec@1 by almost 3 points (91.98 vs. 89.1), in particular generalising better for cases of significant appearance change such as wearing different clothes over the session (e.g. without a jacket and wearing a jacket afterwards; see Figure 4). The p-ID mAP result of 96.28 (i.e. discarding the ‘unknown’ class) shows that removing distractor content, i.e. manual semantic control during the test procedure, can produce scenarios of enhanced performance over filtered test subsets. Figure 5 shows examples of mis-detections for the ‘unknown’ identity and individual identities in the absence of the semantic controller. We will now investigate how semantic control can be encoded via externally trained networks applied during training.

Direct Semantic Control - Simply introducing the semantic controller to face-filter the training input of the Re-ID network is, however, counter-productive and reduces performance significantly across all metrics (Table 1, row 5). Restricting the network to train on only 39% of the input this way withholds critical identity-relevant information.

No Semantic Control ALL prec@1 p-ID prec@1 ALL mAP p-ID mAP
1: Baseline (M2&ME) [Layne et al.(2017)Layne, Hannuna, Camplani, Hall, Hospedales, Xiang, Mirmehdi, and Damen] 89.1 - - -
2: No Augmentation () 91.98 93.49 90.90 96.28
3: Augmentation 24k 92.43 94.27 91 96.95
4: Augmentation 48k 91.74 93.48 90.61 96.54
Semantic Control via ALL prec@1 p-ID prec@1 ALL mAP p-ID mAP
5: No Augmentation () 82.02 92.14 72.90 95.48
6: Augmentation 322k 92.58 94.57 91.14 97.02
7: 24k+24k 92.44 94.37 90.96 97.04
Table 1: Results for LIMA - Top rank precision (prec@1) and mean Average Precision (mAP) for baseline (row 1), non-semantically controlled deep CondenseNet approaches (rows 2-4), and various forms of semantic control (rows 5-7). Note improvements across all metrics when utilising: compressed deep learning (row 2), augmentation (row 3), and semantically selective filtering (rows 6-7).
Figure 6: CMC Curves for DukeMTMC-reID. Visualisation of the precision over the top-ranked classes for the experimental settings detailed in Table 2, rows 3-5: (a) row 3: training without augmentation; (b) row 4: basic DCGAN augmentation (24k); (c) row 5: transfer augmentation via synthesis based on the different dataset Market1501, 24k(Market1501).

Augmentation via DCGANs - Instead of restricting the training input of the Re-ID network, we therefore analyse how Re-ID performance is affected when semantic control is applied to generic DCGAN synthesis of a cross-identity person class, as suggested in [Zheng et al.(2017)Zheng, Zheng, and Yang]. Figure 3 shows examples of generated images and how the semantic controller affects the appearance of the synthesis. Augmenting the training data with 24k synthesised samples without semantic control (Table 1, row 3) improves performance slightly across all metrics, confirming benefits discussed in more detail in [Zheng et al.(2017)Zheng, Zheng, and Yang]. Table 2 confirms that applying such DCGAN synthesis together with CondenseNet compression to the DukeMTMC-reID dataset produces results comparable to [Zheng et al.(2015)Zheng, Shen, Tian, Wang, Wang, and Tian]. Figure 6 provides further details on these experimental outcomes. Note that whilst the large, deep ResNet50+LSRO [Zheng et al.(2017)Zheng, Zheng, and Yang] approach outperforms our compressed network significantly (Table 2, row 6), this comes at the cost of increasing the parameter cardinality by about an order of magnitude (our network requires approximately 8× fewer parameters and operations to achieve a comparable accuracy w.r.t. other dense nets, i.e. around 600 million fewer operations to perform inference on a single image [Huang et al.(2017a)Huang, Liu, van der Maaten, and Weinberger]). Moreover, non-controlled synthesis is generally limited: on LIMA, no further improvements can be made by scaling up such synthesis beyond 24k; instead, performance drops slightly across all metrics and overfitting to the synthesised data can be observed (Table 1, row 4). We now introduce semantic control to the input of augmentation and observe that this scaling-up limit can be lifted, although diminishing returns take over at higher synthesis levels (i.e. at around 54% of synthesis w.r.t. the original training data). We report results for this larger, semantically controlled synthesis via the generic generator, which improves results for all metrics (Table 1, row 6). We note that these improvements are achieved by synthesising distractors rather than by providing individual-specific augmentations.

Method / No Semantic Control prec@1 prec@5 mAP CMC@1 S-Q mAP S-Q
1: Baseline BoW + KISSME [Zheng et al.(2015)Zheng, Shen, Tian, Wang, Wang, and Tian] - - - 25.13 12.17
2: Baseline LOMO + XQDA [Zheng et al.(2015)Zheng, Shen, Tian, Wang, Wang, and Tian] - - - 30.75 17.04
3: No Augmentation () 87.70 95.54 87.79 29.04 15.99
4: Augmentation 24k 88.08 95.73 88.26 36.45 21.11
5: Transfer 24k(Market1501) 88.84 95.82 88.64 35.95 20.6
6: ResNet50+LSRO [Zheng et al.(2017)Zheng, Zheng, and Yang] (8x larger) - - - 67.68 47.13
Table 2: Results for DukeMTMC-reID - Top rank precision (prec@1) for classification and Single-Query (S-Q) performance. Our results outperform [Zheng et al.(2015)Zheng, Shen, Tian, Wang, Wang, and Tian] when using augmentation (row 4), or using Market1501 as synthesis input (row 5). However, the performance of the larger ResNet50+LSRO [Zheng et al.(2017)Zheng, Zheng, and Yang] cannot be achieved in our setting of compression for lightweight deployment.
Figure 7: Some Results as Confusion Matrices. Columns from left to right correspond to the experimental settings grouped by the presence of semantic selection, according to Table 1 rows 2-4 and 5-7, respectively. Top and bottom rows correspond to two challenging test sessions from the LIMA dataset, where some IDs may not be present (nan true-label values).

Individual-specific Augmentation - To explore class-specific augmentation we train an entire set of DCGANs, i.e. we produce individual-specific identity generators alongside the non-identity synthesis network, and apply semantic control to the identity classes. We observe that balancing the synthesis of training imagery equally across all classes only slightly improves p-ID mAP, whilst the other measures cannot be advanced (Table 1, row 7). Figure 7 provides further result visualisations. The limited improvements of this approach compared to non-identity-specific training (despite synthesising more training data overall) suggest that, for the LIMA setup at least, person individuality can indeed be encoded by augmentation-supported modelling of a large, generic ‘person’ class against a more limited, non-augmented representation of individuals. Furthermore, experiments on the most challenging LIMA sessions demonstrate that the pre-trained generic generator generalises well when re-trained into individual-specific generators, thereby reducing the training cost of DCGAN individual-specific augmentation (e.g. see Figure 2).

5 Conclusion

We introduced a deep person Re-ID approach that brought together semantically selective data augmentation with clustering-based network compression to produce light and fast inference networks. In particular, we showed that augmentation via sampling from a DCGAN, whose discriminator is constrained by a semantic face detector, can outperform the state-of-the-art on the LIMA dataset for long-term monitoring in indoor living environments. To explore the applicability of our framework without face detection in outdoor scenarios, we also considered well-known datasets for person Re-ID aimed at people matching, achieving competitive performance on the DukeMTMC-reID dataset.

Exploring generic and effective semantic controllers as part of discriminator networks is an immediate extension of our work, especially to deal with low-resolution images, as well as learning generators from other person-like representations more broadly across the Re-ID domain.

References

  • [Barbosa et al.(2018)Barbosa, Cristani, Caputo, Rognhaugen, and Theoharis] Igor Barros Barbosa, Marco Cristani, Barbara Caputo, Aleksander Rognhaugen, and Theoharis Theoharis. Looking beyond appearances: Synthetic training data for deep cnns in re-identification. Computer Vision and Image Understanding, 167:50 – 62, 2018. ISSN 1077-3142.
  • [Bondi et al.(2017)Bondi, Pala, Seidenari, Berretti, and Bimbo] Enrico Bondi, Pietro Pala, Lorenzo Seidenari, Stefano Berretti, and Alberto Del Bimbo. Long term person re-identification from depth cameras using facial and skeleton data. In Proceedings of UHA3DS workshop in conjunction with ICPR Google Scholar, 2017.
  • [Chen et al.(2017)Chen, Zhu, and Gong] Yanbei Chen, Xiatian Zhu, and Shaogang Gong. Person re-identification by deep learning multi-scale representations. In 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pages 2590–2600, Oct 2017.
  • [Felzenszwalb et al.(2010)Felzenszwalb, Girshick, McAllester, and Ramanan] Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, and Deva Ramanan. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645, Sept 2010. ISSN 0162-8828.
  • [Filković et al.(2016)Filković, Kalafatić, and Hrkać] Ivan Filković, Zoran Kalafatić, and Tomislav Hrkać. Deep metric learning for person re-identification and de-identification. In 2016 39th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pages 1360–1364, May 2016.
  • [Goodfellow et al.(2014)Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
  • [Haghighat and Abdel-Mottaleb(2017)] Mohammad Haghighat and Mohamed Abdel-Mottaleb. Low resolution face recognition in surveillance systems using discriminant correlation analysis. In 2017 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017), pages 912–917, May 2017.
  • [Hasan and Babaguchi(2016)] Mohamed Hasan and Noborou Babaguchi. Long-term people reidentification using anthropometric signature. In 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), pages 1–6, Sept 2016.
  • [Howard et al.(2017)Howard, Zhu, Chen, Kalenichenko, Wang, Weyand, Andreetto, and Adam] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.
  • [Huang et al.(2017a)Huang, Liu, van der Maaten, and Weinberger] Gao Huang, Shichen Liu, Laurens van der Maaten, and Kilian Q Weinberger. Condensenet: An efficient densenet using learned group convolutions. arXiv preprint arXiv:1711.09224, 2017a.
  • [Huang et al.(2017b)Huang, Liu, van der Maaten, and Weinberger] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017b.
  • [Khan and Brèmond(2017)] Furgan M. Khan and François Brèmond. Multi-shot person re-identification using part appearance mixture. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 605–614, March 2017.
  • [Kingma and Ba(2014)] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
  • [Layne et al.(2017)Layne, Hannuna, Camplani, Hall, Hospedales, Xiang, Mirmehdi, and Damen] Ryan Layne, Sion Hannuna, Massimo Camplani, Jake Hall, Timothy M. Hospedales, Tao Xiang, Majid Mirmehdi, and Dima Damen. A dataset for persistent multi-target multi-camera tracking in RGB-D. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 1462–1470, 2017.
  • [Liu et al.(2018)Liu, Ma, Wang, and Wang] Xiaokai Liu, Xiaorui Ma, Jie Wang, and Hongyu Wang. M3l: Multi-modality mining for metric learning in person re-identification. Pattern Recognition, 76:650 – 661, 2018.
  • [Ma et al.(2017)Ma, Jia, Sun, Schiele, Tuytelaars, and Van Gool] Liqian Ma, Xu Jia, Qianru Sun, Bernt Schiele, Tinne Tuytelaars, and Luc Van Gool. Pose guided person image generation. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 406–416. Curran Associates, Inc., 2017.
  • [McConville et al.(2018)McConville, Byrne, Craddock, Piechocki, Pope, and Santos-Rodriguez] Ryan McConville, Dallan Byrne, Ian Craddock, Robert Piechocki, James Pope, and R Santos-Rodriguez. Understanding the quality of calibrations for indoor localisation. In IEEE 4th World Forum on Internet of Things (WF-IoT 2018), 2018.
  • [Perez and Wang(2017)] Luis Perez and Jason Wang. The effectiveness of data augmentation in image classification using deep learning. CoRR, abs/1712.04621, 2017.
  • [Pham et al.(2017)Pham, Le, and Dao] Thi Thanh Thuy Pham, Thi-Lan Le, and Trung-Kien Dao. Improvement of person tracking accuracy in camera network by fusing wifi and visual information. Informatica, 41:133–148, 2017.
  • [Ponce-López et al.(2015)Ponce-López, Escalante, Escalera, and Baró] Víctor Ponce-López, Hugo Jair Escalante, Sergio Escalera, and Xavier Baró. Gesture and action recognition by evolved dynamic subgestures. In Proceedings of the British Machine Vision Conference (BMVC), pages 129.1–129.13, 2015. ISBN 1-901725-53-7.
  • [Radford et al.(2015)Radford, Metz, and Chintala] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. Proceedings of the International Conference on Learning Representations, 2015.
  • [Ristani et al.(2016)Ristani, Solera, Zou, Cucchiara, and Tomasi] Ergys Ristani, Francesco Solera, Roger S. Zou, Rita Cucchiara, and Carlo Tomasi. Performance measures and a data set for multi-target, multi-camera tracking. ECCV workshops, 2016.
  • [Sadri(2011)] Fariba Sadri. Ambient intelligence: A survey. ACM Comput. Surv., 43(4):36:1–36:66, October 2011. ISSN 0360-0300.
  • [Shrivastava et al.(2017)Shrivastava, Pfister, Tuzel, Susskind, Wang, and Webb] Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russell Webb. Learning from simulated and unsupervised images through adversarial training. Proceedings of the Computer Vision and Pattern Recognition Conference, pages 2107–2116, 2017.
  • [Simon et al.(2017)Simon, Joo, Matthews, and Sheikh] Tomas Simon, Hanbyul Joo, Iain Matthews, and Yaser Sheikh. Hand keypoint detection in single images using multiview bootstrapping. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2017.
  • [Takemura et al.(2018)Takemura, Makihara, Muramatsu, Echigo, and Yagi] Noriko Takemura, Yasushi Makihara, Daigo Muramatsu, Tomio Echigo, and Yasushi Yagi. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Transactions on Computer Vision and Applications, 10(1):4, Feb 2018.
  • [Twomey et al.(2016)Twomey, Diethe, Kull, Song, Camplani, Hannuna, Fafoutis, Zhu, Woznowski, Flach, and Craddock] Niall Twomey, Tom Diethe, Meelis Kull, Hao Song, Massimo Camplani, Sion Hannuna, Xenofon Fafoutis, Ni Zhu, Pete Woznowski, Peter Flach, and Ian Craddock. The SPHERE challenge: Activity recognition with multimodal sensor data. arXiv preprint arXiv:1603.00797, 2016.
  • [Wu et al.(2018)Wu, Wang, Li, and Gao] Lin Wu, Yang Wang, Xue Li, and Junbin Gao. What-and-where to match: Deep spatially multiplicative integration networks for person re-identification. Pattern Recognition, 76:727 – 738, 2018.
  • [Yu et al.(2016)Yu, Meng, Zuo, and Hauptmann] Shoou-I Yu, Deyu Meng, Wangmeng Zuo, and Alexander Hauptmann. The solution path algorithm for identity-aware multi-object tracking. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • [Zhang et al.(2017)Zhang, Zhou, Lin, and Sun] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. CoRR, abs/1707.01083, 2017.
  • [Zheng et al.(2015)Zheng, Shen, Tian, Wang, Wang, and Tian] Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang, and Qi Tian. Scalable person re-identification: A benchmark. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 1116–1124, Dec 2015.
  • [Zheng et al.(2016)Zheng, Yang, and Hauptmann] Liang Zheng, Yi Yang, and Alexander G Hauptmann. Person re-identification: Past, present and future. arXiv preprint arXiv:1610.02984, 2016.
  • [Zheng et al.(2017)Zheng, Zheng, and Yang] Zhedong Zheng, Liang Zheng, and Yi Yang. Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In Proceedings of the IEEE International Conference on Computer Vision, pages 3754–3762, 2017.
  • [Zhou et al.(2018)Zhou, Wang, Meng, Xin, Li, Gong, and Zheng] Sanping Zhou, Jinjun Wang, Deyu Meng, Xiaomeng Xin, Yubing Li, Yihong Gong, and Nanning Zheng. Deep self-paced learning for person re-identification. Pattern Recognition, 76:739 – 751, 2018.