Survey on Reliable Deep Learning-Based Person Re-Identification Models: Are We There Yet?

04/30/2020 · Bahram Lavi et al., University of Campinas

Intelligent video-surveillance (IVS) is currently an active research field in computer vision and machine learning and provides useful tools for surveillance operators and forensic video investigators. Person re-identification (PReID) is one of the most critical problems in IVS, and it consists of recognizing whether or not an individual has already been observed by a camera in the network. Solutions to PReID have myriad applications, including retrieval of video sequences showing an individual of interest and pedestrian tracking over multiple camera views. Different techniques have been proposed to increase the performance of PReID in the literature, and more recently researchers have utilized deep neural networks (DNNs) given their compelling performance on similar vision problems and fast execution at test time. Given the importance and wide range of applications of re-identification solutions, our objective herein is to discuss the work carried out in the area and provide a survey of state-of-the-art DNN models being used for this task. We present descriptions of each model along with their evaluation on a set of benchmark datasets. Finally, we show a detailed comparison among these models, followed by a discussion of their limitations that can serve as guidelines for future research.


1 Introduction

The importance of security and safety of people in society at large is continuously growing. Governmental and private organizations are seriously concerned with the security of public areas such as airports and shopping malls, and providing it requires significant effort and financial expense. To optimize such efforts, video surveillance systems play a pivotal role. Nowadays, networks of video cameras are increasingly used as a tool for addressing various kinds of security issues such as forensic investigation, crime prevention, and safeguarding restricted areas.

Daily continuous recording from network cameras results in daunting amounts of video to analyze in a manual video surveillance system. Surveillance operators need to examine all of it for specific incidents or anomalies, which is a challenging and tiresome task. Intelligent video surveillance systems (IVSS) aim to automate the monitoring and analysis of videos from camera networks to help surveillance operators handle and understand the acquired footage. This makes IVSS one of the most active and challenging research areas in computer engineering and computer science, in which computer vision (CV) and machine learning (ML) techniques play a key role. This field of research enables various tools: online applications for people/object detection, tracking, and recognition of suspicious actions or behavior over the camera network; and offline applications that support operators and forensic investigators in retrieving images of an individual of interest from video frames acquired on different camera views.

Person re-identification (PReID) is one of the problems of interest in IVSS. It consists of recognizing an individual over a network of video surveillance cameras with possibly non-overlapping fields of view [4, 59, 31]. In general, the application of PReID is to support surveillance operators and forensic investigators in retrieving videos showing an individual of interest, given an image as a query (a.k.a. probe). Video frames or tracks of all the individuals recorded by the camera network (a.k.a. the template gallery) are sorted in descending order of similarity to the probe, allowing the user to find occurrences (if any) of the individual of interest in the top positions.

Person re-identification is a challenging task due to low image resolution, unconstrained pose, illumination changes, and occlusions, which hinder the use of strong biometric features such as the face. However, cues such as gait and anthropometric measures have been used in some existing PReID systems. Most existing techniques rely on defining a specific descriptor of clothing appearance (typically including color and texture) and a specific similarity measure between a pair of descriptors (evaluated as a matching score), which can be either manually defined or learned directly from data [4, 26, 23, 28, 48].

Standard PReID Methodology: Given an image of an individual (the probe), a PReID system seeks the corresponding images of that person within the gallery of templates. Note that the construction of the template gallery depends on the re-identification setup, which can be categorized as: (i) single-shot, with only one template frame per individual, and (ii) multiple-shot, with more than one template frame per individual. In the multiple-shot case, a continuous PReID system can be employed in real time, whereby the individual of interest is continuously matched against the template images in the gallery set, using the currently seen frame as a probe. Figure 1 shows a basic PReID framework. After an image description is generated for the probe and for the template images of the gallery set, matching scores between the probe and each template are computed; finally, the ranked list is generated by sorting the matching scores in decreasing order.

Figure 1: Standard person re-identification system. Given a probe image and a set of template images, the goal is to generate a robust image signature for each, compute the similarity between the probe and every template, and finally present a ranked list sorted by similarity.
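The matching-and-ranking step above can be sketched in a few lines of Python. The cosine similarity measure and the three-dimensional toy descriptors below are illustrative assumptions, not part of any particular PReID system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_gallery(probe, gallery):
    """Return gallery IDs sorted by decreasing similarity to the probe."""
    scores = {gid: cosine_similarity(probe, feat) for gid, feat in gallery.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy descriptors: the probe should match person "B" most closely.
probe = [0.9, 0.1, 0.0]
gallery = {
    "A": [0.0, 1.0, 0.0],
    "B": [1.0, 0.0, 0.1],
    "C": [0.5, 0.5, 0.5],
}
ranking = rank_gallery(probe, gallery)
print(ranking)  # "B" is expected at rank 1
```

In a real system the descriptors would be the learned image signatures discussed below, and the similarity function could equally be a learned metric rather than the cosine.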

Many existing descriptors rely on hand-crafted features. Deep-learning (DL) models, e.g., convolutional neural networks (CNNs) [34, 32, 58], have been used to solve the PReID problem by learning features directly from data. A CNN-based model generates a set of feature maps, whereby each pixel of a given image corresponds to a specific feature representation; the desired output is produced at the top of the employed model. There are different approaches to train a deep neural network (DNN) model: it can be trained in a supervised, semi-supervised, or unsupervised manner depending on the problem scenario and the availability of labelled data. In PReID, only a small set of training data is available; thus, developing a semi- or unsupervised learning model is usually challenging, and the model may fail or perform poorly. Most of the papers discussed in this survey employ supervised learning techniques, and only a few consider semi- or unsupervised approaches. Further, we consider the models used for PReID in three categories: single, pairwise, and triplet feature-learning strategies. Details are presented and discussed in Section 3.

This paper presents state-of-the-art PReID methods based on DNNs and provides detailed information about them. The literature review covers papers published from 2014 to date. We provide a taxonomy of deep feature-learning methods for PReID, including comparisons, limitations, and future research directions and opportunities. Unlike [80, 35], we provide a comprehensive and detailed review of existing techniques, particularly the more recent ones that rely upon DNN feature-learning strategies. We stress that, in this paper, we only consider recent DNN techniques directly involved in the PReID task. For each technique, we analyze its experimental results and compare the achieved performances from different perspectives, such as the adopted learning strategy (e.g., single, pairwise, and triplet learning).

The structure of this paper is as follows. Section 2 briefly explains the benchmark datasets employed for PReID. Section 3 describes the DNN methods, highlighting important components such as objective functions, loss functions, and data augmentation, among others. Section 4 discusses performance measures, results and their comparison, and limitations and future directions. Finally, Section 5 concludes the paper with final remarks on PReID.

2 Person Re-identification Benchmark Datasets

Data is one of the important factors for current DNN models.

Some factors must be taken into account to reach a reliable recognition rate when evaluating person re-identification solutions, and each dataset is collected to specifically target one or more of them. Factors that create issues for the PReID task include occlusion (apparent in the i-LIDS dataset) and illumination variation (common in most datasets). Background/foreground segmentation to distinguish the person's body is also a challenging task, and some datasets provide the segmented region of the person's body (e.g., VIPeR, ETHZ, and CAVIAR). The most widely used datasets are VIPeR, CUHK01, and CUHK03; VIPeR, CAVIAR, and PRID are used when only two fixed camera views are given to evaluate the performance of person re-identification methods. Table 1 gives a summary of each dataset; below we briefly discuss each of them.

VIPeR [26]: VIPeR is a challenging dataset due to the small number of images per individual: it contains two images for each of 632 individuals, captured from two camera views with pose and illumination variations. The images are cropped and scaled to a fixed size. This is one of the most widely used datasets for PReID and a good starting point for new researchers; improving Rank-1 performance on this dataset is still an open challenge.

i-LIDS [5]: It contains 476 images of 119 pedestrians taken at an airport hall from non-overlapping cameras with pose and lighting variations and strong occlusions. A minimum of two images and an average of four images exist for each pedestrian.

ETHZ [22]: It contains three video sequences of a crowded street from two moving cameras; images exhibit considerable illumination changes, scale variations, and occlusions, and are of different sizes. Each sequence provides multiple images per individual. Sequences 1, 2, and 3 have 83, 35, and 28 pedestrians, respectively.

CAVIAR [15]: It contains 72 persons and two views in which 50 persons appear in both views while 22 persons appear only in one view. Each person has five images per view, with different appearance variations due to resolution changes, light conditions, occlusions, and different poses.

CUHK: This dataset is divided into three distinct partitions with specific setups. CUHK01 [40] includes images of pedestrians captured in two disjoint camera views: camera (A) with several variations of viewpoint and pose, and camera (B) mainly containing frontal and back views. CUHK02 [39] contains individuals captured by five pairs of camera views (P1-P5, ten camera views in total), comprising 971, 306, 107, 193, and 239 individuals, respectively, with two images per individual in each camera view. This dataset is employed to evaluate performance when the camera views at test time differ from those used in training. Finally, CUHK03 [41] includes images of pedestrians captured with six surveillance cameras. Each identity is observed by two disjoint camera views, with several images in each view; all manually cropped pedestrian images exhibit illumination changes, misalignment, occlusions, and missing body parts.

PRID [27]: This dataset is specially designed for PReID, focusing on a single-shot scenario. It contains two image sets with 385 and 749 persons captured by camera A and camera B, respectively. The two subsets share 200 persons in common.

WARD [50]: This dataset has 4,786 images of 70 persons acquired in a real surveillance scenario with three non-overlapping cameras, with large illumination, resolution, and pose variations.

Re-identification Across indoor-outdoor Dataset (RAiD) [18]: It comprises 6,920 bounding boxes of 43 identities captured by four cameras, of which the first two are indoors and the remaining two outdoors. Images show considerable illumination variations because of the indoor-outdoor changes.

Market-1501 [97]: A total of six cameras are used, including five high-resolution cameras and one low-resolution camera, with overlap among different cameras. Overall, this dataset contains 32,668 annotated bounding boxes of 1,501 identities. Among them, 12,936 images from 751 identities are used for training, and 19,732 images from 750 identities plus distractors are used for the gallery set.

MARS [95]: This dataset comprises 1,261 identities with each identity captured by at least two cameras. It consists of 20,478 tracklets and 1,191,003 bounding boxes.

DukeMTMC [56]: This dataset contains 36,441 manually-cropped images of 1,812 persons captured by eight outdoor cameras. The data set gives access to some additional information such as full frames, frame-level ground-truth, and calibration details.

MSMT [78]: It consists of 126,441 images of 4,101 individuals acquired from 12 indoor and three outdoor cameras, with different illumination changes, poses, and scale variations.

RPIfield [99]: This dataset is constructed using 12 synchronized cameras following 112 explicitly time-stamped actor pedestrians along specific paths, among about 4,000 distractor pedestrians.

Indoor Train Station Dataset (ITSD) [91]: This dataset contains images of people from a real-world surveillance camera at a railway station: 5,607 images of 443 identities with different viewpoints.

Dataset       Year   Crop image size
VIPeR         2007   fixed
ETHZ          2007   vary
PRID          2011   fixed
CAVIAR        2011   vary
WARD          2012   fixed
CUHK01        2012   fixed
CUHK02        2013   fixed
CUHK03        2014   vary
i-LIDS        2014   vary
RAiD          2014   fixed
Market-1501   2015   fixed
MARS          2016   fixed
DukeMTMC      2017   vary
MSMT          2018   vary
RPIfield      2018   vary
ITSD          2019   fixed
Table 1: Summary of benchmark PReID datasets by year of release and crop image size.

3 Deep Neural Networks for PReID

Deep learning techniques have been widely applied to several CV problems, owing to the discriminative and generalization power of the learned models, which results in promising performance. PReID is one of the challenging tasks in CV for which DL models are currently among the best choices in the research community. In this section, we provide an overview of recent DL works for PReID. Several interesting DL models have been proposed to improve PReID performance, and these state-of-the-art approaches can be categorized by the learning methodology of the models utilized in PReID systems. Some works treat PReID as a standard classification problem; others address the lack of training samples in PReID by proposing models that learn more discriminative features in pair or triplet units. Figure 2 shows the taxonomy of the types of models used for PReID that are discussed in the coming subsections.

Figure 2: Taxonomy of deep feature-learning methods for PReID

3.1 Single Feature-Learning Based Methods

A model based on single feature learning (a single deep model) can be developed similarly to other multi-class classification problems. In a PReID system, a classification model is designed to estimate the probability that an image belongs to a given identity [16]. Figure 3 shows an example of a DL-based single feature-learning PReID model. This single-stream deep model can be further divided into the categories shown in Figure 2.

Figure 3: Single feature-learning model in a PReID system: the model takes the raw image of an individual as input and computes the probability of the corresponding class of the individual.
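As a minimal illustration of this classification view, the final layer of such a model turns raw identity scores into class probabilities with a softmax; the logit values below are hypothetical.

```python
import math

def softmax(logits):
    """Convert raw identity scores (logits) into a probability distribution."""
    m = max(logits)                       # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 4-identity gallery; identity 2 has the highest score.
logits = [1.0, 0.5, 3.0, -1.0]
probs = softmax(logits)
pred = probs.index(max(probs))
print(pred, round(sum(probs), 6))  # predicted identity; probabilities sum to 1
```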

Deep model features fusion with hand-crafted features:

A number of papers boost PReID performance by generating deep features, and some of them additionally use hand-crafted features as complementary features to be fused with the DL features. The fused features are then often reduced with traditional dimensionality-reduction techniques, e.g., principal component analysis (PCA).
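The fuse-then-reduce pipeline can be sketched in plain Python: descriptors are concatenated, mean-centered, and projected onto the leading principal component found by power iteration. This is a minimal stand-in for full PCA, and the toy one-dimensional "deep" and "hand-crafted" features are made up for illustration.

```python
def fuse(deep_feat, hand_feat):
    """Fuse deep and hand-crafted descriptors by simple concatenation."""
    return deep_feat + hand_feat

# Toy 2-D fused descriptors for four samples.
samples = [fuse([2.0], [0.1]), fuse([4.0], [-0.1]),
           fuse([6.0], [0.2]), fuse([8.0], [-0.2])]
dim = len(samples[0])

# Mean-center the fused features.
mean = [sum(s[d] for s in samples) / len(samples) for d in range(dim)]
centered = [[s[d] - mean[d] for d in range(dim)] for s in samples]

# Covariance matrix, then power iteration for its top eigenvector.
cov = [[sum(c[i] * c[j] for c in centered) / len(centered)
        for j in range(dim)] for i in range(dim)]
v = [1.0] * dim
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# Project every sample onto the leading component: 2-D -> 1-D.
reduced = [sum(c[d] * v[d] for d in range(dim)) for c in centered]
print([round(r, 2) for r in reduced])
```

In practice one would keep several components (not just one) and use an optimized PCA implementation; the sketch only shows where the reduction sits in the pipeline.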

Wu et al. [83] proposed a feature-fusion DNN that regularizes CNN features jointly with hand-crafted features. The network takes a single image as input, together with hand-crafted features extracted using a state-of-the-art PReID descriptor (the best performance was obtained with the ensemble of local features (ELF) descriptor [47]). Both feature streams are followed by a buffer layer and a fully-connected layer, which together act as a fusion layer. The buffer layer is essential since it bridges the gap between features from different domains (hand-crafted versus deep). A softmax loss layer then takes the output vector of the fully-connected layer to minimize the cross-entropy loss and outputs the deep feature representation. The whole network is trained with mini-batch stochastic gradient descent. In [82], two low-level descriptors, SIFT and color histograms, are extracted from the LAB color space over a set of 14 overlapping patches with a stride of 16 pixels. A dimensionality-reduction method such as PCA is then applied to the scale-invariant feature transform (SIFT) and color-histogram features to reduce the dimensionality of the feature space. These features are further embedded into linearly separable representations using Fisher vector encoding: one Fisher vector is computed on the SIFT features and another on the color histograms, and the two are concatenated into a single feature vector. A hybrid network builds fully-connected layers on top of the Fisher vectors and employs linear discriminant analysis (LDA) as an objective function in order to maximize the margin between classes.

A structured graph Laplacian algorithm was utilized in a CNN-based model [13]. Unlike traditional contrastive and triplet losses for joint learning, the structured graph Laplacian algorithm is embedded at the top of the network. The authors reformulate the triplet network as a single feature-learning method and use the generated deep features for joint learning on the training samples. The softmax function is used to maximize the inter-class variations between different individuals, while the structured graph Laplacian algorithm minimizes the intra-class variations. As the authors point out, the designed network needs no additional branch, which makes training more efficient. Later, the same authors proposed a structured graph Laplacian embedding approach [19], where joint CNNs are leveraged by reformulating structured Euclidean distance relationships into the graph Laplacian form. A triplet embedding method was proposed to generate high-level features by taking into account inter-personal dispersion and intra-personal compactness.

Part-based & Body-based features: Some works attempt to generate more discriminative features by extracting features from specific body parts as well as from the whole body, which can then be fused with the deep model's features. In [66], a deep convolutional model was proposed to handle misalignment and pose variations in pedestrian images. The overall multi-class person re-identification network is composed of two sub-networks: a convolutional model that learns global features from the original images, and a part-based network that learns local features from six different parts of the pedestrian body. Both sub-networks are combined in a fusion layer at the output of the network, with shared weight parameters during training. The output of the network is then used as an image signature, and re-identification performance is evaluated with the Euclidean distance. The proposed architecture explicitly learns effective feature representations of the person's body parts together with adaptive similarity measurements. Li et al. [36] designed a multi-scale context-aware network to learn powerful features over the whole body and different body parts, capturing local context by stacking convolutions of multiple scales in each layer. In addition, instead of using predefined rigid parts, the model learns and locates deformable pedestrian parts through spatial transformer networks with novel spatial constraints. Because variations and background clutter make purely body-part-based representations difficult, the learning of the full-body representation is integrated with the body parts for multi-class identification. Chen et al. [10] proposed a Deep Pyramidal Feature Learning (DPFL) CNN architecture for explicitly learning multi-scale deep features from a single input image; in addition, a fusion branch over scales was devised to learn a complementary combination of the multi-scale features.

Embedding Learning: Embedding- and attribute-learning approaches have also been used as complementary features, where the model jointly learns additional mid-level features obtained from high- and low-level features. In [7], a matching strategy is proposed to compute the similarity between the feature maps of an individual and the corresponding embedded text. The method is learned by optimizing the global and local association between local visual and linguistic features, computing attention weights for each sample. The attention weights are then used by a long short-term memory (LSTM) network to enrich the final prediction, showing that learning grounded in visual information can be more robust. Similarly, Chi et al. [67] proposed a multi-task learning model that learns from embedded attributes. A low-rank attribute embedding is integrated with low- and mid-level features to describe the person's appearance, while deep features are obtained from a DL framework acting as a high-level feature extractor. All features are then learned simultaneously by exploiting significant correlations among the tasks.

Attribute-based Learner: A joint DL network is proposed in [69], consisting of two branches: the first learns identity information from the person's appearance with a triplet Siamese network (see Section 3.2.3 for details), while the second performs attribute-based classification with a hierarchical loss-guided structure to extract meaningful features. The feature vectors of both branches are concatenated into a single vector, and the person images in the gallery set are ranked according to their feature distances to this final representation. A method of attention-mask-based feature learning is proposed in [20]: the authors design a CNN-based hybrid architecture that enables the network to focus on the more discriminative parts of a person's image. It is a multi-task solution in which the model predicts an attention mask from the input image and imposes it on the low-level features in order to re-weight local features in the feature space.
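The mask-based re-weighting step can be illustrated on a toy single-channel feature map; real models apply a predicted mask per channel on CNN feature maps, and the values below are made up.

```python
# Toy 2x3 single-channel feature map and a predicted attention mask.
feature_map = [
    [0.2, 0.8, 0.1],
    [0.5, 0.9, 0.3],
]
attention_mask = [      # values in [0, 1]; higher = more discriminative region
    [0.0, 1.0, 0.0],
    [0.5, 1.0, 0.0],
]

# Element-wise re-weighting: background responses are suppressed,
# responses in attended regions are kept.
reweighted = [
    [f * a for f, a in zip(frow, arow)]
    for frow, arow in zip(feature_map, attention_mask)
]
print(reweighted)
```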

Semi- and un-supervised learning: A few works on semi- and unsupervised learning methods have attempted to predict a person's identity (i.e., the probability of the corresponding class label for an individual). Li et al. [37] proposed a novel unsupervised learning method that avoids manual labelling of data. The method jointly optimizes unlabelled person data within each camera view and across camera views in an end-to-end classification framework, using deep features generated by a CNN model as input to the unsupervised learning model. Wang et al. [76] proposed a heterogeneous multi-task model via domain transfer learning, addressing scalable unsupervised learning for the PReID problem. Two CNN branches capture and learn identity and attribute information from a person's image simultaneously. The outputs of both branches are fused with a third branch, composed of a shallow NN, for joint learning, and the information from both branches is inferred into a single attribute space. The model showed promising results when trained on a source dataset and tested on an unlabeled target dataset.

The approach in [86] addresses issues such as misalignment and occlusion in PReID. It extracts features from different pre-defined body parts and treats them as pose features and attention-aware features. Yu et al. [90] proposed a novel unsupervised loss function with which the model can learn an asymmetric metric, embedded into an end-to-end deep feature-learning network. Moreover, Huang et al. [30] addressed the lack of training data by introducing a multi-pseudo regularized label. The proposed method generates images with adversarial ML techniques, where the corresponding class labels are estimated by semi-supervised learning on a small training set. This could be one possible way of creating synthetic data to train recent, deeper NN models.

Data Driven: To address the lack of training samples, data-driven techniques have also been considered for PReID. Xiao et al. [84] proposed learning deep feature representations from multiple datasets with CNNs, discovering effective neurons for each training set. They first produce a strong baseline model that works on multiple datasets simultaneously by combining the data and labels from several re-id datasets and training the CNN with a softmax loss. Next, for each dataset, they perform a forward pass on all its samples and compute each neuron's average impact on the objective function. They then replace standard dropout with a deterministic 'domain guided dropout', which drops certain neurons during training to improve generalization, and continue training the CNN for several epochs. Some neurons are effective only for specific datasets and may be useless for others due to dataset biases; for instance, i-LIDS is the only dataset that contains pedestrians with luggage, so neurons that capture luggage features are useless for recognizing people in other datasets. Data augmentation is another family of techniques proposed to overcome the lack of training samples; it includes operations such as flipping, rotating, and shearing applied to the original images. Beyond those, [100] proposed a novel data augmentation technique for PReID in which a camera-style model generates training samples via style transfer learning.
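A minimal sketch of one such augmentation, horizontal flipping, on an image represented as a nested list of pixel rows (a simplification of real image tensors):

```python
def hflip(image):
    """Horizontally flip an image given as a nested list of pixel rows."""
    return [list(reversed(row)) for row in image]

# Toy 2x3 "image"; real inputs would be HxWx3 pixel arrays.
image = [
    [1, 2, 3],
    [4, 5, 6],
]
augmented = [image, hflip(image)]  # the original plus one flipped copy
print(augmented[1])
```

Flipping doubles the number of training samples per identity at essentially no cost, which is why it is one of the most common augmentations in PReID pipelines.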

3.2 Multi-Stream Network Structure: Pairwise and Triplet Feature-Learning Methods

DL models for the PReID problem still suffer from the lack of training samples, because some PReID datasets provide only a few images per individual (e.g., the VIPeR dataset [26] contains only a pair of images per person), which makes models prone to overfitting. Siamese networks have been developed to address this [41].

Siamese network models have been widely employed in PReID due to the lack of training instances in this research area. A Siamese neural network (SNN) is a type of NN architecture that contains two or more identical sub-networks (identical meaning that the sub-networks share the same architecture, parameters, and weights, a.k.a. shared weight parameters). A Siamese network can be employed as a pairwise model (two sub-networks, e.g., [17, 31]) or a triplet model (three sub-networks [14, 46]). The output of a Siamese model, produced at the top of the network, is a similarity score; for instance, a pairwise feature-learning model takes two images as input and outputs a similarity score between them. Such a Siamese model can be an excellent solution for training on existing PReID datasets [98] when few training samples are available.

Figure 4: Pairwise-loss feature-learning model.
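The shared-weight principle can be sketched as follows: both inputs pass through the same embedding function (here a toy single linear layer standing in for a CNN sub-network), and the model outputs a single dissimilarity score. The weight values are illustrative, not a trained model.

```python
import math

def embed(x, weights):
    """Toy embedding network: a single linear layer (stand-in for a CNN)."""
    return [sum(wi * xi for wi, xi in zip(wrow, x)) for wrow in weights]

def siamese_distance(x1, x2, weights):
    """Both inputs pass through the SAME weights (shared parameters);
    the output is one dissimilarity score between the two embeddings."""
    e1, e2 = embed(x1, weights), embed(x2, weights)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(e1, e2)))

weights = [[1.0, 0.0], [0.0, 1.0]]  # hypothetical shared parameters
same_pair = siamese_distance([1.0, 2.0], [1.0, 2.0], weights)
diff_pair = siamese_distance([1.0, 2.0], [3.0, 0.0], weights)
print(same_pair, diff_pair)  # identical inputs give distance 0
```

Because the two branches share parameters, every gradient update learned from one image of a pair automatically applies to the other branch, which is what makes the architecture data-efficient.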

These models can be divided in the same way as the single-stream models discussed in Section 3.1 and shown in Figure 2. The rest of this section is organized in three subsections. First, we briefly explain the similarity functions used in DL-based PReID methods; these are essential for computing the similarity between the outputs of the two or three sub-networks for the given input images during training. The second subsection describes published DL-based work on pairwise methods, followed by triplet methods in the third. Both pairwise and triplet models follow the single-stream feature-learning approaches.

3.2.1 Similarity functions

In order to measure the similarity between the input images within a Siamese network, an objective function is typically utilized. An objective function (a.k.a. loss or cost function) maps a set of values onto a single real number, which represents the cost associated with those values. Techniques like NNs seek to minimize a loss function optimally. The loss function used for a Siamese model depends on the type of model (pairwise or triplet).

In the case of a pairwise model, let X = {x_i} be a set of images with corresponding identity labels {y_i}. The training data can be formulated as pairs

  (x_i, x_j, y_ij),  where y_ij = 1 if y_i = y_j (matched pair) and y_ij = 0 otherwise.   (1)

The goal is to minimize the relative distance between the matched pairs and to maximize it for the mismatched pairs, given a pair of image representations f(x_i) and f(x_j) and the corresponding label y_ij.

Among the existing loss functions for pairwise models, the hinge and contrastive losses have been widely utilized. The hinge loss refers to maximum-margin classification: its output becomes zero once matched pairs are close and mismatched pairs are separated by at least the margin m. With d_ij = ||f(x_i) - f(x_j)||_2, it can be defined as

  l_hinge(x_i, x_j, y_ij) = y_ij * d_ij + (1 - y_ij) * max(0, m - d_ij)   (2)

The cosine similarity loss maximizes the cosine value for positive pairs, reducing the angle between them, while penalizing negative pairs whose cosine value exceeds the margin. With s_ij = cos(f(x_i), f(x_j)),

  l_cos(x_i, x_j, y_ij) = y_ij * (1 - s_ij) + (1 - y_ij) * max(0, s_ij - m)   (3)

A Contrastive loss function learns a meaningful mapping from a high- to a low-dimensional space by mapping similar input vectors to nearby points on the output manifold and dissimilar vectors to distant points. Accordingly, the loss can be computed as:

$$E_{ct}(x_i, x_j) = \ell_{ij}\, d_{ij}^2 + (1 - \ell_{ij})\, \max\big(0,\; m - d_{ij}\big)^2, \qquad (4)$$

where $m$ is a margin parameter acting as a boundary, $\ell_{ij} = 1$ for matched pairs and $0$ for mismatched pairs, and $d_{ij}$ is the distance between the two feature vectors, computed as $d_{ij} = \|f(x_i) - f(x_j)\|_2$. For each of the above pairwise loss functions, the average total loss over the $N$ training pairs is computed as

$$L = \frac{1}{N} \sum_{(i,j)} E(x_i, x_j). \qquad (5)$$
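As an illustration, the contrastive loss of Eq. (4) and the batch average of Eq. (5) can be sketched as follows. This is a minimal NumPy example of our own (the margin value and the embeddings are made up), not code from any surveyed work.

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    # Euclidean distance between the two embeddings (Eq. 4)
    d = np.linalg.norm(f1 - f2)
    # Matched pairs (same=1) are pulled together; mismatched pairs
    # (same=0) are pushed apart until they exceed the margin.
    return same * d**2 + (1 - same) * max(0.0, margin - d)**2

def batch_loss(pairs, labels, margin=1.0):
    # Average total loss over N training pairs (Eq. 5)
    return sum(contrastive_loss(a, b, y, margin)
               for (a, b), y in zip(pairs, labels)) / len(pairs)

# A close matched pair yields a small loss; a mismatched pair already
# farther apart than the margin yields zero loss.
a = np.array([0.10, 0.20])
b = np.array([0.10, 0.25])
c = np.array([2.00, 2.00])
```

Note that only the mismatched branch is bounded by the margin: once dissimilar embeddings are separated by more than $m$, they contribute nothing to the gradient.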

For a triplet model, an objective function is used to train the network so as to create a margin between the distance of the positive pair and the distance of the negative pair. For this type of Siamese model, a softmax layer is employed at the top of the network on both distance outputs. Let

$$T = \{(x_i^a, x_i^p, x_i^n)\}_{i=1}^{N}$$

be a set of triplet images, in which $x_i^a$ and $x_i^p$ are images of the same person, and $x_i^n$ is an image of a different person. A triplet loss function is used to train the network so that the distance between $x_i^a$ and $x_i^p$ is smaller than the distance between the mismatched pair $x_i^a$ and $x_i^n$ in the learned feature space. In triplet-based models, the Euclidean distance is commonly used as the metric function; under the L2 metric it is denoted $\|F_W(x_i) - F_W(x_j)\|_2$, where $W$ are the network parameters and $F_W(x)$ represents the network output for image $x$. The difference in distance between the matched pair and the mismatched pair of a single triplet unit $(x_i^a, x_i^p, x_i^n)$ is computed as:

$$E(W; x_i^a, x_i^p, x_i^n) = \|F_W(x_i^a) - F_W(x_i^p)\|_2^2 - \|F_W(x_i^a) - F_W(x_i^n)\|_2^2. \qquad (6)$$

Moreover, the Hinge loss is another widely used distance measure. It is a convex approximation of the 0-1 ranking error loss, which approximates the model's violation of the ranking order specified in the triplet:

$$L = \sum_{i=1}^{N} \max\big(0,\; m + \|F_W(x_i^a) - F_W(x_i^p)\|_2^2 - \|F_W(x_i^a) - F_W(x_i^n)\|_2^2\big), \qquad (7)$$

where $m$ is a margin parameter that regularizes the gap between the distances of the two image pairs $(x_i^a, x_i^p)$ and $(x_i^a, x_i^n)$, and $\|\cdot\|_2$ is the Euclidean distance.
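To make the hinge-based triplet loss concrete, here is a minimal NumPy sketch for one triplet unit; the margin and the embeddings are illustrative values of our own, not taken from any surveyed model.

```python
import numpy as np

def triplet_hinge_loss(anchor, positive, negative, margin=0.5):
    # Squared L2 distances of the matched and mismatched pairs (Eq. 6)
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # Hinge on the ranking violation (Eq. 7): zero once the negative
    # is farther from the anchor than the positive by at least `margin`.
    return max(0.0, margin + d_pos - d_neg)

# Well-ordered triplet: the same-person image is much closer.
a = np.array([0.0, 0.0])   # anchor
p = np.array([0.1, 0.0])   # same person
n = np.array([1.0, 1.0])   # different person
```

Swapping the roles of `p` and `n` in this example produces a large loss, since the triplet then violates the required ranking order.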

3.2.2 Pairwise-loss methods

Several works rely upon pairwise modeling in order to learn features from a small set of training data. To this aim, some works proposed novel deep learning architectures for learning in a pairwise manner. This type of learning treats the PReID task as a binary classification problem [41, 1, 81]. In [92], a Siamese pair-based model takes two images as the input of two sub-networks, which are locally connected at the first convolutional layer. A linear SVM is employed on top of the network, instead of a softmax activation function, to measure the similarity of the input image pair as the output of the network.

In [89], a siamese neural network (SNN) was designed to learn pairwise similarity. Each input image of a pair was first partitioned into three overlapping horizontal parts. The part pairs are matched through three independent Siamese networks and are finally combined at score level. Li et al. [41] proposed a deep filter-pairing NN to encode photometric transformations across camera views. A patch-matching layer is further added to the network to match multiple convolutional feature maps of a pair of images across different horizontal stripes. Later, Ahmed et al. [1] improved the pair-based Siamese model: the network takes pairs of images as input and outputs the probability that the two images refer to the same or different persons. The generated feature maps are passed through a max-pooling kernel to another convolutional layer followed by a max-pooling layer in order to decrease the size of the feature map. A cross-input neighborhood layer then computes the differences between features at neighboring locations of the other image.

New Architectures: Wang et al. [72] developed a CNN model to jointly learn a single-image representation (SIR) and a cross-image representation (CIR) for PReID. Their methodology relied on two separate models with a similar deep structure for comparing pairwise and triplet images (the latter is explained in the next section). Each model is configured with dedicated sub-networks for SIR and CIR learning, plus another sub-network shared by both. For the pairwise comparison, the Euclidean distance is used as the loss function to learn the SIR, while CIR learning is formulated as a binary classification problem solved with a standard SVM; the combination of both losses forms the overall pairwise loss. For the triplet comparison, the SIR loss makes the distance between matched pairs lower than that between mismatched pairs, while CIR learning is formulated as a learning-to-rank problem solved with RankSVM; again, the combination of both is used as the overall triplet loss. The shared sub-networks share parameters during the training stage.

Wang et al. [77] proposed a pairwise Siamese model by embedding a metric learning method at the top of the network to learn spatiotemporal features. The network takes a pair of images in order to obtain CNN features and outputs whether two images belong to the same person by employing the quadratic discriminant analysis method.

To handle multi-view person images and learn in a pairwise manner, a deep multi-view feature learning (DMVFL) model was proposed in [70] to combine handcrafted features (e.g., local maximal occurrence (LOMO) [43]) with deep features generated by a CNN-based model, embedding a metric-distance method at the top of the network. To this aim, the cross-view quadratic discriminant analysis (XQDA) metric learning method [43] is utilized to jointly learn handcrafted and DL features; in this manner, it is possible to investigate how handcrafted features are influenced by deep CNN features. Further, a two-channel CNN with a new component named Pyramid Person Matching Network (PPMN) was proposed in [49], with the same architecture as GoogLeNet. The network takes a pair of images and extracts semantic features through convolutional layers; the Pyramid Matching Module then learns the similarity between semantic features based on multi-scale convolutional layers. A Strict Pyramidal Deep Metric Learning approach was proposed in [31], in which a Siamese network is composed of two strict pyramidal CNN blocks with shared parameters, producing salient features of an individual as the output of the network. The objective was a simple network structure that performs well despite fewer parameters, trading off lower computational and memory costs against other NNs. In [61], a Siamese pairwise model was designed as a re-ranking approach, where a CNN model generates high-level features. The feature maps obtained from both sub-networks are mapped into a single feature vector, which is then divided into feature groups; the output of the network is equivalent to the number of feature groups, presented as the similarity scores corresponding to each pair. Shen et al. [62]

addressed the issue of spatial information in a person's appearance. They utilized Kronecker-product matching (KPM) to align the feature maps of each individual and used them to generate matching-confidence maps between pairs of images. At the beginning of their model, two CNNs generate feature maps for each image of the pair separately. The obtained feature maps are then used in the KPM method to generate warped feature maps. The difference between the feature maps of the first image and the warped feature maps is computed by simple element-wise subtraction. Also, self-residual attention learning is applied to the feature maps of the first image. Finally, the computed feature maps are merged into a single feature map by element-wise addition, which is followed by an element-wise square, batch normalization, and a softmax layer that yields the final probability score for the pair of images.

A pairwise multi-task DL-based model was proposed in [52] that uses a separate softmax for each auxiliary task: identification, pose labeling, and each attribute-labeling task. A CNN is used to generate image representations from a single fixed-size input image. The model takes a pair of images and embeds a specific cost function for each task; for instance, a softmax regression cost function is used for each task, in which a multi-class linear classifier calculates the probability of a person's identity. The cost functions are minimized using SGD with respect to the weights of each task, and the linear combination of all cost functions is the final cost function of the network. The network is composed of three convolutional layers and two max-pooling layers, followed by a final fully-connected layer as the output. The hyperbolic tangent activation function is used between convolutional layers, while a linear layer connects the final convolutional layer to the fully-connected layer. The activation of the neurons in the fully-connected layer gives the feature representation of the input image; dropout regularization is used between the final convolutional layer and the fully-connected layer.

A novel Siamese Long Short-Term Memory (LSTM) based architecture was proposed in [71], which aims to leverage contextual dependencies by selecting relevant contexts to enhance the discriminative capability of local features. The pairwise Siamese model contains six LSTM models per sub-network. First, each image is divided into six non-overlapping horizontal parts. From each part, an image representation is extracted using two state-of-the-art descriptors (LOMO and Color Names). Each feature vector is separately fed to a single LSTM network with shared parameters. The outputs of the LSTM networks are combined, and the relative distance between the subnets is computed by the contrastive loss function. The whole pairwise network is trained with the mini-batch SGD algorithm.

Part- and Body-Based Features Fusion: Furthermore, some works have also considered multi-scale and multi-part feature learning from person images. Wang et al. [75] designed an ensemble of multi-scale and multi-part CNNs to jointly learn image representations and a similarity measure. The network takes two person images as input and derives full-scale, half-scale, top-part, and middle-part image pairs from the originals, outputting the similarity score of the image pair. The architecture is composed of four separate sub-CNNs, each embedding images of a different scale or part: the first sub-CNN takes the full images, the second takes down-sampled images, and the next two take the top part and middle part as input, respectively. All four sub-CNNs consist of two convolutional layers, two max-pooling layers, a fully-connected layer, and one L2-normalization layer. The image representation from each sub-CNN is used to calculate a similarity score, and the final score is the average of the four separate scores. A ReLU activation function is used in each layer, and dropout is applied in the fully-connected layer to reduce the risk of over-fitting.

Liu et al. [45] integrated a soft attention-based model into a Siamese network, focusing on local parts of the input images within the pair-based Siamese model.

Multi-Scale Learning Models: A multi-scale learning model was proposed in [54] that learns discriminant features from multiple scales of an image at different resolution levels. A saliency-based learning strategy is adopted to learn importance weights for the scales. In parallel with the pairwise Siamese model, which distinguishes whether a pair of images belongs to the same person, a tied layer is used between corresponding layers of each branch to verify the identity of an individual. The designed model consists of five components: tied convolutional layers, multi-scale stream layers, a saliency-based learning fusion layer, a verification sub-network, and a classification sub-network. The same authors later proposed another approach in [55] to learn pedestrian features from filters at different resolution levels over multiple locations and spatial scales.

A patch-based feature learning method was proposed in [63]. A pairwise Siamese network takes a pair of CNN features as input and outputs the similarity value between them by applying cosine and Euclidean distance functions. Each sub-network contains a CNN-based model to obtain deep features for each input image; each image is then split into three overlapping color patches. The deep network is built with three branches, each taking a single patch as input; finally, the three branches are merged by a fully-connected layer.

Some works attempted to adopt metric- and transfer-learning methods in pairwise feature-learning models. Chen et al. [9] proposed a deep ranking model to jointly learn image representations and similarities for comparing pairwise images. To this aim, a deep CNN is trained with a logistic activation function to assign a higher similarity score to the positive pair than to any negative pair in each ranking unit. A pair of images is first stitched horizontally to form a single image used as input, and the network returns a similarity score as output. A Deep Hybrid Similarity Learning (DHSL) model [101] based on a CNN was proposed to learn the similarity between pairs of images. This two-channel, ten-layer CNN learns pair feature vectors and discriminates input pairs so as to minimize the network's output for similar image pairs and maximize it for different ones. A new hybrid distance method using the element-wise absolute difference and the element-wise multiplication of the pair features is proposed to improve the CNN for similarity-metric learning.
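The hybrid-distance idea can be illustrated with a short NumPy sketch; this is our own reading of the element-wise operations, and the exact fusion used in [101] may differ.

```python
import numpy as np

def hybrid_pair_features(f1, f2):
    # Two element-wise comparison cues between the pair's feature
    # vectors: absolute difference and multiplication, concatenated.
    # A classifier on top of this vector then scores the pair.
    return np.concatenate([np.abs(f1 - f2), f1 * f2])

f1 = np.array([1.0, 2.0])
f2 = np.array([3.0, 2.0])
h = hybrid_pair_features(f1, f2)  # -> [2.0, 0.0, 3.0, 4.0]
```

The intuition is that the absolute difference captures how much the two features disagree, while the product captures where both are jointly active, giving the classifier complementary evidence.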

Transfer learning: This technique consists of fine-tuning the parameters of a network that has already been trained on a different dataset in order to adapt it to a new system. Franco et al. [24] proposed a coarse-to-fine approach to achieve generic-to-specific knowledge through transfer learning. The approach follows three steps: first, a hybrid network is trained to recognize a person; then another hybrid network is employed to discriminate the person's gender; finally, the outputs of the two networks are passed through the coarse-to-fine transfer-learning method to a pairwise Siamese network that accomplishes the final PReID goal by measuring the similarity between the two features. Later, the same authors proposed a different type of feature based on a convolutional covariance descriptor (CCF) [2], obtaining a set of local covariance matrices over the feature maps extracted by the hybrid network under the same strategy as the method above.

3.2.3 Triplet-loss methods

Several works proposed novel PReID systems based on deep learning architectures for learning in a triplet manner. Triplet models were mainly introduced for image-retrieval [74] and face-recognition [60] problems. Such a model takes three images of individuals, formed as a triplet unit, aiming to minimize the similarity distance between images of the same person and maximize it for a different one. Figure 5 shows a basic triplet model. Models of this type can either share weights between sub-networks or keep them independent.

Figure 5: Triplet-loss feature-learning model.

Ding et al. [21] presented the first work on the PReID task to adopt a triplet deep CNN-based model to produce robust feature representations from raw images. The network takes raw images as a triplet unit, with weights shared between the sub-networks, and aims to maximize the relative distance between pairs of images of the same person and of a different person under a triplet loss function. The model is trained with the SGD algorithm with respect to the output features of the network.

A learning approach was proposed in [94] to reformulate the PReID task as a multi-task problem, whereby the method is considered a joint system for image retrieval across disjoint camera views, learning deep features and hash functions jointly. A deep NN architecture was utilized to produce the hashing codes with a weight matrix, taking raw images as input. The network was trained in a triplet manner for similarity feature learning, enforcing that images of the same person have similar hash codes; for each triplet unit, it maximizes the margin between the matched pair and the mismatched pair. It uses an AlexNet pre-trained network consisting of ten layers: the first six layers form the convolution-pooling network with rectified linear activations and average pooling. They used 32, 64, and 128 kernels in the first, second, and third convolutional layers, with a stride of 2 pixels in every convolutional layer; the pooling stride is 1. The last four layers consist of two fully-connected layers, a tanh-like layer that generates the hash codes, and an element-wise connected layer that controls the hash-code length by weighting each bin of the hashing codes. The first fully-connected layer has 512 units, and the output of the second fully-connected layer equals the length of the hash code. The activation of the second fully-connected layer is the tanh-like function, while the ReLU activation is adopted for the others.
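The hashing step described above can be sketched as follows; the code size, the projection matrix, and the thresholding rule here are our own illustrative assumptions, not the exact layers of [94].

```python
import numpy as np

def hash_code(features, W):
    # Tanh-like activation squashes the projection into (-1, 1);
    # thresholding at zero then yields a binary hash code, so that
    # similar deep features tend to map to similar codes.
    h = np.tanh(W @ features)
    return (h > 0).astype(int)

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 128))   # 16-bit code from 128-d deep features
x = rng.standard_normal(128)
code = hash_code(x, W)               # a vector of 16 bits in {0, 1}
```

At retrieval time, Hamming distance between such codes replaces the much more expensive distance between real-valued feature vectors.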

Part- and Body-based features: Cheng et al. [14] proposed a triplet-based model in which the network takes a triplet unit of images as input and jointly learns global full-body and local body-part features as a robust representation; the fusion of these two types of features at the top of the network is the output. The CNN model begins with a convolutional layer whose output is divided into four equal parts, each forming the first layer of an independent body-part channel that learns features from that body part. The four body-part channels and the full-body channel constitute five independent channels trained separately from each other (with no parameter sharing). At the top of the network, the outputs of the five channels are concatenated into a single vector and passed through a final fully-connected layer. Bai et al. [3] proposed a Deep-Person model to generate global and part-based feature representations of a person's body. Each image of the triplet unit is fed into a backbone CNN with shared parameters to generate low-level features. The backbone's output features are fed into a two-layer bidirectional LSTM to generate a part-based feature representation; an LSTM is adopted for its ability to discriminate part representations with contextual information, handling misalignment through a sequence-level person representation. At the same time, the output features are fed into another branch, with global average pooling, a fully-connected layer, and a softmax layer, for global feature learning. Finally, a further branch learns similarity distances under a triplet loss during the training of the whole network. A coherent DL approach able to cover a whole camera network was introduced in [44].
The proposed approach seeks the globally optimal matching across different cameras. Deep features are generated over the full body and body parts under a triplet framework, in which one image of each triplet unit is a sample from one camera view while the other images come from other camera views. Once the deep features are generated, cosine similarity is used to obtain similarity scores between them, and gradient descent is then adopted to obtain the final optimal association. All calculations are involved in both forward and backward propagation to update the CNN features.

Attribute-based models: An attribute-based method proposed by Chen et al. [12] uses embedding learning to derive attributes and identity annotations from a person's appearance, whereby two embedding-based CNNs are learned simultaneously. The pre-defined attributes mainly rely on the pedestrian's appearance in order to extract cues shared by the same person, e.g., whether a pedestrian wears a red T-shirt and/or a black backpack. An improved triplet loss is used to learn their fusion. The proposed model is robust to the spatial variations in pedestrian-attribute appearance caused by pose and viewpoint changes. A multi-image re-ranking approach was proposed in [91], in which an image pool was formed to collect the images of each identity. It uses a CNN-based model in a triplet manner, where the feature vectors obtained from the network during the re-ranking step are used to compute similarities between the image pools and the templates in the gallery set.

Multi-scale Learning: Multi-part and multi-scale approaches have also been considered in a triplet manner. Liu et al. [46] proposed a multi-scale triplet network employing a single deep CNN and two shallow NNs (producing less-invariant, low-level appearance features from the images) with shared parameters between them. The deep network has five convolutional layers, five max-pooling layers, two local normalization layers, and three fully-connected layers, while each shallow network is composed of two convolutional layers followed by two pooling layers. The outputs of the networks are combined in an embedding layer to generate the final feature representation. Wu et al. [79] proposed an attention-driven multi-scale deep learning technique for the joint learning of low- and high-level features. The proposed deep architecture consists of five branches: the first branch learns deep features via an attention block; a triplet loss and four classification losses are adopted to learn the global descriptor through the second and third branches, respectively; and multi-scale feature learning is applied in the fourth and fifth branches.

Semi-supervised Approach: A novel semi-supervised deep attribute learning approach was proposed in [68], which contains three deep CNN-based networks, the whole being trained with an attribute triplet loss. The first network is trained on an independent dataset to predict the predefined attributes. The second network is trained on another dataset plus the attribute labels predicted by the first sub-network. Finally, the last network is used to distinguish attributes and is trained on yet another dataset with individual class labels. The proposed method is more reliable in real-world PReID scenarios, since the solution can be applied to an unknown target dataset.

4 Results and Open Issues

Rank-1 Recognition Rate on specific Datasets
Ref.# Year Model VIPeR CUHK01 CUHK03 i-LIDS PRID-2011 CAVIAR MARS Market-1501
Li [41] 2014 20.65
Zhang [92] 2014 12.50 7.20
Yi [89] 2014 28.23
Ahmed [1] 2015 34.81 65.00 54.74
Ding [21] 2015 40.50 52.10
Zhang [94] 2015 18.74
Shi [64] 2015 40.91 86.59 59.05
Liu [45] 2016 81.40 65.65 48.24
Cheng [14] 2016 47.80 53.70 22.00
Chen [9] 2016 52.85 57.28 53.60
Wu [83] 2016 51.06 55.51 66.62
Xiao [84] 2016 38.60 66.60 75.30 64.60 64.00
Wu [81] 2016 71.14 64.90 37.21
Li [38] 2016 59.56
Shi [63] 2016 40.91 69.00
Varior [71] 2016 42.40 57.30 61.60
Wang [75] 2016 40.51 57.02 55.89
Wang [72] 2016 29.75 58.93 43.36
Wang [72] 2016 35.13 65.21 51.33
Franco [24] 2016 44.94 63.51 62.30 53.33
Wu [82] 2016 44.11 67.12 48.15
Wang [77] 2016 38.28 27.92
Su [68] 2016 43.50 22.60
Mclaughlin [52] 2016 33.60
McLaughlin [51] 2016 85.00 70.00
Liu [46] 2016 55.40
Iodice [31] 2016 18.04
Su [66] 2017 51.27 78.29 63.14
Li [36] 2017 38.08 74.21 71.77 80.31
Franco [2] 2017 63.85 63.90 55.85
Qian [54] 2017 43.30 79.01 76.87 41.00 65.00
Zhu [101] 2017 44.87
Cheng [13] 2017 70.09 84.70 83.6
Tao [70] 2017 46.00
Mao [49] 2017 45.82 93.10 85.50
Lin [44] 2017 81.15
Bai [3] 2017 91.50 92.31
Chung [17] 2017 60.00 78.00
Chen [11] 2017 50.30 74.50 84.30 68.70
Li [37] 2018 44.70 26.70 49.40 43.80 63.70
Chen [7] 2018 84.08 92.50 93.30
Chi [67] 2018 45.40 56.40 21.00
Sun [69] 2018 87.05
Wang [76] 2018 38.50 34.80 58.20
Xu [86] 2018 88.07 91.39 88.69
Chen [10] 2018 86.70 88.90
Shen [61] 2018 94.90 82.50
Huang [30] 2018 54.65 78.83 81.28 87.96
Shen [62] 2018 93.40 90.10
Chen [12] 2018 65.00
Cheng [19] 2018 70.90 84.70 83.60
Chen [8] 2018 90.20 93.50
Li [42] 2018 44.40 91.20
Yu [90] 2018 34.15 69.00 45.82 60.24
Ding [20] 2019 42.60 86.00
Yuan [91] 2019 66.00 81.00
Xiong [85] 2019 63.50 92.50
Zhong [100] 2019 89.49
Yao [87] 2019 56.65 82.75 88.02
Chen [6] 2019 94.50
Zheng [96] 2019 45.88 87.33
Zheng [93] 2019 88.00 95.30 87.20
Qian [55] 2019 87.55 95.84 95.34
Wu [79] 2019 81.00 95.50
Table 2: Comparison of existing DL models based on Rank-1 recognition rates for PReID. The model types (single, pairwise, and triplet) are colored blue, red, and green, respectively. This table is best viewed in color.

Performance Measure: To evaluate the performance of a PReID system, the cumulative matching characteristic (CMC) curve is typically calculated and reported as a standard recognition rate at which individuals are correctly identified within a sorted, ranked list. In other words, a CMC curve gives the probability that the correct identity is within the first $k$ ranks, where $1 \le k \le N$ and $N$ is the total number of template images involved during the testing of a PReID system. By definition, the CMC curve increases with $k$, and eventually equals 1 for $k = N$.
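For reference, Rank-k accuracy from a CMC evaluation can be computed from the rank position at which each query's correct identity is retrieved; the ranks in the sketch below are made-up values for illustration.

```python
import numpy as np

def cmc_curve(correct_ranks, max_rank):
    # correct_ranks[i] is the 1-based rank at which query i's true
    # identity appeared in the sorted gallery list.
    ranks = np.asarray(correct_ranks)
    # CMC value at rank k = fraction of queries matched within top k.
    return np.array([(ranks <= k).mean() for k in range(1, max_rank + 1)])

# Four queries whose correct matches appeared at ranks 1, 1, 3, and 2.
cmc = cmc_curve([1, 1, 3, 2], max_rank=3)  # -> [0.5, 0.75, 1.0]
# cmc[0] is the Rank-1 rate of the kind compared in Table 2.
```

The curve is non-decreasing by construction and reaches 1 once `max_rank` covers the worst-ranked correct match.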

We attempted to collect the original CMC curves presented in each of the previously discussed works for the sake of a comprehensive comparison. However, the CMC curves of most of those works are not publicly available. We therefore list in Table 2 only the first-rank (Rank-1) recognition rates of existing deep PReID techniques from 2014 to date. Rank-1 has a higher importance in PReID because the system needs to recognize a person at first glance from the limited, hard-to-recognize data available. Further, we indicate the type of classification model (single, pairwise, or triplet), colored blue, red, and green, respectively. The globally best results among the methods are shown in bold. Moreover, Fig. 6 shows the Rank-1 recognition accuracy (%) over the years per dataset. In the following, we discuss and highlight the best methodologies and combinations of training algorithm, loss function, and optimizer for attaining significant PReID performance.

Best Result per Dataset: VIPeR is a small but challenging dataset; therefore, mostly single models are utilized. The best result for VIPeR to date, 56.65, was achieved by a single model, whereas the best results by pairwise and triplet models are 52.85 and 47.80, respectively. CUHK01 and CUHK03 are still challenging datasets; the best performance on them is given by pairwise models, 93.10 and 94.90, respectively. Triplet models show the second-best results on these datasets, whereas single models perform worse. On i-LIDS, the single model shows the best result, followed by pairwise and then triplet models, with 88.0, 85.0, and 66.0, respectively. A single model also shows the best result for PRID-2011, followed by a triplet model and then a pairwise model, with 95.30, 81.0, and 70.0. CAVIAR has been used only three times: twice with a pairwise model and once with a single model, with the best result achieved by a pairwise model. MARS has likewise been used three times, each time with a single model. Market-1501 is the most evaluated dataset, where the best results are shown by single, pairwise, and then triplet models, in that order.

Finally, only [44] has evaluated its methodology on the WARD dataset, achieving a nearly ideal Rank-1 rate that leaves little margin for future research. However, in surveillance, a near-perfect recognition rate is still needed to avoid anomalies.

Figure 6: Recognition accuracy on the benchmark data sets over the years.

Comparison: All solutions discussed here trained their models with SGD and the back-propagation algorithm. The majority of these works evaluated their models on the CUHK03 (33), Market-1501 (29), VIPeR (27), and CUHK01 (26) datasets. Table 2 shows that the VIPeR dataset has been one of the most used for the PReID problem since 2014, yet it remains one of the most challenging; one reason is its small size. Future models need to show good results on it, either directly or through transfer learning.

Good performance has been shown by various large models; however, in real scenarios, models need to be both fast and effective. In many video-surveillance systems, processing time is neglected for the sake of higher accuracy. It should always be taken into consideration, since running these deep models is very costly due to the requirement of powerful computers. Efforts have to be made to make methods more efficient, achieving high performance despite a smaller network size [31]. A network can be reduced by decreasing the number of layers or parameters, or by introducing a new scheme with lower connectivity. In [33, 57], the authors aimed at a trade-off between ranking accuracy and processing time by proposing a multi-stage ranking system, showing promising results.

Limitations and Future Directions: The PReID task still suffers from a lack of training data. Although this problem has been addressed with the joint help of pairwise Siamese networks and data augmentation, with promising performance, such techniques have a major limitation: augmentation introduces noise into the original dataset, which can affect the performance of the model in real-world scenarios. Large-scale datasets are needed to make models more reliable against challenges such as pose and viewpoint variations. In ML, a classification formulation is usually suited to problems with a limited number of classes and a massive number of instances per class; existing ML methods, such as artificial neural networks, solve classification problems under these conditions. In PReID, however, the number of persons, and hence the number of classes, grows day by day, while the number of instances acquired from camera networks is minimal. Therefore, PReID cannot be treated entirely as a standard classification problem, particularly with DNNs. In contrast to a traditional classification formulation, metric-learning methods, as discussed in this paper, can help overcome this limitation of deep models and are an appropriate tool for solving the PReID problem.

Many recent application areas, such as autonomous vehicles and aerial vehicles [29], [25], [65], use synthetic data for training. To date, no such dataset has been released for the PReID problem. Using a game engine to generate and release a synthetic dataset for PReID is a viable solution. This could help PReID researchers train models on synthetic data and then apply transfer learning on a smaller real dataset.

The techniques proposed for image-based PReID datasets have not yet been applied to video-based datasets in order to generate sequences of target samples. This can also be considered a future direction for the research community.

Cross-modality approaches are another active research topic in PReID. These methods enable a PReID system to exploit other modalities to obtain complementary information about pedestrians, which helps the system achieve better analysis and higher accuracy under different scenarios. For instance, [88] proposes a cross-modality method for joint learning of thermal and visible domains, addressing PReID at night-time. In this vein, domain knowledge transfer [53] is another interesting research line for PReID: developing a system that learns specific knowledge (e.g., attributes) on a labeled dataset and is then evaluated on unseen data would make PReID systems more deployable, including in open-set scenarios.

The models discussed in this paper mainly address a short-term scenario with two camera views; deploying a PReID system in the real world over a long-term scenario remains a daunting task. Moreover, only a few works have considered training models in a semi- or unsupervised manner, even though such learning is more realistic for real-world deployment. The existing semi- and unsupervised methods perform much worse than supervised models, which shows that much work remains in this area to reduce the need for labeled data or to aid in generating labels. Similarly, the open-set PReID scenario [73] has been addressed by only a few works, none of them deep learning-based. This challenging scenario also needs more attention in order to accomplish the main goal of PReID.

Each model in Table 2 shows good performance on one or two benchmark datasets but does not transfer to realistic PReID scenarios due to the limitations pointed out above. Out of the 60 models, only one ([93]) achieves the best results on more than one dataset. This highlights the weakness of current models and calls on researchers to develop models that perform well on at least 50% of the available datasets. A possible direction for future research is to propose specific rules/scenarios for combining all the datasets. Moreover, rather than releasing each new dataset in isolation, adding the new set of images to an existing dataset before evaluation would allow researchers to evaluate their models over a single, growing dataset.

An important practical factor when using DNN-based models is network size. DNNs have a large number of parameters, so trained models require considerable disk space, and pairwise or triplet models can result in even heavier trained models. Such models are hard to store on small embedded devices with limited memory. Therefore, models with fewer parameters and equal or better performance should be preferred.

5 Final Remarks

Person re-identification is a challenging task for intelligent video-surveillance systems, with open application areas in numerous fields. Despite its high importance, it still faces problems due to the poor performance of models in real-world scenarios. In this survey, we summarized recent advances with DNNs for the PReID task from 2014 to date, categorizing the models by their implementation details, and highlighted all the available datasets in this domain. The VIPeR dataset is the most challenging and widely used dataset thus far. To handle the lack of data, using synthetic data has been proposed as a viable solution. Finally, it is essential to consider that, besides enhancing the performance of the models, the size of the models (by reducing layers or the number of parameters) needs to be decreased without degrading the overall Rank-1 recognition rate.

Acknowledgements

This research was supported by São Paulo Research Foundation (FAPESP), under the thematic project "DéjàVu: Feature-Space-Time Coherence from Heterogeneous Data for Media Integrity Analytics and Interpretation of Events", grant number 18/05668-3.

References

  • [1] E. Ahmed, M. Jones, and T. K. Marks (2015) An improved deep learning architecture for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3908–3916. Cited by: §3.2.2, §3.2.2, Table 2.
  • [2] F. Alexandre and O. Luciano (2017) Convolutional covariance features: conception, integration and performance in person re-identification. Pattern Recognition 61, pp. 593–609. Cited by: §3.2.2, Table 2.
  • [3] X. Bai, M. Yang, T. Huang, Z. Dou, R. Yu, and Y. Xu (2017) Deep-person: learning discriminative deep features for person re-identification. arXiv preprint arXiv:1711.10658. Cited by: §3.2.3, Table 2.
  • [4] A. Bedagkar-Gala and S. K. Shah (2014) A survey of approaches and trends in person re-identification. Image and Vision Computing 32 (4), pp. 270–286. Cited by: §1, §1.
  • [5] H. O. S. D. Branch (2006) Imagery library for intelligent detection systems (i-lids). In The Institution of Engineering and Technology Conference on Crime and Security, 2006., pp. 445–448. Cited by: §2.
  • [6] B. Chen, Y. Zha, W. Min, and Z. Yuan (2019) Person re-identification based on linear classification margin. In IOP Conference Series: Materials Science and Engineering, Vol. 490, pp. 042006. Cited by: Table 2.
  • [7] D. Chen, H. Li, X. Liu, Y. Shen, J. Shao, Z. Yuan, and X. Wang (2018) Improving deep visual representation for person re-identification by global and local image-language association. In European Conference on Computer Vision (ECCV), pp. 56–73. Cited by: §3.1, Table 2.
  • [8] D. Chen, D. Xu, H. Li, N. Sebe, and X. Wang (2018) Group consistent similarity learning via deep crf for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8649–8658. Cited by: Table 2.
  • [9] S. Chen, C. Guo, and J. Lai (2016) Deep ranking for person re-identification via joint representation learning. IEEE Transactions on Image Processing 25 (5), pp. 2353–2367. Cited by: §3.2.2, Table 2.
  • [10] Y. Chen, X. Zhu, S. Gong, et al. (2018) Person re-identification by deep learning multi-scale representations. pp. 2590–2600. Cited by: §3.1, Table 2.
  • [11] Y. Chen, X. Zhu, W. Zheng, and J. Lai (2017) Person re-identification by camera correlation aware feature augmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (2), pp. 392–408. Cited by: Table 2.
  • [12] Y. Chen, S. Duffner, A. Stoian, J. Dufour, and A. Baskurt (2018) Deep and low-level feature based attribute learning for person re-identification. Image and Vision Computing 79, pp. 25–34. Cited by: §3.2.3, Table 2.
  • [13] D. Cheng, Y. Gong, X. Chang, W. Shi, A. Hauptmann, and N. Zheng (2018) Deep feature learning via structured graph laplacian embedding for person re-identification. Pattern Recognition 82, pp. 94–104. Cited by: §3.1, Table 2.
  • [14] D. Cheng, Y. Gong, S. Zhou, J. Wang, and N. Zheng (2016) Person re-identification by multi-channel parts-based cnn with improved triplet loss function. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1335–1344. Cited by: §3.2.3, §3.2, Table 2.
  • [15] D. S. Cheng, M. Cristani, M. Stoppa, L. Bazzani, and V. Murino (2011) Custom pictorial structures for re-identification.. In Bmvc, Vol. 1, pp. 6. Cited by: §2.
  • [16] L. Cheng, X. Jing, X. Zhu, F. Qi, F. Ma, X. Jia, L. Yang, and C. Wang (2018) A hybrid 2d and 3d convolution based recurrent network for video-based person re-identification. In International Conference on Neural Information Processing, pp. 439–451. Cited by: §3.1.
  • [17] D. Chung, K. Tahboub, and E. J. Delp (2017) A two stream siamese convolutional neural network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1983–1991. Cited by: §3.2, Table 2.
  • [18] A. Das, A. Chakraborty, and A. K. Roy-Chowdhury (2014) Consistent re-identification in a camera network. In European Conference on Computer Vision (ECCV), pp. 330–345. Cited by: §2.
  • [19] C. De, G. Yihong, C. Xiaojun, S. Weiwei, H. Alexander, and Z. Nanning (2018) Deep feature learning via structured graph laplacian embedding for person re-identification. Pattern Recognition 82, pp. 94–104. Cited by: §3.1, Table 2.
  • [20] G. Ding, S. Khan, Z. Tang, and F. Porikli (2019) Feature mask network for person re-identification. Pattern Recognition Letters. Cited by: §3.1, Table 2.
  • [21] S. Ding, L. Lin, G. Wang, and H. Chao (2015) Deep feature learning with relative distance comparison for person re-identification. Pattern Recognition 48 (10), pp. 2993–3003. Cited by: §3.2.3, Table 2.
  • [22] A. Ess, B. Leibe, and L. Van Gool (2007) Depth and appearance for mobile scene analysis. In IEEE 11th International Conference on Computer Vision (ICCV), pp. 1–8. Cited by: §2.
  • [23] M. Farenzena, L. Bazzani, A. Perina, V. Murino, and M. Cristani (2010) Person re-identification by symmetry-driven accumulation of local features. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2360–2367. Cited by: §1.
  • [24] A. Franco and L. Oliveira (2016) A coarse-to-fine deep learning for person re-identification. In IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1–7. Cited by: §3.2.2, Table 2.
  • [25] A. Gaidon, Q. Wang, Y. Cabon, and E. Vig (2016) Virtual worlds as proxy for multi-object tracking analysis. In CVPR, Cited by: §4.
  • [26] D. Gray and H. Tao (2008) Viewpoint invariant pedestrian recognition with an ensemble of localized features. In European Conference on Computer Vision (ECCV), pp. 262–275. Cited by: §1, §2, §3.2.
  • [27] M. Hirzer, C. Beleznai, P. M. Roth, and H. Bischof (2011) Person re-identification by descriptive and discriminative classification. In Image Analysis, pp. 91–102. Cited by: §2.
  • [28] M. Hirzer, P. M. Roth, and H. Bischof (2012) Person re-identification by efficient impostor-based metric learning. In The IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance (AVSS), pp. 203–208. Cited by: §1.
  • [29] S. Huang and D. Ramanan (2017) Expecting the unexpected: training detectors for unusual pedestrians with adversarial imposters. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4664–4673. Cited by: §4.
  • [30] Y. Huang, J. Xu, Q. Wu, Z. Zheng, Z. Zhang, and J. Zhang (2018) Multi-pseudo regularized label for generated data in person re-identification. IEEE Transactions on Image Processing. Cited by: §3.1, Table 2.
  • [31] S. Iodice, A. Petrosino, and I. Ullah (2016) Strict pyramidal deep architectures for person re-identification. In Advances in Neural Networks, pp. 179–186. Cited by: §1, §3.2.2, §3.2, Table 2, §4.
  • [32] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1.
  • [33] B. Lavi, G. Fumera, and F. Roli (2018-01) Multi-stage ranking approach for fast person re-identification. IET Computer Vision. Cited by: §4.
  • [34] Y. LeCun, K. Kavukcuoglu, and C. Farabet (2010) Convolutional networks and applications in vision. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 253–256. Cited by: §1.
  • [35] Q. Leng, M. Ye, and Q. Tian (2019) A survey of open-world person re-identification. IEEE Transactions on Circuits and Systems for Video Technology. Cited by: §1.
  • [36] D. Li, X. Chen, Z. Zhang, and K. Huang (2017) Learning deep context-aware features over body and latent parts for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 384–393. Cited by: §3.1, Table 2.
  • [37] M. Li, X. Zhu, and S. Gong (2018) Unsupervised person re-identification by deep learning tracklet association. pp. 737–753. Cited by: §3.1, Table 2.
  • [38] S. Li, X. Liu, W. Liu, H. Ma, and H. Zhang (2016) A discriminative null space based deep learning approach for person re-identification. In International Conference on Cloud Computing and Intelligence Systems (CCIS), pp. 480–484. Cited by: Table 2.
  • [39] W. Li and X. Wang (2013) Locally aligned feature transforms across views. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3594–3601. Cited by: §2.
  • [40] W. Li, R. Zhao, and X. Wang (2012) Human reidentification with transferred metric learning. In Asian Conference on Computer Vision, pp. 31–44. Cited by: §2.
  • [41] W. Li, R. Zhao, T. Xiao, and X. Wang (2014) Deepreid: deep filter pairing neural network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 152–159. Cited by: §2, §3.2.2, §3.2.2, §3.2, Table 2.
  • [42] W. Li, X. Zhu, and S. Gong (2018) Harmonious attention network for person re-identification. In Computer Vision and Pattern Recognition (CVPR), Vol. 1, pp. 2. Cited by: Table 2.
  • [43] S. Liao, Y. Hu, X. Zhu, and S. Z. Li (2015) Person re-identification by local maximal occurrence representation and metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2197–2206. Cited by: §3.2.2.
  • [44] J. Lin, L. Ren, J. Lu, J. Feng, and J. Zhou (2017) Consistent-aware deep learning for person re-identification in a camera network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 6, pp. 5771–5780. Cited by: §3.2.3, Table 2, §4.
  • [45] H. Liu, J. Feng, M. Qi, J. Jiang, and S. Yan (2017) End-to-end comparative attention networks for person re-identification. IEEE Transactions on Image Processing 26 (7), pp. 3492–3506. Cited by: §3.2.2, Table 2.
  • [46] J. Liu, Z. Zha, Q. Tian, D. Liu, T. Yao, Q. Ling, and T. Mei (2016) Multi-scale triplet cnn for person re-identification. In Proceedings of the 2016 ACM on Multimedia Conference, pp. 192–196. Cited by: §3.2.3, §3.2, Table 2.
  • [47] B. Ma, Y. Su, and F. Jurie (2012) Local descriptors encoded by fisher vectors for person re-identification. In European Conference on Computer Vision (ECCV), pp. 413–422. Cited by: §3.1.
  • [48] B. Ma, Y. Su, and F. Jurie (2014) Covariance descriptor based on bio-inspired features for person re-identification and face verification. Image and Vision Computing 32 (6), pp. 379–390. Cited by: §1.
  • [49] C. Mao, Y. Li, Z. Zhang, Y. Zhang, and X. Li (2017) Pyramid person matching network for person re-identification. In Asian Conference on Machine Learning, pp. 487–497. Cited by: §3.2.2, Table 2.
  • [50] N. Martinel and C. Micheloni (2012) Re-identify people in wide area camera network. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, pp. 31–36. Cited by: §2.
  • [51] N. McLaughlin, J. M. del Rincon, and P. Miller (2016) Recurrent convolutional network for video-based person re-identification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1325–1334. Cited by: Table 2.
  • [52] N. McLaughlin, J. M. del Rincon, and P. Miller (2017) Person re-identification using deep convnets with multi-task learning. IEEE Transactions on Circuits and Systems for Video Technology 27 (3), pp. 525–539. Cited by: §3.2.2, Table 2.
  • [53] N. Narayan, N. Sankaran, S. Setlur, and V. Govindaraju (2019) Learning deep features for online person tracking using non-overlapping cameras: a survey. Image and Vision Computing 89, pp. 222–235. Cited by: §4.
  • [54] X. Qian, Y. Fu, Y. Jiang, T. Xiang, and X. Xue (2017) Multi-scale deep learning architectures for person re-identification. pp. 5399–5408. Cited by: §3.2.2, Table 2.
  • [55] X. Qian, Y. Fu, T. Xiang, Y. Jiang, and X. Xue (2019) Leader-based multi-scale attention deep architecture for person re-identification. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §3.2.2, Table 2.
  • [56] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi (2016) Performance measures and a data set for multi-target, multi-camera tracking. In European Conference on Computer Vision (ECCV), pp. 17–35. Cited by: §2.
  • [57] F. Roli (2016) A multi-stage approach for fast person re-identification. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pp. 63–73. Cited by: §4.
  • [58] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. (2015) Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252. Cited by: §1.
  • [59] M. A. Saghafi, A. Hussain, H. B. Zaman, and M. H. M. Saad (2014) Review of person re-identification techniques. IET Computer Vision 8 (6), pp. 455–474. Cited by: §1.
  • [60] F. Schroff, D. Kalenichenko, and J. Philbin (2015) Facenet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 815–823. Cited by: §3.2.3.
  • [61] Y. Shen, H. Li, T. Xiao, S. Yi, D. Chen, and X. Wang (2018) Deep group-shuffling random walk for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2265–2274. Cited by: §3.2.2, Table 2.
  • [62] Y. Shen, T. Xiao, H. Li, S. Yi, and X. Wang (2018) End-to-end deep kronecker-product matching for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6886–6895. Cited by: §3.2.2, Table 2.
  • [63] H. Shi, Y. Yang, X. Zhu, S. Liao, Z. Lei, W. Zheng, and S. Z. Li (2016) Embedding deep metric for person re-identification: a study against large variations. In European Conference on Computer Vision (ECCV), pp. 732–748. Cited by: §3.2.2, Table 2.
  • [64] H. Shi, X. Zhu, S. Liao, Z. Lei, Y. Yang, and S. Z. Li (2015) Constrained deep metric learning for person re-identification. arXiv preprint arXiv:1511.07545. Cited by: Table 2.
  • [65] D. L. Smyth, J. Fennell, S. Abinesh, N. B. Karimi, F. G. Glavin, I. Ullah, B. Drury, and M. G. Madden (2018) A virtual environment with multi-robot navigation, analytics, and decision support for critical incident investigation. In IJCAI, Cited by: §4.
  • [66] C. Su, J. Li, S. Zhang, J. Xing, W. Gao, and Q. Tian (2017) Pose-driven deep convolutional model for person re-identification. In IEEE International Conference on Computer Vision (ICCV), pp. 3980–3989. Cited by: §3.1, Table 2.
  • [67] C. Su, F. Yang, S. Zhang, Q. Tian, L. S. Davis, and W. Gao (2018) Multi-task learning with low rank attribute embedding for multi-camera person re-identification. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (5), pp. 1167–1181. Cited by: §3.1, Table 2.
  • [68] C. Su, S. Zhang, J. Xing, W. Gao, and Q. Tian (2016) Deep attributes driven multi-camera person re-identification. In European Conference on Computer Vision (ECCV), pp. 475–491. Cited by: §3.2.3, Table 2.
  • [69] C. Sun, N. Jiang, L. Zhang, Y. Wang, W. Wu, and Z. Zhou (2018) Unified framework for joint attribute classification and person re-identification. In International Conference on Artificial Neural Networks, pp. 637–647. Cited by: §3.1, Table 2.
  • [70] D. Tao, Y. Guo, B. Yu, J. Pang, and Z. Yu (2018) Deep multi-view feature learning for person re-identification. IEEE Transactions on Circuits and Systems for Video Technology 28 (10), pp. 2657–2666. Cited by: §3.2.2, Table 2.
  • [71] R. R. Varior, B. Shuai, J. Lu, D. Xu, and G. Wang (2016) A siamese long short-term memory architecture for human re-identification. In European Conference on Computer Vision (ECCV), pp. 135–153. Cited by: §3.2.2, Table 2.
  • [72] F. Wang, W. Zuo, L. Lin, D. Zhang, and L. Zhang (2016) Joint learning of single-image and cross-image representations for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1288–1296. Cited by: §3.2.2, Table 2.
  • [73] H. Wang, X. Zhu, T. Xiang, and S. Gong (2016) Towards unsupervised open-set person re-identification. In 2016 IEEE International Conference on Image Processing (ICIP), pp. 769–773. Cited by: §4.
  • [74] J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu (2014) Learning fine-grained image similarity with deep ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1386–1393. Cited by: §3.2.3.
  • [75] J. Wang, Z. Wang, C. Gao, N. Sang, and R. Huang (2017) DeepList: learning deep features with adaptive listwise constraint for person re-identification. IEEE Transactions on Circuits and Systems for Video Technology 27 (3), pp. 513–524. Cited by: §3.2.2, Table 2.
  • [76] J. Wang, X. Zhu, S. Gong, and W. Li (2018) Transferable joint attribute-identity deep learning for unsupervised person re-identification. pp. 2275–2284. Cited by: §3.1, Table 2.
  • [77] S. Wang, C. Zhang, L. Duan, L. Wang, S. Wu, and L. Chen (2016) Person re-identification based on deep spatio-temporal features and transfer learning. In International Joint Conference on Neural Networks (IJCNN), pp. 1660–1665. Cited by: §3.2.2, Table 2.
  • [78] L. Wei, S. Zhang, W. Gao, and Q. Tian (2018) Person transfer GAN to bridge domain gap for person re-identification. In Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on, Cited by: §2.
  • [79] D. Wu, C. Wang, Y. Wu, and D. Huang (2019) Attention deep model with multi-scale deep supervision for person re-identification. arXiv preprint arXiv:1911.10335. Cited by: §3.2.3, Table 2.
  • [80] D. Wu, S. Zheng, X. Zhang, C. Yuan, F. Cheng, Y. Zhao, Y. Lin, Z. Zhao, Y. Jiang, and D. Huang (2019) Deep learning-based methods for person re-identification: a comprehensive review. Neurocomputing. Cited by: §1.
  • [81] L. Wu, C. Shen, and A. v. d. Hengel (2016) PersonNet: person re-identification with deep convolutional neural networks. arXiv preprint arXiv:1601.07255. Cited by: §3.2.2, Table 2.
  • [82] L. Wu, C. Shen, and A. van den Hengel (2017) Deep linear discriminant analysis on fisher networks: a hybrid architecture for person re-identification. Pattern Recognition 65, pp. 238–250. Cited by: §3.1, Table 2.
  • [83] S. Wu, Y. Chen, X. Li, A. Wu, J. You, and W. Zheng (2016) An enhanced deep feature representation for person re-identification. In IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1–8. Cited by: §3.1, Table 2.
  • [84] T. Xiao, H. Li, W. Ouyang, and X. Wang (2016) Learning deep feature representations with domain guided dropout for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1249–1258. Cited by: §3.1, Table 2.
  • [85] F. Xiong, Y. Xiao, Z. Cao, K. Gong, Z. Fang, and J. T. Zhou (2019) Good practices on building effective cnn baseline model for person re-identification. In Tenth International Conference on Graphics and Image Processing (ICGIP 2018), Vol. 11069, pp. 110690I. Cited by: Table 2.
  • [86] J. Xu, R. Zhao, F. Zhu, H. Wang, and W. Ouyang (2018) Attention-aware compositional network for person re-identification. pp. 2119–2128. Cited by: §3.1, Table 2.
  • [87] H. Yao, S. Zhang, R. Hong, Y. Zhang, C. Xu, and Q. Tian (2019) Deep representation learning with part loss for person re-identification. IEEE Transactions on Image Processing 28 (6), pp. 2860–2871. Cited by: Table 2.
  • [88] M. Ye, Z. Wang, X. Lan, and P. C. Yuen (2018) Visible thermal person re-identification via dual-constrained top-ranking.. In IJCAI, pp. 1092–1099. Cited by: §4.
  • [89] D. Yi, Z. Lei, and S. Z. Li (2014) Deep metric learning for practical person re-identification. arXiv preprint arXiv:1407.4979. Cited by: §3.2.2, Table 2.
  • [90] H. Yu, A. Wu, and W. Zheng (2018) Unsupervised person re-identification by deep asymmetric metric embedding. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §3.1, Table 2.
  • [91] M. Yuan, D. Yin, J. Ding, Z. Zhou, C. Zhu, R. Zhang, and A. Wang (2019) A multi-image joint re-ranking framework with updateable image pool for person re-identification. Journal of Visual Communication and Image Representation 59, pp. 527–536. Cited by: §2, §3.2.3, Table 2.
  • [92] G. Zhang, J. Kato, Y. Wang, and K. Mase (2014) People re-identification using deep convolutional neural network. In International Conference on Computer Vision Theory and Applications (VISAPP), Vol. 3, pp. 216–223. Cited by: §3.2.2, Table 2.
  • [93] R. Zhang, J. Li, H. Sun, Y. Ge, P. Luo, X. Wang, and L. Lin (2019) SCAN: self-and-collaborative attention network for video person re-identification. IEEE Transactions on Image Processing. Cited by: Table 2, §4.
  • [94] R. Zhang, L. Lin, R. Zhang, W. Zuo, and L. Zhang (2015) Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification. IEEE Transactions on Image Processing 24 (12), pp. 4766–4779. Cited by: §3.2.3, Table 2.
  • [95] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian (2016) MARS: a video benchmark for large-scale person re-identification. Springer. Cited by: §2.
  • [96] L. Zheng, Y. Huang, H. Lu, and Y. Yang (2019) Pose invariant embedding for deep person re-identification. IEEE Transactions on Image Processing. Cited by: Table 2.
  • [97] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian (2015) Scalable person re-identification: a benchmark. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1116–1124. Cited by: §2.
  • [98] L. Zheng, Y. Yang, and A. G. Hauptmann (2016) Person re-identification: past, present and future. arXiv preprint arXiv:1610.02984. Cited by: §3.2.
  • [99] M. Zheng, S. Karanam, and R. J. Radke (2018) RPIfield: a new dataset for temporally evaluating person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1893–1895. Cited by: §2.
  • [100] Z. Zhong, L. Zheng, Z. Zheng, S. Li, and Y. Yang (2019) Camstyle: a novel data augmentation method for person re-identification. IEEE Transactions on Image Processing 28 (3), pp. 1176–1190. Cited by: §3.1, Table 2.
  • [101] J. Zhu, H. Zeng, S. Liao, Z. Lei, C. Cai, and L. Zheng (2018) Deep hybrid similarity learning for person re-identification. IEEE Transactions on Circuits and Systems for Video Technology 28 (11), pp. 3183–3193. Cited by: §3.2.2, Table 2.