A two-stream Re-id CNN network
Pedestrian misalignment, which mainly arises from detector errors and pose variations, is a critical problem for a robust person re-identification (re-ID) system. With bad alignment, the background noise will significantly compromise the feature learning and matching process. To address this problem, this paper introduces the pose invariant embedding (PIE) as a pedestrian descriptor. First, in order to align pedestrians to a standard pose, the PoseBox structure is introduced, which is generated through pose estimation followed by affine transformations. Second, to reduce the impact of pose estimation errors and information loss during PoseBox construction, we design a PoseBox fusion (PBF) CNN architecture that takes the original image, the PoseBox, and the pose estimation confidence as input. The proposed PIE descriptor is thus defined as the fully connected layer of the PBF network for the retrieval task. Experiments are conducted on the Market-1501, CUHK03, and VIPeR datasets. We show that PoseBox alone yields decent re-ID accuracy and that when integrated in the PBF network, the learned PIE descriptor produces competitive performance compared with the state-of-the-art approaches.
This paper studies the task of person re-identification (re-ID). Given a probe (person of interest) and a gallery, we aim to find in the gallery all the images containing the same person as the probe. We focus on the identification problem, a retrieval task in which each probe has at least one ground truth match in the gallery. A number of factors affect re-ID accuracy, such as detection/tracking errors and variations in illumination, pose, and viewpoint.
A critical influencing factor on re-ID accuracy is the misalignment of pedestrians, which can be attributed to two causes. First, pedestrians naturally take on various poses as shown in Fig. 1. Pose variations imply that the position of the body parts within the bounding box is not predictable. For example, it is possible that one’s hands reach above the head, or that one is riding a bicycle instead of being upright. The second cause of misalignment is detection error. As illustrated in the second row of Fig. 1, detection errors may lead to severe vertical misalignment.
When pedestrians are poorly aligned, the re-ID accuracy can be compromised. For example, a common practice in re-ID is to partition the bounding box into horizontal stripes [20, 42, 1, 21]. This method works under the assumption of slight vertical misalignment. But when severe vertical misalignment does happen, as in the cases in Row 2 of Fig. 1, one's head will be matched to the background of a misaligned image, so horizontal stripes become less effective. In another example, under various pedestrian poses, the background may be incorrectly weighted by the feature extractors and thus degrade the subsequent matching accuracy.
To our knowledge, two previous works [8, 7] from the same group explicitly consider the misalignment problem. In both works, the pictorial structure (PS) is used, which shares a similar motivation and construction process with PoseBox, and the retrieval process mainly relies on matching the normalized body parts. While the idea of constructing normalized poses is similar, our work locates body joints using a state-of-the-art CNN based pose estimator, and the components of PoseBox are different from PS as evidenced by large-scale evaluations. Another difference of our work is the matching procedure. While [8, 7] do not discuss the pose estimation errors which prevalently exist in real-world datasets, we show that these errors make rigid feature learning/matching with only the PoseBox yield inferior results to the original image, and that the three-stream PoseBox fusion network effectively alleviates this problem.
Considering the above-mentioned problems and the limits of previous methods, this paper proposes the pose invariant embedding (PIE) as a robust visual descriptor. Two steps are involved. First, we construct a PoseBox for each pedestrian bounding box. PoseBox depicts a pedestrian in a standardized upright stance. Carefully designed with the help of pose estimators, PoseBox aims to produce well-aligned pedestrian images so that the learned feature can find the same person under intensive pose changes. Trained alone using a standard CNN architecture [37, 41, 44], we show that PoseBox yields very decent re-ID accuracy.
Second, to reduce the impact of information loss and pose estimation errors (Fig. 2) during PoseBox construction, we build a PoseBox fusion (PBF) CNN model with three streams as input: the PoseBox, the original image, and the pose estimation confidence. PBF achieves a globally optimized tradeoff between the original image and the PoseBox. PIE is thus defined as the FC activations of the PBF network. On several benchmark datasets, we show that the joint training procedure yields competitive re-ID accuracy to the state of the art. To summarize, this paper has three contributions.
Minor contribution: the PoseBox is proposed, which shares a similar nature with previous work [8, 7]. It enables well-aligned pedestrian matching, and yields satisfying re-ID performance when used alone.
Major contribution: the pose invariant embedding (PIE) is proposed as part of the PoseBox Fusion (PBF) network. PBF fuses the original image, the PoseBox, and the pose estimation confidence, thus providing a fallback mechanism when pose estimation fails.
Using PIE, we report competitive re-ID accuracy on the Market-1501, CUHK03, and VIPeR datasets.
Pose estimation. Research on human pose estimation has largely shifted to deep learning following the pioneering work "DeepPose". Some recent methods employ multi-scale features and study mechanisms for combining them [29, 26]. It is also effective to inject spatial relationships between body joints by regularizing the unary scores and pairwise comparisons [11, 27]. This paper adopts the convolutional pose machines (CPM), a state-of-the-art pose estimator with multiple stages and successive pose predictions.
Deep learning for re-ID. Owing to its superior performance, deep learning based methods have dominated the re-ID community over the past two years. Two earlier works [20, 39] use the siamese model, which takes two images as input. Later works improve this model in various ways, such as injecting more sophisticated spatial constraints [1, 6], modeling the sequential properties of body parts using an LSTM, and mining discriminative matching parts for different image pairs. It has been pointed out that the siamese model only uses weak re-ID labels (whether two images depict the same person or not), and that an identification model which fully exploits the strong re-ID labels is superior. Several previous works adopt the identification model [37, 36, 41]: video frames have been used as training samples for each person class, and effective neurons have been discovered for each training domain together with a new dropout strategy. One prior architecture is more similar to our PBF model: hand-crafted low-level features are concatenated after a fully connected (FC) layer which connects to the softmax layer. Our network is similar in that confidence scores of pose estimation are concatenated with the other two FC layers; it departs from that work in that our network takes three streams as input, two of which are raw images.
Poses for re-ID. Although pose changes have been mentioned by many previous works as a factor affecting re-ID, only a handful of reports discuss the connection between them. Farenzena et al. propose to detect the symmetrical axis of different body parts and extract features following the pose variation. In another work, rough estimates of the upper-body orientation are provided by a HOG detector, and the upper body is then rendered into the texture of an articulated 3D model. Bak et al. further classify each person into three pose types: front, back, and side. A similar idea is exploited elsewhere, where four pose types are used. Both works [3, 9] apply viewpoint-specific distance metrics according to different testing pose pairs. The closest works to PoseBox are [8, 7], which construct the pictorial structure (PS), a similar concept to PoseBox. They use traditional pose estimators and hand-crafted descriptors that are inferior to CNNs by a large margin. Our work employs a full set of stronger techniques and designs a more effective CNN structure, as evidenced by the competitive re-ID accuracy on large-scale datasets.
The construction of PoseBox has two steps, i.e., pose estimation and PoseBox projection.
Pose estimation. This paper adopts the off-the-shelf model of the convolutional pose machines (CPM). In a nutshell, CPM is a sequential convolutional architecture that enforces intermediate supervision to prevent vanishing gradients. A set of 14 body joints is detected, i.e., head, neck, left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles, as shown in the second column of Fig. 3.
Body part discovery and affine projection. From the detected joints, 10 body parts can be depicted (the third column of Fig. 3). The parts include head, torso, upper and lower arms (left and right), and upper and lower legs (left and right), which almost cover the whole body. These quadrilateral parts are projected to rectangles using affine transformations.
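The joint-to-part assembly described above can be written down directly. A small sketch of the 14 joints and the 10 parts they define (the identifier names are ours, not from the paper's code):

```python
# The 14 CPM joints and the 10 body parts assembled from them.
JOINTS = ["head", "neck", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
          "l_wrist", "r_wrist", "l_hip", "r_hip", "l_knee", "r_knee",
          "l_ankle", "r_ankle"]

# Each part is bounded by the listed joints; the torso uses four joints,
# every other part is a segment between two joints.
PARTS = {
    "head":        ("head", "neck"),
    "torso":       ("l_shoulder", "r_shoulder", "l_hip", "r_hip"),
    "l_upper_arm": ("l_shoulder", "l_elbow"), "r_upper_arm": ("r_shoulder", "r_elbow"),
    "l_lower_arm": ("l_elbow", "l_wrist"),    "r_lower_arm": ("r_elbow", "r_wrist"),
    "l_upper_leg": ("l_hip", "l_knee"),       "r_upper_leg": ("r_hip", "r_knee"),
    "l_lower_leg": ("l_knee", "l_ankle"),     "r_lower_leg": ("r_knee", "r_ankle"),
}
assert len(JOINTS) == 14 and len(PARTS) == 10
```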
In more detail, the head is defined by the head and neck joints, and we manually set the width of each head box to a fixed fraction of its height (from head to neck). An upper arm is confined by the shoulder and elbow joints, and a lower arm by the elbow and wrist joints; the width of the arm boxes is set to 20 pixels. Similarly, the upper and lower legs are defined by the hip and knee joints, and the knee and ankle joints, respectively; their widths are both 30 pixels. The torso is confined by four body joints, i.e., the two shoulders and the two hips, so we simply draw a quadrangle for the torso. Due to pose estimation errors, the affine transformation may encounter singular values, so in practice we add a small random disturbance when the pose estimation confidence of a body part is below a threshold (set to 0.4).
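As a concrete illustration, the affine map that projects one quadrilateral part onto an upright rectangle is fully determined by three corner correspondences. A minimal numpy sketch, where the function names and the jitter scale are our own assumptions rather than the paper's implementation:

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve the 2x3 affine matrix A mapping src points to dst points.

    src, dst: (3, 2) arrays of corresponding corner coordinates.
    """
    # Build the linear system [x y 1] @ A.T = [x' y'] for each point.
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                 # (3, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                 # (2, 3) affine matrix

def project_part(corners, box_w, box_h, confidence, thresh=0.4, rng=None):
    """Map a detected part onto an upright box_w x box_h rectangle.

    When the part's confidence is below `thresh`, the corners are jittered
    slightly so the transform does not become singular (as in the text).
    """
    corners = np.asarray(corners, dtype=float)
    if confidence < thresh:
        rng = rng or np.random.default_rng(0)
        corners = corners + rng.normal(scale=1.0, size=corners.shape)
    target = np.array([[0, 0], [box_w, 0], [0, box_h]], dtype=float)
    return affine_from_points(corners[:3], target)
```

Applying the returned matrix to each source pixel coordinate (in homogeneous form `[x, y, 1]`) fills the rectified part box.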
Three types of PoseBoxes. In several previous works discussing the performance of different parts, a common observation is that the torso and legs make the largest contributions [8, 1, 6]. This is expected because the most distinguishing features exist in the upper-body and lower-body clothes. Based on the existing observations, this paper builds three types of PoseBoxes as described below.
PoseBox 1. It consists of the torso and two legs, where a leg comprises the upper and the lower leg. PoseBox 1 includes the two most important body parts and serves as a baseline for the other two PoseBox types.
PoseBox 2. Based on PoseBox 1, we further add the left and right arms. An arm includes the upper and lower arm sub-modules. In our experiments we show that PoseBox 2 is superior to PoseBox 1 due to the enriched information brought by the arms.

PoseBox 3. Based on PoseBox 2, we further add the head box. As shown in the experiments, the unstable head detection makes PoseBox 3 inferior to PoseBox 2.
Remarks. The advantage of PoseBox is two-fold. First, pose variations can be corrected. Second, background noise can be largely removed.
PoseBox is also limited in two aspects. First, pose estimation errors often happen, leading to imprecisely detected joints. Second, PoseBox is designed manually, so it is not guaranteed to be optimal in terms of information loss or re-ID accuracy. We address the two problems by a fusion scheme to be introduced in Section 3.3. For the second problem, specifically, we note that we construct PoseBoxes manually because current re-ID datasets do not provide ground truth poses, without which it is not trivial to design an end-to-end learning method to automatically generate normalized poses.
This paper constructs two baselines based on the original pedestrian image and the PoseBox, respectively. According to the results in a recent survey, the identification model outperforms the verification model [1, 20] significantly on the Market-1501 dataset: the former makes full use of the re-ID labels, i.e., the identity of each bounding box, while the latter only uses weak labels, i.e., whether two boxes belong to the same person. So this paper adopts the identification CNN model (Fig. 4). Specifically, we use the standard AlexNet and ResNet-50 architectures; we refer readers to the respective papers for detailed network descriptions.
During training, we employ the default parameter settings, except that the last FC layer is edited to have the same number of neurons as the number of distinct IDs in the training set. During testing, given an input image resized to the required input size, we extract the FC7 and FC8 activations for AlexNet, and the Pool5 and FC activations for ResNet-50. After normalization, we use the Euclidean distance to perform person retrieval in the testing set. With respect to the input image type, two baselines are used in this paper:
Baseline 1: the original image (resized to the network input size) is used as input to the CNN during training and testing.
Baseline 2: the PoseBox (resized to the network input size) is used as input to the CNN during training and testing. Note that only one PoseBox type is used at a time.
Motivation. During PoseBox construction, pose estimation errors and information loss may occur, compromising the quality of the PoseBox (see Fig. 2). On the one hand, pose estimation errors often happen, as we use an off-the-shelf pose estimator (which is usually the case in practice). As illustrated in Fig. 5 and Fig. 1, pose estimation may fail when the detections have missing parts or the pedestrian images are of low resolution. On the other hand, when cropping body parts from a bounding box, it is inevitable that important details such as bags and umbrellas are missed (Fig. 2). The failure to construct high-quality PoseBoxes and the information loss during part cropping may compromise the results of baseline 2. This is confirmed in our experiments, where baseline 1 yields superior re-ID accuracy to baseline 2.
For the first problem, i.e., pose estimation errors, we can mostly foretell the quality of pose estimation from the confidence scores (examples can be seen in Fig. 5). Under high estimation confidence, we expect the generated PoseBox to be of fine quality; when the confidence scores are low for some body parts, the constructed PoseBox is likely to be of poor quality. For the second problem, the missing visual cues can be recovered by re-introducing the original image, so that the discriminative details are captured by the deep network.
Network. Given the above considerations, this paper proposes a three-stream PoseBox Fusion (PBF) network which takes the original image, the PoseBox, and the confidence vector as input (see Fig. 6). To leverage the ImageNet pre-trained models, the two image inputs, i.e., the original image and the PoseBox, are resized (with random cropping during training) to the input sizes required by AlexNet and ResNet-50, respectively. The third input, i.e., the pose estimation confidence scores, is a 14-dim vector in which each entry falls within the range [0, 1].
The two image inputs are fed to two CNNs of the same structure. Due to the content differences between the original image and its PoseBox, the two streams of convolutional layers do not share weights, although they are initialized from the same seed model. The FC6 and FC7 layers are connected to these convolutional layers. For the confidence vector, we add a small FC layer which projects the 14-dim vector to a 14-dim FC vector. We concatenate the three inputs at the FC7 layer, which is further fully connected to FC8. The sum of the three softmax losses is used for loss computation. When ResNet-50 is used instead of AlexNet, Fig. 6 does not have the FC6 layers, and the FC7 and FC8 layers correspond to Pool5 and FC, respectively.
In Fig. 6, as denoted by the green bounding box, the pose invariant embedding (PIE) can either be the concatenated FC7 activations (4,096+4,096+14 = 8,206-dim) or its next fully connected layer (751-dim and 1,160-dim for Market-1501 and CUHK03, respectively). For AlexNet, we denote the two PIE descriptors as PIE(A, FC7) and PIE(A, FC8), respectively; for ResNet-50, they are termed as PIE(R, Pool5) and PIE(R, FC), respectively.
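The descriptor dimensionalities quoted above can be sanity-checked with a toy forward pass at the fusion point (AlexNet variant, Market-1501). Random values stand in for learned activations and weights; this only verifies the shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations for one image at the fusion point.
img_fc7 = rng.normal(size=4096)   # FC7 of the original-image stream
pb_fc7  = rng.normal(size=4096)   # FC7 of the PoseBox stream
conf_fc = rng.normal(size=14)     # small FC layer on the 14-dim confidences

# PIE(A, FC7): the concatenated descriptor.
pie_fc7 = np.concatenate([img_fc7, pb_fc7, conf_fc])
assert pie_fc7.shape == (8206,)   # 4096 + 4096 + 14 = 8206

# PIE(A, FC8): one further fully connected layer (751 IDs on Market-1501).
W_fc8 = rng.normal(size=(751, 8206)) * 0.01   # placeholder for learned weights
pie_fc8 = np.maximum(W_fc8 @ pie_fc7, 0.0)    # ReLU, as applied at test time
assert pie_fc8.shape == (751,)
```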
During training, batches of the input triplets (the original image, its PoseBox, and the confidence vector) are fed into PBF, and the sum of the three losses is back-propagated to the convolutional layers. The ImageNet pretrained model initializes both the original image and PoseBox streams.
During testing, given the three inputs of an image, we extract PIE as the descriptor. Note that we apply ReLU on the extracted embeddings, which produces superior results according to our preliminary experiments. The Euclidean distance is then used to compute the similarity between the probe and gallery images, from which a sorted rank list is produced.
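The test-time matching pipeline just described (ReLU, normalization, Euclidean ranking) is only a few lines. A sketch with our own function name:

```python
import numpy as np

def retrieve(query_feat, gallery_feats):
    """Rank gallery images for one query: ReLU on the embeddings,
    L2-normalization, then ascending Euclidean distance."""
    q = np.maximum(query_feat, 0.0)
    g = np.maximum(gallery_feats, 0.0)
    q = q / (np.linalg.norm(q) + 1e-12)
    g = g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-12)
    dists = np.linalg.norm(g - q, axis=1)
    return np.argsort(dists)          # gallery indices, nearest first
```

The returned index order is the rank list used by the CMC evaluation.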
PBF has three advantages. First, the confidence vector indicates whether the PoseBox is reliable. This improves the learning ability of PBF as a static embedding network, so that a global tradeoff between the PoseBox and the original image can be found. Second, the original image not only enables a fallback mechanism when pose estimation fails, but also retains the pedestrian details that may be lost during PoseBox construction yet are useful in discriminating identities. Third, the PoseBox provides important complementary cues to the original image: with correctly predicted joints, pedestrian matching is more accurate on the well-aligned images, so the influence of detection errors and pose variations is reduced.
This paper uses three datasets for evaluation, i.e., VIPeR, CUHK03, and Market-1501. The VIPeR dataset contains 632 identities, each having 2 images captured by 2 cameras. It is evenly divided into training and testing sets, each consisting of 316 IDs and 632 images. We perform 10 random train/test splits and report the averaged accuracy. The CUHK03 dataset contains 1,360 identities and 13,164 images. Each person is observed by 2 cameras, and on average there are 4.8 images under each camera. We adopt the single-shot mode and evaluate this dataset under 20 random train/test splits. The Market-1501 dataset features 1,501 IDs, 19,732 gallery images and 12,936 training images captured by 6 cameras. Both CUHK03 and Market-1501 are produced by the DPM detector. The Cumulative Matching Characteristics (CMC) curve is used for all three datasets; it encodes the probability that the query person is found within the top k ranks of the rank list. For Market-1501 and CUHK03, we additionally employ the mean Average Precision (mAP), which considers both the precision and recall of the retrieval process. The evaluation toolbox provided by the Market-1501 authors is used.
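For reference, both metrics can be computed per query as follows. This is a simplified sketch; the official Market-1501 toolbox additionally handles junk images and cross-camera constraints, which are omitted here:

```python
import numpy as np

def cmc_and_ap(ranked_ids, query_id):
    """CMC match vector and average precision for a single query.

    ranked_ids: gallery identity labels sorted by ascending distance.
    Returns (cmc, ap): cmc[k] = 1 if a true match appears within rank k+1.
    """
    matches = (np.asarray(ranked_ids) == query_id).astype(float)
    cmc = (np.cumsum(matches) > 0).astype(float)
    hits = np.cumsum(matches)
    # Precision evaluated at each position where a true match occurs.
    precision_at_hit = hits[matches > 0] / (np.nonzero(matches)[0] + 1.0)
    ap = precision_at_hit.mean() if matches.any() else 0.0
    return cmc, ap
```

Averaging `cmc` over all queries gives the CMC curve; averaging `ap` gives mAP.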
Table 1. Baseline and PIE accuracy (%). Columns: feature dimension; Market-1501 rank-1/5/10/20 and mAP; CUHK03 rank-1/5/10/20; VIPeR rank-1/5/10/20. Dashes denote values not recovered from the source.

| Method | dim | R1 | R5 | R10 | R20 | mAP | R1 | R5 | R10 | R20 | R1 | R5 | R10 | R20 |
| Baseline1 (A, FC7) | 4,096 | 55.49 | 76.28 | 83.55 | 88.98 | 32.36 | 57.15 | 83.50 | 90.85 | 95.70 | 17.44 | 31.84 | 41.04 | 51.36 |
| Baseline1 (A, FC8) | 751 | 53.65 | 75.48 | 82.93 | 88.51 | 31.60 | 58.80 | 85.80 | 91.90 | 96.25 | 17.15 | 32.06 | 41.68 | 51.55 |
| Baseline1 (R, Pool5) | 2,048 | 73.02 | 87.44 | 91.24 | 94.70 | 47.62 | 51.60 | 79.60 | 87.70 | 95.00 | 23.42 | 42.31 | 51.96 | 63.80 |
| Baseline1 (R, FC) | 751 | 70.58 | 84.95 | 90.02 | 93.53 | 45.84 | 54.80 | 84.20 | 91.70 | 97.60 | 15.85 | 28.80 | 37.41 | 47.85 |
| Baseline2 (A, FC7) | 4,096 | – | – | – | – | – | – | – | – | – | – | – | – | – |
| Baseline2 (A, FC8) | 751 | 51.10 | 72.24 | 79.48 | 85.60 | 29.91 | 42.30 | 75.05 | 84.35 | 92.00 | 16.04 | 33.45 | 42.66 | 54.97 |
| Baseline2 (R, Pool5) | 2,048 | 64.49 | 79.48 | 85.07 | 88.95 | 38.16 | 36.90 | 68.40 | 78.70 | 86.70 | 21.11 | 37.18 | 45.89 | 54.34 |
| Baseline2 (R, FC) | 751 | 62.20 | 78.36 | 83.76 | 88.84 | 37.91 | 41.70 | 72.70 | 84.20 | 92.50 | 15.57 | 26.68 | 33.54 | 41.71 |
| PIE (A, FC7) | 8,206 | 64.61 | 82.07 | 87.83 | 91.75 | 38.95 | 59.80 | 85.35 | 91.85 | 95.85 | 21.77 | 38.04 | 46.61 | 56.61 |
| PIE (A, FC8) | 751 | 65.68 | 82.51 | 87.89 | 91.63 | 41.12 | 62.40 | 88.00 | 93.70 | 96.50 | 18.10 | 31.20 | 38.92 | 49.40 |
| PIE (R, Pool5) | 4,108 | 78.65 | 90.26 | 93.59 | 95.69 | 53.87 | 57.10 | 84.60 | 91.40 | 96.20 | 27.44 | 43.01 | 50.82 | 60.22 |
| PIE (R, FC) | 751 | 75.12 | 88.27 | 92.28 | 94.77 | 51.57 | 61.50 | 89.30 | 94.50 | 97.60 | 23.80 | 37.88 | 47.31 | 56.55 |
Our experiments directly employ the off-the-shelf convolutional pose machines (CPM), i.e., the multi-stage CNN model trained on the MPII human pose dataset; default settings are used. For the PBF network, we replace the convolutional layers with those from either AlexNet or ResNet-50. When ResNet-50 is used, PBF does not have the FC6 layer, and the FC7 layer corresponds to Pool5. The batch size is set to 32 and 16 when using AlexNet and ResNet-50, respectively. For both CNN models, the training process takes 6-7 hours to converge on the Market-1501 dataset.
We train PIE on Market-1501 and CUHK03, respectively, which have relatively large data volumes. We also test the generalization ability of PIE on smaller datasets such as VIPeR: we extract features using the model pre-trained on Market-1501 and then learn a distance metric on the small dataset.
First, we observe that very competitive performance can be achieved by baseline 1, i.e., training with the original image. Specifically, on Market-1501, we achieve rank-1 accuracy of 55.49% and 73.02% using AlexNet and ResNet-50, respectively. These numbers are consistent with those reported in the literature. Moreover, we find that FC7 (Pool5) is superior to FC8 (FC) on Market-1501, but the situation reverses on CUHK03. We speculate that the CNN model is trained to be more specific to the Market-1501 training set due to its larger data volume, so retrieval on Market-1501 is more of a transfer task than on CUHK03. A similar effect is observed when transferring ImageNet models to other recognition tasks.
Second, compared with baseline 1, baseline 2 is somewhat inferior. On the Market-1501 dataset, for example, the rank-1 accuracy of baseline 2 is 3.3% and 8.9% lower using AlexNet and ResNet-50, respectively. The performance drop is expected, due to the pose estimation errors and information loss mentioned in Section 3.3. Since this paper only employs an off-the-shelf pose estimator, we speculate that the PoseBox baseline could be improved in the future by re-training the pose estimator with newly labeled data on the re-ID datasets.
Comparing with baselines 1 and 2, we clearly observe that PIE yields higher re-ID accuracy. On Market-1501, for example, when using AlexNet and the FC7 descriptor, our method exceeds the two baselines by +5.5% and +8.8% in rank-1 accuracy, respectively. With ResNet-50 the improvement is slightly smaller, but still reaches +5.0% and +6.8%, respectively; rank-1 accuracy and mAP on Market-1501 reach 78.65% and 53.87%, respectively. On CUHK03 and VIPeR, consistent improvements over the baselines are also observed.
Moreover, Fig. 7 shows that Kissme only marginally improves the accuracy, indicating that the PIE descriptor is already well learned. The concatenation of the Pool5 features of baselines 1 and 2, coupled with Kissme, produces lower accuracy than "PIE(Pool5)+Kissme", illustrating that the PBF network learns more effective embeddings than training the two streams separately. We also find that the 2,048-dim "PIE(Pool5, img)+EU" and "PIE(Pool5, pb)+EU" descriptors outperform the corresponding baselines 1 and 2. This suggests that PBF improves the baseline performance, probably through the back-propagation of the fused loss.
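Kissme itself reduces to the difference of two inverse covariance matrices of pairwise difference vectors. A simplified numpy sketch (the practical estimator adds regularization and a projection to the positive semi-definite cone, omitted here; function names are ours):

```python
import numpy as np

def kissme(diffs_same, diffs_diff):
    """M = inv(Cov_similar) - inv(Cov_dissimilar), estimated from
    difference vectors x_i - x_j of similar / dissimilar pairs."""
    cov_s = diffs_same.T @ diffs_same / len(diffs_same)
    cov_d = diffs_diff.T @ diffs_diff / len(diffs_diff)
    return np.linalg.inv(cov_s) - np.linalg.inv(cov_d)

def mahalanobis(x, y, M):
    """Squared distance under the learned metric M."""
    d = x - y
    return float(d @ M @ d)
```

At test time, `mahalanobis` simply replaces the Euclidean distance in the ranking step.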
First, PoseBox2 is superior to PoseBox1. On the Market-1501 dataset, PoseBox2 improves the rank-1 accuracy over PoseBox1; the inclusion of the arms therefore increases the discriminative ability of the system. Since the upper arm typically shares the same color/texture as the torso, we speculate that it is the long/short sleeves that enhance the descriptors. Second, PoseBox2 also outperforms PoseBox3: integrating the head introduces more noise due to unstable head detection, which deteriorates overall performance. Nevertheless, we find in Fig. 8 that the gap between the different PoseBoxes decreases after integration into PBF, because the combination with the original image reduces the impact of estimation errors and information loss, a contribution mentioned in Section 1.
Ablation experiment. To evaluate the effectiveness of different components of PBF, ablation experiments are conducted on the Market-1501 dataset. We remove one component from the full system at a time, including the PoseBox, the original image, the confidence vector, and the two losses of the PoseBox and original image streams. The CMC curves are drawn in Fig. 9, from which three conclusions can be drawn.
First, when the confidence vector or the two losses are removed, the remaining system is inferior to the full model, but displays similar accuracy. The performance drop is approximately 1% in the rank-1 accuracy. It illustrates that these two components are important regularization terms. The confidence vector informs the system of the reliability of the PoseBox, thus facilitating the learning process. The two identification losses provide additional supervision to prevent the performance degradation of the two individual streams. Second, after the removal of the stream of the original image (“-img”), the performance drops significantly but still remains superior to baseline 2. Therefore, the original image stream is very important, as it reduces re-ID failures that likely result from pose estimation errors. Third, when the PoseBox stream is cut off (“-PoseBox”), the network is inferior to the full model, but is better than baseline 1. This validates the indispensability of PoseBox, and suggests that the confidence vector improves baseline 1.
Comparison with the state-of-the-art methods. On Market-1501, we compare PIE with the state-of-the-art methods in Table 2. Our method outperforms the latest results by a large margin: we achieve rank-1 accuracy = 77.97% and mAP = 52.76% in the single query mode. To our knowledge, this sets a new state of the art on the Market-1501 dataset.
On CUHK03, comparisons are presented in Table 3. When metric learning is not used, our results are competitive in rank-1 accuracy with recent methods, and superior in rank-5, 10, 20 accuracy and mAP. When Kissme is employed, we report higher results: rank-1 = 67.10% and mAP = 71.32%, which exceed the current state of the art. We note that some works report very high results on the hand-drawn subset but no results on the detected set. We also note that metric learning yields smaller improvements on Market-1501 than on CUHK03, because the PBF network is better trained on Market-1501 due to its richer annotations.
On VIPeR, we extract features using the off-the-shelf PIE model trained on Market-1501, and the comparison is shown in Table 4. We first compare PIE (using the Euclidean distance) with the latest unsupervised methods, e.g., the Gaussian of Gaussian (GoG) and the Bag-of-Words (BOW) descriptors, using the code provided by the respective authors. We observe that PIE exceeds the competing methods in the rank-1, 5, and 10 accuracies. Then, compared with supervised works without feature fusion, our method (coupled with Mirror Representation and MFA) has decent results. We further fuse the PIE descriptor with the pre-computed transferred deep descriptors and the LOMO descriptor, employing the mirror representation and the MFA distance metric coupled with the Chi Square kernel. The fused system achieves a new state of the art on the VIPeR dataset with rank-1 accuracy = 54.49%.
Table 2. Comparison with the state of the art on Market-1501 (rank-1 accuracy and mAP, %).

| Method | R1 | mAP |
| Temp. Adapt. | 47.92 | 22.31 |
| Null Space | 55.43 | 29.87 |
| LSTM Siamese | 61.6 | 35.3 |
| Gated Siamese | 65.88 | 39.55 |
Table 3. Comparison with the state of the art on CUHK03 (%).

| Method | R1 | R5 | R10 | R20 | mAP |
| Improved CNN | 44.96 | 76.01 | 83.47 | 93.15 | – |
| Null Space | 54.70 | 84.75 | 94.80 | 95.20 | – |
| LSTM Siamese | 57.3 | 80.1 | 88.3 | – | 46.3 |
| Gated Siamese | 61.8 | 80.9 | 88.3 | – | 51.25 |
Table 4. Comparison with the state of the art on VIPeR (%).

| Method | R1 | R5 | R10 | R20 |
| Enhanced Deep | 15.47 | 34.53 | 43.99 | 55.41 |
| Null Space | 42.28 | 71.46 | 82.94 | 92.06 |
| Enhanced + Mirror | 34.87 | 66.68 | 79.30 | 90.38 |
| LSTM Siamese | 42.4 | 68.7 | 79.4 | – |
| Gated Siamese | 37.8 | 66.9 | 77.4 | – |
Two groups of sample re-ID results are shown in Fig. 10. In the first query, for example, the cyan clothes in the background lead to misjudgment of the foreground characteristics, so that some pedestrians with local green/blue colors incorrectly receive top ranks. Using PIE, the foreground is effectively cropped, leading to more accurate pedestrian matching.
This paper explicitly addresses the pedestrian misalignment problem in person re-identification. We propose the pose invariant embedding (PIE) as a pedestrian descriptor. We first construct the PoseBox with the 14 body joints detected by the convolutional pose machines. PoseBox helps correct the pose variations caused by camera views, person motions and detector errors, and enables well-aligned pedestrian matching. PIE is then learned through the PoseBox fusion (PBF) network, in which the original image is fused with the PoseBox and the pose estimation confidence. PBF reduces the impact of pose estimation errors and detail loss during PoseBox construction. We show that PoseBox yields fair accuracy when used alone and that PIE produces competitive accuracy compared with the state of the art.
Person re-identification by multi-channel parts-based CNN with improved triplet loss function. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1335–1344, 2016.
Combining local appearance and holistic view: Dual-source deep neural networks for human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1347–1355, 2015.
ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
A siamese long short-term memory architecture for human re-identification. In European Conference on Computer Vision, 2016.
An enhanced deep feature representation for person re-identification. In IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1–8, 2016.