The task of person re-identification is drawing ever-increasing attention from the computer vision and visual surveillance communities. This is because of the inherent difficulty of the task, paired with the fact that medium-sized training datasets have become available only recently. The task also has clear practical value for automated surveillance systems. Despite a long history of research on re-identification [26, 13, 2, 8, 16, 6, 17, 5, 15, 14, 11, 10, 1, 3], the accuracy of existing systems is often insufficient for the full automation of such application scenarios, which stimulates further research activity. The main confounding factor is the notoriously high variation in the appearance of the same person (even over short time spans) due to pose variations, illumination changes, and background clutter, compounded by the high number of individuals wearing similar clothes that typically occur in the same dataset.
In this work, we follow the line of work that applies deep convolutional neural networks (CNNs) and embedding learning to the person re-identification task. Our aim is an architecture that can map (embed) an image of a detected person to a high-dimensional vector (descriptor) such that a simple metric, such as Euclidean or cosine distance, can be applied to compare pairs of vectors and reason about the probability that two vectors describe the same person. Here, we avoid the approach taken in several recent works that train a separate multi-layer network to compute the distance between a pair of descriptors, since such methods do not scale well to large datasets, where the ability to perform fast search requires the use of a simple metric.
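As a minimal sketch of this retrieval regime (function and variable names are ours, not from the paper or any specific library), comparing L2-normalized descriptors with cosine distance reduces gallery ranking to a single matrix-vector product:

```python
import numpy as np

def cosine_rank(query, gallery):
    """Rank gallery descriptors by cosine similarity to the query.

    query: (d,) descriptor; gallery: (n, d) matrix of descriptors.
    Returns gallery indices sorted from most to least similar.
    """
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # one dot product per gallery image
    return np.argsort(-sims)

# toy check: the gallery vector collinear with the query ranks first
query = np.array([1.0, 0.0])
gallery = np.array([[0.0, 1.0], [2.0, 0.0], [1.0, 1.0]])
print(cosine_rank(query, gallery))
```

Because the comparison is a plain inner product on precomputed vectors, standard fast nearest-neighbor indexing can be applied, which is exactly what pair-classification networks preclude.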
The choice of the convolutional architecture for embedding in the case of person re-identification is far from obvious. In particular, "standard" architectures that combine convolutional layers followed by fully-connected layers, such as those used for image classification or face embedding, can fail to achieve sufficient invariance to strong 3D viewpoint changes and to non-rigid articulations of pedestrians, given the limited amount of training data typical for re-identification tasks and datasets.
Here, we propose a person re-identification architecture based on the idea of bilinear convolutional networks (bilinear CNNs), which were originally presented for fine-grained classification tasks and later evaluated for face recognition. We note that the task of person re-identification shares considerable similarity with fine-grained categorization (Figure 1), as the matching process in both cases often needs to resort to the analysis of fine texture details and parts that are hard to localize. Bilinear CNNs, however, rather radically discard spatial information in the process of bilinear pooling. While this may be justified for fine-grained classification problems such as bird classification, the variability of geometric pose and viewpoint in re-identification problems is more restricted, so retaining some spatial information is beneficial; we therefore pool bilinear features separately within a grid of image regions. Overall, the resulting multi-region bilinear CNNs can be regarded as a middle ground between traditional CNNs and bilinear CNNs. In the experiments, we show that such a compromise achieves optimal performance across a range of person re-identification benchmarks, while also performing favorably compared to the previous state-of-the-art. The success of our architecture confirms the promise held by deep architectures with multiplicative interactions, such as bilinear CNNs and our multi-region bilinear CNNs, for hard pattern recognition tasks.
2 Related work
Deep CNNs for Re-Identification. Several CNN-based methods for person re-identification have been proposed recently [10, 26, 1, 3, 22, 23, 19, 12, 25]. Yi et al. were among the first to evaluate "siamese" architectures that accomplish embedding of pedestrian images into a descriptor space, where they can be further compared using cosine distance. Their work proposes an architecture specific to pedestrian images that includes three independent sub-networks corresponding to three regions (legs, torso, head-and-shoulders). This is done in order to take into account the variability of the statistics of textures, shapes, and articulations between the three regions. Our architecture includes the network of Yi et al. as a component.
Other recent works learn classification networks that categorize a pair of images as either depicting the same subject or different subjects. The proposed deep learning approaches [1, 26, 10], while competitive, do not clearly outperform more traditional approaches based on "hand-engineered" features [15, 28].
Such pair-classification methods need to process pairs that include the query and every image in the dataset, and hence cannot directly utilize fast retrieval methods based on Euclidean and other simple distances. Here, we aim at an approach that learns per-image descriptors and then compares them with a cosine similarity measure. This justifies starting with the architecture of Yi et al. and then modifying it by inserting new layers.
Several newer works report results that are better than ours [12, 25], but they use additional data and/or sophisticated pre-training schemes, whereas we train our model from scratch on each dataset (except for CUHK01, where CUHK03 was used for pre-training).
Bilinear CNNs. Bilinear convolutional networks (bilinear CNNs) achieved state-of-the-art results for a number of fine-grained recognition tasks, and have also shown potential for face verification. A bilinear CNN consists of two CNN streams without fully-connected layers, where both streams take the same image as input. The outputs of the two streams are combined via bilinear pooling: in more detail, the outer product of the deep features is calculated at each spatial location, resulting in a quadratic number of feature maps, which are then sum-pooled over all locations. The resulting orderless image descriptor is used in subsequent processing steps; for fine-grained classification and face verification, for example, it is normalized and fed into a softmax layer. An intuition given by the authors is that the two CNN streams combined by the bilinear operation may correspond to part and texture detectors respectively. This separation may facilitate localization under significant pose variation without the need for any part labeling of the training images. Our approach evaluates bilinear CNNs for the person re-identification task and improves this architecture by suggesting its multi-region variant.
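The bilinear pooling operation just described can be sketched in a few lines of numpy (a toy illustration under our own naming and shape conventions, not the authors' implementation):

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear pooling of two feature maps over all spatial locations.

    fa: (h, w, m) and fb: (h, w, n) are outputs of the two CNN streams
    on the same image. At each location the outer product of the two
    local feature vectors is taken; the h*w outer products are then
    sum-pooled into a single orderless (m*n,) descriptor.
    """
    h, w, m = fa.shape
    n = fb.shape[-1]
    outer = np.einsum('hwi,hwj->hwij', fa, fb)   # (h, w, m, n)
    return outer.sum(axis=(0, 1)).reshape(m * n)

# toy 4x3 feature maps with 2 and 3 channels -> a 6-dimensional descriptor
desc = bilinear_pool(np.ones((4, 3, 2)), np.ones((4, 3, 3)))
print(desc.shape)
```

Note that the summation over all h*w locations is exactly where the spatial layout is lost, which motivates the multi-region variant below.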
3 The architecture
Our solution combines a state-of-the-art method for person re-identification (Deep Metric Learning) with a state-of-the-art fine-grained recognition method (bilinear CNNs). Modifying the bilinear CNN to perform multi-region pooling boosts the performance of this combination significantly. Below, we introduce notation and discuss the components of the system in detail.
Convolutional architecture. We use the architecture proposed by Yi et al. as a baseline. The network incorporates three independent streams, in which three overlapping parts of a person image (top, middle, and bottom) are processed separately, and produces a 500-dimensional descriptor as output.
Each of the three streams incorporates two convolutional layers.
Multi-region Bilinear Model. Bilinear CNNs are motivated by the specialized pooling operation that aggregates the correlations across maps coming from different feature extractors. The aggregation, however, discards all spatial information that remains in the network prior to the application of the operation. This is justified when the images lack even loose alignment (as, e.g., in the case of some fine-grained classification datasets), but is sub-optimal in our case, where relatively tight bounding boxes are either manually selected or obtained using a good person detector, so some loose geometric alignment between images is always present. Therefore, we replace the bilinear layer with a multi-region bilinear layer, which allows us to retain some of the geometric information. Our modification is, of course, similar to many other approaches in computer vision, notably the classical spatial pyramids of Lazebnik et al. In more detail, similarly to the original bilinear CNNs, we introduce the bilinear model for image similarity as a quadruple $(f_A, f_B, P, S)$, where $f_A$ and $f_B$ are feature extractor functions (implemented as CNNs), $P$ is the pooling function, and $S$ is the similarity function. A feature function $f$ takes an image $I$ at location $l$ and outputs a feature vector of fixed dimension (unlike the original work, we use vector notation for features for simplicity): $f(l, I) \in \mathbb{R}^C$. In this work, two convolutional CNNs (without fully-connected layers) serve as the two feature extractors $f_A$ and $f_B$. For each of the two images in the pair, at each spatial location, the outputs of the two feature extractors $f_A(l, I)$ and $f_B(l, I)$ are combined using the bilinear operation:
$$\mathrm{bilinear}(l, I) = f_A(l, I)\, f_B(l, I)^{\top}. \qquad (1)$$
Using operation (1), we compute the bilinear feature vector for each spatial location $l$ of the image $I$. If the feature extractors $f_A$ and $f_B$ output local feature vectors of sizes $M$ and $N$ respectively, their bilinear combination has size $M \times N$, or $MN$ if reshaped into a column vector.
We then aggregate the obtained bilinear features by pooling across locations that belong to a predefined set of image regions $R_1, \ldots, R_K$, where $K$ is the number of chosen regions. After such pooling, we get a pooled feature vector for each image region (as opposed to the single whole-image feature vector obtained in the original bilinear CNN):
$$\phi(I, R_k) = \sum_{l \in R_k} \mathrm{bilinear}(l, I). \qquad (3)$$
Finally, in order to get a descriptor for image $I$, we combine all region descriptors into a matrix of size $K \times MN$:
$$D(I) = \left[\phi(I, R_1); \ldots; \phi(I, R_K)\right].$$
To pick the set of regions, in our experiments we simply used a grid of equally-sized non-overlapping patches (note that the receptive fields of units from different regions still overlap rather strongly). The scheme of the multi-region bilinear CNN architecture is shown in Figure 2a.
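The multi-region pooling over a grid of equal patches can be sketched as follows (a simplified numpy illustration with a hypothetical `grid` argument; in the actual architecture this is applied inside each of the three part streams):

```python
import numpy as np

def mr_bilinear_pool(fa, fb, grid=(2, 2)):
    """Multi-region bilinear pooling over a grid of equal patches.

    Instead of sum-pooling the per-location outer products over the
    whole map (which discards all geometry), they are pooled separately
    inside each cell of a grid of non-overlapping regions, and the K
    region descriptors are stacked, keeping a coarse spatial layout.
    """
    h, w, m = fa.shape
    n = fb.shape[-1]
    ry, rx = grid
    outer = np.einsum('hwi,hwj->hwij', fa, fb)        # (h, w, m, n)
    regions = []
    for rows in np.array_split(np.arange(h), ry):
        for cols in np.array_split(np.arange(w), rx):
            cell = outer[np.ix_(rows, cols)]          # one grid cell
            regions.append(cell.sum(axis=(0, 1)).reshape(m * n))
    return np.stack(regions)                          # (K, m*n)

# toy 4x4 maps, 2x2 grid -> 4 region descriptors of length 2*3
desc = mr_bilinear_pool(np.ones((4, 4, 2)), np.ones((4, 4, 3)))
print(desc.shape)
```

With `grid=(1, 1)` this degenerates to ordinary whole-image bilinear pooling, which is exactly the B-CNN baseline compared against in the experiments.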
We incorporate the multi-region bilinear operation (3) into the convolutional architecture in the following way: instead of using one sub-network for each image part, we use two feature extractors with the same convolutional architecture described above. Their outputs are combined by the multi-region bilinear operation (3) after the second convolution. The three bilinear outputs for the three image parts are then concatenated and turned into a 500-dimensional image descriptor by an extra fully-connected layer. The overall scheme of the multi-region bilinear CNN for each of the two siamese sub-networks used in this work is shown in Figure 2b.
Learning the model. We use deep embedding learning, where multiple pedestrian images are fed into identical neural networks and the loss then pulls descriptors corresponding to the same person (matching pairs) closer together and pushes descriptors corresponding to different people (non-matching pairs) apart. To learn the embeddings, we use the recently proposed Histogram loss, which has been shown to be effective for the person re-identification task.
4 Experiments
Datasets and evaluation protocols. We investigate the performance of the CNN method and its bilinear variants (Figure 2) on three re-identification datasets: CUHK01, CUHK03, and Market-1501. The CUHK01 dataset contains images of 971 identities from two disjoint camera views. Each identity has two samples per camera view. We used 485 randomly chosen identities for training and the remaining 486 for testing.
The CUHK03 dataset includes 13,164 images of 1,360 pedestrians captured by 3 pairs of cameras. Two versions of the dataset are provided: CUHK03-labeled and CUHK03-detected, with manually labeled and automatically detected bounding boxes respectively. We provide results for both versions.
Following the established protocol, we use the Recall@K metric to report our results. In more detail, the evaluation protocol accepted for CUHK03 is the following: the 1,360 identities are split into 1,160 identities for training, 100 for validation, and 100 for testing. At test time, single-shot Recall@K curves are calculated. Five random splits are used for both CUHK01 and CUHK03 to calculate the resulting average Recall@K. Sample images from the CUHK03 dataset are shown in Figure 1.
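For concreteness, single-shot Recall@K for one query can be computed as below (a minimal sketch with names of our choosing, not code from the paper; averaging over all queries and splits yields the reported curves):

```python
def recall_at_k(ranked_ids, query_id, k):
    """Single-shot Recall@K for one query.

    ranked_ids: gallery identities sorted by descriptor similarity to
    the query. Returns 1 if the correct identity appears among the
    top-k ranked identities, else 0.
    """
    return int(query_id in ranked_ids[:k])

# toy example: the correct identity (5) is ranked third,
# so Recall@K is 0 for K < 3 and 1 for K >= 3
ranked = [7, 3, 5, 9]
print([recall_at_k(ranked, 5, k) for k in (1, 2, 3, 4)])
```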
| Method | r = 1 | r = 5 | r = 10 | r = 20 |
| Method | r = 1 | r = 5 | r = 10 | r = 20 |
We also report results on the Market-1501 dataset. This dataset contains 32,643 images of 1,501 identities; each identity is captured by two to six cameras. The dataset is randomly divided into a test set of 750 identities and a training set of 751 identities.
Architectures. In the experiments, we include the baseline CNN architecture described above as one of the baselines. We also evaluate the baseline bilinear CNN ("B-CNN") architecture, where bilinear features are pooled over all locations of each of the three image parts; this corresponds to formula (3) with the whole image part used as a single pooling region. Finally, we present results for the Multi-region Bilinear CNN ("MR B-CNN") introduced in this paper (Figure 2).
Implementation details. We form training pairs inside each batch, which consists of 128 randomly chosen training images (from all cameras). The training set is shuffled after each epoch, so the network sees many different image pairs during training. All images are resized to a height of 160 and a width of 60 pixels. Cosine similarity is used to compare pairs of image descriptors. As discussed above, the Histogram loss is used to learn the models.
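The intra-batch pair formation described above can be sketched as follows (a simplified illustration with hypothetical names; the exact sampling details of the training code are not reproduced here):

```python
import numpy as np

def batch_pairs(labels):
    """Enumerate all matching / non-matching pairs inside a batch.

    labels: person identities of the batch images. Returns two lists
    of index pairs: same-identity (positive) and different-identity
    (negative) pairs, as used to drive a pairwise embedding loss.
    """
    labels = np.asarray(labels)
    pos, neg = [], []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            (pos if labels[i] == labels[j] else neg).append((i, j))
    return pos, neg

# a 4-image toy batch with two identities yields 2 positive
# and 4 negative pairs
pos, neg = batch_pairs([0, 0, 1, 1])
print(pos, neg)
```

Enumerating pairs inside the batch rather than sampling them globally means a batch of 128 images yields thousands of pairs per update, which is what makes histogram-based pairwise losses practical.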
| Method | r = 1 | r = 5 | r = 10 | mAP |
| Method | r = 1 | r = 5 | r = 10 | r = 20 |
We train the networks with weight decay. The learning rate follows the "step" policy: starting from its initial value, it is divided by ten when the performance on the validation set stops improving. A dropout layer with probability 0.5 is inserted before the fully-connected layer. The best iteration is chosen using the validation set. For CUHK01, we fine-tune the network pretrained on CUHK03.
Variations of the Bilinear CNN architecture. We have conducted a number of experiments varying the pooling area for bilinear features (MR B-CNN), including full-area pooling (B-CNN), on CUHK03-labeled. Here we demonstrate results for the best-performing MR B-CNN pooling grid, which we found the most beneficial for the CUHK03 dataset. We also compare against the B-CNN architecture, where no spatial information is preserved. In Figure 3a and Figure 3b, B-CNN is shown to be outperformed by the other two architectures by a large margin. This result is not specific to a particular loss, as we observed the same in our preliminary experiments with the Binomial Deviance loss. The MR B-CNN architecture shows uniform improvement over the baseline CNN architecture on all three datasets (Figure 3a,b,c,d).
Comparison with state-of-the-art methods. To our knowledge, the Multi-region Bilinear CNNs introduced in this paper outperform previously published methods on the CUHK03 (both 'detected' and 'labeled' versions) and Market-1501 datasets. Recall@K values for several ranks are shown in Table 1, Table 2, and Table 3 (the single-query setting was used). For the Market-1501 dataset, the mean average precision (mAP) is additionally shown. The results for CUHK01 are shown in Table 4.
In this paper, we demonstrated an application of the new Multi-region Bilinear CNN architecture to the problem of person re-identification. Having tried different variants of the bilinear architecture, we showed that such architectures give state-of-the-art performance on larger datasets. In particular, the Multi-region Bilinear CNN retains some spatial information and extracts more complex features, increasing the number of parameters over the baseline CNN without overfitting. We have demonstrated a notable gap between the performance of the Multi-region Bilinear CNN and that of the standard CNN.
-  E. Ahmed, M. J. Jones, and T. K. Marks. An improved deep learning architecture for person re-identification. In Conf. Computer Vision and Pattern Recognition, CVPR, pages 3908–3916, 2015.
-  L. Bazzani, M. Cristani, and V. Murino. Symmetry-driven accumulation of local features for human characterization and re-identification. Computer Vision and Image Understanding, 117(2):130–144, 2013.
-  S.-Z. Chen, C.-C. Guo, and J.-H. Lai. Deep ranking for person re-identification via joint representation learning. arXiv preprint arXiv:1505.06821, 2015.
-  S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Conf. Computer Vision and Pattern Recognition, CVPR, pages 539–546, 2005.
-  M. Hirzer, P. M. Roth, and H. Bischof. Person re-identification by efficient impostor-based metric learning. In IEEE International Conference on Advanced Video and Signal-Based Surveillance, AVSS, pages 203–208, 2012.
-  C. Kuo, S. Khamis, and V. D. Shet. Person re-identification using semantic color names and rankboost. In IEEE Workshop on Applications of Computer Vision, WACV 2013, pages 281–287, 2013.
-  S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Conf. Computer Vision and Pattern Recognition, CVPR, pages 2169–2178, 2006.
-  S. Li, M. Shao, and Y. Fu. Cross-view projective dictionary learning for person re-identification. In Int. Joint Conference on Artificial Intelligence, IJCAI, pages 2155–2161, 2015.
-  W. Li, R. Zhao, and X. Wang. Human reidentification with transferred metric learning. In Asian Conference on Computer Vision, ACCV, pages 31–44, 2012.
-  W. Li, R. Zhao, T. Xiao, and X. Wang. Deepreid: Deep filter pairing neural network for person re-identification. In Conf. Computer Vision and Pattern Recognition, CVPR, pages 152–159, 2014.
-  S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person re-identification by local maximal occurrence representation and metric learning. In Conf. Computer Vision and Pattern Recognition, CVPR, pages 2197–2206, 2015.
-  H. Liu, J. Feng, M. Qi, J. Jiang, and S. Yan. End-to-end comparative attention networks for person re-identification. IEEE Trans. Image Processing, 26(7):3492–3506, 2017.
-  B. Ma, Y. Su, and F. Jurie. Bicov: a novel image representation for person re-identification and face verification. In British Machine Vision Conference, BMVC, pages 1–11, 2012.
-  B. Ma, Y. Su, and F. Jurie. Local descriptors encoded by fisher vectors for person re-identification. In European Conference on Computer Vision - ECCV, Workshops, pages 413–422, 2012.
-  S. Paisitkriangkrai, C. Shen, and A. van den Hengel. Learning to rank in person re-identification with metric ensembles. In Conf. Computer Vision and Pattern Recognition, CVPR, pages 1846–1855, 2015.
-  B. Prosser, W. Zheng, S. Gong, and T. Xiang. Person re-identification by support vector ranking. In British Machine Vision Conference, BMVC, pages 1–11, 2010.
-  P. M. Roth, M. Hirzer, M. Köstinger, C. Beleznai, and H. Bischof. Mahalanobis distance learning for person re-identification. In Person Re-Identification, pages 247–267. 2014.
-  A. RoyChowdhury, T.-Y. Lin, S. Maji, and E. Learned-Miller. Face identification with bilinear CNNs. arXiv preprint arXiv:1506.01342, 2015.
-  C. Su, S. Zhang, J. Xing, W. Gao, and Q. Tian. Deep attributes driven multi-camera person re-identification. In European Conference on Computer Vision, ECCV, pages 475–491, 2016.
-  T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN models for fine-grained visual recognition. In International Conference on Computer Vision, ICCV, 2015.
-  E. Ustinova and V. Lempitsky. Learning deep embeddings with histogram loss. In Advances in Neural Information Processing Systems, NIPS, 2016.
-  R. R. Varior, M. Haloi, and G. Wang. Gated siamese convolutional neural network architecture for human re-identification. In European Conference on Computer Vision, ECCV, pages 791–808, 2016.
-  R. R. Varior, B. Shuai, J. Lu, D. Xu, and G. Wang. A siamese long short-term memory architecture for human re-identification. In European Conference on Computer Vision, ECCV, pages 135–153, 2016.
-  C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. (CNS-TR-2011-001), 2011.
-  T. Xiao, H. Li, W. Ouyang, and X. Wang. Learning deep feature representations with domain guided dropout for person re-identification. In Conf. Computer Vision and Pattern Recognition, CVPR, pages 1249–1258, 2016.
-  D. Yi, Z. Lei, and S. Z. Li. Deep metric learning for practical person re-identification. arXiv preprint arXiv:1407.4979, 2014.
-  L. Zhang, T. Xiang, and S. Gong. Learning a discriminative null space for person re-identification. In Conf. Computer Vision and Pattern Recognition, CVPR, 2016.
-  R. Zhao, W. Ouyang, and X. Wang. Person re-identification by saliency learning. arXiv preprint arXiv:1412.1908, 2014.
-  L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In Computer Vision, IEEE International Conference on, 2015.