1 Introduction
Person re-identification (ReID) is an indispensable component of surveillance video analysis. Given a probe image, person ReID aims at retrieving images belonging to the same identity across multiple non-overlapping camera views. Thanks to the emergence of deep learning techniques and large-scale datasets [57, 59, 17, 51], the field of person ReID has evolved rapidly. Though much progress has been achieved, the task remains challenging due to drastic pose variation, occlusion and background clutter.
The success of CNNs is mainly attributed to their strong power to learn discriminative features. The goal of representation learning is to pull samples of the same identity close together and push those of different identities far away from each other in the embedding space, which already resembles a clustering process. On the other hand, the well-known spectral clustering algorithm [5] partitions the data into groups such that data from different groups have very low similarities and data within a group have high similarities. The goals of these two techniques are essentially identical. However, the former mainly applies a powerful transformation such as a CNN to individual samples, while the latter utilizes the relations between samples to transform groups of samples; the two techniques complement each other naturally. Given this close connection, it is natural to explicitly integrate the clustering process into the current ReID pipeline to take the relations between samples into account, favorably in an end-to-end manner. Recently, Shen et al. proposed GSRW [33] and SGGNN [34], which are closely related to our work. The key difference is that our work focuses on feature transformation, while theirs focus on similarity transformation. As shown in later sections, feature transformation offers not only higher efficiency and simpler implementation, but also better performance.
In the context of supervised person ReID, the partitions are known since the labels are provided. Nevertheless, the integration is nontrivial, because the typical spectral clustering method involves eigendecomposition, which is computationally expensive; moreover, the gradient w.r.t. the eigendecomposition is hard to compute. To enable efficient computation, we equivalently optimize the transition probability from one subgraph to another on the similarity graph. Consequently, the module is fully differentiable and only brings marginal computational cost. In addition, we can also adopt this method at test time, which further improves the performance. Despite its simplicity, the proposed method improves the performance significantly over strong baselines.
In summary, the contributions of this paper are threefold:

To efficiently optimize group-wise similarities among different identities, we incorporate spectral clustering as a component of deep representation learning.

We devise a novel Spectral Feature Transformation (SFT) module to implement spectral clustering in an end-to-end manner. It offers significant performance improvement with negligible overhead.

Extensive experiments on four public benchmarks validate the effectiveness of the proposed method. Our approach outperforms other state-of-the-art methods dramatically without any bells and whistles.
2 Related Works
Person re-identification. Facilitated by deep learning techniques, the field of person ReID has witnessed great progress in the last few years. Recent efforts on deep-learning-based person ReID can be roughly categorized into two directions. One is to improve the network architecture for person ReID. Besides common CNN techniques such as multi-scale feature aggregation [27] or attention modules [18, 44], tailor-made architectures [41, 39, 30, 49] for person ReID have also been devised. Sun et al. [41] split the feature map into several horizontal parts and imposed supervision on them directly. Suh et al. [39] employed a sub-network to learn body part features and fused them with appearance features via a bilinear-pooling layer. Sarfraz et al. [30] exploited joint keypoints as additional input and used multiple branches to capture orientation-specific features. These methods explicitly consider the structure of the human body to alleviate the impact of occlusion or inaccurate detections, thus improving the performance.
The other direction concentrates on developing discriminative loss functions, with two dominant streams. One is to introduce classical metric learning into deep learning, such as the contrastive loss [8] and the triplet loss [31]. The convergence and performance of deep metric learning are highly dependent on the sample organization of the mini-batch in training, and several works improve on it by selecting the most informative pairs [37, 26, 11]. The other stream focuses on reducing the intra-class variance and increasing the inter-class margin of classification loss functions. For example, the center loss [53] regularizes the distance between each data sample and its corresponding class center; large-margin softmax [21] and its variants [20, 46, 45] enforce various types of margins on the classical softmax loss function. They have all demonstrated effectiveness in face recognition and person ReID tasks.
In addition to the aforementioned directions, some works [59, 22, 23] have emerged recently which leverage powerful Generative Adversarial Networks [6] to generate person images for training.
Re-ranking. Re-ranking is a post-processing technique to refine the ranking of retrieval results. In essence, re-ranking methods aim at enhancing the original similarity metric with the information of local neighbors. Early works [14, 28] tried to explore k-reciprocal nearest neighbors for general image retrieval. Recently, Zhong et al. [60] introduced the re-ranking technique into the ReID task. They combined the Jaccard distance of reciprocal encodings and the Euclidean distance of the original features in post-processing. Along this line, Sarfraz et al. [30] aggregated distances between expanded neighbors of image pairs to reinforce the original pairwise distance. Moreover, to take advantage of the diversity within a single feature, Yu et al. [56] further fused distances between different sub-features.

Spectral clustering. Spectral clustering is a conventional algorithm for data clustering. It was pioneered by Donath et al. [5]
and became popular in the pattern recognition community after some landmark works [35, 25, 24, 43]. It is based on spectral graph theory and converts the data clustering problem into a graph partition problem. In contrast to K-Means, spectral clustering makes no assumption on the structure of the clusters, so it can generalize to more complex scenarios such as intertwined spirals. Some recent works [12, 32, 42, 54] tried to incorporate spectral clustering into deep learning. Tang et al. [42] proposed a normalized cut loss for weakly supervised segmentation to regularize the connections between pixels. Wu et al. [54] optimized the Neighborhood Component Analysis (NCA) criterion to preserve the neighborhood structure in the semantic space. Though spectral clustering has been applied extensively, combining it with CNNs for person re-identification is still under investigation.

Graph convolutional networks. GCN breaks the assumption of convolution that computation can only take place within local regions. Due to this complementarity, it is an effective way to aggregate global information in the current CNN framework. It was first proposed by Kipf et al. [15]
for semi-supervised classification. Currently, GCN is a rising research direction in computer vision. For example, Yan et al. [55] modeled the dynamics of human body skeletons via graph convolutional networks. Wang et al. [47, 48] exploited GCN and its equivalent view, non-local feature aggregation, to capture the spatial-temporal relations between convolutional features and between object proposals in videos, respectively. As mentioned before, the two works closest to ours are [33, 34]. They both applied similarity transformation on the graph to achieve better results. More detailed comparisons and discussions of these two methods are presented in Sec. 4.

3 Method
Given the close connection between clustering and deep-learning-based person ReID, it is natural to bring clustering techniques into deep person ReID. Among dozens of candidates, spectral clustering shows superiority in many aspects. In contrast to k-means, which clusters data points by Euclidean distance, spectral clustering focuses on the set of similarities between and within groups, which is more flexible. It makes no prior assumption on the data topology, making it applicable to more complex scenarios.
We first give a brief introduction to the spectral clustering algorithm in Section 3.1. We then elaborate on the proposed Spectral Feature Transformation (SFT) in Section 3.2. In Sections 3.3 and 3.4, we explain the training strategies in detail and then extend the proposed SFT module to the post-processing stage.
3.1 Graph Cut and Spectral Clustering
We first review classical spectral clustering and its closely related concept, the graph cut. From the viewpoint of graphs, the data can be represented as an undirected graph $G = (V, E)$, wherein each data point $x_i$ is a vertex of the graph and the edge between $x_i$ and $x_j$ is weighted by the similarity $w_{ij}$ of the corresponding data points. For brevity, we take the two-cluster problem as an example in the following formulation; readers can refer to [38] for the multi-cluster extension.
To obtain the optimal clustering result on a graph, an intuitive way is to solve a minimum cut problem. For two disjoint subsets $A, B \subset V$, the cut between them is defined as

$$\mathrm{cut}(A, B) = \sum_{i \in A,\, j \in B} w_{ij} \qquad (1)$$
However, minimizing the vanilla cut often leads to a trivial solution where a single vertex is separated from the rest of the graph. To circumvent this issue, Shi et al. [35] proposed to normalize each subgraph by its volume:

$$\mathrm{Ncut}(A, B) = \frac{\mathrm{cut}(A, B)}{\mathrm{vol}(A)} + \frac{\mathrm{cut}(A, B)}{\mathrm{vol}(B)} \qquad (2)$$

where $\mathrm{vol}(A) = \sum_{i \in A,\, j \in V} w_{ij}$ is the total connection from nodes in $A$ to all nodes in the graph.
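As a concrete illustration, the cut and Ncut quantities above can be computed on a toy graph with NumPy (the edge weights below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Toy undirected graph with two obvious clusters {0, 1} and {2, 3}
# (edge weights invented for illustration).
W = np.array([
    [0.0, 0.9, 0.1, 0.0],
    [0.9, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, 0.9],
    [0.0, 0.1, 0.9, 0.0],
])

def cut(W, A, B):
    # Total weight of edges crossing the partition (Eqn. 1).
    return W[np.ix_(A, B)].sum()

def vol(W, A):
    # Total connection from nodes in A to all nodes in the graph.
    return W[A, :].sum()

def ncut(W, A, B):
    # Normalized cut of Shi and Malik (Eqn. 2).
    c = cut(W, A, B)
    return c / vol(W, A) + c / vol(W, B)
```

For the good partition {0, 1} vs. {2, 3} the cut is small relative to each subgraph's volume, so Ncut is small; splitting off a single vertex would give a much larger value.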
3.2 Spectral Feature Transformation
Suppose $X \in \mathbb{R}^{n \times d}$ is the final embedding matrix of a training batch, where $n$ and $d$ denote the number of data points and the dimension of the embedding vector, respectively. We adopt cosine similarity with an exponential transformation to measure the affinities between samples. Formally, each element of the affinity matrix $W$ is defined as

$$W_{ij} = \exp\left(\frac{x_i^\top x_j}{\|x_i\| \|x_j\| \, \sigma}\right) \qquad (3)$$

where $\sigma$ is the temperature parameter. Now, we can define a graph on the samples in a mini-batch. If the affinity matrix is properly normalized, it actually defines a random walk process on the graph. By normalizing the rows of $W$ to 1, we can derive the stochastic matrix $P$, which reflects the transition probabilities between nodes:

$$P = D^{-1} W \qquad (4)$$

where $D$ is a diagonal matrix whose elements are defined as $D_{ii} = \sum_j W_{ij}$. In practice, the computation of $P$ can be implemented by applying the softmax function with temperature $\sigma$ to the cosine similarity matrix.
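As a sketch, the whole transformation (cosine affinities, row-softmax with temperature, then multiplication with the features) can be written in a few lines of NumPy; the function name and the default temperature of 0.1 are illustrative:

```python
import numpy as np

def spectral_feature_transform(X, sigma=0.1):
    """Sketch of SFT: row-softmax of scaled cosine similarities,
    then multiplication with the original features."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    S = (Xn @ Xn.T) / sigma                            # cosine similarity / temperature
    S = S - S.max(axis=1, keepdims=True)               # numerical stability
    P = np.exp(S)
    P = P / P.sum(axis=1, keepdims=True)               # row-stochastic matrix P
    return P @ X                                       # transformed features

X = np.random.randn(8, 16)            # a batch of 8 embeddings
Xt = spectral_feature_transform(X)    # same shape, (8, 16)
```

Subtracting the row maximum before exponentiation leaves the softmax unchanged but keeps the exponentials from overflowing when the temperature is small.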
The most intriguing property of $P$ is the escaping probability $P(A \to B)$, which measures the transition probability from a subgraph $A$ to another subgraph $B$. For the ReID task, a subgraph denotes the set of samples belonging to the same identity, and the escaping probability is essentially the chance of an identity being misclassified. We give the formal definition of $P(A \to B)$ below.
The stationary distribution $\pi$ of the underlying random walk process is simply given as

$$\pi_i = \frac{D_{ii}}{\sum_j D_{jj}} = \frac{D_{ii}}{\mathrm{vol}(V)} \qquad (5)$$

$\pi_i$ represents the normalized connection strength of one sample to the rest of the graph. A sample with more similar samples within the graph tends to have larger connection strength.
Finally, combining Eqn. 4 and 5, the escaping probability is defined as

$$P(A \to B) = \frac{\sum_{i \in A,\, j \in B} \pi_i P_{ij}}{\sum_{i \in A} \pi_i} \qquad (6)$$
It is straightforward that a small $P(A \to B)$ requires strong intra-cluster connections and weak inter-cluster connections, which is the desired property for spectral clustering. In fact, as shown in [24],

$$\mathrm{Ncut}(A, B) = P(A \to B) + P(B \to A) \qquad (7)$$
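This identity can be checked numerically; the NumPy snippet below (random symmetric affinities and an arbitrary two-way partition, chosen only for illustration) verifies that the two escaping probabilities sum to the Ncut value:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((6, 6))
W = (W + W.T) / 2                  # symmetric affinity matrix
d = W.sum(axis=1)                  # degrees d_i = D_ii
P = W / d[:, None]                 # transition matrix P = D^{-1} W
pi = d / d.sum()                   # stationary distribution pi_i

A, B = [0, 1, 2], [3, 4, 5]        # an arbitrary two-way partition

def escape(src, dst):
    # P(src -> dst) = sum_{i in src, j in dst} pi_i P_ij / sum_{i in src} pi_i
    return (pi[src][:, None] * P[np.ix_(src, dst)]).sum() / pi[src].sum()

c = W[np.ix_(A, B)].sum()
ncut = c / W[A, :].sum() + c / W[B, :].sum()
assert np.isclose(escape(A, B) + escape(B, A), ncut)
```

The identity holds exactly: since $\pi_i P_{ij} = w_{ij} / \mathrm{vol}(V)$, each escaping probability reduces to $\mathrm{cut}(A, B) / \mathrm{vol}(\cdot)$.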
In the fully supervised setting, the partition $\{A, B\}$ is given, so we mainly focus on minimizing the transition probability $P(A \to B)$ w.r.t. the stochastic matrix $P$. By minimizing the transition probability, we essentially minimize the probability of misclassifying a data sample in group $A$ into group $B$. Note that this supervision is applied group-wise rather than image-wise, which is fundamentally different from previous works. We can directly utilize the cross-entropy loss w.r.t. the one-hot ground-truth labels to optimize our clustering objective.
This can be simply implemented by multiplying $P$ with the original feature matrix $X$, and optimizing the spectral feature transformation module with standard SGD in an end-to-end manner. The overall architecture of the proposed network is displayed in Figure 1.

3.3 Training Strategy
In the early stage of training, the features extracted by the neural network are too ambiguous to describe the corresponding images accurately. Transition probabilities derived from these features are thus unreliable, which drifts the feature transformation and makes the training process unstable. Though a warm-up strategy [7] mitigates the issue to some extent, the performance is still not satisfactory on some datasets. To eliminate the problem, we impose supervision directly on the original features to prevent them from drifting, an idea inspired by deeply-supervised nets [16]. It is noteworthy that the original feature and the transformed feature share the same classifier during training. Only then can we guarantee that the modes of the features are aligned before and after spectral feature transformation.

To fully unleash the power of spectral clustering, it is necessary to satisfy the assumption that the input data obey the underlying cluster structure. In other words, there must be sufficient images for each identity in a training batch. Thus, we adopt the sampling strategy proposed by Hermans et al. [11], which is ubiquitous in deep metric learning: a mini-batch in training contains P identities and each identity has K images.
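A minimal sketch of this sampling strategy (often called PK sampling after Hermans et al. [11]; the helper name and structure here are our own):

```python
import random
from collections import defaultdict

def pk_sample(labels, P=16, K=8):
    """Draw P identities, then K images per identity, so that every
    mini-batch carries the cluster structure the method relies on."""
    by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        by_id[pid].append(idx)
    # Only identities with at least K images can be sampled without replacement.
    eligible = [pid for pid, idxs in by_id.items() if len(idxs) >= K]
    batch = []
    for pid in random.sample(eligible, P):
        batch.extend(random.sample(by_id[pid], K))
    return batch  # P * K image indices
```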
3.4 Post-processing
Inspired by [33, 34], we further extend the proposed spectral feature transformation to the post-processing stage. The extension is based on the assumption that there is an underlying cluster structure in the neighborhood of the probe image in the gallery. As the evaluation protocol implies, the top of the ranking list has a larger impact on the final performance, so we only refine the top-k ranking list to balance efficiency and performance.

Given a probe image, images in the gallery are ranked according to their cosine similarities with it. Then, we collect the features of the top-k items and perform spectral feature transformation on them. In the end, the top-k ranking list is recomputed based on the transformed features. Since k is much smaller than the size of the gallery and the features are extracted in advance, the refinement process introduces negligible overhead.
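The refinement step can be sketched as follows, assuming pre-extracted (unnormalized) features; the function name, the local-graph construction and the default parameters are illustrative, not the paper's exact implementation:

```python
import numpy as np

def refine_topk(query_feat, gallery_feats, k=50, sigma=0.1):
    """Post-processing sketch: apply SFT to the query plus its
    top-k gallery neighbors, then re-rank those k items."""
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    topk = np.argsort(-(g @ q))[:k]            # initial top-k ranking
    X = np.vstack([q, g[topk]])                # local graph of k + 1 nodes
    S = (X @ X.T) / sigma                      # cosine affinities / temperature
    P = np.exp(S - S.max(axis=1, keepdims=True))
    P = P / P.sum(axis=1, keepdims=True)       # row-stochastic matrix
    Xt = P @ X                                 # transformed features
    Xt = Xt / np.linalg.norm(Xt, axis=1, keepdims=True)
    new_sims = Xt[1:] @ Xt[0]                  # query vs. refined neighbors
    return topk[np.argsort(-new_sims)]         # re-ordered top-k indices
```

Only the k + 1 local features are touched, so the cost per query is independent of the gallery size.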
4 Discussion
In the sequel, we will analyze some appealing properties of our method which contribute to the improvement. Next, distinctions and connections with other works will also be discussed.
4.1 Appealing Properties
Relax Assumptions and Ease Optimization
Instead of directly constraining pairwise similarities, our method relaxes the learning objective to optimizing these similarities after the group-wise transformation introduced by SFT. The transformation moves features towards their cluster centers, which enhances the discriminability of the features. This eases optimization and feature learning, and finally leads to higher performance.
Training Diversity
According to the definition, all samples in the mini-batch participate in the SFT operation. The transformed feature of a given data sample may therefore differ across epochs, because the composition of the mini-batch changes each time. This desirable property introduces massive diversity, which effectively alleviates the risk of overfitting.
4.2 Distinctions from SGGNN
SGGNN [34] addressed the ReID problem from the viewpoint of graphs, which is similar to ours. However, there are some obvious discrepancies. In terms of the definition of the graph, for each image in the probe set they construct one graph with probe-to-gallery similarities as nodes and gallery-to-gallery similarities as edges, whereas in our approach each node directly corresponds to the feature of a sample and each edge is defined as the similarity of its endpoints. Consequently, in each mini-batch they need to construct multiple graphs, one per probe image, while we treat the whole mini-batch as one single graph, which is conceptually much simpler and faster.
4.3 Distinctions from Ncut Loss
Both Ncut loss and our method focus on optimizing group-wise similarities. However, the detailed implementations are different. Ncut loss realizes the goal by directly optimizing the Rayleigh quotient, which is equivalent to minimizing the transition probability, as previously stated. It is often used along with an extra cross-entropy loss to constrain the learning of the feature. We compare the two loss functions in Section 5.3.
4.4 Distinctions from Graph Convolution
Graph convolution was proposed to relax the spatial connectivity assumptions of the traditional convolution operator. In particular, instead of computing on spatially connected locations, graph convolution computes over the whole graph, weighted by the connectivity of each node. It generalizes the convolution operator to non-Euclidean data. In contrast, the proposed SFT module is a non-parametric operation applied on the final features to enhance them. The two methods differ in their motivations.
5 Experiments
In this section, we conduct extensive experiments on four public person re-identification benchmarks, i.e., Market-1501 [57], DukeMTMC-reID [59, 29], CUHK03 [17] and MSMT17 [51].
5.1 Evaluation Protocols
We follow the standard ReID evaluation protocol to evaluate our method and compare it with other works. Given a query image, gallery images are ranked according to their cosine similarities with it. Based on the generated ranking list, the Cumulative Matching Characteristics (CMC) at rank-1 and rank-5 and the mean average precision (mAP) are calculated to evaluate the performance of the model.
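For reference, rank-1 accuracy and the average precision of a single query (whose mean over all queries gives mAP) can be computed as in this minimal sketch:

```python
import numpy as np

def rank1_and_ap(ranked_gallery_ids, query_id):
    """Rank-1 accuracy and average precision for one ranking list."""
    matches = np.asarray(ranked_gallery_ids) == query_id
    rank1 = float(matches[0])                       # top-1 hit or not
    hits = np.cumsum(matches)                       # matches seen so far
    # Precision at each position where a match occurs (1-indexed ranks).
    precisions = hits[matches] / (np.flatnonzero(matches) + 1)
    ap = precisions.mean() if matches.any() else 0.0
    return rank1, ap

# Matches at ranks 2 and 4: precisions 1/2 and 2/4, so AP = 0.5.
r1, ap = rank1_and_ap([5, 1, 7, 1, 9], query_id=1)
```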
Note that on the CUHK03 dataset, we use the recently proposed protocol in [60]. The new protocol splits the whole dataset into 767 and 700 identities for training and testing, respectively, which is much harder than the original one. On Market-1501, besides the traditional single-query protocol, we also introduce 500k distractors to evaluate the scalability and robustness of the proposed ReID model.
5.2 Implementation Details
We adopt ResNet-50 [10] pretrained on ImageNet [4] as our backbone network. We use the output of the global average pooling layer of ResNet as the embedding vector. In order to preserve more fine-grained information, the down-sampling of the last stage of ResNet is discarded, which leads to a total stride of 16. The temperature σ of the SFT layer is set to 0.02 for the MSMT17 dataset and 0.1 for the remaining three datasets according to cross validation. As for the classifier, we follow a bottleneck design [41] which has been proven effective by many works. Specifically, a fully-connected layer is applied to reduce the dimension of the feature from 2048 to 512, followed by Batch Normalization [13] and PReLU [9]. The output is then normalized and fed into the loss function. We adopt the AM-Softmax [45] loss for the final classification. In all experiments, the margin and the scaling parameter of AM-Softmax are set to 0.3 and 15, respectively.

In terms of data preprocessing, input images are resized to a fixed resolution. Random horizontal flipping and random erasing [61] are utilized as data augmentation. In training, each mini-batch contains 16 persons with 8 images each, resulting in a batch size of 128. Stochastic Gradient Descent (SGD) with a momentum of 0.9 is applied for optimization. We train 140 epochs in total. The learning rate warms up from 0.001 to 0.1 linearly in the first 20 epochs, and is decayed to 0.01 and 0.001 at the 80th and 100th epochs, respectively. In the post-processing stage, we refine the top-50 ranking list for each probe image on Market-1501, DukeMTMC-reID and CUHK03, while for MSMT17 the top-150 ranking list is refined, since its gallery is much larger than those of the other datasets. Our implementation is based on the MXNet [2] framework. The code will be made publicly available later.

5.3 Ablation Study

Component  Market1501  DukeMTMC  CUHK03  MSMT17  
sft  ds(u)  ds(s)  post  kr  mAP  Rank1  mAP  Rank1  mAP  Rank1  mAP  Rank1 
77.3  91.2  66.1  83.3  40.6  44.9  37.3  66.7  
✓  
✓  ✓  
✓  ✓  
✓  ✓  ✓  
✓  ✓  ✓  
To investigate the effect of each proposed component, we conduct a series of controlled experiments on all the benchmarks mentioned above.
Effectiveness of Spectral Feature Transformation. As shown in the first two rows of Table 1, consistent improvements are achieved on all four benchmarks. Rank-1 accuracy/mAP are improved by 0.4%/2.3%, 2.1%/4.3%, 21.4%/19.6% and 5.2%/7.4% on Market-1501, DukeMTMC-reID, CUHK03 and MSMT17, respectively. Our method is particularly advantageous on datasets where each identity consists of abundant views, such as CUHK03, DukeMTMC-reID and MSMT17. In addition, we visualize the affinity matrices of images of 6 different identities extracted with and without the SFT module. It can be observed from Figure 2 that the connections between different identities are obviously suppressed. Thus, the features extracted by our method are more discriminative for person ReID.
Effectiveness of Deep Supervision. To investigate the influence of the deep supervision mentioned in Section 3.3, we combine it with our SFT module during training. It can be seen from rows 1 and 4 of Table 1 that deep supervision contributes a significant improvement. Furthermore, we investigate the necessity of sharing the classifier between the features before and after the module. As shown in rows 3-4 of Table 1, the performance drops back to the baseline level when an independent classifier is used for training. This indicates that the classifier for the original feature could dominate the training process in the unshared setting.
Effectiveness of Post-processing. We also evaluate our method with and without the proposed post-processing. As reported in rows 4-5 of Table 1, the proposed post-processing further improves the performance. To further clarify its effectiveness, we make a comparison with the reciprocal-encoding method [60]. First, our method is clearly compatible with other post-processing methods: reciprocal encoding can still boost the performance significantly. Moreover, the proposed post-processing surpasses reciprocal encoding on all four datasets in terms of rank-1 accuracy, which is the most considered metric. For mAP, our post-processing method shows an advantage on the CUHK03 dataset, while reciprocal encoding is better on the other three. Compared with the proposed post-processing, reciprocal encoding uses more neighborhood information, which makes it computationally expensive. Combining all these components, the performance of our method improves dramatically with negligible overhead and minor modification. An example of the retrieval is presented in Figure 3.
Influence of Temperature σ. The proper selection of the affinity function is crucial for the success of spectral clustering, so it is necessary to investigate the impact of σ on the learned features. To this end, we vary σ over five different values and evaluate the performance of the models trained under these settings. The results are visualized in Figure 4. It shows that our method is relatively robust to the value of σ.
Influence of the Number of Images per Identity K. We investigate the trend of the performance when varying K. Given that Market-1501 and CUHK03 are relatively small and cannot support larger K, we only conduct experiments on MSMT17 and DukeMTMC-reID. Figure 5 shows that our approach benefits from larger K, while the performance of the vanilla baseline model even degrades dramatically as K increases. This phenomenon again validates our hypothesis that group-wise training is more advantageous with larger mini-batches, since the training of a single sample can utilize the information of all samples within the mini-batch.
Comparison with Ncut Loss. As mentioned above, Ncut loss and our method realize the same idea from different aspects, so we perform a comparison between the two. To make it fair, we implement Ncut loss based on the same backbone. The results are summarized in Table 2. It is clear that our method outperforms Ncut loss on all benchmarks. We conjecture that Ncut loss suffers from the detached optimization of feature and similarity, while our method benefits from optimizing feature and similarity jointly with a consistent objective.

Dataset  Ncut loss  ours  
mAP  R1  mAP  R1  
Market1501  79.5  92.0  82.7  93.4 
DukeMTMC  66.7  84.0  73.2  86.9 
CUHK03  40.3  45.5  62.4  68.2 
MSMT17  37.4  66.3  47.6  73.6 
5.4 Comparison with State-of-the-art Methods
The proposed method is compared with state-of-the-art methods in this section. To make the comparison fair, we report results with and without post-processing, respectively.
Results on Market-1501 dataset. As shown in Table 3, our method outperforms all competitors without tailor-made architectures or extra auxiliary information, using only a single loss. We further perform a comparison on the dataset with 500k distractors. The results are summarized in Table 4. As reported in the table, our method is robust to distractors. Specifically, when disturbed by 100k distractors, the mAP/rank-1 accuracy of our method only decreases by 4.9%/2.5%. Note that the rank-1 accuracy is still over 90% in this case, while for the other four competitors the degradations are much larger than ours. The performance gaps become even more significant when increasing the distractor size.

Methods  Reference  Market1501  
mAP  R1  R5  
GLAD [52]  ACMMM17  73.9  89.9   
MLFN [1]  CVPR18  74.3  90.0   
HA-CNN [18]  CVPR18  75.7  91.2   
DuATM [36]  CVPR18  76.6  91.4  97.1 
Part-aligned [39]  ECCV18  79.6  91.7  96.9 
PCB [41]  ECCV18  77.4  92.3  97.2 
Mancs [44]  ECCV18  82.3  93.1   
Proposed    82.7  93.4  97.4 
GSRW [33]  CVPR18  82.5  92.7  96.9 
SGGNN [34]  ECCV18  82.8  92.3  96.1 
Part-aligned(KR)  ECCV18  89.9  93.4  96.4 
Proposed(R)    87.5  94.1  97.5 
Proposed(KR)    90.6  93.5  96.6 

Methods  Distractor Size  
0  100k  200k  500k  
mAP  Rank1  mAP  Rank1  mAP  Rank1  mAP  Rank1  
Zheng et al. [58]  59.9  79.5  
APR [19]  62.8  84.0  
TriNet [11]  69.1  84.9  
Part-aligned [39]  79.6  91.7  
Proposed  82.7  93.4  
Results on DukeMTMC-reID dataset. The results on the DukeMTMC-reID dataset are presented in Table 5. It can be seen that our method outperforms other state-of-the-art methods significantly. Specifically, our approach gains 1.4% and 2% improvement over Mancs [44] in terms of mAP and rank-1 accuracy, respectively.

Methods  Reference  DukeMTMC  
mAP  R1  R5  
PSE [30]  CVPR18  62.0  79.8  89.7 
HA-CNN [18]  CVPR18  63.8  80.5   
MLFN [1]  CVPR18  62.8  81.0   
DuATM [36]  CVPR18  64.6  81.8  90.2 
PCB+RPP [41]  ECCV18  69.2  83.3   
Part-aligned [39]  ECCV18  69.3  84.4  92.2 
Mancs [44]  ECCV18  71.8  84.9   
Proposed    73.2  86.9  93.9 
GSRW [33]  CVPR18  66.4  80.7  88.5 
SGGNN [34]  ECCV18  68.2  81.1  88.4 
Part-aligned(KR)  ECCV18  83.9  88.3  93.1 
Proposed(R)    79.6  90.0  94.0 
Proposed(KR)    83.3  88.3  92.0 
Results on CUHK03 dataset. We only conduct experiments on the manually labeled subset of CUHK03. The results are reported in Table 6. It can be observed that our method achieves the best performance among the compared methods.

Methods  Reference  CUHK03  
mAP  R1  R5  
SVDNet [40]  ICCV17  37.8  40.9   
DPFL [3]  ICCV17  40.5  43.0   
HA-CNN [18]  CVPR18  41.0  44.4   
MLFN [1]  CVPR18  49.2  54.7   
DaRe [50]  CVPR18  61.6  66.1   
Proposed    62.4  68.2  84.4 
Results on MSMT17 dataset. Since MSMT17 was released very recently, there is no other published work evaluated on it to the best of our knowledge, so we only compare our method with the baselines reported by the authors [51]. As shown in Table 7, our method outperforms these baselines dramatically. Specifically, it exceeds GLAD by 13.6% and 12.2% in terms of mAP and rank-1 accuracy, respectively. This verifies the scalability and robustness of our method in large-scale scenarios. To put this in perspective, we remind readers that GLAD [52] performs quite well on Market-1501, as recorded in Table 3.
6 Conclusion
Inspired by classical spectral clustering, we have proposed a novel spectral feature transformation module to facilitate the learning of discriminative features. In contrast to most other methods that process samples individually, our method defines a group-wise loss function in the spirit of spectral clustering, and then optimizes the deep neural network under the guidance of this loss. The module only involves a few basic matrix operations, yet the improvement it brings is significant. Furthermore, we extend it to post-processing, which effectively improves the top of the ranking results. Ablation studies on four benchmarks prove the effectiveness and scalability of our method. It reinforces a strong baseline significantly and outperforms other state-of-the-art methods without bells and whistles.
References
 [1] X. Chang, T. M. Hospedales, and T. Xiang. Multi-level factorisation net for person re-identification. In CVPR, 2018.

[2] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. In NIPS Workshop, 2016.
 [3] Y. Chen, X. Zhu, and S. Gong. Person re-identification by deep learning multi-scale representations. In ICCV, 2017.
 [4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
 [5] W. Donath and A. Hoffman. Lower bounds for the partitioning of graphs. IBM Journal of Research and Development, pages 437–442, 1973.
 [6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
 [7] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch SGD: Training imagenet in 1 hour. arXiv:1706.02677, 2017.
 [8] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
 [9] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification. In ICCV, 2015.
 [10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
 [11] A. Hermans, L. Beyer, and B. Leibe. In defense of the triplet loss for person re-identification. arXiv:1703.07737, 2017.
 [12] J. R. Hershey, Z. Chen, J. Le Roux, and S. Watanabe. Deep clustering: Discriminative embeddings for segmentation and separation. In ICASSP, 2016.
 [13] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
 [14] H. Jegou, H. Harzallah, and C. Schmid. A contextual dissimilarity measure for accurate and efficient image search. In CVPR, 2007.
 [15] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
 [16] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In AISTATS, 2015.
 [17] W. Li, R. Zhao, T. Xiao, and X. Wang. DeepReID: Deep filter pairing neural network for person re-identification. In CVPR, 2014.
 [18] W. Li, X. Zhu, and S. Gong. Harmonious attention network for person re-identification. In CVPR, 2018.
 [19] Y. Lin, L. Zheng, Z. Zheng, Y. Wu, and Y. Yang. Improving person re-identification by attribute and identity learning. arXiv:1703.07220, 2017.
 [20] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song. Sphereface: Deep hypersphere embedding for face recognition. In CVPR, 2017.

[21] W. Liu, Y. Wen, Z. Yu, and M. Yang. Large-margin softmax loss for convolutional neural networks. In ICML, 2016.
 [22] L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. In NIPS, 2017.
 [23] L. Ma, Q. Sun, S. Georgoulis, L. Van Gool, B. Schiele, and M. Fritz. Disentangled person image generation. In CVPR, 2018.
 [24] M. Meila and J. Shi. A random walks view of spectral segmentation. In AISTATS, 2001.
 [25] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, 2002.
 [26] H. Oh Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. In CVPR, 2016.
 [27] X. Qian, Y. Fu, Y.-G. Jiang, T. Xiang, and X. Xue. Multi-scale deep learning architectures for person re-identification. In ICCV, 2017.
 [28] D. Qin, S. Gammeter, L. Bossard, T. Quack, and L. Van Gool. Hello neighbor: Accurate object retrieval with k-reciprocal nearest neighbors. In CVPR, 2011.
 [29] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi. Performance measures and a data set for multi-target, multi-camera tracking. In ECCV Workshop, 2016.
 [30] M. S. Sarfraz, A. Schumann, A. Eberle, and R. Stiefelhagen. A pose-sensitive embedding for person re-identification with expanded cross neighborhood re-ranking. In CVPR, 2018.
 [31] F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 2015.
 [32] U. Shaham, K. Stanton, H. Li, B. Nadler, R. Basri, and Y. Kluger. SpectralNet: Spectral clustering using deep neural networks. In ICLR, 2018.
 [33] Y. Shen, H. Li, T. Xiao, S. Yi, D. Chen, and X. Wang. Deep group-shuffling random walk for person re-identification. In CVPR, 2018.
 [34] Y. Shen, H. Li, S. Yi, D. Chen, and X. Wang. Person re-identification with deep similarity-guided graph neural network. In ECCV, 2018.
 [35] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
 [36] J. Si, H. Zhang, C.-G. Li, J. Kuen, X. Kong, A. C. Kot, and G. Wang. Dual attention matching network for context-aware feature sequence based person re-identification. In CVPR, 2018.
 [37] K. Sohn. Improved deep metric learning with multi-class n-pair loss objective. In NIPS, 2016.
 [38] X. Y. Stella and J. Shi. Multiclass spectral clustering. In ICCV, 2003.
 [39] Y. Suh, J. Wang, S. Tang, T. Mei, and K. M. Lee. Part-aligned bilinear representations for person re-identification. In ECCV, 2018.
 [40] Y. Sun, L. Zheng, W. Deng, and S. Wang. SVDNet for pedestrian retrieval. In ICCV, 2017.
 [41] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline). In ECCV, 2018.
 [42] M. Tang, A. Djelouah, F. Perazzi, Y. Boykov, and C. Schroers. Normalized cut loss for weakly-supervised CNN segmentation. In CVPR, 2018.
 [43] U. Von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
 [44] C. Wang, Q. Zhang, C. Huang, W. Liu, and X. Wang. Mancs: A multi-task attentional network with curriculum sampling for person re-identification. In ECCV, 2018.
 [45] F. Wang, J. Cheng, W. Liu, and H. Liu. Additive margin softmax for face verification. IEEE Signal Processing Letters, 25(7):926–930, 2018.
 [46] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu. CosFace: Large margin cosine loss for deep face recognition. In CVPR, 2018.
 [47] X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. In CVPR, 2018.
 [48] X. Wang and A. Gupta. Videos as space-time region graphs. In ECCV, 2018.
 [49] Y. Wang, Z. Chen, F. Wu, and G. Wang. Person re-identification with cascaded pairwise convolutions. In CVPR, 2018.
 [50] Y. Wang, L. Wang, Y. You, X. Zou, V. Chen, S. Li, G. Huang, B. Hariharan, and K. Q. Weinberger. Resource aware person re-identification across multiple resolutions. In CVPR, 2018.
 [51] L. Wei, S. Zhang, W. Gao, and Q. Tian. Person transfer GAN to bridge domain gap for person re-identification. In CVPR, 2018.
 [52] L. Wei, S. Zhang, H. Yao, W. Gao, and Q. Tian. GLAD: Global-local-alignment descriptor for pedestrian retrieval. In ACM MM, 2017.
 [53] Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, 2016.
 [54] Z. Wu, A. A. Efros, and S. X. Yu. Improving generalization via scalable neighborhood component analysis. In ECCV, 2018.
 [55] S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI, 2018.
 [56] R. Yu, Z. Zhou, S. Bai, and X. Bai. Divide and fuse: A re-ranking approach for person re-identification. In BMVC, 2017.
 [57] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In ICCV, 2015.
 [58] Z. Zheng, L. Zheng, and Y. Yang. A discriminatively learned CNN embedding for person re-identification. ACM Transactions on Multimedia Computing, Communications, and Applications, 14(1):13, 2017.
 [59] Z. Zheng, L. Zheng, and Y. Yang. Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In ICCV, 2017.
 [60] Z. Zhong, L. Zheng, D. Cao, and S. Li. Re-ranking person re-identification with k-reciprocal encoding. In CVPR, 2017.
 [61] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang. Random erasing data augmentation. arXiv:1708.04896, 2017.