1 Introduction
Deep neural networks [14, 8, 9] have dramatically advanced the computer vision community in recent years, bringing substantial performance gains to numerous tasks such as image classification [8, 34, 21], object detection [4, 32, 20, 30, 27], object tracking [2] and saliency detection [19]. The essence behind the success of deep learning resides in both the superior expressive power of nonlinear models in high-dimensional space [9] and the large-scale datasets [3, 7] on which deep networks can, to the fullest extent, learn complicated patterns and representative features. Still, a basic question lies at the core of many vision tasks: how can we obtain features that are discriminative across categories and polymerized within one class?

The most fundamental task in computer vision is image classification [34, 14, 8]: given the categories in the training set, we train a deep model that best discriminates features among categories in high-dimensional space. The test set consists of the same classes (identities). However, two challenges arise when we apply the classification task in real-world settings. The first is that the number of categories can increase enormously (e.g., MegaFace [12] contains one million test identities); this requires features to be discriminative, with a large margin across classes. The second is the category mismatch between training and test sets: often we train on a set of known classes and evaluate the algorithm on new, different identities. This demands features that are polymerized, i.e., sharing small distances within each class.
In this paper, we optimize the deeply learned features to be both discriminative and polymerized under the large-scale constraint: an ultra-large category number and an open test set. Faced with the aforementioned challenges, the community extends the classification problem into two subtasks: verification and identification. Since our main focus lies in human-related tasks, we use the term face interchangeably with data hereafter. Face verification [38, 31] aims at differentiating whether a pair of instances belongs to the same identity, whilst face identification [25, 35] strives to predict the class of an identity given the test gallery [12]. Person recognition [28, 44] also belongs to the verification case, only with more non-frontal and unconstrained face (head) settings.^1

^1 The term recognition has a broad sense: detection, verification, classification, etc. Person recognition is face verification with more unconstrained samples. To avoid confusion, we follow the terminology used in the community and refer to face verification, identification and person recognition, in a general sense, as human recognition.
For the feature discrimination concern, the softmax loss and its variants [26, 31, 25] are proposed to provide a hyperplane with a large margin in high-dimensional space and thus effectively distinguish features across classes. However, softmax ignores the intra-class similarity, leaving samples within one class far apart from each other and thus amply filling the feature space (Fig. 1(a)). For the feature polymerization concern, one can resort to distance metric learning [43, 41]. Previous work [36] usually introduced neural networks, using a contrastive loss [35] or triplet loss [39, 24, 33], to learn the features, followed by a simple metric such as Euclidean or cosine distance to verify the identity (Fig. 1(b)). Such methods are effective in decreasing the intra-class distance to some extent, yet bear an inherent drawback due to their formulation: the selection of positive and negative samples is relative within each batch, which can cause training instability, especially as the number of classes and the data scale increase (see Fig. 3(b)).

An alternative is to jointly combine discriminative feature learning in a softmax spirit with polymerized feature learning in a metric-learning manner. The center loss [42] and its variant [45] are two typical approaches. They linearly combine two terms as the total loss: the first being softmax and the second a class-center supervision. The hybrid solution can effectively enlarge the inter-class distance as well as the intra-class similarity; nevertheless, the class center is updated in a statistical manner without consulting the network parameter update during training. Moreover, the center loss term has to be bundled with an ad-hoc softmax (thus requiring more model parameters); if not, the center barely changes according to its formulation and the loss could be zero (cf. Fig. 1(c)(d)).
To this end, we propose the congenerous cosine (COCO) algorithm to provide a new perspective on jointly considering feature discrimination and polymerization. The intuition is that we directly optimize and compare the cosine distance (similarity) between features: the intra-class distance should be as small as possible, whereas the inter-class variation should be magnified to the greatest extent (see Fig. 1(e)). COCO inherits the softmax property that makes features discriminative in high-dimensional space, as well as the idea of a class centroid. Note that unlike the statistical role of the center in [42], the update of our centroid is performed simultaneously with the network training, and there is no additional loss term in the formulation. By virtue of the class centroid bundled with discriminative training, COCO is learned end-to-end with stable convergence. Experiments on two small-scale image classification datasets first verify the feasibility of our proposed approach, and three large-scale human recognition benchmarks further demonstrate the effectiveness of applying COCO in an environment with an ultra-large class number and an open test set. The source code is publicly available.^2

^2 https://github.com/sciencefans/coco_loss
2 Related Work
Related work on different loss methods for optimizing features is compared and discussed throughout the paper. In this section, we mainly focus on face and person recognition applications.
Face verification and identification have been extensively investigated [12, 31, 25, 36, 35, 42] recently due to the dramatic demand in real-life applications. Schroff et al. [33] directly learned a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. DeepID2 [35] employed a contrastive loss with both identification and verification supervision; it increased the dimension of the hidden representations and added supervision to early convolutional layers. Tadmor et al. [36] introduced a multi-batch method to generate invariant face signatures through training pairs; the training converges faster by virtue of the smaller variance of the estimator. SphereFace [25] formulated an angular softmax loss that enables a CNN to learn discriminative features by imposing constraints on a hypersphere manifold.

Person recognition in photo albums [1, 29, 28, 22, 44, 18] aims at recognizing the identity of people in daily-life photos, where the scenarios can be complex with cluttered background. [1] first addressed the problem by proposing a Markov random field framework to combine all contextual cues to recognize the identity of persons. Recently, Zhang et al. [44] introduced a large-scale dataset called PIPA for this task. The test set of PIPA is split into two subsets, namely test_0 and test_1, with roughly the same number of instances. In [28], a detailed analysis of different cues is explicitly investigated and three additional test splits are proposed for evaluation. [22] embed scene and relation contexts in an LSTM and formulate person recognition as a sequence prediction task. Note that previous work [44, 28, 18, 22] uses the training set only for extracting features; a follow-up classifier (SVM or neural network) is trained on test_0 and the recognition system is evaluated on test_1. We argue that such a practice is impractical and ad hoc in realistic applications, since the second training on test_0 is auxiliary and requires retraining whenever new samples are added.

3 Optimizing Congenerous Cosine Distance
3.1 COCO Formulation and Optimization
Let $f^{(i)} \in \mathbb{R}^{D}$ denote the feature vector of the $i$-th sample, where $D$ is the feature dimension. We first introduce the cosine similarity of two features as:

$$\mathcal{C}\big(f^{(i)}, f^{(j)}\big) = \frac{f^{(i)} \cdot f^{(j)}}{\|f^{(i)}\|_2\, \|f^{(j)}\|_2}. \quad (1)$$
The cosine similarity metric quantifies how close two samples are in the feature space. A natural intuition for a desirable loss is to increase the similarity of samples within a category and to enlarge the centroid distance of samples across classes. Let $l_i \in \{1, \dots, K\}$ be the label of sample $i$, where $K$ is the total number of categories; we have the following loss to maximize:

$$\mathcal{L}^{naive} = \sum_{i \in \mathcal{B}} \frac{\sum_{j \neq i} \delta(l_i, l_j)\, \mathcal{C}\big(f^{(i)}, f^{(j)}\big)}{\sum_{j \neq i} \big(1 - \delta(l_i, l_j)\big)\, \mathcal{C}\big(f^{(i)}, f^{(j)}\big) + \epsilon}, \quad (2)$$
where $\mathcal{B}$ denotes the minibatch, $\delta(\cdot, \cdot)$ is an indicator function and $\epsilon$ is a small number for computational stability. The naive design in (2) is reasonable in theory and yet suffers from computational inefficiency: since the complexity of the loss above is $O(N^2)$ in the batch size $N$, it increases quadratically as the batch size grows. The network also suffers from unstable parameter updates and is hard to converge if we directly compute the loss from two arbitrary samples of a minibatch, a drawback similar to that of the triplet loss [39].
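To make the formulation concrete, here is a minimal NumPy sketch of Eqn. (1) and one plausible reading of the naive loss in Eqn. (2); the per-sample aggregation of same-class and cross-class similarities is our assumption, since only the $O(N^2)$ pairwise structure is fixed by the text.

```python
import numpy as np

def cosine_similarity(fi, fj, eps=1e-8):
    # Eqn. (1): C(f_i, f_j) = f_i . f_j / (||f_i|| ||f_j||)
    return fi @ fj / (np.linalg.norm(fi) * np.linalg.norm(fj) + eps)

def naive_loss(features, labels, eps=1e-8):
    # Eqn. (2), to be maximized: same-class similarities over cross-class
    # similarities.  The double loop over the batch is the O(N^2) cost
    # criticized in the text.
    n = len(labels)
    total = 0.0
    for i in range(n):
        num, den = 0.0, eps
        for j in range(n):
            if i == j:
                continue
            c = cosine_similarity(features[i], features[j])
            if labels[i] == labels[j]:
                num += c
            else:
                den += c
        total += num / den
    return total
```

A batch with well-separated classes drives the ratio up, while shuffled labels pull it down, which is the behavior the loss is meant to reward.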
As discussed previously, an effective solution to polymerize the intra-class features is to provide a set of hubs in the network: a centroid for each class enforces the features to be learned around these hubs. To this end, we define the centroid of class $k$ as the average of features over a minibatch: $c_k = \frac{1}{N_k} \sum_{i \in \mathcal{B}} \delta(l_i, k)\, f^{(i)}$, where $N_k$ is the number of samples in the batch that belong to class $k$. Incorporating the spirit of the class centroid into Eqn. (2), one can derive the following revised loss to maximize:

$$\mathcal{L}' = \sum_{i \in \mathcal{B}} \frac{\exp \mathcal{C}\big(f^{(i)}, c_{l_i}\big)}{\sum_{k \neq l_i} \exp \mathcal{C}\big(f^{(i)}, c_k\big) + \epsilon}, \quad (3)$$
where $k$ indexes along the category dimension. The direct intuition behind Eqn. (3) is to measure the distance of one sample against other samples by way of a class centroid, instead of by a direct pairwise comparison as in Eqn. (2). The numerator ensures that sample $i$ is close enough to its class center, and the denominator enforces a minimal distance against samples in other classes. The exponential operator transfers the cosine similarity to a normalized probability output.
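A sketch of the centroid-based variant in Eqn. (3), assuming the per-batch centroid definition above (NumPy, illustrative only):

```python
import numpy as np

def cos(a, b, eps=1e-8):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def centroid_loss(features, labels, eps=1e-8):
    # Eqn. (3), to be maximized.  Each sample is compared against K class
    # centroids instead of N - 1 other samples, dropping the pairwise cost
    # from O(N^2) to O(NK).
    classes = sorted(set(labels))
    cents = {k: features[[i for i, l in enumerate(labels) if l == k]].mean(axis=0)
             for k in classes}                       # c_k over the mini-batch
    total = 0.0
    for f, l in zip(features, labels):
        num = np.exp(cos(f, cents[l]))               # closeness to own centroid
        den = eps + sum(np.exp(cos(f, c)) for k, c in cents.items() if k != l)
        total += num / den
    return total
```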
To optimize the aforementioned loss in practice, we first normalize the feature and centroid by their $\ell_2$ norms and then scale the feature before feeding them into the loss layer:

$$p_k^{(i)} = \frac{\exp\big(\hat{c}_k^{\top} \hat{f}^{(i)}\big)}{\sum_{m} \exp\big(\hat{c}_m^{\top} \hat{f}^{(i)}\big)}, \qquad \hat{f}^{(i)} = \frac{\alpha\, f^{(i)}}{\|f^{(i)}\|_2}, \qquad \hat{c}_k = \frac{c_k}{\|c_k\|_2}, \quad (4)$$

where $\alpha$ is the scale factor and $p_k^{(i)}$ is an alternative expression of the inner term of the summation in Eqn. (3) (in practice, it is more convenient to include the $m = l_i$ term in the denominator, and the result barely differs). The proposed congenerous cosine (COCO) loss is therefore formulated, in a cross-entropy manner to minimize, as follows:
$$\mathcal{L}^{COCO} = -\sum_{i \in \mathcal{B}} \sum_{k=1}^{K} t_k^{(i)} \log p_k^{(i)}, \quad (5)$$

where $k$ indexes along the class dimension of $p^{(i)}$ and $t^{(i)} \in \{0, 1\}^{K}$ is the one-hot (binary) mapping of sample $i$ based on its label $l_i$. The proposed COCO inherits the properties of (2) and (3), that is, it increases the discrimination across categories as well as polymerizes the compactness within one class in a cooperative way. Moreover, it reduces the computational complexity of the naive version and can be implemented in a cheap way.
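The normalize-scale-softmax pipeline of Eqns. (4)-(5) can be sketched in NumPy as below (batch-averaged for readability; in the paper the centroids are trainable network parameters rather than inputs):

```python
import numpy as np

def coco_loss(features, centroids, labels, alpha=10.0, eps=1e-8):
    """Forward pass of the COCO loss (Eqns. 4-5), NumPy sketch.

    features:  (N, D) unnormalized features f
    centroids: (K, D) class centroids c_k
    labels:    (N,)   integer labels l_i
    """
    # Eqn. (4): l2-normalize centroids, normalize and scale features
    c_hat = centroids / (np.linalg.norm(centroids, axis=1, keepdims=True) + eps)
    f_hat = alpha * features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    logits = f_hat @ c_hat.T                       # (N, K) scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)              # softmax over the K classes
    # Eqn. (5): cross-entropy against the one-hot targets t
    return float(-np.log(p[np.arange(len(labels)), labels] + eps).mean())
```

When the centroids point at their own classes the loss is near zero; swapping the centroids sends it up sharply, as the cross-entropy formulation dictates.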
Note that both the features and the cluster centroids are trained end-to-end. We now derive the gradients of the loss w.r.t. the unnormalized input feature $f$ and the centroid $c_k$. For brevity, we drop the sample index and superscript notation in the loss. Denoting by $\partial \mathcal{L} / \partial \hat{f}$ the element-wise top gradient of the loss w.r.t. the normalized feature, we have the following gradient by applying the chain rule:

$$\frac{\partial \mathcal{L}}{\partial f} = \frac{\alpha}{\|f\|_2} \Big(I - \frac{f f^{\top}}{\|f\|_2^2}\Big) \frac{\partial \mathcal{L}}{\partial \hat{f}}, \quad (6)$$

where $I$ is the identity matrix and $f f^{\top}$ is the outer product of the feature with itself. Considering the specific loss defined in Eqn. (5), the top gradient can be obtained as:

$$\frac{\partial \mathcal{L}}{\partial \hat{f}} = \sum_{k=1}^{K} \big(p_k - t_k\big)\, \hat{c}_k. \quad (7)$$
The gradient w.r.t. the centroid $c_k$ can be derived in a similar manner. The features are initialized from pretrained models, and the initial value of $c_k$ is thereby obtained.
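As a sanity check on Eqn. (7), the analytic top gradient $\sum_k (p_k - t_k)\,\hat{c}_k$ can be compared against a central-difference approximation; a NumPy sketch, where $\hat{f}$ and $\hat{c}$ are random stand-ins:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_at(f_hat, c_hat, label):
    # L = -log p_label with p = softmax(c_hat @ f_hat), cf. Eqns. (4)-(5)
    return -np.log(softmax(c_hat @ f_hat)[label] + 1e-12)

def analytic_grad(f_hat, c_hat, label):
    # Eqn. (7): dL/df_hat = sum_k (p_k - t_k) c_hat_k
    p = softmax(c_hat @ f_hat)
    p[label] -= 1.0
    return c_hat.T @ p

def numeric_grad(f_hat, c_hat, label, h=1e-6):
    # central differences along each feature dimension
    g = np.zeros_like(f_hat)
    for d in range(f_hat.size):
        e = np.zeros_like(f_hat); e[d] = h
        g[d] = (loss_at(f_hat + e, c_hat, label)
                - loss_at(f_hat - e, c_hat, label)) / (2 * h)
    return g
```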
3.2 Towards an Optimal Scale Factor
As stated in the previous subsection, we first normalize and scale the feature; we now prove that there exists an optimal value for the scale factor. The derivations are provided in the supplementary material.

Theorem 1.

Given that the optimization loss has an upper bound $\zeta$, i.e., $\mathcal{L}^{COCO} \leq \zeta$, and that the network has a class number of $K$, the scale factor $\alpha$ enforced on the input feature has a lower bound:

$$\alpha \geq \frac{1}{2} \log \frac{K - 1}{e^{\zeta} - 1}. \quad (8)$$
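The bound in Eqn. (8) is directly computable; a one-line helper (illustrative) makes the dependence on the class number $K$ and the loss target $\zeta$ explicit:

```python
import math

def alpha_lower_bound(K, zeta):
    # Eqn. (8): the smallest scale factor for which the best achievable
    # COCO loss can drop below the target upper bound zeta.
    return 0.5 * math.log((K - 1) / (math.exp(zeta) - 1))
```

More classes or a tighter loss target both push the required scale factor up, which matches the intuition that the softmax must be sharper to separate more identities.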
3.3 COCO Verification on Image Classification
In this subsection, we provide results and discussion to verify the effectiveness of COCO on small-scale image classification datasets, namely MNIST [15] and CIFAR-10 [13]. The datasets, network structure and implementation details are provided in the supplementary material.
An optimal scale factor matters. Take CIFAR-10 as an example. When $\alpha$ is set to a small value (error rate 12.4), the lower bound of the loss is high, leading to a high loss on easy samples; the loss on hard examples, however, still remains low. The gradients from easy and hard samples therefore stay at the same level, making it hard for the network to optimize towards hard samples and leading to a higher error rate. For an intermediate choice of $\alpha$, the error rate is 7.22; if we set the factor to the optimal value defined in Eqn. (8), the error rate is 6.25.
Feature discrimination and polymerization. Figure 2 shows the histogram of the cosine distance among positive pairs (i.e., two arbitrary samples that belong to the same class) and negative pairs. The distances in the softmax and triplet cases resemble each other; in the COCO case, the discrepancy between the positive and negative pairs is larger (center distance on the x-axis between the two clusters) and the intra-class similarity is also more polymerized (area under the two clusters).
Training loss. Figure 3 describes the training loss curves under different loss strategies on the CIFAR and LFW datasets. We can observe that our proposed method undergoes a stable convergence as the epochs go by, whereas the other losses do not show an obvious drop when the learning rate decreases. The triplet loss is effective when the number of classes is small, as on CIFAR; however, when the task extends to a larger scale with over 1,000 categories, the triplet term suffers from severe disturbance during training, since the optimized feature distances are altered frequently within one batch.
Quantitative results. We report the image classification error rates on MNIST and CIFAR-10 in Table 1. Note that MNIST is quite a small-scale benchmark and the performance tends to saturate. Moreover, we conduct an ablative study on different losses: softmax, center loss, triplet loss and their combinations. For fair comparison, they all share the same CNN structure as COCO's. We can observe that the center loss alone brings a limited improvement (6.66 vs 6.70) over softmax; the triplet loss has the training instability concern and thus its error rate is much higher (12.69). Without resorting to softmax, our method is neat in formulation and achieves the lowest error (6.25).
4 COCO for Largescale Recognition
4.1 Face Verification and Identification
We follow the general convention in [42, 33, 25] to conduct face verification and identification: train a network using the COCO loss to obtain robust features, extract features on the test set based on the trained model, and compare or identify the unknown classes in a large-scale test setting. The main modifications to the traditional practice are twofold. First, we employ a detection pipeline to leverage feature selection and conduct landmark regression, in a multi-task RPN manner [32]: one task classifies the foreground or background landmarks while the other regresses them. There are five landmarks: the left and right eyes, the nose, and the left and right corners of the mouth. The second revision is the landmark alignment via an affine transformation, in order to switch the position to a base one.

Method  MNIST  CIFAR10
DropConnect [37]  0.57  9.41
NIN [23]  0.47  10.47
Maxout [6]  0.45  11.68
DSN [17]  0.39  9.69
RCNN [5]  0.31  8.69
GenPool [16]  0.31  7.62
Softmax  0.36  6.70
Center loss + softmax  0.32  6.66
Triplet loss  1.45  12.69
Triplet loss + softmax  0.38  6.73
COCO  0.30  6.25
4.2 Person Recognition
Following the general pipeline of the verification task, we first train features using the proposed COCO algorithm on a specific body region (e.g., head) to classify different identities; during inference, the features on both test subsets are extracted to compute the cosine distance and to determine whether two instances are of the same identity. We utilize the multi-region spirit of [22] during training and test to merge results from four regions, namely face, head, upper body and whole body. Our contributions with regard to this task are twofold: we remove the second training on test_0 performed in previous work, which is credited to the property of COCO of learning discriminative and polymerized features, and we align each region patch to a base location to reduce variation among samples.
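The inference-time merging of the four region scores (a logistic normalization per region followed by a weighted mean, as formalized in Eqn. (9) below) can be sketched as follows; the dictionary-based interface and parameter names are our illustration:

```python
import math

def fuse_scores(raw_scores, reg_params, weights):
    """raw_scores: {region: cosine similarity s_r between two patches}
    reg_params:   {region: (w_r, b_r)} logistic-regression parameters
    weights:      {region: omega_r} per-region fusion weight
    In the paper these parameters are fit on a held-out validation set."""
    fused = 0.0
    for r, s in raw_scores.items():
        w, b = reg_params[r]
        s_norm = 1.0 / (1.0 + math.exp(-(w * s + b)))   # squash to (0, 1)
        fused += weights[r] * s_norm
    return fused / len(raw_scores)                      # weighted mean over regions
```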
The detailed description of region detection and patch alignment is provided in the supplementary material. Given the cropped and aligned patches, we train (finetune) models for the different regions on the PIPA training set using the COCO loss. At the testing stage, we measure the cosine similarity between the two test splits to recognize the identities in test_1 based on the labels in test_0. The similarity between a patch $x$ in test_1 and a patch $y$ in test_0 is denoted as $s^{r}(x, y)$, where $r$ indicates a region. We normalize the preliminary score in order to make the scores across different regions comparable:
$$\tilde{s}^{r}(x, y) = \frac{1}{1 + \exp\big(-(w_r\, s^{r}(x, y) + b_r)\big)}, \quad (9)$$

where $w_r$ and $b_r$ are the parameters of the logistic regression. The final score $s(x, y)$ is a weighted mean of the normalized scores of each region: $s(x, y) = \frac{1}{R} \sum_{r=1}^{R} \omega_r\, \tilde{s}^{r}(x, y)$, where $R$ is the total number of regions and $\omega_r$ is the weight of each region's score. The identity of patch $x$ in test_1 is decided by the label corresponding to the maximum score in the reference set: $l_x = \arg\max_{y}\, s(x, y)$. The test parameters $w_r$, $b_r$ and $\omega_r$ are determined on a validation set of PIPA.

5 Experiments
Dataset and evaluation metric. Face recognition. The Labeled Faces in the Wild (LFW) dataset [10] contains 13,233 web-collected images of 5,749 different identities, with large variations in pose, expression and illumination. Following the standard protocol of unrestricted with labeled outside data, we test on 6,000 face pairs. MegaFace [12] is a very challenging dataset that aims to evaluate the performance of face recognition algorithms at the million scale of distractors: people who are not in the test set. It includes a gallery set and a probe set. The gallery set consists of more than one million images of 690K different individuals, as a subset of Flickr photos from Yahoo. The probe set derives from two existing databases: FaceScrub and FGNet.

Person recognition. The People In Photo Albums (PIPA) dataset [44] is divided into train, validation, test and leftover sets; the head of each instance is annotated in all sets, and the dataset consists of over 60,000 instances of around 2,000 individuals. In this work, thanks to the discriminative and polymerized features learned by COCO, we take full advantage of the training set and remove the second training on test_0. Moreover, [28] introduced three more challenging splits besides the original test split, namely album, time and day. Each new split emphasizes a different temporal distance (various albums, events, days, etc.) between the two subsets of the test data. The evaluation metric is the averaged classification accuracy over all instances in test_1.

5.1 Face Verification and Identification
Method  Train Data  mAcc 

FaceNet [33]  200M  99.65 
DeepID2 [35] *  300K  99.47 
CenterFace [42]  700K  99.28 
LSoftmax [26]  webface  98.71 
SphereFace [25]  webface  99.42 
Softmax  half MS1M  99.75 
Center loss + softmax  half MS1M  99.78 
Triplet loss  half MS1M  98.85 
Triplet loss + softmax  half MS1M  99.68 
COCO  half MS1M  99.86 
We now apply COCO feature learning to a large-scale human recognition system. For fair comparison of our method with other losses (softmax, center loss, triplet loss and their combinations), we use the same CNN network structure in the following experiments. The features on LFW are available in the GitHub repository.

Figure 5 shows the ROC curves of the different methods, and Table 5 reports the accuracy (%) for face verification on LFW [10]. It is observed that the COCO model is superior (99.86) to its counterparts. Compared with FaceNet [33], which takes in around 200M outside training images, or DeepID2 [35], which ensembles 25 models, our approach demonstrates the effectiveness of its feature learning. Table 2 illustrates the identification accuracy on the MegaFace challenge [12], where the test identities have no overlap with those in training and the data scale is quite large. Our model using COCO is competent at different numbers of distractors. The combination of center loss with softmax ranks second (76.57 vs 75.79 @1M), which verifies that applying the center loss alone cannot guarantee good feature learning for a large-scale task. The triplet loss (69.13) is inferior, and our model competes favorably against those using the large protocol of training data.
Method  Protocol  @1M  Method  Protocol  @1M 
NTechLab  small  58.22  NTechLab  large  73.30 
DeepSense  small  70.98  DeepSense  large  74.80 
SphereFace [25]  small  75.77  ShanghaiTech  large  74.05 
LSoftmax [26]  small  67.13  Google FaceNet  large  70.50 
Method  Protocol  @1M  @100k  @10k  @1k 
Softmax  small  71.17  85.78  93.22  96.14 
Center loss + softmax  small  75.79  90.74  96.45  97.91 
Triplet loss  small  69.13  85.64  93.38  96.66 
Triplet loss + softmax  small  70.22  86.77  94.71  97.02 
COCO  small  76.57  91.77  96.72  98.03 

5.2 Person Recognition
Comparison with state-of-the-art methods and visual results. The comparison figure and visual results are provided in the supplementary material due to the page limit. Our recognition system outperforms previous state-of-the-art methods on all four test splits. Note that the test identities have no overlap with those in the training set, and thus this task is also a challenging open-set verification problem. The visual results show some examples of the instances predicted by our model, where complex scenes with non-frontal faces and body occlusion are handled properly in most scenarios. The last two columns show failure cases where our method could not differentiate the identities; this is probably due to the very similar appearance configurations in these scenarios.
Method  Face  Head  Upper body  Whole body  original  album  time  day 

  ✓  ✓  84.17  80.78  74.00  53.75  
  ✓  ✓  89.24  81.46  76.84  61.48  
  ✓  ✓  88.40  82.15  70.90  57.87  
  ✓  ✓  88.76  79.15  68.64  42.91  
RNN [22]  ✓  ✓  84.93  78.25  66.43  43.73  
  ✓  ✓  87.43  77.54  67.40  42.30  
  ✓  ✓  81.93  73.84  62.46  34.77  
  ✓  ✓  ✓  87.86  80.85  71.65  59.03  
  ✓  ✓  ✓  88.13  82.87  73.01  55.52  
  ✓  ✓  ✓  89.71  78.29  66.60  52.21  
  ✓  ✓  ✓  91.43  80.67  70.46  55.56  
Softmax  ✓  ✓  ✓  ✓  88.73  80.26  71.56  50.36 
COCO  ✓  ✓  ✓  ✓  92.78  83.53  77.68  61.73 
Ablation study on different regions. Table 3 depicts the investigation of merging the similarity scores from different body regions during inference. [22] also employs a multi-region processing step, and we include it in Table 3. For some region combinations, our model is superior on certain splits (e.g., head plus upper body: 88 vs 84 on original, 79 vs 78 on album, 68 vs 66 on time), whereas on other splits it is inferior (e.g., 42 vs 43 on day). This is probably due to distribution imbalance among splits: the upper-body region differs greatly in appearance, with some instances lacking this region altogether, making it hard for COCO to learn the corresponding features. However, under the score integration scheme, the final prediction (last line) can complement the features learned from the different regions and achieves better performance than [22].
6 Conclusion
In this work, we reinvestigate the feature learning problem based on the motivation that deeply learned features should be both discriminative and polymerized. We address the problem by proposing a congenerous cosine (COCO) loss, which optimizes the cosine distance among data features to simultaneously enlarge the inter-class variation and the intra-class similarity. COCO can be learned in a neat way with stable end-to-end training. We have conducted extensive experiments on five benchmarks, from image classification to face identification, to demonstrate the effectiveness of our approach, especially when applying it to large-scale human recognition tasks.
References
 [1] D. Anguelov, K. chih Lee, S. B. Gokturk, and B. Sumengen. Contextual identity recognition in personal photo albums. In CVPR, 2007.
 [2] Z. Chi, H. Li, H. Lu, and M.H. Yang. Dual deep network for visual tracking. IEEE Trans. on Image Processing, 2017.
 [3] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. FeiFei. ImageNet: A LargeScale Hierarchical Image Database. In CVPR, 2009.
 [4] R. Girshick. Fast RCNN. In ICCV, 2015.
 [5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
 [6] I. J. Goodfellow, D. Wardefarley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML. 2013.
 [7] Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao. MSCeleb1M: a dataset and benchmark for largescale face recognition. arXiv preprint:1607.08221, 2016.
 [8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
 [9] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313, 2006.
 [10] G. B. Huang, M. Ramesh, T. Berg, and E. LearnedMiller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 0749, University of Massachusetts, Amherst, October 2007.
 [11] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML. 2015.
 [12] I. KemelmacherShlizerman, S. M. Seitz, D. Miller, and E. Brossard. The megaface benchmark: 1 million faces for recognition at scale. In CVPR, 2016.
 [13] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. In Technical Report, 2009.

 [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
 [15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, volume 86, pages 2278–2324, 1998.
 [16] C.Y. Lee, P. W. Gallagher, and Z. Tu. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. arXiv preprint:1509.08985, 2015.
 [17] C.Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeplysupervised nets. arXiv preprint: 1409.5185, 2014.
 [18] H. Li, J. Brandt, Z. Lin, X. Shen, and G. Hua. A multilevel contextual model for person recognition in photo albums. In CVPR, 2016.
 [19] H. Li, J. Chen, H. Lu, and Z. Chi. CNN for saliency detection with lowlevel feature integration. Neurocomputing, 226:212–220, 2017.
 [20] H. Li, Y. Liu, W. Ouyang, and X. Wang. Zoom outandin network with map attention decision for region proposal and object detection. In arXiv preprint: 1709.04347, 2017.
 [21] H. Li, W. Ouyang, and X. Wang. Multibias nonlinear activation in deep neural networks. In ICML, 2016.
 [22] Y. Li, G. Lin, B. Zhuang, L. Liu, C. Shen, and A. van den Hengel. Sequential person recognition in photo albums with a recurrent network. arXiv preprint:1611.09967, 2016.
 [23] M. Lin, Q. Chen, and S. Yan. Network in network. In ICLR, 2014.
 [24] H. Liu, Y. Tian, Y. Wang, L. Pang, and T. Huang. Deep relative distance learning: Tell the difference between similar vehicles. In CVPR, 2016.
 [25] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song. Sphereface: Deep hypersphere embedding for face recognition. In CVPR, 2017.
 [26] W. Liu, Y. Wen, Z. Yu, and M. Yang. Largemargin softmax loss for convolutional neural networks. In ICML, 2016.
 [27] Y. Liu, H. Li, J. Yan, F. Wei, X. Wang, and X. Tang. Recurrent scale approximation for object detection in CNN. In IEEE International Conference on Computer Vision, 2017.
 [28] S. J. Oh, R. Benenson, M. Fritz, and B. Schiele. Person recognition in personal photo collections. In ICCV, 2015.
 [29] S. J. Oh, R. Benenson, M. Fritz, and B. Schiele. Faceless person recognition; privacy implications in social media. In ECCV, 2016.
 [30] W. Ouyang, H. Li, X. Zeng, and X. Wang. Learning deep representation with largescale attributes. In ICCV, 2015.
 [31] R. Ranjan, C. D. Castillo, and R. Chellappa. L2constrained softmax loss for discriminative face verification. arXiv preprint:1703.09507, 2017.
 [32] S. Ren, K. He, R. Girshick, and J. Sun. Faster RCNN: Towards RealTime Object Detection with Region Proposal Networks. In NIPS, 2015.
 [33] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, 2015.
 [34] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition. In ICLR, 2015.
 [35] Y. Sun, X. Wang, and X. Tang. Deeply learned face representations are sparse, selective, and robust. In CVPR, 2015.
 [36] O. Tadmor, Y. Wexler, T. Rosenwein, S. ShalevShwartz, and A. Shashua. Learning a metric embedding for face recognition using the multibatch method. In NIPS, 2016.
 [37] L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus. Regularization of neural networks using dropconnect. In ICML, 2013.
 [38] F. Wang, X. Xiang, J. Cheng, and A. L. Yuille. Normface: L2 hypersphere embedding for face verification. arXiv preprint:1704.06369, 2017.
 [39] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
 [40] S.E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.

 [41] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 2009.
 [42] Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, 2016.
 [43] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with sideinformation. In NIPS, 2003.
 [44] N. Zhang, M. Paluri, Y. Taigman, R. Fergus, and L. Bourdev. Beyond frontal faces: Improving person recognition using multiple cues. In CVPR, 2015.
 [45] X. Zhang, Z. Fang, Y. Wen, Z. Li, and Y. Qiao. Range loss for deep face recognition with longtail. arXiv preprint:1611.08976, 2016.
Supplementary
6.1 Proof on the optimal scale factor, Theorem 1
Proof.
It is obvious that the infimum of the COCO loss takes the form:

$$\inf \mathcal{L}^{COCO} = -\log p, \quad (10)$$
$$p = \frac{A}{A + B}, \quad (11)$$
$$A = e^{\alpha \cos\theta_{+}}, \qquad B = \sum_{k \neq l} e^{\alpha \cos\theta_{k}}, \quad (12)$$

where $p$ is the probability output in the final classification layer; $\theta$ is the angle between the feature and a centroid, $\theta_{+}$ denoting the angle to the true-class centroid. Often we have $\theta \in [0, \pi]$ in practice, and yet the minimal value of $\cos\theta$ can still be $-1$. Therefore, taking $\cos\theta_{+} = 1$ for the true class and $\cos\theta_{k} = -1$ for the others,

$$\inf \mathcal{L}^{COCO} = \log\big(1 + (K - 1)\, e^{-2\alpha}\big). \quad (13)$$
The loss to minimize in an optimization problem usually meets some upper-bound criterion; we denote this target value as $\zeta$, i.e., $\mathcal{L}^{COCO} \leq \zeta$. Incorporating this prior knowledge into the previous derivation, we have:

$$\log\big(1 + (K - 1)\, e^{-2\alpha}\big) \leq \zeta, \quad (14)$$
$$\alpha \geq \frac{1}{2} \log \frac{K - 1}{e^{\zeta} - 1}. \quad (15)$$
∎
It is empirically found that $\zeta$ typically takes a small constant value, and thus the scale factor can be expressed in a deterministic closed form:

$$\alpha^{*} = \frac{1}{2} \log \frac{K - 1}{e^{\zeta} - 1}. \quad (16)$$
6.2 Details on Image Classification and Face Recognition
For image classification on CIFAR, we use a simple ResNet20 structure and train the network from scratch. We set the momentum as 0.9 and the weight decay to be 0.005. The base learning rate is set to be 0.1, 0.1, 0.05, respectively. We drop the learning rate by 10way and stop to decrease the learning rate until it reaches a minimum value (0.0001). All the convolutional layers are initialized with Gaussian distribution with mean of zero and standard variation of 0.05 or 0.1.
For face recognition, we use the Inception ResNet model and add a new fullyconnected layer after the final global pooling layer to generate a 128dim feature. The model is trained from scratch. The training data is a subset of the Microsoft1M celebrity dataset, which consists of 80k identities with 3M images in total, removing all overlapping IDs. For COCO, softmax and center loss, we train the network with SGD solver with a base learning rate 0.01 and drop it to 10% every five epoch. The total training time is 12 epoch. The momentum is set to 0.9. Batch normalization [11] with moving average is employed. For the triplet loss, we have a base learning rate to 0.1 with a bigger batch size 128*3 (it is hard to converge if the size is less than 64*3); we drop the rate to 10% every 90k iteration. The total training time is around 300k iterations.
Evaluation metrics for face recognition. For verification, the test set is LFW [10] and the metric follows the standard 'Unrestricted, Labeled Outside Data' protocol of LFW: 6,000 pairs are verified to determine whether they belong to the same person. We report the ROC curve and the mean accuracy. For identification on MegaFace [12], the test set is split into two subsets: the probe set consists of FaceScrub (3,530 images of 80 persons); the gallery set consists of distractors (1M images with no overlap with FaceScrub). The test metric follows the standard MegaFace Challenge 1 setting. We report the CMC curve in terms of Top-1 accuracy with 10, 100, 1,000, 10k, 100k and 1M distractors.
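The Top-1 point of the CMC curve can be computed as below; a minimal sketch assuming l2-normalized features, with distractors included simply as extra gallery rows (helper name is ours):

```python
import numpy as np

def top1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    # Rank-1 identification: fraction of probes whose nearest gallery
    # feature (by cosine similarity on normalized features) has the
    # correct identity. Shapes: (P, d), (P,), (G, d), (G,).
    sims = probe_feats @ gallery_feats.T       # (P, G) similarity matrix
    nearest = sims.argmax(axis=1)              # index of best gallery match
    return float((gallery_ids[nearest] == probe_ids).mean())
```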
6.3 More Algorithm Details for Person Recognition
The COCO features are trained on four regions (thus four models), namely face, head, upper body and whole body. We take the face region as an illustration of the alignment scheme. A face detector is first pretrained in an RPN spirit [32]. The detector identifies keypoints (eyes, brows, mouth, etc.), and we align the detected face patch to a base location via an affine transformation. Let P = {p_j} denote the detected keypoints and Q = {q_j} the corresponding aligned locations, regarded as points in two affine spaces. The transformation is then defined as q_j = W p_j + b, where W is a 2×2 linear transformation matrix and b is the bias in R^2. Such an alignment scheme ensures that samples both within and across categories do not have large variance: if the model is learned without alignment, it has to distinguish more patterns, e.g., different rotations among persons, making it more prone to overfitting; with alignment, it can better classify features of various identities despite rotation, viewpoint, translation, etc. Given the cropped and aligned patches, we finetune the face model on the PIPA training set using the COCO loss.
The head region is given as ground truth for each person. To detect the whole body, we also pretrain a detector in the RPN framework. The model is trained on the large-scale human celebrity dataset [7], where we use the first 87,021 identities in 4,638,717 images. The network structure is an Inception model [11] with the final pooling layer replaced by a fully-connected layer. To determine the upper-body region, we conduct human pose estimation [40] to identify keypoints of the body, and the upper part is thereby located by these points. The head, whole-body and upper-body models used for COCO loss training are finetuned on the PIPA training set from the pretrained Inception model, following a similar patch-alignment procedure as stated previously for the face region. The aligned patches of the four regions are shown in Figure 6(c).
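In practice such an affine transform can be recovered from keypoint correspondences by linear least squares; a minimal sketch with NumPy (our own helper, not the authors' released code):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ≈ src @ W.T + b, from (N, 2)
    detected keypoints src to their (N, 2) base locations dst."""
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) homogeneous coords
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) solution
    W, b = X[:2].T, X[2]                           # 2x2 linear part, bias
    return W, b

def apply_affine(pts, W, b):
    # Warp points with the fitted transform.
    return pts @ W.T + b
```

With at least three non-collinear keypoints the system is determined; more keypoints simply make the fit a least-squares one.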
6.4 Experiments on Person Recognition
Test split | Face (na / ali) | Head (na / ali / [22]) | Upper body (na / ali / [22]) | Whole body (na / ali)
original   | 95.47 / 97.45   | 74.23 / 82.69 / 81.75  | 76.67 / 80.75 / 79.92        | 75.04 / 79.06
album      | 94.66 / 96.57   | 65.47 / 73.77 / 74.21  | 66.23 / 69.58 / 70.78        | 64.21 / 67.27
time       | 91.03 / 93.36   | 55.88 / 64.31 / 63.73  | 55.24 / 57.40 / 58.80        | 55.53 / 54.62
day        | 90.36 / 91.32   | 35.27 / 44.24 / 42.75  | 26.49 / 32.09 / 34.61        | 32.85 / 29.59
(na: without alignment; ali: with alignment; [22] listed for comparison.)
Ablation study on feature alignment for different regions. Table 4 reports the performance of feature alignment on different body regions, from which several remarks can be made. First, in each region the alignment case outperforms the non-alignment case by a large margin, which verifies the motivation of patch alignment to alleviate inner-class variance stated in the main paper. Second, in the alignment case, the most representative features for identifying a person reside in the face region, followed by head, upper body and, last, whole body. This trend is less obvious in the non-alignment case. Third, we notice that for the whole-body region, the accuracy of the non-alignment case is higher than that of the alignment case on the time and day splits. This is probably due to an improper definition of the base points on these two sets.
Methods            | original | album | time  | day
PIPER [44]         | 83.05    | –     | –     | –
RNN [22]           | 84.93    | 78.25 | 66.43 | 43.73
Naeil [28]         | 86.78    | 78.72 | 69.29 | 46.61
Multi-context [18] | 88.75    | 83.33 | 77.00 | 59.35
Ours               | 92.78    | 83.53 | 77.68 | 61.73
Visual results. Figure 7 shows visualization results for the person recognition task.