Face recognition performance using deep learning has seen dramatic improvements in recent years. Convolutional networks trained on large datasets of millions of images of thousands of subjects have shown a remarkable capability to learn facial representations that are invariant to age, pose, illumination and expression (A-PIE) [4, 5, 6, 7, 8, 9]. These representations have shown strong performance for recognition of imagery and video in-the-wild on unconstrained datasets, with recent approaches demonstrating capabilities that exceed human performance on the well known Labeled Faces in the Wild dataset.
The problem of face recognition may be described in terms of face verification and face identification. Face verification involves computing a one-to-one similarity between a probe image and a reference image to determine whether two image observations are of the same subject. In contrast, face identification involves computing a one-to-many similarity between probe media and a gallery of known subjects in order to determine the probe's identity. Face verification is important for access control or re-identification tasks, while face identification is important for watch-list surveillance or forensic search tasks.
Face recognition performance evaluations have traditionally focused on the problem of face verification. Over the past fifteen years, face datasets have steadily increased in size in terms of number of subjects and images, as well as in complexity in terms of controlled vs. uncontrolled collection and amount of A-PIE variability. The Labeled Faces in the Wild (LFW) dataset contains 13233 images of 1680 subjects, and compares specific pairs of images of subjects to characterize 1:1 verification performance. Similarly, the YouTubeFaces dataset contains 3425 videos of 1595 subjects, and compares pairs of videos of subjects for verification. These datasets have become the established standard for face recognition research, with steadily increasing performance [11, 5, 6, 4]. Recently, protocols for face identification have been introduced for LFW to address performance evaluation for identification on a common dataset. However, the imagery in LFW was constructed with a well known near-frontal selection bias, which means evaluations are not predictive of performance under large in-the-wild pose variation. In fact, recent studies have shown that while algorithm performance for near frontal recognition is equal to or better than that of humans, performance of automated systems at the extremes of illumination and pose is still well behind human performance.
The IJB-A dataset was created to provide the newest and most challenging dataset for both verification and identification. This dataset includes both imagery and video of subjects manually annotated with facial bounding boxes to avoid near-frontal bias, along with protocols for evaluation of both verification and identification. Furthermore, this dataset performs evaluations over templates as the smallest unit of representation, instead of image-to-image or video-to-video. A template is a set of all media (images and/or videos) of a subject that are to be combined into a single representation suitable for matching. Template based representations are important for many face recognition tasks, which take advantage of a historical record of observations to further improve performance. For example, a template provides a useful abstraction to capture the mugshot history of a criminal for forensic search in law enforcement, or lifetime enrollment images for visas or driver's licenses in civil identity credentialing for improved access control. Biometric templates have been studied for face recognition, where the performance of older algorithms improved given a historical set of images. The IJB-A dataset is the only public dataset that enables a controlled evaluation of template-based verification and identification at the extremes of pose, illumination and expression.
In this paper, we study the problem of template adaptation. Template adaptation is an example of transfer learning, where the target domain is defined by the set of media of a subject in a template. In general, transfer learning includes a source domain for feature encoding of subjects trained offline, and a specific target domain with limited available observations of new subjects. In the case of template adaptation, the source domain may be a deep convolutional network trained offline to predict subject identity, and the target domain is the set of media in templates of never-before-seen subjects. We study perhaps the simplest form of template adaptation, based on deep convolutional networks and one-vs-rest linear SVMs: we combine deep CNN features trained offline to predict subject identity with a simple linear SVM classifier trained at test time, using all media in a template as positive features to classify each new subject.
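The core of this strategy fits in a few lines. The sketch below is a minimal numpy-only illustration under stated assumptions: a Pegasos-style subgradient solver stands in for the LIBLINEAR solver used in our experiments, the function name and hyperparameters (`lam`, `epochs`) are illustrative, and the bias is folded in by augmenting each feature with a constant dimension.

```python
import numpy as np

def train_template_svm(pos, neg, lam=1e-2, epochs=200, seed=0):
    """Hinge-loss linear SVM via Pegasos-style subgradient descent.

    A numpy stand-in for an off-the-shelf linear SVM solver. pos is the
    (n_pos, d) array of media encodings of the template (positives); neg is
    the (n_neg, d) large negative feature set of other identities.
    Returns the weight vector; its last entry is the bias term.
    """
    X = np.vstack([pos, neg])
    X = np.hstack([X, np.ones((len(X), 1))])   # constant dimension = bias
    y = np.concatenate([np.ones(len(pos)), -np.ones(len(neg))])
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, epochs * len(X) + 1):
        i = rng.integers(len(X))
        eta = 1.0 / (lam * t)                  # decaying step size
        margin = y[i] * (X[i] @ w)
        w *= 1.0 - eta * lam                   # regularizer shrinkage
        if margin < 1.0:                       # hinge loss is active
            w += eta * y[i] * X[i]
    return w
```

At test time, the functional margin of this classifier evaluated on another template's encoding serves as the similarity contribution for that direction of comparison.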
Extensive evaluation of template adaptation on the IJB-A dataset has generated surprising results. First, template adaptation outperforms all top performing techniques in the literature: convolutional networks combined with triplet loss similarity [6, 4, 15], joint Bayesian metric learning, pose specialized networks, 2D alignment, 3D frontalization and novel convolutional network architectures. Second, template adaptation combined with these other techniques results in nearly equivalent performance. Third, we show a clear tradeoff between the size of a template (e.g. the number of unique media in the template) and performance, which leads to the conclusion that if the average largest template size is big enough, then a simple template adaptation strategy is the best choice for both verification and identification on template based datasets.
2 Related Work
The top performing approaches for face verification on Labeled Faces in the Wild and YouTubeFaces are all based on convolutional networks. VGG-Face applies the VGG-16 convolutional network architecture, trained on a newly curated dataset of 2.6M images of 2622 subjects. This representation includes a triplet loss embedding and 2D alignment for normalization to provide state of the art performance. FaceNet applied the inception CNN architecture to the problem of face verification. This approach included metric learning to train a triplet loss embedding, learning a 128 dimensional representation optimized for verification and clustering. This network was trained using a private dataset of over 200M images. DeepFace uses a deep network coupled with 3D alignment to normalize facial pose, warping facial landmarks to a canonical position prior to encoding. DeepID2+ and DeepID3 extended the inception architecture to include joint Bayesian metric learning and multi-task learning for both identification and verification.
These top performing convolutional network architectures have interesting common properties. First, they all exhibit deep convolutional network structure, often with parallel specialized sub-networks. However, Parkhi et al. showed that the very deep VGG-16 architecture, when trained with a broad and deep dataset containing on the order of one thousand examples of each of 2622 subjects, outperformed specialized sub-networks and ensembles on YouTubeFaces. Second, many top performing approaches use some form of pose normalization, such as 2D/3D alignment [5, 4, 17], to warp the facial landmarks into a canonical frontal pose. Finally, many approaches use metric learning, in the form of triplet loss similarity or joint Bayesian metric learning as the final loss, to learn an optimal embedding for verification [6, 4, 16]. A recent independent study reached a similar conclusion: multiple networks combined in an ensemble, together with metric learning, are crucial for strong performance on LFW.
Recent evaluations on IJB-A are also based on convolutional networks and mirror the top performing approaches on LFW and YouTubeFaces. Recent approaches include deep networks using triplet loss similarity and joint Bayesian metric learning, and five pose specialized sub-networks with 3D pose rendering. Face-BCNN applies the bilinear CNN architecture to face identification, publishing the earliest results on IJB-A.
Transfer learning has been well studied in the literature, and we refer the reader to a comprehensive survey on the topic. Transfer learning and domain adaptation for convolutional networks is typically performed by pretraining the network on a labeled source domain, replacing the final loss layer for a new task, then fine-tuning the network on this new objective using data from the target domain. Prior work has shown that freezing the network, then replacing a softmax loss layer with a linear SVM loss, can result in improved performance for classification tasks [26, 27]. These approaches can be further improved by jointly training the SVM loss and the CNN parameters, so that the lower level features are fine-tuned with respect to the SVM objective. However, such retraining requires a large target domain training set to fine-tune all parameters in the deep network. In this paper, we focus on updating the linear SVM only, as this classifier has a regularization structure that has been shown to perform well for unbalanced training sets with few positive examples (e.g. from media in a template) and many negative examples.
Finally, we note that the approach of defining a similarity function for face verification using linear SVMs trained on a large negative set was originally proposed as one-shot similarity (OSS). We study a more general form of this original idea by considering templates of images and videos of varying size, alternative fusion strategies, and the impact of gallery negative sets for identification.
3 Template Adaptation
Template adaptation is a form of transfer learning, combining deep convolutional network features trained on a source domain of many labeled faces, with template specific linear SVMs trained on a target domain using the media in a template. Template adaptation can be further decomposed into probe adaptation for face verification and gallery adaptation for face identification. In this section, we describe these approaches.
First, we provide preliminary definitions. A media observation is either a color image of a subject or a set of video frames of a subject. An image encoding is a mapping from an image to a fixed-dimensional feature vector (e.g. features from a deep CNN). An average encoding is the average of the image/frame encodings in a media observation, such as the encodings for all frames in a video. A template is a set of encoded media observations of one subject. The size of a template is defined as the number of unique media used for encoding. Finally, a gallery is a set of tuples of templates and their associated subject identity labels.
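These definitions translate directly into code. The sketch below is a numpy illustration of the encoding hierarchy as defined above; the helper names are our own, and the per-frame feature vectors are assumed to come from a CNN encoder elsewhere.

```python
import numpy as np

def unit(x):
    """Unit normalize a feature vector (no-op on the zero vector)."""
    n = np.linalg.norm(x)
    return x / n if n > 0 else x

def media_encoding(frames):
    """Encode one media observation: average the per-frame encodings
    (a single image is a one-frame 'video'), then unit normalize."""
    return unit(np.mean(np.atleast_2d(frames), axis=0))

def template_encoding(media_list):
    """A template is a set of encoded media of one subject. Encode it as
    the unit-normalized mean of its media encodings."""
    return unit(np.mean([media_encoding(m) for m in media_list], axis=0))
```

The template size is then simply `len(media_list)`, the number of unique media.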
Figure 1 shows an overview of this concept. Each colored shape corresponds to a feature encoding of an image or video in a template, such as generated by a convolutional network trained offline. The gray squares correspond to encodings of a large set of media of unique subjects that are very likely to be disjoint from any template. The centroid of the colored shapes corresponds to the average encoding for this template. Probe adaptation is the problem of max-margin classification of the positive features from a template against the large negative feature set. The similarity between the blue probe template and the mated (genuine subject) green template is the margin (dotted lines) of the green feature encodings to the decision surface. Observe that this margin is positive, whereas the margin for the red classifier is negative, so that the blue/green similarity is much larger than blue/red, as desired. Gallery adaptation is the problem of max-margin classification where the negative feature set for the gallery templates is defined by the other gallery templates. Observe that adding the magenta subject causes the decision surfaces for the red and green classifiers to shift, improving the margin score for the probe.
More formally, probe adaptation trains a similarity function s(P, Q) for a probe template P and reference template Q. Train a linear SVM for P, using the unit normalized average encodings of the media in P as positive features and a large feature set N as negatives. The large negative set contains one feature encoding for each of many subject identities, so this set is very likely to be disjoint from the probe template. Similarly, train a linear SVM for Q, using the unit normalized average encodings of the media in Q as positive features and the large feature set N as negatives. Next, let P(Q) denote evaluating the SVM functional margin (e.g. w^T x + b) of the classifier trained on P, using the unit normalized average media encoding x of template Q. The final similarity score for probe adaptation is the fusion of the two classifier margins using a linear combination s(P, Q) = α P(Q) + (1 − α) Q(P). For implementation details, see section 4.1.
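Given the two trained classifiers, the fused similarity is a one-liner. A minimal sketch (the `(w, b)` tuples are assumed to come from linear SVMs trained with the respective template's media as positives; `alpha = 0.5` corresponds to average fusion):

```python
import numpy as np

def probe_similarity(svm_p, svm_q, enc_p, enc_q, alpha=0.5):
    """Symmetric fusion of the two SVM functional margins w.x + b.

    svm_p, svm_q: (w, b) classifiers trained for templates P and Q against
    a large negative set; enc_p, enc_q: unit-normalized template encodings.
    """
    (wp, bp), (wq, bq) = svm_p, svm_q
    return alpha * (np.dot(wp, enc_q) + bp) + (1.0 - alpha) * (np.dot(wq, enc_p) + bq)
```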
Gallery adaptation trains a similarity function from a probe template P to a gallery G. A gallery contains templates G_1, …, G_n, and gallery adaptation trains a linear SVM for each gallery template following the approach for probe adaptation. Gallery adaptation differs from probe adaptation in that the negative set for a gallery template G_i is the set of all unit normalized media encodings from all other templates in G, not including G_i. In other words, the other non-mated subjects in the gallery are used to construct negative features for G_i, whereas the large negative set N is used for P. The final similarity score for gallery adaptation is the fusion of the probe classifier and the gallery classifier for each G_i using the linear combination s(P, G_i) = α P(G_i) + (1 − α) G_i(P).
The proposed approach in section 3 introduces a number of research questions to study.
How does this compare to the state of the art? In section 4.2, we compare the template adaptation approach to all published results and show that the proposed approach exceeds the state of the art by a wide margin. Furthermore, in section 4.3 we perform an analysis of alternatives to combine the state of the art techniques with template adaptation and show that when combined, these alternative approaches all result in nearly the same performance.
How should the negative set be formed? Template adaptation requires training linear SVMs, which require a labeled set of positive and negative feature encodings. In section 4.4, we perform a study to evaluate different strategies of constructing this negative set including using a holdout set, external negative set and combinations. Results show that the gallery based negative set is best for gallery adaptation, and a holdout set derived from the same dataset as the templates is best for verification.
How large do the templates need to be? In section 4.5, we study the effect of template size, or total number of media in a template, on verification performance to identify the minimum template size necessary, to help guide future template based dataset construction. We show that a minimum of three unique media per template results in diminishing returns for template adaptation.
How should template classifier scores be fused? In section 4.6, we study the effect of different strategies for combining the two classifiers, based on winner take all and weighted combinations depending on template size. We conclude that an average combination is best, with winner take all a close second.
What are the error modes of the template adaptation? In section 4.7, we visualize the best and worst templates pairs in IJB-A for verification (identification errors are shown in the supplementary material), and we show that template size (e.g. number of media in a template) has the largest effect on performance.
4.1 Experimental System
We use the VGG-Face deep convolutional neural network, using the penultimate layer output as the feature encoding. For computing the average encoding across frames of video, we use face tracks, which compute the mean encoding of all frames in a video followed by unit normalization. This approach was shown to be effective for both Fisher vector encoding and deep CNN encoding.
Media encodings are preprocessed according to the following pipeline. For each media item, we crop the face using the ground truth or detected facial bounding box dilated by a factor of 1.1. Then, we anisotropically rescale this face crop to 224x224x3, such that the aspect ratio is not preserved; this is the assumed input size for the CNN. Next, we encode each face crop for each image or frame in the template using the VGG-Face network, and compute average video encodings for each video. Next, we unit normalize each media feature, and train the weights and bias of a linear SVM for each template. We use the LIBLINEAR library with an L2-regularized L2-loss primal SVM with a class weighted hinge loss objective.
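The geometric part of this pipeline can be sketched as follows. This is an illustrative numpy version, not our production code: the function names are our own, and the nearest-neighbor point sampling stands in for whatever interpolation an image library would use.

```python
import numpy as np

def dilate_box(x, y, w, h, factor=1.1):
    """Dilate a facial bounding box about its center by `factor`."""
    cx, cy = x + w / 2.0, y + h / 2.0
    w2, h2 = w * factor, h * factor
    return cx - w2 / 2.0, cy - h2 / 2.0, w2, h2

def crop_and_resize(img, box, size=224):
    """Crop the dilated face box and anisotropically rescale to size x size
    (aspect ratio deliberately not preserved), via nearest-neighbor sampling."""
    x, y, w, h = box
    rows = np.clip((y + (np.arange(size) + 0.5) * h / size).astype(int), 0, img.shape[0] - 1)
    cols = np.clip((x + (np.arange(size) + 0.5) * w / size).astype(int), 0, img.shape[1] - 1)
    return img[np.ix_(rows, cols)]
```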
The loss includes terms for both positive and negative features, minimizing (1/2)||w||^2 + C_p Σ_{i ∈ P} max(0, 1 − y_i w^T x_i)^2 + C_n Σ_{i ∈ N} max(0, 1 − y_i w^T x_i)^2, such that C_p is the regularization constant for positive observations (y_i = +1) and C_n for negative observations (y_i = −1). This formulation of the loss enables data rebalancing for cases where the negatives greatly outnumber the positives. The positive features in P are the average media encodings in the template. The negative features in N are derived from a large negative feature set (either the large negative set for probe adaptation, or the other non-mated templates for gallery adaptation). The parameters C_p and C_n adjust the regularization constants to be proportional to the inverse class frequency. The overall parameter C of the SVM, trading off regularizer and loss, was determined using a held-out validation subset of the data. Finally, the learned weights include a bias term, implemented by augmenting each feature with a constant dimension of one.
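As one concrete (assumed) instance of inverse-class-frequency weighting, the constants can be set as in the common "balanced" heuristic; the exact proportionality constant used in our experiments may differ.

```python
def class_weights(n_pos, n_neg, C=1.0):
    """Regularization constants proportional to inverse class frequency,
    rebalancing the hinge loss when n_pos << n_neg. Satisfies
    C_pos * n_pos == C_neg * n_neg, so each class contributes equally."""
    n = n_pos + n_neg
    C_pos = C * n / (2.0 * n_pos)
    C_neg = C * n / (2.0 * n_neg)
    return C_pos, C_neg
```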
At test time, we evaluate the linear SVMs as described in section 3. We compute the average encoding for each media in a template, then compute the mean of the media encodings, then unit normalize, forming a template encoding. This constructs a single feature for each template. Given two templates P and Q, let P(Q) be the evaluation of the SVM functional margin (e.g. w^T x + b) of the linear SVM trained for P, given the template encoding x for Q. Finally, the similarity is a weighted combination of the functional margins of the SVM for P evaluated on the template encoding of Q, and the SVM for Q evaluated on the template encoding of P.
For baseline comparison, we use the VGG-Face network with the 4096d feature output of the penultimate fully connected layer. Media encodings are constructed by averaging features across a video [33, 4], and template encodings are constructed by averaging media encodings over a template, then unit normalizing. Template similarity is then equivalent to negative L2 distance over unit normalized template encodings. We also compare results with 2D alignment, triplet similarity embedding and joint Bayesian triplet similarity embedding. For the triplet loss and joint Bayesian metric learning, we use hyperparameter settings such that the minibatch size is 1800, with 1M “semi-hard” negative triplets per minibatch, dropconnect regularization, 3 epochs of parallel SGD, and a fixed learning rate. For 2D alignment, we use ground truth facial bounding boxes and facial landmark regression, followed by a robust least squares similarity transform estimation to a reference box to best center the nose.
For all research studies in sections 4.3 - 4.7, we report 1:1 verification ROC curve for all probe and gallery template pairs in IJB-A split 1 and CMC for identification on IJB-A split 1 (see section 4.2 for definitions). This is equivalent to IARPA Janus Challenge Set 2 (CS2) evaluation protocol, which is also reported in the literature.
4.2 IJB-A Evaluation
In this section, we describe the results of evaluating the experimental system on the IJB-A verification and identification protocols. IJB-A contains 5712 images and 2085 videos of 500 subjects, for an average of 11.4 images and 4.2 videos per subject. This dataset was manually curated using Mechanical Turk from media-in-the-wild to annotate the facial bounding box and the eye and nose facial landmarks, and this manual annotation avoids the Viola-Jones near-frontal bias. Furthermore, this dataset was curated to control for ethnicity, country of origin and pose biases.
Metrics for 1:1 verification are evaluated using a decision error tradeoff (DET) curve. The 1:1 DET curve is equivalent to a receiver operating characteristics (ROC) curve, where the true accept rate is one minus the false negative match rate. This evaluation plots the false negative match rate vs. the false match rate as a function of similarity threshold for a given set of pairs of templates for verification.
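Given genuine (mated) and impostor (non-mated) similarity scores, one point on this curve can be computed directly. A hedged numpy sketch: the quantile-based thresholding is one reasonable convention, and evaluation harnesses may interpolate differently.

```python
import numpy as np

def fnmr_at_fmr(genuine, impostor, fmr=1e-2):
    """False non-match rate at the similarity threshold that yields the
    target false match rate on the impostor score distribution."""
    thr = np.quantile(np.asarray(impostor, float), 1.0 - fmr)
    return float(np.mean(np.asarray(genuine, float) < thr))
```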
Metrics for 1:N identification are the decision error tradeoff (DET) curve and the cumulative match characteristic (CMC) curve. The 1:N DET curve plots the false negative identification rate vs. the false positive identification rate as a function of similarity threshold for a search of L=20 candidate identities in a gallery. The 1:N CMC curve is an information retrieval metric that captures the recall of a specific probe identity within the top-K most similar candidates when searching the gallery. The 1:N DET curve is appropriate for limiting the workload of an analyst, allowing a similarity threshold to be applied to reject false matches even if they are in the top-K. For a detailed description of these metrics, refer to [3, 14].
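The CMC recall at rank K can likewise be computed from a probe-by-gallery similarity matrix. A minimal sketch under our own naming (ties are broken by argsort order):

```python
import numpy as np

def cmc_recall_at_k(similarities, true_idx, k):
    """Fraction of probes whose mated gallery identity appears in the top-k.

    similarities: (n_probes, n_gallery) similarity matrix;
    true_idx: mated gallery index for each probe.
    """
    sims = np.asarray(similarities, float)
    order = np.argsort(-sims, axis=1)[:, :k]   # top-k gallery indices per probe
    hits = [true_idx[i] in order[i] for i in range(len(sims))]
    return float(np.mean(hits))
```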
Performance evaluation for IJB-A requires evaluation over ten random splits of the dataset into training and testing (gallery and probe) sets. The evaluation protocol for 1:1 verification considers specific pairs of mated (genuine) and non-mated (impostor) subjects. The non-mated pairs were chosen to control for gender and skin tone, to make the verification problem more challenging. Performance is reported at operating points on each of the curves: the 1:1 DET reports the false negative match rate at a false match rate of 1e-2, the 1:N DET reports the true positive identification rate (e.g. 1 − false negative identification rate) at a false positive identification rate of 1e-2, and the CMC reports the true positive identification rate (recall, or correct retrieval rate) at rank one and rank ten. The ten splits are used to compute standard deviations for each of these operating points, to characterize the statistical significance of the results.
Figure 2 shows the overall evaluation results on IJB-A. This evaluation compares the baseline approach of VGG-Face only with the proposed approach of VGG-Face encoding with probe and gallery template adaptation. These results show that identification performance is slightly improved for rank 1 and rank 10 retrieval; however, there are large performance improvements for the 1:N DET for identification and the 1:1 DET for verification. The table in figure 2 shows performance at specific operating points for verification and identification, and compares to published results in the literature for joint Bayesian metric learning, triplet similarity embedding, multi-pose learning, bilinear CNNs and very deep CNNs [4, 32]. These results show that the proposed template adaptation, while conceptually simple, exhibits state-of-the-art performance by a wide margin on this dataset.
4.3 Analysis of Alternatives
Figure 4 shows an analysis of alternatives study. The state of the art approaches on LFW and YouTubeFaces often augment a very deep CNN encoding with metric learning [6, 4] for improved verification scores or 2D alignment [5, 4] to better align facial bounding boxes. In this study, we implement triplet loss similarity embedding, joint Bayesian similarity embedding and 2D alignment, and use these alternative feature encodings as input to template adaptation. In this study, we seek to answer whether these alternative strategies will provide improved performance over using CNN encoding only or CNN encoding with template adaptation.
We report the 1:1 DET for all probe and gallery template pairs in IJB-A split 1 and the CMC for identification on IJB-A split 1. This study shows that template adaptation on the CNN output provides nearly the same result as template adaptation with metric learning or 2D alignment based features. This implies that the additional training and computational requirements of these approaches are not necessary for template based datasets. Furthermore, this study shows that 2D alignment does not provide much benefit on IJB-A, in contrast with reported performance on near frontal datasets [4, 5]. One hypothesis is that this dataset has many profile faces for which facial landmark alignment is inaccurate or fails altogether.
4.4 Negative Set Study
Figure 3 shows a negative set study, in which we evaluate the effect of different combinations of negative feature sets on overall verification performance. Recall that probe and gallery template adaptation require a large negative set for training each linear SVM. This study compares combinations of features drawn from the non-mated subjects in the gallery (neg) and features drawn from an independent training set (trn). This training set is drawn from the same dataset distribution as the gallery, but is subject disjoint.
The results in figure 3 show that using the gallery set as the negative feature set provides the best performance for gallery adaptation, while using the disjoint training set is best for probe adaptation (verification). This is the final strategy used for the evaluation in figure 2. It is somewhat surprising that probe adaptation was worse when constructing a negative set combining neg+trn, as a larger negative set typically results in better generalization performance for related approaches such as exemplar-SVM. However, a larger negative set would dilute the effect of discriminating between gallery subjects, which is the primary goal of the evaluation, so a focused negative set is appropriate.
Next, we experimented with the CASIA WebFace dataset. The best negative set for probe adaptation is a set drawn from the same distribution as the templates; however, in many operational conditions, such a dataset will not be available. To study these effects, we constructed a dataset by sampling 70K images from CASIA balanced over classes, and pre-encoding these images for template adaptation training. Figure 3 (bottom) shows that this results in slightly reduced verification performance. One hypothesis is that this imagery exhibits an unmodeled dataset bias relative to IJB-A faces, or that CASIA contains images only, while IJB-A contains both imagery and video.
4.5 Template Size Study
Figure 5 shows an analysis of performance as a function of template size. For this study, we consider pairs of templates (P, Q) and compute the maximum template size s = max(|P|, |Q|). Next, we bin the maximum template sizes into ranges, and compute a verification ROC curve for only those template pairs with sizes within each range. For each range, we report a single point on the ROC curve at a false alarm rate of 1e-2 or 1e-3. Results from section 4.2 show that the largest benefit of template adaptation is on verification performance, so we analyze the effect of template size on this metric.
Figure 5 (left) shows the mean similarity score for templates of mated subjects only, within a given template size range. This shows that as the template size increases, the mated similarity score also increases. This is perhaps not surprising: the more media observations available in a template, the better the subject representation and the better the similarity score. The largest uncertainty, as shown by the error bars, is when the maximum template size is one, which is also not too surprising. Interestingly, the variance of the similarity scores does not decrease as template sizes increase; rather, it stays largely the same even as the mean similarity increases.
Figure 5 (right) shows the effect of template size on verification performance. For each point on this curve, we split the dataset into templates with sizes within the range shown. Then, we computed an ROC curve and report the true match rate at false alarm rates of 1e-3 and 1e-2, operating points on the verification ROC curve. This result shows that the rate of increase in performance is largest for few media, and performance saturates at about three media per template. Furthermore, as the number of media per template increases, the verification score at 1e-2 increases by about 19% from one media per template to sixty-four. This also shows that the largest benefit of template adaptation occurs when there are at least three media per template.
4.6 Fusion Study
Figure 6 shows a study comparing three alternatives for fusion of classifiers. Recall from section 4.1 that the final similarity score is computed as a linear combination of the margins of the classifiers trained for templates P and Q. In this section, we study different strategies for setting this weighting.
In general, the template classifier fusion from section 3 is a linear combination of SVM functional margins, s(P, Q) = α P(Q) + (1 − α) Q(P). We explore strategies based on winner take all (taking the larger of the two margins), template weighted fusion (weighting each margin by relative template size) and an experiment using the SVM geometric margin (e.g. normalizing the functional margin by ||w||), as suggested in the literature. The default strategy is average fusion, such that α = 1/2. Results show that the strategy of computing a weighted average of the probe and gallery template margins is the best strategy. We also performed a hyperparameter search over α, which confirmed this selection.
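The fusion strategies compared here can be sketched as follows. The "size" weighting shown is one hypothetical instance of template weighted fusion; the exact weighting used in our experiments is a tuned hyperparameter.

```python
def fuse_margins(m_pq, m_qp, size_p, size_q, strategy="average"):
    """Combine the two SVM functional margins into one similarity score.

    m_pq: margin of P's classifier on Q's encoding; m_qp: the reverse.
    'average': equal weights (the default, alpha = 1/2);
    'wta': winner take all, the larger of the two margins;
    'size': weight each margin by relative template size (hypothetical).
    """
    if strategy == "average":
        a = 0.5
    elif strategy == "wta":
        return max(m_pq, m_qp)
    elif strategy == "size":
        a = size_p / float(size_p + size_q)
    else:
        raise ValueError(strategy)
    return a * m_pq + (1 - a) * m_qp
```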
Finally, we also note that we ran experiments computing the margin for each average media encoding separately, then averaging the margins. This strategy performed consistently worse than first averaging the media encodings into a single template encoding.
4.7 Error Analysis
Finally, we visualized identification and verification errors in different performance domains, in order to gain insight into template-based facial recognition. This analysis provides a better understanding of the error modes to better inform future template-based facial recognition. More detailed figures and additional discussion, including identification analysis, are available in supplemental material.
Figure 7 shows four columns of verification probe and gallery pairs for: the best scoring mated pairs; worst scoring mated pairs; best scoring non-mated pairs; and worst scoring non-mated pairs. After computing the similarity for all pairs of probe and gallery templates, we sort the resulting list. Each row represents a probe and gallery template pair. The templates contain from one to dozens of media. Up to eight individual media are shown with the last space showing a mosaic of the remaining media in the template. Between the templates are the IJB-A Template IDs for probe and gallery as well as the best mated and best non-mated scores.
Figure 7 (far left) shows the highest mated similarities. In the thirty highest scoring correct matches, we immediately note that every gallery template contains dozens of media. The probe templates either contain dozens of media or one media that matches well. Figure 7 (center left) shows the lowest mated template pairs, representing failed identification. The thirty lowest mated similarities result from single-media probe templates that are low contrast, low resolution, extremely non-frontal, or not oriented upwards.
Figure 7 (center right) showing the worst non-mated pairs highlights very understandable errors involving single-media probe templates representing impostors in challenging orientations. Figure 7 (far right) showing the best non-mated similarities shows the most certain non-mates, again often involving large templates.
In this paper, we have introduced template adaptation, a simple and surprisingly effective strategy for face verification and identification that achieves state of the art performance on the IJB-A dataset. Furthermore, we showed that this strategy can be applied to existing networks to improve performance. Finally, our evaluation provides compelling evidence that many face recognition tasks can benefit from a historical record of media to aid in matching, and that this is an important problem to evaluate further with new template-based face datasets.
Our analysis shows that performance is highly dependent on the number of media available in a template. Verification scores decrease by 19% when a template contains only a single media, such as comparing image to image or video to video, as in LFW- or YouTubeFaces-style evaluations. However, when probe or gallery templates are rich, with at least one template containing more than three media, performance quickly saturates and dominates the state of the art.
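This media-count dependence can be examined by stratifying scored pairs according to the richer template in each pair. A minimal sketch, where the `(probe_id, gallery_id, score)` tuples, the `media_count` mapping, and the threshold of three media are illustrative assumptions:

```python
def stratify_by_media_count(scored_pairs, media_count, threshold=3):
    """Split scored template pairs by whether at least one template
    in the pair contains more than `threshold` media.

    scored_pairs: iterable of (probe_id, gallery_id, score) tuples
    media_count:  dict mapping template ID -> number of media it contains
    Returns (single_scores, rich_scores), the similarity scores for
    media-poor and media-rich pairs respectively.
    """
    single, rich = [], []
    for probe_id, gallery_id, score in scored_pairs:
        n = max(media_count[probe_id], media_count[gallery_id])
        (rich if n > threshold else single).append(score)
    return single, rich
```

Verification metrics computed separately on the two returned score lists would expose the gap between single-media and template-rich matching.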
It also remains to be seen whether these conclusions hold for other datasets. The IJB-A dataset is currently the only public dataset with a template-based evaluation protocol, and it may be that our performance claims are due to dataset bias, even though the composition of this dataset was engineered to avoid systemic bias. Finally, the gallery size for this dataset is limited to 500 subjects, and it remains to be seen whether the performance claims scale as the number of subjects increases.
Acknowledgment. This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) under contract number 2014-14071600010. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
-  Huang, G., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: University of Massachusetts, Amherst, Technical Report 07-49. (2007)
-  Wolf, L., Hassner, T., Maoz, I.: Face recognition in unconstrained videos with matched background similarity. In: CVPR. (2011)
-  Klare, B., Klein, B., Taborsky, E., Blanton, A., Cheney, J., Allen, K., Grother, P., Mah, A., Jain, A.: Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus benchmark A. In: CVPR. (2015)
-  Parkhi, O., Vedaldi, A., Zisserman, A.: Deep face recognition. In: BMVC. (2015)
-  Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: DeepFace: Closing the gap to human-level performance in face verification. In: CVPR. (2014)
-  Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: A unified embedding for face recognition and clustering. In: CVPR. (2015)
-  Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: Web-scale training for face identification. In: CVPR. (2015)
-  Sun, Y., Liang, D., Wang, X., Tang, X.: DeepID3: Face recognition with very deep neural networks. In: arXiv:1502.00873. (2015)
-  Sun, Y., Wang, X., Tang, X.: Deeply learned face representations are sparse, selective, and robust. In: CVPR. (2015)
-  Learned-Miller, E., Huang, G., RoyChowdhury, A., Li, H., Hua, G.: Labeled Faces in the Wild: A Survey. In: Advances in Face Detection and Facial Image Analysis. Springer (2015)
-  Lu, C., Tang, X.: Surpassing human-level face verification performance on LFW with GaussianFace. In: AAAI. (2015)
-  Best-Rowden, L., Han, H., Otto, C., Klare, B., Jain, A.K.: Unconstrained face recognition: Identifying a person of interest from a media collection. IEEE Transactions on Information Forensics and Security 9(12) (2014) 2144–2157
-  Phillips, J., Hill, M., Swindle, J., O'Toole, A.: Human and algorithm performance on the PaSC face recognition challenge. In: BTAS. (2015)
-  Grother, P., Ngan, M.: Face recognition vendor test (FRVT): Performance of face identification algorithms. In: NIST Interagency Report 8009. (2014)
-  Sankaranarayanan, S., Alavi, A., Chellappa, R.: Triplet similarity embedding for face verification. In: arXiv:1602.03418. (2016)
-  Chen, J., Patel, V., Chellappa, R.: Unconstrained face verification using deep CNN features. In: WACV. (2016)
-  AbdAlmageed, W., Wu, Y., Rawls, S., Harel, S., Hassner, T., Masi, I., Choi, J., Lekust, J., Kim, J., Natarajan, P., Nevatia, R., Medioni, G.: Face recognition using deep multi-pose representations. In: WACV. (2016)
-  RoyChowdhury, A., Lin, T., Maji, S., Learned-Miller, E.: One-to-many face recognition with bilinear CNNs. In: WACV. (2016)
-  Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR. (2015)
-  Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR. (2015)
-  Chen, D., Cao, X., Wang, L., Wen, F., Sun, J.: Bayesian face revisited: A joint formulation. In: ECCV. (2012)
-  Hu, G., Yang, Y., Yi, D., Kittler, J., Christmas, W., Li, S.Z., Hospedales, T.: When face recognition meets with deep learning: an evaluation of convolutional neural networks for face recognition. In: ICCV workshop on ChaLearn Looking at People. (2015)
-  Chen, J., Ranjan, R., Kumar, A., Chen, C., Patel, V., Chellappa, R.: An end-to-end system for unconstrained face verification with deep convolutional neural networks. In: ICCV workshop on ChaLearn Looking at People. (2015)
-  Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22(10) (2010) 1345–1359
-  Razavian, A.S., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: An astounding baseline for recognition. In: CVPR Workshop on DeepVision. (2014)
-  Tang, Y.: Deep learning with linear support vector machines. In: ICML Workshop on Representational Learning. (2013)
-  Huang, F., LeCun, Y.: Large-scale learning with SVM and convolutional nets for generic object categorization. In: CVPR. (2006)
-  Malisiewicz, T., Gupta, A., Efros, A.: Ensemble of exemplar-svms for object detection and beyond. In: ICCV. (2011)
-  Kobayashi, T.: Three viewpoints toward exemplar svm. In: CVPR. (2015)
-  Wolf, L., Hassner, T., Taigman, Y.: The one-shot similarity kernel. In: ICCV. (2009)
-  Wolf, L., Hassner, T., Taigman, Y.: Effective unconstrained face recognition by combining multiple descriptors and learned background statistics. PAMI 33(10) (2011)
-  Wang, D., Otto, C., Jain, A.: Face search at scale: 80 million gallery. In: arXiv:1507.07242. (2015)
-  Parkhi, O.M., Simonyan, K., Vedaldi, A., Zisserman, A.: A compact and discriminative face track descriptor. In: CVPR. (2014)
-  Fan, R., Chang, K., Hsieh, C., Wang, X., Lin, C.: LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research 9 (2008) 1871–1874
-  Wan, L., Zeiler, M., Zhang, S., LeCun, Y., Fergus, R.: Regularization of neural networks using DropConnect. In: ICML. (2013)
-  Zinkevich, M., Weimer, M., Li, L., Smola, A.: Parallelized stochastic gradient descent. In: NIPS. (2010)
-  Kazemi, V., Sullivan, J.: One millisecond face alignment with an ensemble of regression trees. In: CVPR. (2014)
-  Yi, D., Lei, Z., Liao, S., Li, S.: Learning face representation from scratch. In: arXiv:1411.7923. (2014)