1 Introduction
Convolutional neural networks (CNNs) have achieved great success in the computer vision community by improving the state of the art in almost all applications, especially in classification problems such as object [14, 13, 12, 20, 29, 22, 33] and scene [43, 44] classification. The key to the success of CNNs is the availability of large-scale training data and the end-to-end learning framework. The most commonly used CNNs perform feature learning and label prediction by mapping the input raw data to deep embedded features, which are commonly the output of the last fully connected (FC) layer, and then predicting labels from these deep embedded features. These approaches use the deep embedded features holistically, without knowing which part of the features is used or what it means.
Face recognition in unconstrained environments is a very challenging problem in computer vision. Faces of the same identity can look very different under different illuminations, facial poses, facial expressions, and occlusions. Such variations within the same identity can overwhelm the variations due to identity differences and make face recognition challenging. To address these problems, many deep learning-based approaches have been proposed and have achieved high face recognition accuracy, such as DeepFace
[34], the DeepID series [30, 32, 31, 41], FaceNet [28], PIMNet [17], SphereFace [23], and ArcFace [10]. In face recognition tasks in unconstrained environments, the deeply learned and embedded features need to be not only separable but also discriminative. However, these features are learned implicitly as separable and distinct representations to classify among different identities, without knowing which part of the features is used, which part is meaningful, and which part is separable and discriminative. Therefore, it is difficult to know clearly what kind of features are used to discriminate the identities of face images.
To overcome this limitation, we propose a novel face recognition method, called the pairwise relational network (PRN), to capture unique relations within the same identity and discriminative relations among different identities. To capture relations, the PRN takes as input local appearance patches obtained by ROI projection around landmark points on the feature maps of a backbone CNN. With these local appearance patches, the PRN is trained to capture unique pairwise relations between pairs of local appearance patches to determine facial part-relational structures and properties in face images. Because the existence and meaning of pairwise relations should be identity dependent, the PRN can condition its processing on a facial identity state feature. The facial identity state feature is learned by a long short-term memory (LSTM) network from the sequence of local appearance patches on the feature maps. To further improve the accuracy of face recognition, we combine the global appearance representation with the local appearance representation (the relation features) (Figure 1). More details of the proposed face recognition method are given in Section 2.

The main contributions of this paper can be summarized as follows:
- We propose a novel face recognition method using the pairwise relational network (PRN), which captures unique and discriminative pairwise relations of local appearance patches on the feature maps to classify face images among different identities.
- We show that the proposed PRN is very useful for enhancing the accuracy of both face verification and face identification.
- We present extensive experiments on publicly available datasets such as Labeled Faces in the Wild (LFW), YouTube Faces (YTF), IARPA Janus Benchmark-A (IJB-A), and IARPA Janus Benchmark-B (IJB-B).
The rest of this paper is organized as follows: in Section 2, we describe the proposed face recognition method, including the base CNN architecture, face alignment, the pairwise relational network, the facial identity state feature, and the loss functions used for training; in Section 3, we present experimental results of the proposed method on public benchmark datasets in comparison with the state of the art, together with a discussion; in Section 4, we draw conclusions.

2 Proposed Methods
In this section, we describe our method in detail, including the base CNN model used as the backbone network for the global appearance representation, the face alignment method, the pairwise relational network, the pairwise relational network with the face identity state feature, and the loss functions.
2.1 Base Convolutional Neural Network
We first describe the base CNN model. It is the backbone network used to produce the global appearance representation and to extract the local appearance patches from which relations are captured (Figure 1). The base CNN model consists of several 3-layer residual bottleneck blocks similar to the ResNet-101 [13]. The ResNet-101 has one convolution layer, one max pooling layer, 3-layer residual bottleneck blocks, one global average pooling (GAP) layer, one FC layer, and a softmax loss layer. The ResNet-101 accepts an image as input and has convolution filters with a stride of 2 in the first layer. In contrast, our base CNN model accepts a face image as input and has convolution filters with a stride of 1 in the first layer (conv1 in Table 1). Because of the different input resolution, kernel filter size, and stride, the output size of each intermediate layer also differs from the original ResNet-101. In the last layers, we use a GAP layer and an FC layer, and the outputs of the FC layer are fed into the softmax loss layer. More details of the base CNN architecture are given in Table 1. To represent the global appearance, we use the feature that is the output of the GAP layer in the base CNN (Table 1).

To obtain the local appearance representation, we extract local appearance patches from the feature maps (conv5_3) of the base CNN (Table 1) by ROI projection around facial landmark points. These patches are used to capture and model pairwise relations between them. More details of the local appearance patches and relations are described in Section 2.3.
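A minimal NumPy sketch of this ROI projection is shown below; the function name, the pooling of each projected region into a single patch vector, and all sizes are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def roi_project_patches(feature_map, landmarks, input_size, patch_px):
    """Illustrative ROI projection: crop a small window around each facial
    landmark on the last convolutional feature map (e.g. conv5_3).

    feature_map : (H, W, C) array from the backbone CNN
    landmarks   : (N, 2) array of (x, y) landmark points in input-image pixels
    input_size  : side length of the (square) aligned input face image
    patch_px    : size of the square region around each landmark, in input pixels
    """
    fh, fw, _ = feature_map.shape
    scale_x, scale_y = fw / input_size, fh / input_size   # image -> feature-map scale
    half = patch_px / 2.0
    patches = []
    for (x, y) in landmarks:
        # project the image-space window onto feature-map coordinates
        x0 = int(np.clip(np.floor((x - half) * scale_x), 0, fw - 1))
        x1 = int(np.clip(np.ceil((x + half) * scale_x), x0 + 1, fw))
        y0 = int(np.clip(np.floor((y - half) * scale_y), 0, fh - 1))
        y1 = int(np.clip(np.ceil((y + half) * scale_y), y0 + 1, fh))
        region = feature_map[y0:y1, x0:x1, :]
        # pool the projected region into one local appearance vector f_i
        patches.append(region.mean(axis=(0, 1)))
    return np.stack(patches)                               # (N, C)
```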
Layer name | Output size | 101-layer |
---|---|---|
conv1 | , | |
conv2_x | max pool, stride | |
conv3_x | ||
conv4_x | ||
conv5_x | ||
global average pool, 8630-d fc, softmax |
2.2 Face Alignment
In the base CNN model, the input layer accepts the RGB values of the face image pixels. We employ a face alignment method to align each face image into a canonical face image, and then adopt this aligned face image as the input to our base CNN model.
The alignment procedure is as follows: 1) use the DAN implementation of Kowalski et al. [19], a multi-stage neural network, to detect facial landmark points (Figure 2b); 2) rotate the face in the image plane to make it upright based on the eye positions (Figure 2c); 3) find a central point on the face by taking the mid-point between the leftmost and rightmost landmark points (the red point in Figure 2d); 4) find the center points of the eyes and mouth (blue points in Figure 2d) by averaging all the landmark points in the eye and mouth regions, respectively; 5) center the face along the x-axis, based on the central point (red point); 6) fix the position along the y-axis by placing the eye center point at a fixed distance from the top of the image and the mouth center point at a fixed distance from the bottom of the image, respectively; 7) resize the image to the input resolution. Each pixel value in the RGB color space is then normalized into a fixed range by division.
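The sketch below illustrates these steps with NumPy and OpenCV; the output size, the vertical eye/mouth anchor fractions, the landmark index groups, and the [0, 1] normalization are illustrative assumptions rather than the paper's exact values.

```python
import numpy as np
import cv2  # OpenCV is used here only for the rotation warp and the final resize

def align_face(image, landmarks, left_eye_idx, right_eye_idx, mouth_idx,
               out_size=256, eye_y_frac=0.35, mouth_y_frac=0.35):
    """Illustrative version of the alignment steps in Section 2.2.
    out_size, eye_y_frac, and mouth_y_frac are placeholders; the paper's exact
    output resolution and vertical anchor positions are not reproduced here."""
    lm = np.asarray(landmarks, dtype=np.float64)

    # 2) rotate in the image plane so the line between the eye centers is horizontal
    left_eye = lm[left_eye_idx].mean(axis=0)
    right_eye = lm[right_eye_idx].mean(axis=0)
    angle = np.degrees(np.arctan2(right_eye[1] - left_eye[1],
                                  right_eye[0] - left_eye[0]))
    center = (float(lm[:, 0].mean()), float(lm[:, 1].mean()))
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    image = cv2.warpAffine(image, rot, (image.shape[1], image.shape[0]))
    lm = (rot @ np.hstack([lm, np.ones((len(lm), 1))]).T).T  # rotate landmarks too

    # 3)-4) central point (mid-point of leftmost/rightmost landmarks) and
    #        eye / mouth center points (means of the corresponding landmark groups)
    face_cx = 0.5 * (lm[:, 0].min() + lm[:, 0].max())
    eye_c = lm[list(left_eye_idx) + list(right_eye_idx)].mean(axis=0)
    mouth_c = lm[mouth_idx].mean(axis=0)

    # 5)-7) crop so the face is horizontally centered and the eye / mouth centers
    #        land at fixed vertical fractions, then resize to the output resolution
    face_h = (mouth_c[1] - eye_c[1]) / (1.0 - eye_y_frac - mouth_y_frac)
    top, left = eye_c[1] - eye_y_frac * face_h, face_cx - 0.5 * face_h
    crop = image[int(max(top, 0)):int(top + face_h),
                 int(max(left, 0)):int(left + face_h)]
    aligned = cv2.resize(crop, (out_size, out_size))

    # normalize RGB pixel values into a fixed range (here [0, 1], dividing by 255)
    return aligned.astype(np.float32) / 255.0
```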

2.3 Pairwise Relational Network
The pairwise relational network (PRN) is a neural network that takes a set of local appearance patches on the feature maps as input and outputs a single feature vector as its relational feature for the face recognition task. The PRN captures unique pairwise relations between pairs of local appearance patches within the same identity and discriminative pairwise relations among different identities. In other words, the PRN captures the core common properties of faces within the same identity, while capturing the discriminative properties of faces among different identities. Therefore, the PRN aims to determine pairwise-relational structures from pairs of local appearance patches in face images. The relation feature $r_{i,j}$ represents a latent relation of a pair of two local appearance patches, and can be written as follows:

$$r_{i,j} = \mathcal{G}_{\theta}(p_{i,j}) \qquad (1)$$

where $\mathcal{G}_{\theta}$ is a multi-layer perceptron (MLP) and its parameters $\theta$ are learnable weights. $p_{i,j} = \{f_i, f_j\}$ is a pair of two local appearance patches, where $f_i$ and $f_j$ are the $i$-th and $j$-th local appearance patches corresponding to the respective facial landmark points. Each local appearance patch is extracted by ROI projection, which projects a region around the $i$-th landmark point in the input image space to a region on the feature map space. The same MLP operates on all possible pairings of local appearance patches. Permutation invariance over the local appearance patches is critical for the PRN; without this invariance, the PRN would have to learn to operate on all possible permuted pairs of local appearance patches without explicit knowledge of the permutation-invariance structure in the data. To incorporate this permutation invariance, we constrain the PRN with an aggregation function (Fig. 3):
$$r = \mathcal{A}\left(\{r_{i,j}\}\right) = \sum_{i,j} \mathcal{G}_{\theta}(p_{i,j}) \qquad (2)$$
where $r$ is the aggregated relational feature and $\mathcal{A}$ is the aggregation function, defined as the summation of all pairwise relations over all possible pairings of the local appearance patches. Finally, a prediction of the PRN can be performed with:
$$f = \mathcal{F}_{\phi}(r) \qquad (3)$$
where $\mathcal{F}_{\phi}$ is a function with learnable parameters $\phi$, implemented by an MLP. Therefore, the final form of the PRN is a composite function as follows:
$$\mathrm{PRN}(P) = \mathcal{F}_{\phi}\left(\mathcal{A}\left(\{\mathcal{G}_{\theta}(p_{i,j})\}\right)\right) \qquad (4)$$
where $P = \{p_{i,j}\}$ is the set of all possible pairs of local appearance patches and $N$ denotes the number of local patches on the feature maps.
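A minimal NumPy sketch of the composite function in Eq. (4) is shown below; the pair construction by concatenation, the exclusion of self-pairs, and the tiny `mlp` helper are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def mlp(x, weights):
    """Tiny stand-in for an MLP with ReLU activations (weights: list of (W, b))."""
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = weights[-1]
    return x @ W + b

def prn_forward(patches, g_weights, f_weights):
    """Sketch of Eq. (4): apply G_theta to every pair of local appearance
    patches, sum the relation features (aggregation A), then apply F_phi.

    patches : (N, C) local appearance patch features f_1..f_N
    """
    n = patches.shape[0]
    relations = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p_ij = np.concatenate([patches[i], patches[j]])  # pair p_{i,j}
            relations.append(mlp(p_ij, g_weights))           # r_{i,j} = G_theta(p_{i,j})
    r = np.sum(relations, axis=0)                            # aggregation A: summation
    return mlp(r, f_weights)                                 # prediction F_phi(r)
```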

To capture unique pairwise relations within the same identity and discriminative pairwise relations among different identities, a pairwise relation should be identity dependent. We therefore modify the PRN so that $\mathcal{G}_{\theta}$ can condition its processing on the identity information. To condition on the identity information, we embed a face identity state feature $s_{id}$ into the PRN as follows:
$$\mathrm{PRN}^{+}(P) = \mathcal{F}_{\phi}\left(\mathcal{A}\left(\{\mathcal{G}_{\theta}(p_{i,j}, s_{id})\}\right)\right) \qquad (5)$$
To obtain $s_{id}$, we use the final state of a recurrent neural network composed of LSTM layers and two FC layers that processes the sequence of all local appearance patches (Figures 1 and 4).

2.3.1 Face Identity State Feature
Pairwise relations should be identity dependent to capture unique and discriminative pairwise relations. Based on the feature maps output by the conv5_3 layer of the base CNN model, the face is divided into local regions by ROI projection around the landmark points. From these local regions, we extract the local appearance patches used to model the facial identity state feature $s_{id}$. Let $f_i$ denote the local appearance patch of the $i$-th local region. To encode the facial identity state feature $s_{id}$, an LSTM-based network is devised on top of the set of local appearance patches $F = \{f_1, \dots, f_N\}$ as follows:
$$s_{id} = \mathcal{E}_{\psi}(F) \qquad (6)$$
where $\mathcal{E}_{\psi}$ is a neural network module composed of LSTM layers and two FC layers with learnable parameters $\psi$. We train $\mathcal{E}_{\psi}$ with the softmax loss function (Fig. 4). The detailed configuration of $\mathcal{E}_{\psi}$ used in our proposed method is presented in Section 3.1.2.
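As a rough illustration, a tf.keras sketch of such an LSTM-based module is given below (the paper's implementation uses Keras with TensorFlow, see Section 3.1.3); the number of LSTM layers, the layer widths, and all names are placeholders, since the exact configuration is described in Section 3.1.2.

```python
import tensorflow as tf

def build_identity_state_net(num_patches, patch_dim, lstm_units, fc_units, num_ids):
    """Sketch of E_psi in Eq. (6): LSTM layers over the sequence of local
    appearance patches followed by two FC layers; the final FC output is used
    as the face identity state feature s_id. All layer sizes are placeholders."""
    patches = tf.keras.Input(shape=(num_patches, patch_dim))     # sequence f_1..f_N
    x = tf.keras.layers.LSTM(lstm_units, return_sequences=True)(patches)
    x = tf.keras.layers.LSTM(lstm_units)(x)                      # final LSTM state
    x = tf.keras.layers.Dense(fc_units, activation='relu')(x)
    s_id = tf.keras.layers.Dense(fc_units, activation='relu', name='s_id')(x)
    # trained with a softmax identity classifier (cross-entropy loss), as in Fig. 4
    logits = tf.keras.layers.Dense(num_ids, activation='softmax')(s_id)
    return tf.keras.Model(patches, [s_id, logits])
```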

2.4 Loss Function
To learn the proposed PRN, we jointly use the triplet ratio loss $L_t$, the pairwise loss $L_p$, and the identity preserving loss $L_{id}$ (softmax) to minimize distances between faces of the same identity and to maximize distances between faces of different identities:
$$L = \lambda_1 L_t + \lambda_2 L_p + \lambda_3 L_{id} \qquad (7)$$
During training of the PRN, the weights $\lambda_1$, $\lambda_2$, and $\lambda_3$ are set empirically.
2.4.1 Triplet Ratio Loss
The triplet ratio loss $L_t$ is defined to maximize the ratio of distances between the positive pairs and the negative pairs in the triplets of faces. To maximize this ratio, the Euclidean distances of positive pairs should be minimized and those of negative pairs should be maximized. Let $F(x)$, where $x$ is an input facial image, denote the output of the network (the output of $\mathcal{F}_{\phi}$ in the PRN); then $L_t$ is defined as follows:
$$L_t = \sum_{(x_a, x_p, x_n) \in T} \max\left(0,\; 1 - \frac{\lVert F(x_a) - F(x_n) \rVert_2}{\lVert F(x_a) - F(x_p) \rVert_2 + m}\right) \qquad (8)$$
where $F(x_a)$ is the output of the network for an anchor face $x_a$, $F(x_p)$ is the output for a positive face $x_p$, and $F(x_n)$ is the output for a negative face $x_n$ in the triplets of faces $T$, respectively, and $m$ is a margin that defines a minimum ratio in Euclidean space. Kang et al. [17] reported that training with only $L_t$ yields an unbalanced range of distances between pairs of data; that is, although the ratio of the distances is bounded within a certain range, the range of the absolute distances is not. To overcome this problem, they constrained $L_t$ by adding the pairwise loss $L_p$.
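A small NumPy sketch of this loss is given below; since the exact functional form of Eq. (8) is reconstructed from the description above, treat it as an approximation rather than the paper's definitive formula.

```python
import numpy as np

def triplet_ratio_loss(f_a, f_p, f_n, margin):
    """Sketch of the triplet ratio loss L_t (Eq. (8)): penalize triplets whose
    negative-pair distance is not sufficiently larger than the positive-pair
    distance, so that the negative-to-positive ratio is pushed up.

    f_a, f_p, f_n : (B, D) embeddings of anchor, positive, and negative faces
    """
    d_pos = np.linalg.norm(f_a - f_p, axis=1)   # distance of the positive pair
    d_neg = np.linalg.norm(f_a - f_n, axis=1)   # distance of the negative pair
    # per-triplet hinge: zero once d_neg >= d_pos + margin
    per_triplet = np.maximum(0.0, 1.0 - d_neg / (d_pos + margin))
    return per_triplet.sum()
```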
2.4.2 Pairwise Loss
The pairwise loss $L_p$ is defined to minimize the sum of the squared Euclidean distances between $F(x_a)$ for the anchor face $x_a$ and $F(x_p)$ for the positive face $x_p$, where the pairs $(x_a, x_p)$ come from the triplets $T$:
$$L_p = \sum_{(x_a, x_p) \in T} \lVert F(x_a) - F(x_p) \rVert_2^2 \qquad (9)$$
Joint training with $L_t$ and $L_p$ minimizes the absolute Euclidean distance between the face images of a given positive pair in the triplets of faces $T$.
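For illustration, the sketch below (reusing the `triplet_ratio_loss` sketch above) computes the pairwise loss and a weighted joint objective as in Eq. (7); the weights `w_t`, `w_p`, `w_id` and the softmax cross-entropy implementation are placeholder assumptions.

```python
import numpy as np

def pairwise_loss(f_a, f_p):
    """Sketch of L_p (Eq. (9)): sum of squared Euclidean distances between
    anchor and positive embeddings of the triplets."""
    return np.sum((f_a - f_p) ** 2)

def total_loss(f_a, f_p, f_n, id_logits, id_labels, margin, w_t, w_p, w_id):
    """Joint objective of Eq. (7); w_t, w_p, w_id stand in for the elided
    empirical weights, and triplet_ratio_loss is the sketch defined earlier."""
    l_t = triplet_ratio_loss(f_a, f_p, f_n, margin)
    l_p = pairwise_loss(f_a, f_p)
    # identity-preserving softmax cross-entropy over ground-truth identities
    probs = np.exp(id_logits - id_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    l_id = -np.log(probs[np.arange(len(id_labels)), id_labels] + 1e-12).mean()
    return w_t * l_t + w_p * l_p + w_id * l_id
```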
3 Experiments
The implementation details are given in Section 3.1. Then, we investigate the effectiveness of the PRN and of the PRN with the face identity state feature in Section 3.2. In Sections 3.3, 3.4, 3.5, and 3.6, we perform several experiments to verify the effectiveness of the proposed method on the public face benchmark datasets LFW [15], YTF [38], IJB-A [18], and IJB-B [37].
3.1 Implementation Details
3.1.1 Training Data
We used the web-collected VGGFace2 face dataset [3]. All of the faces in the VGGFace2 dataset and their landmark points are detected by the recently proposed face detector [42] and facial landmark point detector [19]. We used the landmark points for face alignment and for extraction of the local appearance patches. When the detection of faces or facial landmark points fails, we simply discard the image. After removing the images without landmark points, the refined dataset roughly contains M images of unique persons. We generated a validation set by randomly selecting a portion of images from each subject in the refined dataset, and the remaining images are used as the training set.
3.1.2 Detailed settings in the PRN
For pairwise relations between facial parts, we first extract a set of local appearance patches $F = \{f_1, \dots, f_N\}$ from the local regions around the landmark points by ROI projection on the feature maps (conv5_3 in Table 1) of the backbone CNN model. Using this set, we form all possible pairs of local appearance patches. Then, we use a three-layered MLP with batch normalization (BN) [16] and rectified linear unit (ReLU) [25] non-linear activation functions in each layer for $\mathcal{G}_{\theta}$, and a three-layered MLP with BN and ReLU non-linear activation functions in each layer for $\mathcal{F}_{\phi}$. To aggregate all relations from $\mathcal{G}_{\theta}$, we use summation as the aggregation function. The PRN is jointly optimized with the triplet ratio loss $L_t$, the pairwise loss $L_p$, and the identity preserving loss $L_{id}$ (softmax) over the ground-truth identity labels, using the stochastic gradient descent (SGD) optimization method. We train with mini-batches on four NVIDIA Titan X GPUs. During training of the PRN, we freeze the backbone CNN model and update only the weights of the PRN model.

To capture unique and discriminative pairwise relations dependent on identity, the PRN should condition its processing on the face identity state feature $s_{id}$. For $s_{id}$, we use the LSTM-based recurrent network $\mathcal{E}_{\psi}$ over the sequence of local appearance patches, which is the set $F$ ordered by landmark-point order; in other words, there is one sequence of length $N$ per face. $\mathcal{E}_{\psi}$ consists of LSTM layers and a two-layered MLP; each LSTM layer has a fixed number of memory cells, and the two MLP layers have fixed numbers of units. The cross-entropy loss with softmax was used for training $\mathcal{E}_{\psi}$ (Figure 4).
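To make this configuration concrete, the tf.keras sketch below builds a three-layered MLP with BN and ReLU (as used for $\mathcal{G}_{\theta}$ and $\mathcal{F}_{\phi}$) and freezes a stand-in backbone so that only the PRN weights are updated; the layer widths, the ResNet101 stand-in (not the modified base CNN of Table 1), and the learning rate are placeholder assumptions for the elided values.

```python
import tensorflow as tf

def relation_mlp(units, depth=3, name=None):
    """Three-layered MLP with BN [16] and ReLU [25], as used for G_theta and F_phi;
    `units` is a placeholder for the elided per-layer width."""
    layers = []
    for _ in range(depth):
        layers += [tf.keras.layers.Dense(units, use_bias=False),
                   tf.keras.layers.BatchNormalization(),
                   tf.keras.layers.ReLU()]
    return tf.keras.Sequential(layers, name=name)

# stand-in backbone (NOT the modified base CNN of Table 1): frozen during PRN training
backbone = tf.keras.applications.ResNet101(include_top=False, weights=None)
backbone.trainable = False                              # only the PRN weights are updated

g_theta = relation_mlp(units=1024, name="g_theta")      # pair p_{i,j} -> relation r_{i,j}
f_phi = relation_mlp(units=1024, name="f_phi")          # aggregated relation -> output
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)  # placeholder learning rate
```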
3.1.3 Detailed settings in the model
We implemented the base CNN and the PRN models using the Keras framework [7] with the TensorFlow [1] backend. For a fair comparison of the effects of each network module, we train three kinds of models (model A, model B, and model C) under the supervision of the cross-entropy loss with softmax:
- model A is the baseline model, which is the base CNN (Table 1).
- model B combines two different networks: one is the base CNN model (model A) and the other is the PRN (Eq. (4)). It concatenates the output feature of the GAP layer in model A, used as the global appearance representation, with the output of the MLP $\mathcal{F}_{\phi}$ in the PRN without the face identity state feature, used as the local appearance representation. These two output features are concatenated into a single feature vector, which is then fed into an FC layer.
- model C is the combined model with the output of the base CNN model (model A) and the output of the PRN (Eq. (5)) with the face identity state feature $s_{id}$. The output of model A in model C is the same as that in model B. The output of the PRN with $s_{id}$ has the same size as that of the PRN without it, but the output values differ.
All convolution layers and FC layers use BN and ReLU as non-linear activation functions, except for the LSTM layers in $\mathcal{E}_{\psi}$.
3.2 Effects of the PRNs
To investigate the effectiveness of the PRN and the face identity state feature $s_{id}$, we performed experiments in terms of classification accuracy on the validation set during training. For these experiments, we trained two different network models: one is the PRN alone (Eq. (4)), and the other is the PRN conditioned on $s_{id}$ (Eq. (5)). We measured the classification accuracy of both models. From these evaluations, when using the PRN with $s_{id}$, we observed that the face identity state feature represents the identity property and that the pairwise relations should depend on the identity of the face image. Therefore, these evaluations validate the effectiveness of the PRN and the importance of the face identity state feature. We visualize the localized facial parts in Figure 5, where Col. 1, Col. 2, and Col. 3 of each identity are the aligned facial image, the detected facial landmark points, and the facial parts localized by ROI projection on the feature maps, respectively. We can see that the localized appearance representations are discriminative among different identities.

3.3 Experiments on the Labeled Faces in the Wild (LFW)
We evaluated the proposed method on the LFW dataset, a standard benchmark for face verification in unconstrained environments. The LFW dataset contains web-crawled images with large variations in illumination, occlusion, facial pose, and facial expression, from different identities. Our models (model A, model B, and model C) are trained on the outside training set (roughly M images from VGGFace2), with no identities overlapping the subjects in the LFW. Following the test protocol of unrestricted with labeled outside data [21], we test on face pairs by using a squared distance threshold to determine whether a pair is classified as same or different, and report the results in comparison with the state-of-the-art methods (Table 2).
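As a small illustration of this protocol, the NumPy sketch below classifies a pair as "same" when the squared Euclidean distance between its embeddings falls below a threshold; the threshold value and variable names are assumptions for illustration.

```python
import numpy as np

def verify_pairs(emb1, emb2, same_label, threshold):
    """Sketch of the verification decision: a pair is declared 'same' when the
    squared Euclidean distance between its embeddings is below a threshold
    (the threshold itself would be chosen on held-out data)."""
    sq_dist = np.sum((emb1 - emb2) ** 2, axis=1)   # squared distance per pair
    pred_same = sq_dist < threshold                # same/different decision
    return np.mean(pred_same == same_label)        # verification accuracy
```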
Method | Images | Networks | Dimension | Accuracy (%) |
---|---|---|---|---|
DeepFace [34] | M | |||
DeepID [30] | ||||
DeepID2+ [32] | ||||
DeepID3 [41] | ||||
FaceNet [28] | M | |||
Learning from Scratch [40] | ||||
CenterFace [36] | M | |||
PIMNet [17] | ||||
PIMNet [17] | ||||
SphereFace [23] | ||||
ArcFace [10] | M | |||
model A (baseline, only ) | M | |||
M | ||||
M | ||||
model B ( + ) | M | |||
model C () | M |
From the experimental results (Table 2), we have the following observations. First, the PRN-based representation by itself provides slightly better accuracy than the baseline model A (the base CNN model, which uses only the global appearance representation) and outperforms model B, which jointly combines the global appearance representation with the PRN. Second, model C (which jointly combines the global appearance representation with the PRN conditioned on the face identity state feature) beats the baseline model A by a significant margin. This shows that the combination of the global appearance representation and the PRN can notably increase the discriminative power of deeply learned features, and demonstrates the effectiveness of the pairwise relations between local facial appearance parts (local appearance patches). Third, compared to model B, model C achieved better verification accuracy. This shows the importance of the face identity state feature for capturing unique and discriminative pairwise relations in the designed PRN model. Last, compared to the state-of-the-art methods on the LFW, the proposed model C is among the top-ranked approaches, outperforming most of the existing results (Table 2). This shows the importance and advantage of the proposed method.
3.4 Experiments on the YouTube Face Dataset (YTF)
We evaluated the proposed method on the YTF dataset, a standard benchmark for face verification in unconstrained environments. The YTF dataset is an excellent benchmark for video-based face verification and contains videos with large variations in illumination, facial pose, and facial expression, from different identities, with several videos per person on average; the length of the video clips varies. We follow the test protocol of unrestricted with labeled outside data. We test on video pairs and report the test results in comparison with the state-of-the-art methods (Table 3).
Method | Images | Networks | Dimension | Accuracy (%) |
---|---|---|---|---|
DeepFace [34] | M | |||
DeepID2+ [32] | ||||
FaceNet [28] | M | |||
Learning from Scratch [40] | ||||
CenterFace [36] | M | |||
SphereFace [23] | ||||
NAN [39] | M | |||
model A (baseline, only ) | M | |||
M | ||||
M | ||||
model B ( + ) | M | |||
model C () | M |
From the experimental results (Table 3), we have the following observations. First, the PRN-based representation by itself provides slightly better accuracy than the baseline model A (the base CNN model, which uses only the global appearance representation) and outperforms model B, which jointly combines the global appearance representation with the PRN. Second, model C (which jointly combines the global appearance representation with the PRN conditioned on the face identity state feature) beats the baseline model A by a significant margin. This shows that the combination of the global appearance representation and the PRN can notably increase the discriminative power of deeply learned features, and demonstrates the effectiveness of the pairwise relations between local facial appearance patches. Third, compared to model B, model C achieved better verification accuracy. This shows the importance of the face identity state feature for capturing unique pairwise relations in the designed PRN model. Last, compared to the state-of-the-art methods on the YTF, the proposed model C achieves state-of-the-art accuracy, outperforming the existing results (Table 3). This shows the importance and advantage of the proposed method.
3.5 Experiments on the IARPA Janus Benchmark A (IJB-A)
We evaluated the proposed method on the IJB-A dataset [18], which contains face images and videos captured in unconstrained environments. It features full pose variation and wide variations in imaging conditions and is thus very challenging. It contains subjects with images and videos in total, and images and videos per subject on average. We detect faces using the face detector [42] and landmark points using the DAN landmark point detector [19], and then align the face images with our face alignment method explained in Section 2.2. In this dataset, each training and testing instance is called a 'template', which comprises a mixture of still images and video frames. The IJB-A dataset provides split evaluations with two protocols (1:1 face verification and 1:N face identification). For face verification, we report the test results using the true accept rate (TAR) vs. the false accept rate (FAR) (Table 4). For face identification, we report the results using the true positive identification rate (TPIR) vs. the false positive identification rate (FPIR) and Rank-N (Table 4). All measurements are based on a squared distance threshold.
Method | 1:1 Verification TAR | 1:N Identification TPIR | |||||||
FAR=0.001 | FAR=0.01 | FAR=0.1 | FPIR=0.01 | FPIR=0.1 | Rank-1 | Rank-5 | Rank-10 | ||
B-CNN [8] | - | - | - | - | |||||
LSFS [35] | - | ||||||||
DCNN+metric [6] | - | - | - | ||||||
Triplet Similarity [27] | |||||||||
Pose-Aware Models [24] | - | - | - | ||||||
Deep Multi-Pose [2] | - | ||||||||
DCNN [5] | - | ||||||||
Triplet Embedding [27] | - | ||||||||
VGG-Face [26] | - | - | - | ||||||
Template Adaptation [9] | |||||||||
NAN [39] | |||||||||
VGGFace2 [3] | |||||||||
model A (baseline, only ) | |||||||||
model B ( + ) | |||||||||
model C ( + ) |
From the experimental results (Table 4), we have the following observations. First, compared to model A (the base CNN model), model C (which jointly combines the global appearance representation with the PRN conditioned on the face identity state feature) achieved consistently superior accuracy (TAR and TPIR) on both 1:1 face verification and 1:N face identification. Second, compared to model B (which jointly combines the global appearance representation with the PRN), model C also achieved consistently better accuracy (TAR and TPIR) on both 1:1 face verification and 1:N face identification. Last, and more importantly, model C is trained from scratch and achieves results comparable to the state of the art (VGGFace2 [3]), which is first pre-trained on the MS-Celeb-1M dataset [11], containing roughly 10M face images, and then fine-tuned on the VGGFace2 dataset. This suggests that our proposed method could be further improved by training on both the MS-Celeb-1M dataset and our training dataset.
3.6 Experiments on the IARPA Janus Benchmark B (IJB-B)
We evaluated the proposed method on the IJB-B dataset [37], which contains face images and videos captured in unconstrained environments. The IJB-B dataset is an extension of IJB-A, containing subjects with K still images (including face and non-face) and K frames from videos, an average of images per subject. Because images in this dataset are labeled with ground-truth bounding boxes, we only detect landmark points using DAN [19] and then align the face images with our face alignment method. Unlike IJB-A, it does not contain any training splits. In particular, we use the 1:1 Baseline Verification protocol and the 1:N Mixed Media Identification protocol for IJB-B. For face verification, we report the test results using TAR vs. FAR (Table 5). For face identification, we report the results using TPIR vs. FPIR and Rank-N (Table 5). We compare our proposed methods with VGGFace2 [3] and FacePoseNet (FPN) [4]. All measurements are based on a squared distance threshold.
Method | 1:1 Verification TAR | 1:N Identification TPIR | ||||||||
---|---|---|---|---|---|---|---|---|---|---|
FAR=0.00001 | FAR=0.0001 | FAR=0.001 | FAR=0.01 | FPIR=0.01 | FPIR=0.1 | Rank-1 | Rank-5 | Rank-10 | ||
VGGFace2 [3] | ||||||||||
VGGFace2_ft [3] | ||||||||||
FPN [4] | - | - | - | |||||||
model A (baseline, only ) | ||||||||||
model B ( + ) | ||||||||||
model C ( + ) |
From the experimental results, we have the following observations. First, compared to model A (the base CNN model, which uses only the global appearance representation), model C (which jointly combines the global appearance representation with the PRN conditioned on the face identity state feature as the local appearance representation) achieved consistently superior accuracy (TAR and TPIR) on both 1:1 face verification and 1:N face identification. Second, compared to model B (which jointly combines the global appearance representation with the PRN), model C also achieved consistently better accuracy (TAR and TPIR) on both 1:1 face verification and 1:N face identification. Last, and more importantly, model C achieved consistent improvements in TAR and TPIR on both 1:1 face verification and 1:N face identification, and achieved state-of-the-art results on the IJB-B.
4 Conclusion
We proposed a novel face recognition method using the pairwise relational network (PRN), which takes local appearance patches around landmark points on the feature maps and captures unique pairwise relations between pairs of local appearance patches. To capture unique and discriminative relations for face recognition, the pairwise relations should be identity dependent. Therefore, the PRN conditions its processing on the face identity state feature, embedded by an LSTM-based network over the sequence of local appearance patches. To further improve the accuracy of face recognition, we combined the global appearance representation with the PRN. Experiments verified the effectiveness and importance of the proposed PRN and the face identity state feature, which achieved competitive accuracy on the LFW, state-of-the-art accuracy on the YTF, results comparable to the state of the art for both face verification and identification tasks on the IJB-A, and state-of-the-art results on the IJB-B.
Acknowledgment
This research was supported by the MSIT, Korea, under the SW Starlab support program (IITP-2017-0-00897), and “ICT Consilience Creative program” (IITP-2018-2011-1-00783) supervised by the IITP.
References
- [1] Abadi, M., et al.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015), https://www.tensorflow.org/, software available from tensorflow.org
- [2] AbdAlmageed, W., Wu, Y., Rawls, S., Harel, S., Hassner, T., Masi, I., Choi, J., Lekust, J., Kim, J., Natarajan, P., Nevatia, R., Medioni, G.: Face recognition using deep multi-pose representations. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1–9 (March 2016). https://doi.org/10.1109/WACV.2016.7477555
- [3] Cao, Q., Shen, L., Xie, W., Parkhi, O.M., Zisserman, A.: Vggface2: A dataset for recognising faces across pose and age. CoRR abs/1710.08092 (2017), http://arxiv.org/abs/1710.08092
- [4] Chang, F.J., Tran, A.T., Hassner, T., Masi, I., Nevatia, R., Medioni, G.: Faceposenet: Making a case for landmark-free face alignment. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW). pp. 1599–1608 (Oct 2017). https://doi.org/10.1109/ICCVW.2017.188
- [5] Chen, J.C., Patel, V.M., Chellappa, R.: Unconstrained face verification using deep cnn features. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1–9 (March 2016). https://doi.org/10.1109/WACV.2016.7477557
- [6] Chen, J.C., Ranjan, R., Kumar, A., Chen, C.H., Patel, V.M., Chellappa, R.: An end-to-end system for unconstrained face verification with deep convolutional neural networks. In: 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). pp. 360–368 (Dec 2015). https://doi.org/10.1109/ICCVW.2015.55
- [7] Chollet, F., et al.: Keras. https://github.com/fchollet/keras (2015)
- [8] Chowdhury, A.R., Lin, T.Y., Maji, S., Learned-Miller, E.: One-to-many face recognition with bilinear cnns. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1–9 (March 2016). https://doi.org/10.1109/WACV.2016.7477593
- [9] Crosswhite, N., Byrne, J., Stauffer, C., Parkhi, O., Cao, Q., Zisserman, A.: Template adaptation for face verification and identification. In: 2017 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017). pp. 1–8 (May 2017). https://doi.org/10.1109/FG.2017.11
- [10] Deng, J., Guo, J., Zafeiriou, S.: ArcFace: Additive Angular Margin Loss for Deep Face Recognition. ArXiv e-prints (Jan 2018)
- [11] Guo, Y., Zhang, L., Hu, Y., He, X., Gao, J.: MS-Celeb-1M: A dataset and benchmark for large scale face recognition. In: European Conference on Computer Vision. pp. 87–102. Springer International Publishing (2016)
- [12] He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: 2015 IEEE International Conference on Computer Vision (ICCV). pp. 1026–1034 (Dec 2015). https://doi.org/10.1109/ICCV.2015.123
- [13] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 770–778 (June 2016). https://doi.org/10.1109/CVPR.2016.90
- [14] Huang, G., Liu, Z., v. d. Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2261–2269 (July 2017). https://doi.org/10.1109/CVPR.2017.243
- [15] Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Tech. Rep. 07-49, University of Massachusetts, Amherst (October 2007)
- [16] Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015. pp. 448–456 (2015), http://jmlr.org/proceedings/papers/v37/ioffe15.html
- [17] Kang, B.N., Kim, Y., Kim, D.: Deep convolutional neural network using triplets of faces, deep ensemble, and score-level fusion for face recognition. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). pp. 611–618 (July 2017). https://doi.org/10.1109/CVPRW.2017.89
- [18] Klare, B.F., Klein, B., Taborsky, E., Blanton, A., Cheney, J., Allen, K., Grother, P., Mah, A., Burge, M., Jain, A.K.: Pushing the frontiers of unconstrained face detection and recognition: Iarpa janus benchmark a. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1931–1939 (June 2015). https://doi.org/10.1109/CVPR.2015.7298803
- [19] Kowalski, M., Naruniec, J., Trzcinski, T.: Deep alignment network: A convolutional neural network for robust face alignment. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). pp. 2034–2043 (July 2017). https://doi.org/10.1109/CVPRW.2017.254
- [20] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1. pp. 1097–1105. NIPS’12 (2012), http://dl.acm.org/citation.cfm?id=2999134.2999257
- [21] Learned-Miller, G.B.H.E.: Labeled faces in the wild: Updates and new reporting procedures. Tech. Rep. UM-CS-2014-003, University of Massachusetts, Amherst (May 2014)
- [22] Liu, S., Deng, W.: Very deep convolutional neural network based image classification using small training sample size. In: 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR). pp. 730–734 (Nov 2015). https://doi.org/10.1109/ACPR.2015.7486599
- [23] Liu, W., Wen, Y., Yu, Z., Li, M., Raj, B., Song, L.: Sphereface: Deep hypersphere embedding for face recognition. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 6738–6746 (July 2017). https://doi.org/10.1109/CVPR.2017.713
- [24] Masi, I., Rawls, S., Medioni, G., Natarajan, P.: Pose-aware face recognition in the wild. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4838–4846 (June 2016). https://doi.org/10.1109/CVPR.2016.523
- [25] Nair, V., Hinton, G.E.: Rectified linear units improve restricted boltzmann machines. In: Proceedings of the 27th International Conference on International Conference on Machine Learning. pp. 807–814. ICML'10 (2010), http://dl.acm.org/citation.cfm?id=3104322.3104425
- [26] Parkhi, O.M., Vedaldi, A., Zisserman, A.: Deep face recognition. In: Proceedings of the British Machine Vision Conference (BMVC). pp. 41.1–41.12 (September 2015). https://doi.org/10.5244/C.29.41
- [27] Sankaranarayanan, S., Alavi, A., Castillo, C.D., Chellappa, R.: Triplet probabilistic embedding for face verification and clustering. In: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS). pp. 1–8 (Sept 2016). https://doi.org/10.1109/BTAS.2016.7791205
- [28] Schroff, F., Kalenichenko, D., Philbin, J.: Facenet: A unified embedding for face recognition and clustering. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 815–823 (June 2015). https://doi.org/10.1109/CVPR.2015.7298682
- [29] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014), http://arxiv.org/abs/1409.1556
- [30] Sun, Y., Wang, X., Tang, X.: Deep learning face representation from predicting 10,000 classes. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. pp. 1891–1898 (June 2014). https://doi.org/10.1109/CVPR.2014.244
- [31] Sun, Y., Wang, X., Tang, X.: Deeply learned face representations are sparse, selective, and robust pp. 2892–2900 (June 2015). https://doi.org/10.1109/CVPR.2015.7298907
- [32] Sun, Y., Chen, Y., Wang, X., Tang, X.: Deep learning face representation by joint identification-verification pp. 1988–1996 (2014), http://papers.nips.cc/paper/5416-deep-learning-face-representation-by-joint-identification-verification.pdf
- [33] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1–9 (June 2015). https://doi.org/10.1109/CVPR.2015.7298594
- [34] Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: Deepface: Closing the gap to human-level performance in face verification. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. pp. 1701–1708 (June 2014). https://doi.org/10.1109/CVPR.2014.220
- [35] Wang, D., Otto, C., Jain, A.K.: Face search at scale. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(6), 1122–1136 (June 2017). https://doi.org/10.1109/TPAMI.2016.2582166
- [36] Wen, Y., Zhang, K., Li, Z., Qiao, Y.: A discriminative feature learning approach for deep face recognition. In: Computer Vision – ECCV 2016. pp. 499–515. Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-46478-7-31
- [37] Whitelam, C., Taborsky, E., Blanton, A., Maze, B., Adams, J., Miller, T., Kalka, N., Jain, A.K., Duncan, J.A., Allen, K., Cheney, J., Grother, P.: Iarpa janus benchmark-b face dataset. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). pp. 592–600 (2017). https://doi.org/10.1109/CVPRW.2017.87
- [38] Wolf, L., Hassner, T., Maoz, I.: Face recognition in unconstrained videos with matched background similarity. In: CVPR 2011. pp. 529–534 (June 2011). https://doi.org/10.1109/CVPR.2011.5995566
- [39] Yang, J., Ren, P., Zhang, D., Chen, D., Wen, F., Li, H., Hua, G.: Neural aggregation network for video face recognition. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 5216–5225 (July 2017). https://doi.org/10.1109/CVPR.2017.554
- [40] Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. CoRR abs/1411.7923 (2014), http://arxiv.org/abs/1411.7923
- [41] Yi Sun, Ding Liang, X.W., Tang, X.: Deepid3: Face recognition with very deep neural networks. CoRR abs/1502.00873 (2015), http://arxiv.org/abs/1502.00873
- [42] Yoon, J., Kim, D.: An accurate and real-time multi-view face detector using orfs and doubly domain-partitioning classifier. Journal of Real-Time Image Processing (Feb 2018). https://doi.org/10.1007/s11554-018-0751-6
- [43] Zhou, B., Khosla, A., Lapedriza, À., Oliva, A., Torralba, A.: Object detectors emerge in deep scene cnns. CoRR abs/1412.6856 (2014), http://arxiv.org/abs/1412.6856
- [44] Zhou, B., Lapedriza, A., Xiao, J., Torralba, A., Oliva, A.: Learning deep features for scene recognition using places database. In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 27, pp. 487–495. Curran Associates, Inc. (2014), http://papers.nips.cc/paper/5349-learning-deep-features-for-scene-recognition-using-places-database.pdf