Pairwise Relational Networks for Face Recognition

08/15/2018
by   Bong-Nam Kang, et al.
POSTECH

In existing face recognition with deep neural networks, it is difficult to know clearly what kind of features are used to discriminate the identities of face images. To investigate the features that are effective for face recognition, we propose a novel face recognition method, called a pairwise relational network (PRN), that obtains local appearance patches around landmark points on the feature map and captures the pairwise relations between pairs of local appearance patches. The PRN is trained to capture unique and discriminative pairwise relations among different identities. Because the existence and meaning of pairwise relations should be identity dependent, we add a face identity state feature, obtained from a long short-term memory (LSTM) network over the sequential local appearance patches on the feature maps, to the PRN. To further improve the accuracy of face recognition, we combine the global appearance representation with the pairwise relational feature. Experimental results on the LFW show that the PRN using only pairwise relations achieved 99.65% accuracy, and the PRN using both pairwise relations and the face identity state feature achieved 99.76% accuracy. On the YTF, the PRN using pairwise relations together with the face identity state feature achieved the state-of-the-art accuracy (95.7%). The proposed method also achieved results comparable to the state-of-the-art for both face verification and face identification tasks on the IJB-A, and the state-of-the-art on the IJB-B.


1 Introduction

Convolutional neural networks (CNNs) have achieved great success in the computer vision community, improving the state of the art in almost all applications, especially in classification problems including object recognition [14, 13, 12, 20, 29, 22, 33], scene recognition [43, 44], and so on. The key to the success of CNNs is the availability of large-scale training data and the end-to-end learning framework. The most commonly used CNNs perform feature learning and label prediction by mapping the input raw data to deep embedded features, which are commonly the output of the last fully connected (FC) layer, and then predicting labels using these deep embedded features. These approaches use the deep embedded features holistically for their applications, without knowing what part of the features is used and what it means.

Face recognition in unconstrained environments is a very challenging problem in computer vision. Faces of the same identity can look very different under different illuminations, facial poses, facial expressions, and occlusions. Such variations within the same identity can overwhelm the variations due to identity differences and make face recognition challenging. To address these problems, many deep learning-based approaches have been proposed and have achieved high face recognition accuracy, such as DeepFace

[34], DeepID series [30, 32, 31, 41], FaceNet [28], PIMNet [17], SphereFace [23], and ArcFace [10].

In face recognition tasks in unconstrained environments, the deeply learned and embedded features need to be not only separable but also discriminative. However, these features are learned implicitly as separable and distinct representations for classifying different identities, without knowing what part of the features is used, what part is meaningful, and what part is separable and discriminative. Therefore, it is difficult to know clearly what kind of features are used to discriminate the identities of face images.

To overcome this limitation, we propose a novel face recognition method, called a pairwise relational network (PRN), to capture unique relations within the same identity and discriminative relations among different identities. To capture relations, the PRN takes as input local appearance patches obtained by ROI projection around landmark points on the feature map of a backbone CNN. With these local appearance patches, the PRN is trained to capture unique pairwise relations between pairs of local appearance patches and thereby determine facial part-relational structures and properties in face images. Because the existence and meaning of pairwise relations should be identity dependent, the PRN can condition its processing on a face identity state feature. The face identity state feature is learned from a long short-term memory (LSTM) network over the sequential local appearance patches on the feature maps. To further improve the accuracy of face recognition, we combine the global appearance representation with the local appearance representation (the relation features) (Figure 1). More details of the proposed face recognition method are given in Section 2.

Figure 1: Overview of the proposed face recognition method

The main contributions of this paper can be summarized as follows:

  • We propose a novel face recognition method using the pairwise relational network (PRN) which captures the unique and discriminative pairwise relations of local appearance patches on the feature maps to classify face images among different identities.

  • We show that the proposed PRN is very useful to enhance the accuracy of both face verification and face identification.

  • We present extensive experiments on publicly available datasets such as Labeled Faces in the Wild (LFW), YouTube Faces (YTF), IARPA Janus Benchmark-A (IJB-A), and IARPA Janus Benchmark-B (IJB-B).

The rest of this paper is organized as follows: in Section 2 we describe the proposed face recognition method, including the base CNN architecture, face alignment, the pairwise relational network, the face identity state feature, and the loss functions used for training; in Section 3 we present experimental results of the proposed method in comparison with the state-of-the-art on public benchmark datasets, together with a discussion; in Section 4 we draw conclusions.

2 Proposed Methods

In this section, we describe our methods in detail, including the base CNN model used as the backbone network for the global appearance representation, the face alignment method, the pairwise relational network, the pairwise relational network with the face identity state feature, and the loss functions.

2.1 Base Convolutional Neural Network

We first describe the base CNN model. It is the backbone neural network used to compute the global appearance representation and to extract the local appearance patches from which relations are captured (Figure 1). The base CNN model consists of several 3-layer residual bottleneck blocks, similar to ResNet-101 [13]. The ResNet-101 has one convolution layer, one max pooling layer, 3-layer residual bottleneck blocks, one global average pooling (GAP) layer, one FC layer, and a softmax loss layer, and uses convolution filters with a stride of 2 in its first layer. In contrast, our base CNN model accepts an aligned face image as input and uses convolution filters with a stride of 1 in the first layer (conv1 in Table 1). Because of the different input resolution, kernel filter size, and stride, the output size of each intermediate layer also differs from the original ResNet-101. In the last layers, we use GAP over each channel followed by an FC layer, and the outputs of the FC layer are fed into the softmax loss layer. More details of the base CNN architecture are given in Table 1.

To obtain the global appearance representation f_g, we use the output feature of the GAP layer in the base CNN (Table 1).

To obtain the local appearance representation, we extract local appearance patches from the feature maps (conv5_3) of the base CNN (Table 1) by ROI projection with facial landmark points. These patches are used to capture and model the pairwise relations between them. More details of the local appearance patches and relations are described in Section 2.3.

Layer name Output size 101-layer
conv1 ,
conv2_x max pool, stride
conv3_x
conv4_x
conv5_x
global average pool, 8630-d fc, softmax
Table 1: Base convolutional neural network. The base CNN is similar to ResNet-101, but the dimensionality of the input, the size of the convolution filters, and the size of each output feature map differ from the original ResNet-101
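As an illustration of the ROI projection described above, the following Python sketch maps facial landmark coordinates from input-image space onto the conv5_3 feature maps and pools a small window around each projected point into one local appearance patch per landmark. The input size, window size, feature-map size, and landmark count are hypothetical placeholders, and this is a sketch rather than the authors' code.

```python
# Sketch of ROI projection: landmark points in input-image coordinates are
# projected onto the conv5_3 feature maps and pooled into patch vectors.
import numpy as np

def roi_project_patches(feature_map, landmarks, input_size=140, half_window=1):
    """feature_map: (H, W, C) conv5_3 activations for one aligned face.
    landmarks: (N, 2) array of (x, y) points in input-image coordinates.
    Returns an (N, C) array: one local appearance patch per landmark."""
    H, W, C = feature_map.shape
    scale_x, scale_y = W / float(input_size), H / float(input_size)
    patches = []
    for (x, y) in landmarks:
        # project the landmark from input-image space to feature-map space
        fx = int(np.clip(round(x * scale_x), 0, W - 1))
        fy = int(np.clip(round(y * scale_y), 0, H - 1))
        # average-pool a small window around the projected point
        x0, x1 = max(0, fx - half_window), min(W, fx + half_window + 1)
        y0, y1 = max(0, fy - half_window), min(H, fy + half_window + 1)
        patches.append(feature_map[y0:y1, x0:x1, :].mean(axis=(0, 1)))
    return np.stack(patches)

# toy usage: a 9x9x2048 feature map and 68 landmarks in a 140x140 input (assumed sizes)
fmap = np.random.rand(9, 9, 2048).astype(np.float32)
lms = np.random.uniform(0, 140, size=(68, 2))
print(roi_project_patches(fmap, lms).shape)  # (68, 2048)
```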

2.2 Face Alignment

In the base CNN model, the input layer accepts the RGB values of the face image pixels. We employ a face alignment method to align a face image into a canonical face image, and then adopt this aligned face image as the input of our base CNN model.

The alignment procedure is as follows: 1) use the DAN implementation of Kowalski et al., a multi-stage neural network [19], to detect facial landmark points (Figure 2b); 2) rotate the face in the image plane to make it upright based on the eye positions (Figure 2c); 3) find a central point on the face by taking the mid-point between the leftmost and rightmost landmark points (the red point in Figure 2d); 4) find the center points of the eyes and mouth (blue points in Figure 2d) by averaging all the landmark points in the eye and mouth regions, respectively; 5) center the face along the x-axis based on the central point (red point); 6) fix the position along the y-axis by placing the eye center point at a fixed distance from the top of the image and the mouth center point at a fixed distance from the bottom of the image; 7) resize the image to the input resolution of the base CNN. Each pixel value, which lies in the range [0, 255] in the RGB color space, is normalized by division so that it lies in the range [0, 1].

Figure 2: Face alignment. The original image is shown in (a); (b) shows the detected landmark points; (c) shows the aligned landmark points in the aligned image plane; and (d) is the final aligned face image, where the red circle is the point used to center the face image along the x-axis and the blue circles denote the two points used for face cropping
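The following Python/OpenCV sketch illustrates one way the alignment steps above could be implemented. Landmark detection (DAN [19]) is assumed to be given, and the output size, the eye/mouth anchor fractions, and the 68-point landmark indexing are hypothetical placeholders rather than the authors' settings.

```python
# Sketch of the alignment steps: in-plane rotation from eye positions,
# x-centering from the leftmost/rightmost landmarks, y-placement from the
# eye and mouth centers, resizing, and [0, 1] normalization.
import numpy as np
import cv2

def align_face(image, landmarks, out_size=140, eye_y=0.35, mouth_y=0.20):
    """image: HxWx3 RGB array; landmarks: (68, 2) points from a detector such as DAN.
    Returns an upright, centered, resized face crop normalized to [0, 1]."""
    # 1-2) rotate in the image plane so that the eye line becomes horizontal
    left_eye = landmarks[36:42].mean(axis=0)   # hypothetical 68-point indexing
    right_eye = landmarks[42:48].mean(axis=0)
    angle = float(np.degrees(np.arctan2(right_eye[1] - left_eye[1],
                                        right_eye[0] - left_eye[0])))
    cx0, cy0 = landmarks.mean(axis=0)
    M = cv2.getRotationMatrix2D((float(cx0), float(cy0)), angle, 1.0)
    rotated = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
    lms = np.c_[landmarks, np.ones(len(landmarks))] @ M.T   # rotated landmarks

    # 3-6) central point = mid-point of leftmost/rightmost landmarks (x centering);
    # eye and mouth centers placed at fixed fractions from the top/bottom (y placement)
    eye_center = lms[36:48].mean(axis=0)
    mouth_center = lms[48:].mean(axis=0)
    cx = 0.5 * (lms[:, 0].min() + lms[:, 0].max())
    face_h = (mouth_center[1] - eye_center[1]) / (1.0 - eye_y - mouth_y)
    y0 = max(0, int(eye_center[1] - eye_y * face_h))
    x0 = max(0, int(cx - face_h / 2))
    crop = rotated[y0:y0 + int(face_h), x0:x0 + int(face_h)]

    # 7) resize to the network input size and normalize RGB from [0, 255] to [0, 1]
    crop = cv2.resize(crop, (out_size, out_size))
    return crop.astype(np.float32) / 255.0
```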

2.3 Pairwise Relational Network

The pairwise relational network (PRN) is a neural network that takes a set of local appearance patches on the feature maps as input and outputs a single feature vector as its relational feature for the face recognition task. The PRN captures unique pairwise relations between pairs of local appearance patches within the same identity and discriminative pairwise relations among different identities. In other words, the PRN captures the core common properties of faces within the same identity, while capturing the discriminative properties of faces among different identities. Therefore, the PRN aims to determine pairwise-relational structures from pairs of local appearance patches in face images. The relation feature r_{i,j} represents a latent relation between a pair of two local appearance patches, and can be written as follows:

r_{i,j} = G_θ(p_{i,j})    (1)

where G_θ is a multi-layer perceptron (MLP) whose parameters θ are learnable weights. p_{i,j} = {f_i, f_j} is a pair of two local appearance patches, where f_i and f_j are the i-th and j-th local appearance patches corresponding to the i-th and j-th facial landmark points, respectively. Each local appearance patch is extracted by ROI projection, which projects a region around a landmark point in the input image space onto a region of the feature maps. The same MLP operates on all possible pairings of local appearance patches.

The permutation order of the local appearance patches is critical for the PRN: without permutation invariance, the PRN would have to learn to operate on all possible permuted pairs of local appearance patches without explicit knowledge of the permutation-invariance structure in the data. To incorporate this permutation invariance, we constrain the PRN with an aggregation function (Fig. 3):

r_agg = A(P) = Σ_{(i,j)} r_{i,j}    (2)

where r_agg is the aggregated relational feature and A is the aggregation function, here the summation of the pairwise relations over all possible pairings of the local appearance patches. Finally, a prediction of the PRN can be performed with:

f_PRN = F_φ(r_agg)    (3)

where F_φ is a function with learnable parameters φ, implemented as an MLP. Therefore, the final form of the PRN is a composite function as follows:

PRN(P) = F_φ( A( {G_θ(p_{i,j})} ) )    (4)

where P = {p_{i,j}} is the set of all possible pairs of local appearance patches and N denotes the number of local patches on the feature maps.

Figure 3: Pairwise Relational Network (PRN). The PRN is a neural network module and takes a set of local appearance patches on the feature maps as input and outputs a single feature vector as its relational feature for the recognition task. The PRN captures unique pairwise relations between pairs of local appearance patches within the same identity and discriminative pairwise relations among different identities
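The forward pass of Eqs. (1) to (4) can be summarized in a small NumPy sketch: a shared MLP G_θ is applied to every pair of local appearance patches, the resulting relations are summed, and a second MLP F_φ maps the aggregate to the relational feature. The layer sizes, the patch count, and the use of unordered pairs below are assumptions for illustration, not the authors' configuration.

```python
# Sketch of the PRN forward pass: shared MLP over pairs, sum aggregation, final MLP.
import itertools
import numpy as np

def mlp(x, weights):
    """Simple ReLU MLP; `weights` is a list of (W, b) tuples."""
    for W, b in weights[:-1]:
        x = np.maximum(0.0, x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def init_mlp(sizes, rng):
    return [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def prn_forward(patches, g_theta, f_phi):
    """patches: (N, C) local appearance patches of one face."""
    relations = []
    for i, j in itertools.combinations(range(len(patches)), 2):
        p_ij = np.concatenate([patches[i], patches[j]])  # pair p_{i,j} = {f_i, f_j}
        relations.append(mlp(p_ij, g_theta))             # Eq. (1): r_{i,j} = G_theta(p_{i,j})
    r_agg = np.sum(relations, axis=0)                    # Eq. (2): summation over all pairs
    return mlp(r_agg, f_phi)                             # Eq. (3): F_phi(r_agg)

# toy usage with hypothetical sizes (16 patches with 256 channels each)
rng = np.random.default_rng(0)
g_theta = init_mlp([2 * 256, 128, 128, 128], rng)
f_phi = init_mlp([128, 128, 128, 64], rng)
relational_feature = prn_forward(rng.standard_normal((16, 256)), g_theta, f_phi)
print(relational_feature.shape)  # (64,)
```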

To capture unique pairwise relations within the same identity and discriminative pairwise relations among different identities, a pairwise relation should be identity dependent. We therefore modify the PRN such that G_θ can condition its processing on identity information. To condition on the identity information, we embed a face identity state feature s_id in the PRN as follows:

PRN^+(P) = F_φ( A( {G_θ(p_{i,j}, s_id)} ) )    (5)

To obtain s_id, we use the final state of a recurrent neural network composed of LSTM layers and two FC layers that processes the sequence of all local appearance patches (Figures 1 and 4).

2.3.1 Face Identity State Feature

Pairwise relations should be identity dependent in order to capture unique and discriminative pairwise relations. Based on the feature maps output by the conv5_3 layer of the base CNN model, the face is divided into local regions by ROI projection around the landmark points. From these local regions, we extract the local appearance patches used to model the face identity state feature s_id. Let f_i denote the local appearance patch of the i-th local region. To encode the face identity state feature s_id, an LSTM-based network is devised on top of the set of local appearance patches as follows:

s_id = E_ψ( {f_1, f_2, …, f_N} )    (6)

where E_ψ is a neural network module composed of the LSTM layers and two FC layers with learnable parameters ψ. We train E_ψ with a softmax loss function (Fig. 4). The detailed configuration of E_ψ used in our proposed method is presented in Section 3.1.2.

Figure 4: Face identity state feature. A face on the feature maps is divided into regions by ROI projection around landmark points. A sequence of local appearance patches from these regions is used to encode the face identity state feature with an LSTM network
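A minimal tf.keras sketch of the identity state encoder E_ψ in Eq. (6) is shown below: stacked LSTM layers read the sequence of local appearance patches in landmark order, and two FC layers produce s_id, trained under a softmax loss. The layer sizes are assumptions; only the 8,630-way softmax follows Table 1.

```python
# Sketch of E_psi: LSTM layers over the patch sequence, two FC layers, softmax training.
import tensorflow as tf

NUM_PATCHES, PATCH_DIM = 68, 2048      # assumed: one patch per landmark, conv5_3 channels
STATE_DIM, NUM_IDENTITIES = 256, 8630  # s_id size assumed; 8630 classes as in Table 1

inputs = tf.keras.Input(shape=(NUM_PATCHES, PATCH_DIM))       # sequence of local patches
x = tf.keras.layers.LSTM(512, return_sequences=True)(inputs)  # stacked LSTM layers (sizes assumed)
x = tf.keras.layers.LSTM(512)(x)                               # final LSTM state
x = tf.keras.layers.Dense(512, activation='relu')(x)           # first FC layer
s_id = tf.keras.layers.Dense(STATE_DIM, activation='relu', name='s_id')(x)  # second FC layer
probs = tf.keras.layers.Dense(NUM_IDENTITIES, activation='softmax')(s_id)

e_psi = tf.keras.Model(inputs, probs)                           # trained with softmax loss
e_psi.compile(optimizer='sgd', loss='sparse_categorical_crossentropy')
s_id_extractor = tf.keras.Model(inputs, s_id)                   # read out s_id at inference time
```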

2.4 Loss Function

To train the proposed PRN, we jointly use the triplet ratio loss L_t, the pairwise loss L_p, and the identity preserving loss L_id (softmax) to minimize distances between faces that have the same identity and to maximize distances between faces of different identities:

L = λ_1 L_t + λ_2 L_p + λ_3 L_id    (7)

During training of the PRN, we set the weights λ_1, λ_2, and λ_3 empirically.

2.4.1 Triplet Ratio Loss

The triplet ratio loss L_t is defined to maximize the ratio of distances between the positive pairs and the negative pairs in triplets of faces. To maximize this ratio, the Euclidean distances of positive pairs should be minimized and those of negative pairs should be maximized. Let F(I), where I is the input facial image, denote the output of the network (the output of F_φ in the PRN); then L_t is defined as follows:

L_t = Σ_{(I_a, I_p, I_n) ∈ T} max( 0, 1 − ||F(I_a) − F(I_n)||_2 / ( m · ||F(I_a) − F(I_p)||_2 ) )    (8)

where F(I_a) is the output of the network for an anchor face I_a, F(I_p) is the output for a positive face image I_p, and F(I_n) is the output for a negative face I_n in the triplets of faces T, and m is a margin that defines the minimum ratio in Euclidean space. Kang et al. [17] reported that using only L_t during training leads to an unbalanced range of distances measured between pairs of data; that is, although the ratio of the distances is bounded in a certain range of values, the range of the absolute distances is not. To overcome this problem, they constrained the absolute distances by adding the pairwise loss L_p.

2.4.2 Pairwise Loss

The pairwise loss L_p is defined to minimize the sum of the squared Euclidean distances between F(I_a) for the anchor face I_a and F(I_p) for the positive face I_p. These pairs (I_a, I_p) are taken from the triplets T.

L_p = Σ_{(I_a, I_p) ∈ T} ||F(I_a) − F(I_p)||_2^2    (9)

Joint training with L_t and L_p minimizes the absolute Euclidean distance between the face images of a given positive pair in the triplets of faces T.
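Under the reconstructed forms of Eqs. (7) to (9), the joint objective can be sketched in NumPy as follows. The exact triplet ratio formulation and the weights λ_1, λ_2, λ_3 are assumptions here, since the original values are not specified above.

```python
# Sketch of the training objectives: triplet ratio loss, pairwise loss, and
# their weighted combination with an identity-preserving (softmax) term.
import numpy as np

def triplet_ratio_loss(f_a, f_p, f_n, m=2.0):
    """Hinge on the ratio of anchor-negative to anchor-positive distances (assumed form)."""
    d_ap = np.linalg.norm(f_a - f_p, axis=1)
    d_an = np.linalg.norm(f_a - f_n, axis=1)
    return np.maximum(0.0, 1.0 - d_an / (m * d_ap + 1e-12)).sum()

def pairwise_loss(f_a, f_p):
    """Sum of squared Euclidean distances between anchor and positive embeddings."""
    return np.sum((f_a - f_p) ** 2)

def joint_loss(f_a, f_p, f_n, ce_identity, lambdas=(1.0, 1.0, 1.0)):
    l1, l2, l3 = lambdas  # hypothetical weights
    return (l1 * triplet_ratio_loss(f_a, f_p, f_n)
            + l2 * pairwise_loss(f_a, f_p)
            + l3 * ce_identity)

# toy usage with a batch of 4 triplets of 256-d embeddings
rng = np.random.default_rng(0)
f_a, f_p, f_n = (rng.standard_normal((4, 256)) for _ in range(3))
print(joint_loss(f_a, f_p, f_n, ce_identity=1.2))
```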

3 Experiments

The implementation details are given in Section 3.1. We then investigate the effectiveness of the PRN and of the PRN with the face identity state feature in Section 3.2. In Sections 3.3, 3.4, 3.5, and 3.6, we perform several experiments to verify the effectiveness of the proposed method on the public face benchmark datasets LFW [15], YTF [38], IJB-A [18], and IJB-B [37].

3.1 Implementation Details

3.1.1 Training Data

We used the web-collected VGGFace2 face dataset [3]. All of the faces in the VGGFace2 dataset and their landmark points are detected by the recently proposed face detector [42] and facial landmark point detector [19]. We used the detected landmark points for face alignment and for the extraction of local appearance patches. When the detection of a face or of facial landmark points failed, we simply discarded the image. After removing the images without landmark points, several million images of unique persons remained. We generated a validation set by randomly selecting a small portion of images from each subject in the refined dataset, and the remaining images were used as the training set.

3.1.2 Detailed settings in the PRN

For pairwise relations between facial parts, we first extract a set of local appearance patches F = {f_1, …, f_N} from the local regions around the landmark points by ROI projection on the feature maps (conv5_3 in Table 1) of the backbone CNN model. From F, we form all possible pairs of local appearance patches. We then use a three-layer MLP with batch normalization (BN) [16] and rectified linear unit (ReLU) [25] non-linear activations for G_θ, and a three-layer MLP with BN and ReLU non-linear activations for F_φ. To aggregate all relations, we use summation as the aggregation function. The PRN is jointly optimized by the triplet ratio loss L_t, the pairwise loss L_p, and the identity preserving loss L_id (softmax) over the ground-truth identity labels, using the stochastic gradient descent (SGD) optimization method. Training is performed with mini-batches on four NVIDIA Titan X GPUs. During training of the PRN, we freeze the backbone CNN model and only update the weights of the PRN model.

To capture unique and discriminative pairwise relations dependent on the identity, the PRN should condition its processing on the face identity state feature s_id. For s_id, we use an LSTM-based recurrent network over a sequence of the local appearance patches, ordered by the landmark point order of F; in other words, there is one fixed-length sequence per face. E_ψ consists of LSTM layers and a two-layer MLP. The cross-entropy loss with softmax was used for training E_ψ (Figure 4).
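A small tf.keras sketch of this training setup is given below, freezing the backbone and updating only the PRN with SGD. For brevity it wires up only the identity-preserving softmax term; the model objects, learning rate, and loss configuration are assumptions rather than the authors' script (the full joint loss of Eq. (7) would require a custom training loop).

```python
# Sketch of the optimization setup: frozen backbone, SGD updates on the PRN only.
import tensorflow as tf

def compile_prn_training(backbone: tf.keras.Model, prn: tf.keras.Model,
                         learning_rate=0.1):
    """backbone / prn are assumed tf.keras.Model objects (see Sections 2.1 and 2.3)."""
    backbone.trainable = False          # freeze the base CNN; only PRN weights are updated
    prn.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
                loss='sparse_categorical_crossentropy')   # identity-preserving term only
    return prn
```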

3.1.3 Detailed settings in the model

We implemented the base CNN and the PRN models using the Keras framework [7] with the TensorFlow [1] backend. For a fair comparison of the effects of each network module, we train three kinds of models (model A, model B, and model C) under the supervision of the cross-entropy loss with softmax:

  • model A is the baseline model which is the base CNN (Table 1).

  • model B combines two networks, the base CNN model (model A) and the PRN (Eq. (4)). It concatenates the output feature of the GAP layer in model A, used as the global appearance representation f_g, with the output of the MLP F_φ in the PRN, used as the local appearance representation, without the face identity state feature. These two output features are concatenated into a single feature vector, which is then fed into an FC layer.

  • model C combines the output of the base CNN model (model A) with the output of the PRN^+ (Eq. (5)), which uses the face identity state feature s_id. The output of model A in model C is the same as in model B. The output of the PRN^+ has the same size as that of the PRN, but its output values differ.

All convolution layers and FC layers use BN and ReLU as non-linear activation functions, except for the LSTM layers in E_ψ.
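For illustration, the feature fusion used by model B and model C can be sketched as a simple concatenation followed by an FC layer; the feature sizes below are hypothetical placeholders, since the original dimensions are not given here.

```python
# Sketch of feature fusion in model B / model C: concatenate the GAP output
# (global appearance f_g) with the PRN output (local appearance / relations),
# then apply an FC layer used for the identity softmax.
import numpy as np

GLOBAL_DIM, PRN_DIM, FUSED_DIM = 2048, 256, 1024   # hypothetical sizes

def fuse(global_feat, prn_feat, W, b):
    """global_feat: (GLOBAL_DIM,), prn_feat: (PRN_DIM,) -> fused (FUSED_DIM,)."""
    joint = np.concatenate([global_feat, prn_feat])   # concatenation
    return np.maximum(0.0, joint @ W + b)             # FC + ReLU

rng = np.random.default_rng(0)
W = rng.standard_normal((GLOBAL_DIM + PRN_DIM, FUSED_DIM)) * 0.01
b = np.zeros(FUSED_DIM)
fused = fuse(rng.standard_normal(GLOBAL_DIM), rng.standard_normal(PRN_DIM), W, b)
print(fused.shape)  # (1024,)
```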

3.2 Effects of the PRNs

To investigate the effectiveness of the PRN and the face identity state feature s_id, we measured the classification accuracy on the validation set during training. For these experiments, we trained two different network models: one using only the PRN model (Eq. (4)), and the other using the PRN^+ with s_id (Eq. (5)). The PRN^+ with s_id achieved higher classification accuracy than the PRN alone. From these evaluations, we observe that the face identity state feature represents the identity property and that the pairwise relations should depend on the identity property of a face image. These evaluations therefore validate the effectiveness of the PRN and the importance of the face identity state feature. We visualize the localized facial parts in Figure 5, where Col. 1, Col. 2, and Col. 3 of each identity show the aligned facial image, the detected facial landmark points, and the facial parts localized by ROI projection on the feature maps, respectively. We can see that the localized appearance representations are discriminative among different identities.

Figure 5: Visualization of the localized facial parts

3.3 Experiments on the Labeled Faces in the Wild (LFW)

We evaluated the proposed method on the LFW dataset, which reflects the state-of-the-art of face verification in unconstrained environments. The LFW dataset is an excellent benchmark for face verification on still images and contains 13,233 web-crawled images of 5,749 different identities, with large variations in illumination, occlusion, facial pose, and facial expression. Our models (model A, model B, and model C) are trained on the VGGFace2 outside training set, with no subjects overlapping with those in the LFW. Following the test protocol of unrestricted with labeled outside data [21], we test on 6,000 face pairs by using a squared distance threshold to determine classification of same and different, and report the results in comparison with the state-of-the-art methods (Table 2).

Method   Images   Networks   Dimension   Accuracy (%)
DeepFace [34] M
DeepID [30]
DeepID2+ [32]
DeepID3 [41]
FaceNet [28] M
Learning from Scratch [40]
CenterFace [36] M
PIMNet [17]
PIMNet [17]
SphereFace [23]
ArcFace [10] M
model A (baseline, only f_g)
PRN (Eq. (4))
PRN^+ (Eq. (5), with s_id)
model B (f_g + PRN)
model C (f_g + PRN^+)
Table 2: Comparison of the number of images, the number of networks, the dimensionality of feature, and the accuracy of the proposed method with the state-of-the-art methods on the LFW

From the experimental results (Table 2), we make the following observations. First, the PRN^+ with the face identity state feature by itself provides slightly better accuracy than the baseline model A (the base CNN model, which uses only f_g) and outperforms model B, which jointly combines f_g with the PRN. Second, model C (f_g jointly combined with the PRN^+) beats the baseline model A by a significant margin. This shows that the combination of f_g and the pairwise relational features can notably increase the discriminative power of the deeply learned features, and demonstrates the effectiveness of the pairwise relations between local facial appearance parts (local appearance patches). Third, compared to model B, model C achieves better verification accuracy. This shows the importance of the face identity state feature for capturing unique and discriminative pairwise relations in the designed PRN model. Last, compared to the state-of-the-art methods on the LFW, the proposed model C is among the top-ranked approaches, outperforming most of the existing results (Table 2). This shows the importance and advantage of the proposed method.

3.4 Experiments on the YouTube Face Dataset (YTF)

We evaluated the proposed method on the YTF dataset, which reflects the state-of-the-art of face verification in unconstrained environments. The YTF dataset is an excellent benchmark for face verification in video and contains 3,425 videos of 1,595 different identities, with an average of 2.15 videos per person and large variations in illumination, facial pose, and facial expression. The length of the video clips varies from 48 to 6,070 frames, with an average of 181.3 frames. We follow the test protocol of unrestricted with labeled outside data. We test on 5,000 video pairs and report the test results in comparison with the state-of-the-art methods (Table 3).

Method   Images   Networks   Dimension   Accuracy (%)
DeepFace [34] M
DeepID2+ [32]
FaceNet [28] M
Learning from Scratch [40]
CenterFace [36] M
SphereFace [23]
NAN [39] M
model A (baseline, only f_g)
PRN (Eq. (4))
PRN^+ (Eq. (5), with s_id)
model B (f_g + PRN)
model C (f_g + PRN^+)
Table 3: Comparison of the number of images, the number of networks, the dimensionality of feature, and the accuracy of the proposed method with the state-of-the-art methods on the YTF

From the experimental results (Table 3), we make the following observations. First, the PRN^+ with the face identity state feature by itself provides slightly better accuracy than the baseline model A (the base CNN model, which uses only f_g) and outperforms model B, which jointly combines f_g with the PRN. Second, model C (f_g jointly combined with the PRN^+) beats the baseline model A by a significant margin. This shows that the combination of f_g and the pairwise relational features can notably increase the discriminative power of the deeply learned features, and demonstrates the effectiveness of the pairwise relations between local facial appearance patches. Third, compared to model B, model C achieves better verification accuracy. This shows the importance of the face identity state feature for capturing unique pairwise relations in the designed PRN model. Last, compared to the state-of-the-art methods on the YTF, the proposed model C achieves the state-of-the-art, outperforming the existing results (Table 3). This shows the importance and advantage of the proposed method.

3.5 Experiments on the IARPA Janus Benchmark A (IJB-A)

We evaluated the proposed method on the IJB-A dataset [18], which contains face images and videos captured in unconstrained environments. It features full pose variation and wide variations in imaging conditions and is therefore very challenging. It contains 500 subjects with 5,397 images and 2,042 videos in total, and 11.4 images and 4.2 videos per subject on average. We detect the faces using the face detector [42] and the landmark points using the DAN landmark point detector [19], and then align the face images with the face alignment method explained in Section 2.2. In this dataset, each training and testing instance is called a 'template', which comprises a mixture of still images and video frames. The IJB-A dataset provides 10-split evaluations with two protocols (1:1 face verification and 1:N face identification). For face verification, we report the test results using the true accept rate (TAR) vs. the false accept rate (FAR) (Table 4). For face identification, we report the results using the true positive identification rate (TPIR) vs. the false positive identification rate (FPIR) and Rank-N (Table 4). All measurements are based on a squared distance threshold.

Method 1:1 Verification TAR 1:N Identification TPIR
FAR=0.001 FAR=0.01 FAR=0.1 FPIR=0.01 FPIR=0.1 Rank-1 Rank-5 Rank-10
B-CNN [8] - - - -
LSFS [35] -
DCNN+metric [6] - - -
Triplet Similarity [27]
Pose-Aware Models [24] - - -
Deep Multi-Pose [2] -
DCNN [5] -
Triplet Embedding [27] -
VGG-Face [26] - - -
Template Adaptation [9]
NAN [39]
VGGFace2 [3]
model A (baseline, only f_g)
model B (f_g + PRN)
model C (f_g + PRN^+)
Table 4: Comparison of performances of the proposed PRN method with the state-of-the-art on the IJB-A dataset. For verification, TAR vs. FAR are reported. For identification, TPIR vs. FPIR and the Rank-N accuracies are presented

From the experimental results (Table 4), we make the following observations. First, compared to model A (the base CNN model), model C (f_g jointly combined with the PRN^+) achieves consistently superior accuracy (TAR and TPIR) on both 1:1 face verification and 1:N face identification. Second, compared to model B (f_g jointly combined with the PRN), model C also achieves consistently better accuracy (TAR and TPIR) on both 1:1 face verification and 1:N face identification. Last, and more importantly, model C is trained from scratch and achieves results comparable to the state-of-the-art (VGGFace2 [3]), which is first pre-trained on the MS-Celeb-1M dataset [11], containing roughly 10M face images, and then fine-tuned on the VGGFace2 dataset. This suggests that our proposed method could be further improved by training on both MS-Celeb-1M and our training dataset.

3.6 Experiments on the IARPA Janus Benchmark B (IJB-B)

We evaluated the proposed method on the IJB-B dataset [37], which contains face images and videos captured in unconstrained environments. The IJB-B dataset is an extension of the IJB-A, having 1,845 subjects with 21.8K still images (including face and non-face images) and 55K frames from 7,011 videos. Because images in this dataset are labeled with ground-truth bounding boxes, we only detect landmark points using DAN [19] and then align the face images with our face alignment method. Unlike the IJB-A, it does not contain any training splits. In particular, we use the 1:1 Baseline Verification protocol and the 1:N Mixed Media Identification protocol for the IJB-B. For face verification, we report the test results using TAR vs. FAR (Table 5). For face identification, we report the results using TPIR vs. FPIR and Rank-N (Table 5). We compare our proposed methods with VGGFace2 [3] and FacePoseNet (FPN) [4]. All measurements are based on a squared distance threshold.

Method 1:1 Verification TAR 1:N Identification TPIR
FAR=0.00001 FAR=0.0001 FAR=0.001 FAR=0.01 FPIR=0.01 FPIR=0.1 Rank-1 Rank-5 Rank-10
VGGFace2 [3]
VGGFace2_ft [3]
FPN [4] - - -
model A (baseline, only f_g)
model B (f_g + PRN)
model C (f_g + PRN^+)
Table 5: Comparison of performances of the proposed PRN method with the state-of-the-art on the IJB-B dataset. For verification, TAR vs. FAR are reported. For identification, TPIR vs. FPIR and the Rank-N accuracies are presented

From the experimental results, we make the following observations. First, compared to model A (the base CNN model, which uses only f_g), model C (f_g jointly combined with the PRN^+ as the local appearance representation) achieves consistently superior accuracy (TAR and TPIR) on both 1:1 face verification and 1:N face identification. Second, compared to model B (f_g jointly combined with the PRN), model C also achieves consistently better accuracy (TAR and TPIR) on both 1:1 face verification and 1:N face identification. Last, and more importantly, model C achieves consistent improvements in TAR and TPIR on both 1:1 face verification and 1:N face identification, and achieves the state-of-the-art results on the IJB-B.

4 Conclusion

We proposed a novel face recognition method using the pairwise relational network (PRN), which takes local appearance patches around landmark points on the feature maps and captures unique pairwise relations between pairs of local appearance patches. To capture unique and discriminative relations for face recognition, pairwise relations should be identity dependent. Therefore, the PRN conditions its processing on the face identity state feature, embedded by the LSTM-based network using the sequential local appearance patches. To further improve the accuracy of face recognition, we combined the global appearance representation with the PRN. Experiments verified the effectiveness and importance of the proposed PRN and the face identity state feature, which achieved 99.76% accuracy on the LFW, state-of-the-art accuracy on the YTF, results comparable to the state-of-the-art for both face verification and identification tasks on the IJB-A, and state-of-the-art results on the IJB-B.

Acknowledgment

This research was supported by the MSIT, Korea, under the SW Starlab support program (IITP-2017-0-00897), and “ICT Consilience Creative program” (IITP-2018-2011-1-00783) supervised by the IITP.

References