Hierarchical Representation Learning for Kinship Verification

05/27/2018 ∙ by Naman Kohli, et al. ∙ West Virginia University ∙ IIIT Delhi

Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this research, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that provide kinship cues. Utilizing the information obtained from the human study, a hierarchical Kinship Verification via Representation Learning (KVRL) framework is proposed to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model, and a multi-layer neural network is utilized to verify the kin accurately. A new WVU Kinship Database is created which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU Kinship database and on four existing benchmark datasets. Further, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.







I Introduction

Kinship refers to the sharing of selected characteristics among organisms through nature. Kinship verification is the task of judging whether two individuals are kin or not, and it has been widely studied in the fields of psychology and neuroscience. Hogben [1] referred to the similarities in the facial structure of humans as familial traits. Face resemblance is thought to be one of the most common physical cues for kinship [2]. The hypothesis that similarity among faces could be a cue for kinship was first formulated by Daly and Wilson [3]. Since then, facial similarity/resemblance has been used to judge kinship recognition in a number of research experiments [4, 5, 6, 7, 8, 9, 10, 11]. Maloney and Martello [12] have examined the relation between similarity and kinship detection among siblings and concluded that observers do look for similarity in judging kinship among children. Martello and Maloney [13] have further shown that in kinship recognition, the upper portion of a face has more discriminating power than the lower half. In a different study on the effect of lateralization on allocentric kin recognition, they have suggested that the right half of the face is as informative as the left half for the purpose of kinship recognition [14].

Year Authors Algorithm Database Accuracy (%) Outside Training
2010 Fang et al. [15] Pictorial structure model Cornell KinFace 70.67 No
2011 Siyu et al. [16] Transfer learning UB Kin Database 60.00
2011 Shao et al. [17] Transfer subspace learning UB Kin Database 69.67
2011 Zhou et al. [18] Spatial pyramid learning based kinship Private Database 67.75
2012 Xia et al. [19] Attributes LIFT learning UB Kin Database 82.50
2012 Kohli et al. [20] Self-similarity representation of Weber faces UB Kin Database 69.67; IIITD Kinship Database 75.20
2012 Guo et al. [21] Product of likelihood ratio on salient features Private Database 75.00
2012 Zhou et al. [22] Gabor-based gradient oriented pyramid Private Database 69.75
2013 Dibeklioglu et al. [23] Spatio-temporal features UvA-NEMO Smile 67.11
2014 Lu et al. [24] Multiview neighborhood repulsed metric learning KinFace-I 69.90; KinFace-II 76.50
2014 Yan et al. [25] Discriminative multimetric learning Cornell KinFace 73.50*; UB Kin Database 74.50; KinFace-I 72.00*; KinFace-II 78.00*
2014 Dehghan et al. [26] Discrimination via gated autoencoders KinFace-I 74.50; KinFace-II 82.20
2014 Yan et al. [27] Prototype discriminative feature learning Cornell KinFace 71.90; UB Kin Database 67.30; KinFace-I 70.10; KinFace-II 77.00
2015 Liu et al. [28] Inheritable Fisher Vector Feature based kinship KinFace-I 73.45; KinFace-II 81.60
2015 Alirezazadeh et al. [29] Genetic algorithm for feature selection for kinship KinFace-I 81.30; KinFace-II 86.15
2015 Zhou et al. [30] Ensemble similarity learning KinFace-I 78.60; KinFace-II 75.70
2016 Proposed Kinship verification via representation learning (KVRL-fcDBN) Cornell KinFace 89.50; UB Kin Database 91.80; KinFace-I 96.10; KinFace-II 96.20; WVU Kinship Database 90.80 Yes
TABLE I: Review of kinship verification algorithms. The "Outside Training" column indicates whether an external face database was required for training the algorithm. The symbol * denotes values read from the ROC curve.
Fig. 1: Examples of kin-relations considered in this research.

Some examples of kin-relations are shown in Fig. 1. Kinship verification has several applications such as:

  1. organizing image collections and resolving identities in photo albums,

  2. searching for relatives in public databases,

  3. boosting automatic face verification capabilities,

  4. automatically tagging large number of images available online, and

  5. finding out kin of a victim or suspect by law enforcement agencies.

Kinship verification has also gained interest in the computer vision and machine learning communities. The first dataset containing kin pairs was collected by Fang et al. [15]. For performing kinship verification, the authors proposed an algorithm for facial feature extraction and a forward selection methodology. Since then, the algorithms for verifying kin have increased in complexity, and Table I provides a review of algorithms recently published in this area along with the databases used. The problem of kinship verification is particularly challenging because of the large intra-class variations among different kin pairs. At the same time, look-alikes decrease the inter-class variation among the facial images of kin. While existing algorithms have achieved reasonable accuracies, there is scope for further improving the performance. For instance, deep learning algorithms can be utilized; however, they typically require a large training database, which existing kinship databases lack. Moreover, kinship cues can be viewed as soft information that can be utilized to boost the performance of face verification algorithms.

I-a Research Contributions

Inspired by face recognition literature, where researchers have tried to understand how humans perform face recognition, we have performed a similar study to understand the ability of humans in identifying kin. Using the cues from the human study, this research presents a deep learning based kinship verification framework that relies on learning face representations. A new approach using the proposed filtered contractive deep belief networks (fcDBN) is presented, where the formulation of RBMs is extended through a filtering approach and a contractive penalty. The idea of this approach stems from the fact that facial images have an inherent structure which can be emphasized using filters. By simultaneously learning filters and weights, an invariant representation of the faces is learned which is utilized in kinship verification. Using the contractive penalty, we learn robust features that are invariant to local variations in the images. The proposed approach shows state-of-the-art results on multiple datasets used in kinship verification research.
Humans utilize contextual information in identifying faces such as establishing the identity of a person through kinship cues. Inspired by this phenomenon, our research models kinship as soft information which can help in improving the performance of a strong biometric matcher. Therefore, we also present an approach that incorporates kinship as a soft biometric information for boosting the results of face verification. A new database consisting of multiple images of kin has also been created to help in evaluating the performance of the proposed kinship verification algorithm.

Ii Evaluating Human Performance for Kinship Verification

In the face recognition literature, several studies have been performed to understand the recognition capabilities of the human mind. Inspired by these studies, in this research, a human study is conducted to understand the ability of humans in identifying kin. The goals of this study are to (a) understand the underlying cues that humans use to identify kin, and (b) integrate these findings into the automatic deep learning algorithm to achieve better kinship verification accuracy. Lu et al. [24] have performed a similar human study on kinship verification. They focused specifically on the overall kinship verification accuracy and concluded that using contextual information such as hair and background improves kinship verification.

Ii-a Experimental Protocol and Databases Used

Amazon MTurk is an online platform designed for aiding research by organizing surveys and collecting results in a comprehensive manner. MTurk allows crowdsourcing and enables researchers to include participants across diverse demographics. It has been shown to provide data as reliable as that obtained through traditional means of survey collection, and it offers a rich pool of participants [31]. It allows the creation of Human Intelligence Tasks (HITs) for surveys, studies, and experiments, which are in turn completed by participants. The participants receive a reward for completing a HIT if their results are approved by the requester. In this study conducted on Amazon MTurk, a total of 479 volunteers (200 male and 279 female) participated. Among all the participants, 366 were Indians (Mean Age (M) = 33.45 years, Standard Deviation in Age (SD) = 11.67 years), 81 were Caucasians (M = 35.39 years, SD = 10.74 years), 29 were Asians (non-Indians) (M = 28.13 years, SD = 6.93 years), and 3 were African-Americans (M = 30.33 years, SD = 8.17 years).

The images used in this study are collected from three databases: Vadana [32, 33], Kinship Verification database [15], and UB Kin database [19, 16, 17]. The database consists of 150 kin pairs and 150 non-kin pairs with 39 Sister-Sister (SS) combinations, 36 Brother-Sister (BS) combinations, 35 Brother-Brother (BB) combinations, 50 Father-Son (FS) combinations, 40 Father-Daughter (FD) combinations, 41 Mother-Daughter (MD) combinations, and 59 Mother-Son (MS) combinations. Each participant is shown five pairs of images that are assigned in a random order. The participant has to answer if the subjects in the given pair of images appear to be kin to each other or not. Additionally, the participants are also asked if they have seen the subjects prior to the study. This allows us to evaluate the differences in the responses based on the familiarity with the stimuli.

Generally, studies evaluating human performance have used full faces. However, it is not necessary that the whole face contributes to determining kinship. Therefore, we also perform the experiments with specific facial regions. The performance of the participants is determined for the following visual stimuli:

  1. full face,

  2. T region (containing nose and eyes),

  3. not-T region (containing the face with the eye and nose regions obfuscated),

  4. lower part of facial image, and

  5. binocular region (eye strip).

Fig. 2 illustrates different facial regions extracted from faces of subjects. The binocular region is chosen to observe the effect of eyes on kinship verification. The T region represents features in the region of the face around the eyes and nose. Furthermore, to observe the effect of outer facial regions, not-T region is chosen (which does not have regions that are included in the T region). The lower facial region is included to evaluate a hypothesis stated in an earlier research study [13] which claims that kinship cues are not present in this region.
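The five stimulus regions can be sketched as crops and masks over an aligned face image. This is an illustrative sketch only: the fractional crop coordinates below are assumptions for demonstration, not the crop boxes used in the study.

```python
import numpy as np

def extract_regions(face):
    """Crop the five stimulus regions from an aligned grayscale face image.

    The fractional coordinates are illustrative guesses, not the study's
    exact crop boxes.
    """
    h, w = face.shape[:2]
    regions = {"full": face.copy()}
    # Binocular region: a horizontal eye strip in the upper part of the face.
    regions["binocular"] = face[int(0.20 * h):int(0.45 * h), :].copy()
    # T region: the eye strip plus a vertical nose band.
    t_mask = np.zeros((h, w), dtype=bool)
    t_mask[int(0.20 * h):int(0.45 * h), :] = True                  # eyes
    t_mask[int(0.45 * h):int(0.75 * h),
           int(0.35 * w):int(0.65 * w)] = True                     # nose
    regions["T"] = np.where(t_mask, face, 0)
    # Not-T region: the face with the eye and nose regions obfuscated.
    regions["not_T"] = np.where(t_mask, 0, face)
    # Lower part of the face.
    regions["lower"] = face[int(0.60 * h):, :].copy()
    return regions
```

By construction, the T and not-T regions partition the face pixels, mirroring how the not-T stimulus obfuscates exactly what the T stimulus shows.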

Fig. 2: Sample images demonstrating seven kin-relations considered in this research.
S. No. Experiments d′ Kin Entropy Non-Kin Entropy Total Entropy Overall Accuracy (in %)
Participant’s Demographic - Gender
1. Female 0.3703 0.0063 0.0069 0.0132 56.00
2. Male 0.2982 0.0045 0.0048 0.0093 55.00
Participant’s Demographic - Age
1. <30 0.3498 0.0056 0.0061 0.0117 55.60
2. 30 - 50 0.3119 0.0050 0.0053 0.0102 55.51
3. >50 0.3986 0.0077 0.0082 0.0159 56.95
Stimulus Kin Relationship
1. Mother-Son 0.8211 0.0162 0.0383 0.0545 55.39
2. Sister-Sister 0.5505 0.0181 0.0059 0.0240 66.23
3. Father-Daughter 0.3762 0.0065 0.0072 0.0137 56.01
4. Mother-Daughter 0.3088 0.0046 0.0051 0.0097 54.74
5. Brother-Sister 0.2482 0.0024 0.0035 0.0059 50.11
6. Father-Son 0.2092 0.0021 0.0023 0.0044 53.48
7. Brother-Brother 0.0560 0.0002 0.0001 0.0003 54.10
Local and Global Regions of Face
1. Face 0.4531 0.0107 0.0115 0.0221 58.36
2. Not-T 0.4212 0.0084 0.0093 0.0177 57.02
3. T 0.3466 0.0059 0.0064 0.0123 55.92
4. Chin 0.2772 0.0037 0.0040 0.0077 54.57
5. Binocular 0.1656 0.0013 0.0013 0.0026 52.58
TABLE II: Quantitative analysis of human performance on kinship verification.

Ii-B Results and Analysis

In the human study, we analyze (a) the effect of gender and age demographics of participants on kinship verification, (b) the types of kinship relation between stimuli, and (c) the discriminative local and global face features that humans rely on to correctly verify kinship.

Based on the responses from participants, a quantitative analysis of the data is performed using three independent measures: accuracy of correct kinship verification, the discriminability or sensitivity index (d′), and information theory to compute the kin entropy and non-kin entropy. The discriminability or sensitivity index (d′) is used in signal detection theory to quantify the difference between the mean signal and noise distributions in a given stimulus as perceived by participants; treating kin pairs as the signal, it is computed as d′ = Z(hit rate) − Z(false-alarm rate), where Z is the inverse of the standard normal cumulative distribution function.

There is an inherent uncertainty in determining the relationship between stimuli. This uncertainty can be attributed to noise and higher response categories. The stimulus information entropy H(S) and the noise in the signal H(S|R) are computed from the confusion matrix using Eq. 1 and 2 respectively:

H(S) = -\sum_{s} P(s) \log P(s) \quad (1)

H(S|R) = -\sum_{s,r} P(s,r) \log P(s|r) \quad (2)

Here, R refers to the response of participants and S refers to the stimulus. The information entropy is calculated by subtracting the noise in the signal from the stimulus entropy, as shown in Eq. 3:

T(S;R) = H(S) - H(S|R) \quad (3)

The information entropy is divided by log 2 to represent it in bits, and larger values in bits indicate higher perceptual judgment by the participants. Higher values of accuracy, d′, or total entropy indicate that the signals can be more readily detected compared to other visual artifacts that do not contribute to kinship verification.
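The measures described above, the sensitivity index d′ and the transmitted information (stimulus entropy minus noise), can be sketched from a confusion matrix using only the standard library. This is a minimal illustration of the standard definitions, not the authors' analysis code.

```python
import math
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

def transmitted_information(confusion):
    """Stimulus entropy H(S) minus noise H(S|R), in bits.

    `confusion[s][r]` counts how often stimulus s drew response r.
    """
    total = sum(sum(row) for row in confusion)
    p_sr = [[c / total for c in row] for row in confusion]
    p_s = [sum(row) for row in p_sr]
    p_r = [sum(col) for col in zip(*p_sr)]
    h_s = -sum(p * math.log2(p) for p in p_s if p > 0)
    # Noise H(S|R): remaining uncertainty about the stimulus given a response.
    h_s_given_r = -sum(p * math.log2(p / p_r[j])
                       for row in p_sr
                       for j, p in enumerate(row) if p > 0)
    return h_s - h_s_given_r
```

For a perfectly discriminating participant on balanced kin/non-kin stimuli the transmitted information is 1 bit; for chance-level responding it is 0.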

The results are analyzed to understand the effect of four different attributes on kinship verification: gender and age of participants, relation between stimuli kin pairs, and facial regions presented in the stimuli. The results are summarized in Table II.

Ii-B1 Effect of Participant’s Gender on Kinship Verification

In face recognition, several studies have demonstrated that women outperform men in the ability to recognize faces [34, 35]. In a meta-analysis study of over 140 face recognition studies, Herlitz and Loven [36] have found that females consistently outperform males in recognizing faces. This fact is also supported in [37], where females performed better than males in the face recognition task. The effect of participant’s gender is analyzed to determine if there exists any difference in the skills of males and females for kinship verification. As shown in Table II, it is observed that there is only 1% increase in the overall accuracy of females as compared to males. Overall accuracy is defined as the proportion of correct kin and correct non-kin responses as compared to the total responses.

However, from Table II, the higher d′ value for females as compared to males indicates a higher sensitivity of females in detecting the kin signal across images. This observation is also supported by the information entropy computed from the responses of females and males. A z-test of proportion [38] conducted at the 95% confidence level also validates this claim. These quantitative measures give us an intuition that females may be able to verify kin better than males. One reason could be that the measure employed for testing kinship is facial similarity, analogous to face recognition; however, this needs to be tested in future studies.

The accuracy for kinship verification increases drastically when the faces are known to the subjects. For familiar faces, female participants achieve an accuracy of 64.54% while the male participants achieve an accuracy of 61.95%. Also, the accuracy of non-kin verification of familiar faces is 72.47% for females whereas it is only 52.34% for males. This is in accordance with the belief that women perform better in episodic memory tasks [39]. For unfamiliar faces, the trend follows the overall accuracy with females outperforming males in kinship verification.

Ii-B2 Effect of Participant’s Age on Kinship Verification

The effect of the age of participants is studied to determine whether people of a particular age group are significantly better than others at verifying kin and non-kin. Due to the limited number of participants in the younger and older age groups, the age categories have been combined into three groups: <30 years, 30-50 years, and >50 years. As shown in Table II, an overall accuracy of 56.95% is observed for participants in the >50 years age group, while the second highest accuracy, 55.6%, is observed in the <30 years age group. For the >50 age group, a higher d′ value of 0.3986 and a higher total entropy of 0.0159, as shown in Table II, indicate that the older age group may better distinguish between kin and non-kin. However, a z-test of proportion at the 95% confidence level does not indicate a statistical difference among these groups, which suggests that participant's age may not have an effect on kinship verification.

Ii-B3 Effect of Stimuli Kin Pair Relation on Kinship Verification

In a number of experiments, females have outperformed males in identifying female stimuli faces [40, 41]. Therefore, it is interesting to examine if the relationship of the kin pair affects the decision-making process of the participants. As shown in Table II, the sister-sister kin pair has the highest overall accuracy of 66.23%. However, using the test of significance, it is observed that the mother-son pair has the highest d′ value of 0.8211 and the highest total entropy of 0.0545 bits, as shown in Table II.

We also analyze the verification results separately for familiar and unfamiliar faces for different kin relations. For familiar faces, we observe that the accuracy of the father-son pair increases from 53.49% to 65.98%, and the accuracy of the sister-sister kin pair goes up to 82.2% when people are familiar with the faces. This trend is seen in all the pairs and is reflective of the memory-cognitive ability of humans. As expected, the accuracies for unfamiliar faces are lower than for familiar faces and follow the overall trend, i.e., the sister-sister kin pair is the easiest to detect as kin, with an accuracy of 46.0%.

Using the d′ values, it is observed that pairs having a female stimulus are more accurately detected as kin. The order of the pairs in descending d′ value is Mother-Son > Sister-Sister > Father-Daughter > Mother-Daughter > Brother-Sister > Father-Son > Brother-Brother. The results are in accordance with the study conducted by Kaminsky et al. [9], who noted that the presence of a female stimulus boosts kinship accuracy. This can be attributed to the partial occlusion of facial features by beards and mustaches in men as compared to women. Another reason could be the higher face recognition capability of female participants, who focus more on female faces than male faces [36].

The results obtained for the effects of participants' gender and age, as well as the kin relationship between the stimuli, are used to validate our multi-measure quantitative analysis against conclusions arrived at by other researchers who may not have used the same measures as we have. With this validation, we analyze the results obtained for the effect of discriminative local and global face features on kinship verification. Our motivation is to identify the top three regions from the human study to be integrated into automatic kinship verification.

Ii-B4 Effect of Facial Regions on Kinship Verification

Many studies in psychology have analyzed the effect of global facial features vs. local features on the face recognition abilities of humans [42]. Keil [43] has emphasized the role of internal features in face recognition by concluding that eyes determine the optimal resolution for face recognition. These local features have been used as parts of descriptors in computational methods to verify kinship [21]. However, to the best of our knowledge, no human study has been conducted to analyze the effect of individual facial regions on kinship verification with statistical analysis to determine their individual effects. The two above-mentioned studies [13, 14] have focused on larger facial regions by dividing the face into two halves (laterally and horizontally). Intuitively, the subjects should perform better when the whole face is shown. However, the results in Table II show that even though the whole face yields an accuracy of 58.36%, it is not very different from the local regions. The local features such as the not-T region and the T region show accuracies of 57.02% and 55.92% respectively. The trend remains the same even when only unfamiliar image responses are taken into account. The accuracy of the T region increases to 63.45% when the image subjects are known to the participants, indicating that the eye features along with the nose play an important role in kinship verification.

These results are supported by the d′ values and the total information entropy computed from the stimulus and response of participants. The complete face region has the highest d′ value of 0.4531 and total entropy value of 0.0221, as shown in Table II, followed by the not-T region and the T region. A z-test of proportion at the 95% confidence level also validates this pattern. The results are consistent with face recognition studies where it has been observed that the face outline, eyes, and upper face are important areas for perceiving faces [42].

Iii Proposed Kinship Verification Learning

The analysis of human performance suggests that out of the five facial regions, the full face, T region, and not-T region yield the best performance for kinship verification. Inspired by this observation, we design a kinship verification framework that classifies a pair of input images as kin or non-kin using these three regions. As discussed earlier, it is challenging to define the similarities and differences in kin and non-kin image pairs. Therefore, in this research, we propose the Kinship Verification via Representation Learning (KVRL) framework to learn the representations of faces for kinship verification using the deep learning paradigm. Fig. 3 shows the steps involved in the proposed framework.

In the first stage of this framework, the representations of each facial region are learned from external training data in an unsupervised manner. These are learned through the proposed filtered contractive DBN (fcDBN) approach. The individually learned representations are combined to form a compact representation of the face in the second stage. Finally, a multi-layer neural network is trained using these learned feature representations for supervised classification of kin and non-kin. Section III-A gives an overview of deep belief networks followed by the proposed filtered contractive RBMs, and Section III-B describes the kinship feature learning and classification framework.

Fig. 3: Proposed hierarchical kinship verification via representation learning (KVRL-fcDBN) framework. In the first stage of Fig. 3(a), representations of individual regions are learned. A combined representation is learned in the second stage of Fig. 3(a). Fig. 3(b) shows the steps involved in kin vs non-kin classification.

Iii-a Proposed Filtered Contractive DBN

A Deep Belief Network (DBN) is a graphical model that consists of stacked Restricted Boltzmann Machines (RBMs) and is trained greedily layer by layer [44]. An RBM represents a bipartite graph where one set of nodes is the visible layer and the other set of nodes is the hidden layer. The energy function of an RBM is defined as:

E(\mathbf{v}, \mathbf{h}; \theta) = -\sum_{i,j} W_{ij} v_i h_j - \sum_{i} b_i v_i - \sum_{j} a_j h_j \quad (5)

where \mathbf{v} denotes the visible variables and \mathbf{h} denotes the hidden variables. The model parameters are denoted by \theta = \{W, \mathbf{a}, \mathbf{b}\}; W_{ij} denotes the weight of the connection between the i-th visible unit and the j-th hidden unit, and b_i and a_j denote the bias terms of the model. For handling real-valued visible variables such as image pixel intensities, Gaussian-Bernoulli RBMs are one of the popular formulations, and the energy function is defined as:

E(\mathbf{v}, \mathbf{h}; \theta) = \sum_{i} \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_{i,j} W_{ij} \frac{v_i}{\sigma_i} h_j - \sum_{j} a_j h_j \quad (6)

Here, \mathbf{v} denotes the real-valued visible vector and \theta = \{W, \mathbf{a}, \mathbf{b}, \boldsymbol{\sigma}\} are the model parameters. The joint distribution over \mathbf{v} and \mathbf{h}, and the marginal distribution over \mathbf{v}, are defined as:

P(\mathbf{v}, \mathbf{h}; \theta) = \frac{\exp(-E(\mathbf{v}, \mathbf{h}; \theta))}{Z} \quad (7)

P(\mathbf{v}; \theta) = \frac{\sum_{\mathbf{h}} \exp(-E(\mathbf{v}, \mathbf{h}; \theta))}{Z} \quad (8)

where Z = \sum_{\mathbf{v}, \mathbf{h}} \exp(-E(\mathbf{v}, \mathbf{h}; \theta)) is the partition function.
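The Gaussian-Bernoulli energy and the unnormalized joint described above can be written directly from the formulas. This is an illustrative sketch with assumed variable names (v, h, W, b, a, sigma), not the paper's implementation.

```python
import numpy as np

def gb_rbm_energy(v, h, W, b, a, sigma):
    """Energy of a Gaussian-Bernoulli RBM.

    v: real-valued visible vector; h: binary hidden vector;
    W: weight matrix (n_visible x n_hidden); b, a: visible and hidden
    biases; sigma: per-visible-unit standard deviations.
    """
    quadratic = np.sum((v - b) ** 2 / (2.0 * sigma ** 2))
    interaction = -np.dot(v / sigma, W @ h)
    hidden = -np.dot(a, h)
    return quadratic + interaction + hidden

def unnormalized_joint(v, h, W, b, a, sigma):
    """exp(-E(v, h)); dividing by the partition function Z gives P(v, h)."""
    return np.exp(-gb_rbm_energy(v, h, W, b, a, sigma))
```

With zero weights and hidden biases, the energy is minimized (zero) when the visible vector equals its bias, reflecting the quadratic term of the energy.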


Let L_{RBM} be the loss function of the RBM with the energy function defined in Eq. 5. It can be defined as the negative log-likelihood of the training data:

L_{RBM} = -\sum_{t} \log P(\mathbf{v}^{(t)}; \theta) \quad (9)


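In practice this loss is minimized approximately with contrastive divergence. A minimal CD-1 update for a Bernoulli RBM might look as follows; this is a generic sketch of the standard algorithm, not the paper's training code.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, a, lr=0.01):
    """One contrastive-divergence (CD-1) update for a Bernoulli RBM,
    approximating the gradient of the negative log-likelihood.

    v0: a training visible vector; W: weights (n_visible x n_hidden);
    b, a: visible and hidden biases; lr: learning rate.
    """
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + a)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one step of Gibbs sampling back to the visibles.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + a)
    # Update from the difference of data and model correlations.
    W = W + lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b = b + lr * (v0 - pv1)
    a = a + lr * (ph0 - ph1)
    return W, b, a
```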
In this paper, we extend this formulation and propose filtered contractive DBN (fcDBN) which utilizes filtered contractive RBMs (fcRBM) as its building block. fcRBM has two components: a contractive regularization term and a filtering component which is discussed in detail below.

The idea of introducing a contractive penalty stems from Rifai et al. [45], who introduced contractive autoencoders. A regularization term is added to the autoencoder loss function for learning robust features, as shown in Eq. 10:

L_{CAE} = \sum_{x} \left( L(x, g(f(x))) + \lambda \, \| J_f(x) \|_F^2 \right) \quad (10)

where W and b represent the weights and the bias of the autoencoder to be learned, f represents the activation (encoder) function, \lambda represents the regularization parameter, and J_f(x) = \partial f(x) / \partial x represents the Jacobian of the encoder function with respect to the input. For a linear activation function, the contractive penalty boils down to a simple weight decay term (Tikhonov-type regularization). For a sigmoid activation, the penalty is smooth and is given by:

\| J_f(x) \|_F^2 = \sum_{j} \left( h_j (1 - h_j) \right)^2 \sum_{i} W_{ij}^2 \quad (11)

Our work is motivated by the analytic insight and practical success of contractive autoencoders. We propose to apply the contractive penalty term to the RBM formulation. Thus, the modified loss function for contractive RBMs (c-RBM) can be expressed as:

L_{cRBM} = L_{RBM} + \lambda \, \| J(x) \|_F^2 \quad (12)

where \| J(x) \|_F^2 represents the squared Frobenius norm of the Jacobian matrix (i.e., the squared \ell_2-norm of the first-order derivatives of the representation), as shown in Eq. 11. Penalizing the Frobenius norm of the Jacobian matrix leads to penalization of the sensitivity, which encourages robustness of the representation. The contractive penalty encourages the mapping to the feature space to be contractive in the neighborhood of the training data. The flatness induced by having low-valued first derivatives leads to invariance of the representation to small variations in the input.
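For sigmoid units the penalty has the closed form above, which avoids computing the full Jacobian explicitly. A small sketch, assuming x of size n_visible and W of shape n_visible x n_hidden:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def contractive_penalty(x, W, a):
    """Squared Frobenius norm of the Jacobian dh/dx for sigmoid units.

    For h_j = sigmoid(sum_i W_ij x_i + a_j), the closed form is
    sum_j (h_j (1 - h_j))^2 * sum_i W_ij^2.
    """
    h = sigmoid(x @ W + a)
    return np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=0))
```

The test below checks the closed form against a finite-difference Jacobian, which is the easiest way to convince oneself the algebra is right.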

We further introduce a filtering approach in the RBM. Facial images have an inherent structure, and filters can be used to extract this structural information in order to train the network using only the relevant filtered information. Therefore, we propose extending Eq. 5 (and, in a similar manner, Eq. 6) with a filtering approach that can incorporate the structural and relational information in the image using filters:

E(\mathbf{v}, \mathbf{h}; \theta) = -\sum_{i,j} W_{ij} (\mathbf{F} * \mathbf{v})_i h_j - \sum_{i} b_i (\mathbf{F} * \mathbf{v})_i - \sum_{j} a_j h_j \quad (13)

where \mathbf{F} is the learned filter of size n \times n, "*" is the convolution operation, and \theta now includes \mathbf{F} along with the other weight parameters. Here, the filter transforms the input image \mathbf{v}, emphasizing relevant structural information which is used to train the RBM. Utilizing the above energy function, the loss function of the filtered RBM, L_{fRBM}, is defined similarly to Eq. 9. Note that the proposed formulation is different from convolutional RBMs [46]. In convolutional RBMs, the weights are shared among all locations in the image and thus a pooling step is required to learn high-level representations. In the proposed formulation, we introduce separate filters that account for the structure of the image, and these filters and the weight matrix are learned simultaneously.
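The filtering step can be sketched as a 2-D convolution applied to the image before the usual energy computation. The loop-based "same"-size convolution and the Bernoulli-form energy below are illustrative, not the paper's implementation.

```python
import numpy as np

def filter_image(v, F):
    """'Same'-size 2-D convolution of image v with filter F (zero padding),
    emphasizing structural information before the RBM sees the input."""
    k = F.shape[0]
    pad = k // 2
    vp = np.pad(v, pad)
    Fr = F[::-1, ::-1]  # flip the kernel: convolution, not correlation
    out = np.zeros_like(v, dtype=float)
    for i in range(v.shape[0]):
        for j in range(v.shape[1]):
            out[i, j] = np.sum(vp[i:i + k, j:j + k] * Fr)
    return out

def filtered_rbm_energy(v, h, F, W, b, a):
    """Energy of a filtered RBM: the standard (Bernoulli-form) energy
    evaluated on the filtered image F * v instead of v."""
    fv = filter_image(v, F).ravel()
    return -fv @ W @ h - b @ fv - a @ h
```

With an identity kernel the filtered energy reduces to the ordinary RBM energy, which is a convenient sanity check.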

Combining the above two components, we define filtered contractive RBMs (fcRBM), and the loss function is modeled as:

L_{fcRBM} = L_{fRBM} + \lambda_1 \, \| J(x) \|_F^2 + \lambda_2 \, \| \mathbf{F} \|_2^2 \quad (14)

where \lambda_1 and \lambda_2 are the regularization parameters. The \ell_2-norm applied over the filters prevents large deviations in filter values that could otherwise have an unwarranted filtering effect on the images. Both components of the proposed formulation are smooth and hence differentiable, and the model can be trained iteratively using a contrastive divergence based approach. Multiple fcRBMs are then stacked together to form the fcDBN.

Iii-B KVRL-fcDBN for Kinship Verification

The KVRL framework proposed in this research comprises of two phases:

  • Unsupervised hierarchical two-stage face feature representation learning

  • Supervised training using extracted features and kin verification using the learned model

KVRL-fcDBN: The representation of a face image is learned by stacking fcRBMs and learning the weights in a greedy layer-by-layer fashion to form a filtered contractive deep belief network (fcDBN). As shown in Fig. 3, we extract three regions from the input face image to learn both global and local features. These regions are selected based on the results of the human study, which indicate that the complete face, T region, and not-T region are more significant than other face regions. In the first stage of the proposed KVRL-fcDBN framework, each region is first resized to a standard size and converted to a vector. Three separate fcDBNs are trained, one for each region, and the outputs from these fcDBNs are combined using another fcDBN, which acts as the second stage of the proposed hierarchical feature learning.
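The two-stage hierarchy can be sketched as a forward pass through per-region networks followed by a network over the concatenated region codes. Trained weights are assumed to be given; the layer sizes and region names are illustrative, not the paper's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dbn_features(x, layers):
    """Forward pass through a stack of trained layers.

    `layers` is a list of (W, a) pairs: weight matrix and hidden bias.
    """
    for W, a in layers:
        x = sigmoid(x @ W + a)
    return x

def two_stage_representation(face, t_region, not_t_region, stage1, stage2):
    """Stage 1: one trained network per facial region; stage 2: a network
    over the concatenated region codes, giving the fused representation.

    `stage1` maps region name -> list of (W, a) layers; `stage2` is
    another such list for the fusion network.
    """
    codes = [dbn_features(region, stage1[name])
             for name, region in [("face", face), ("T", t_region),
                                  ("not_T", not_t_region)]]
    return dbn_features(np.concatenate(codes), stage2)
```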

We next apply dropout based regularization throughout the architecture. Srivastava et al. [47] proposed dropout training as a successful way of preventing overfitting and an alternative method for regularization in the network. The motivation is to inhibit complex co-adaptation between the hidden nodes by randomly dropping out a few neurons during the training phase. It can be seen as a sampling process from a larger network to create random sub-networks, with the aim of achieving good generalization capability. Let f denote the activation function, a^{(l)} the activations of the l-th layer, W^{(l)} and b^{(l)} the weights and biases of the l-th layer, \odot the element-wise multiplication, and \mathbf{m} a binary mask with entries drawn from Bernoulli(1 - r) indicating which activations are not dropped out. Then the forward propagation to compute the activations of the (l+1)-th layer with dropout can be calculated as:

a^{(l+1)} = f\left( W^{(l)} (\mathbf{m} \odot a^{(l)}) + b^{(l)} \right) \quad (15)
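The dropout forward pass can be sketched as follows (row-vector convention; the mask is resampled on every training step). This is a generic illustration of dropout, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dropout_forward(a_l, W, b, r=0.5, train=True):
    """One layer with dropout: output = f(masked activations @ W + b).

    During training, a Bernoulli(1 - r) mask zeroes a random subset of the
    incoming activations; at test time the activations are scaled by (1 - r)
    so their expected value matches training.
    """
    if train:
        m = (rng.random(a_l.shape) < (1.0 - r)).astype(float)
        a_l = m * a_l
    else:
        a_l = (1.0 - r) * a_l
    return sigmoid(a_l @ W + b)
```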
By introducing dropout in the proposed approach, we obtain good generalization that emulates sparse representations to mitigate any possible overfitting. In summary, while the first stage of the KVRL-fcDBN framework learns the local and global facial features, the second stage assimilates the information (i.e. feature fusion) which is used for kinship verification.

Fig. 4: Illustrating the steps involved in the proposed context boosting algorithm where kinship verification scores generated from the KVRL framework are used to improve the face verification performance.
Fig. 5: Humans utilize kinship as a context to identify siblings of famous personalities.

The number of images in currently available kinship datasets is limited, and these datasets cannot be used directly to train deep learning algorithms. Therefore, a separate database is needed to train the model employed in the KVRL-fcDBN framework (details are given in Section V-A). The representations learned from the proposed KVRL-fcDBN framework are used for kinship verification. As shown in Fig. 3(b), for a pair of kin images, the features are concatenated to form the input vector for supervised classification. A three-layer feed-forward neural network is trained to classify the image pair as kin or non-kin.
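A sketch of this supervised stage, assuming the learned representations are already extracted; the network weights (`params`) and the ReLU/sigmoid choices are illustrative, not the paper's trained model.

```python
import numpy as np

def make_pair_input(rep_a, rep_b):
    """Concatenate the learned representations of a candidate kin pair
    to form the input vector for supervised classification."""
    return np.concatenate([rep_a, rep_b])

def mlp_forward(x, params):
    """Feed-forward network scoring the pair as kin vs. non-kin.

    params: list of (W, b) per layer; hidden layers use ReLU and the
    final layer a sigmoid, so the output is a kin probability.
    """
    h = x
    for W, b in params[:-1]:
        h = np.maximum(W @ h + b, 0.0)
    W, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(W @ h + b)))
```

Thresholding the sigmoid output (e.g. at 0.5) gives the final kin / non-kin decision.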

Iv Boosting Face Verification using Kinship

Soft biometric modalities lack individualization characteristics on their own but can be integrated within a verification system that uses a primary biometric trait, such as face, to boost accuracy [48]. Soft biometric traits can also be based on association, wherein the context of the association is used to increase recognition performance in challenging image scenarios [49]. In this research, we propose kinship as a context that can be used as a soft biometric modality to improve the accuracy of face verification. Kinship cues are used by humans in daily life for recognition: for instance, we may recognize a person through familiarity with their kin even though we have not met the person earlier. Such a scenario is depicted in Fig. 5. To leverage this context, we propose a formulation that uses the kinship verification scores generated by the proposed framework to boost the performance of any face verification algorithm.

Fig. 4 shows how the proposed KVRL-fcDBN framework is used to improve the performance of face verification algorithms using kin-verification scores. This formulation is generic in nature and independent of the kinship verification and face verification algorithms. As shown in Fig. 6, given a probe face image, face verification score and kinship classification score are computed from the gallery data (claimed identity and associated kin image), which are then used in the proposed formulation. We demonstrate two methods for boosting the performance using Product of Likelihood Ratio (PLR) [50] and Support Vector Machine (SVM) [51].

Fig. 6: A probe image can have a match score (s) with an image in the gallery and a kin score (k) with the associated kin in the gallery to boost the face verification performance.
  • PLR based Score Boosting Algorithm: Let $s$ be the face matching score obtained by matching a probe image and a gallery image, and let $k_1, \ldots, k_n$ represent the kin scores obtained between the probe image and the images of the gallery subject's kin. The product of likelihood ratio [52] can be calculated as:

    $$PLR(s, k_1, \ldots, k_n) = \frac{p(s \mid \omega_{gen})}{p(s \mid \omega_{imp})} \cdot \prod_{j=1}^{n} \frac{p(k_j \mid \omega_{kin})}{p(k_j \mid \omega_{nk})}$$

    Here, $\omega_{kin}$ represents the true kin class, $\omega_{nk}$ the non-kin class, $\omega_{gen}$ the genuine class, and $\omega_{imp}$ the impostor class; $p(x \mid \omega)$ represents the class-conditional probability of the input score. All four class-conditional densities are modeled using mixtures of Gaussian distributions.

  • SVM based Score Boosting Algorithm: Let $v = [s, k_1, \ldots, k_n]$ be the feature vector formed by concatenating the face matching and kin verification scores. A support vector machine can be trained on this combined score vector to boost the performance of face verification.

Since we are proposing a generic approach which is independent of the features used for face verification, we have used the commonly explored local binary patterns (LBP) [53] and histogram of oriented gradients (HOG) [54] for face verification.
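The PLR fusion above can be sketched minimally as follows, with single Gaussians standing in for the mixtures of Gaussians used in the paper; the class parameters in the example are hypothetical.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density (stand-in for a mixture component)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def plr_score(face_score, kin_scores, genuine, impostor, kin, non_kin):
    """Product-of-likelihood-ratio fusion of face and kin scores.

    Each argument after kin_scores is a (mean, std) pair for one
    class-conditional density; the paper models these with mixtures
    of Gaussians, while this sketch uses one Gaussian per class.
    """
    ratio = gaussian_pdf(face_score, *genuine) / gaussian_pdf(face_score, *impostor)
    for k in kin_scores:
        ratio *= gaussian_pdf(k, *kin) / gaussian_pdf(k, *non_kin)
    return ratio
```

A ratio above 1 favors the genuine/kin hypothesis; below 1, the impostor/non-kin hypothesis.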

V Experimental Evaluation

This section describes the datasets, implementation details, and experimental protocols used for evaluating the effectiveness of the proposed representation learning for kinship using the hierarchical multi-stage filtered contractive deep belief network (KVRL-fcDBN), along with the PLR and SVM based face verification score boosting algorithms.

Database No. of Subjects Total Images Kin Relations Multiple Images
Cornell Kin [15] 286 286 4 No
UB KinFace [19] 400 600 4 No
KinFaceW-I [24] 1066 1066 4 No
KinFaceW-II [24] 2000 2000 4 No
WVU Kinship 226 904 7 Yes
TABLE III: Characteristics of the five databases used in this research.

V-a Datasets

The efficacy of the proposed kinship verification algorithm is evaluated on the following four publicly available databases.

  • UB KinFace Dataset[19],

  • Cornell Kinship Dataset [15],

  • KinFace-I [24], and

  • KinFace-II [24].

Along with these four, we have also prepared a new kinship database, known as the WVU Kinship Database, containing multiple images of every person (the chrominance-based algorithm given by Bordallo et al. [55] performs poorly on the WVU Kinship database, which validates the correctness of the database). The WVU Kinship dataset consists of 113 pairs of individuals. The dataset has four images per person, which allows us to have intra-class variations for a specific kin pair along with the inter-class variations generally available in all other databases. It covers seven kin relations: Brother-Brother (BB), Brother-Sister (BS), Sister-Sister (SS), Mother-Daughter (MD), Mother-Son (MS), Father-Son (FS), and Father-Daughter (FD). The database has 22 pairs of BB, 9 pairs of BS, 13 pairs of SS, 14 pairs of FD, 34 pairs of FS, 13 pairs of MD, and 8 pairs of MS, where every pair has eight images. As shown in Fig. 7, the multiple images per kin pair also include variations in pose, illumination, and occlusion. Table III summarizes the characteristics of all five databases.

Kinship verification results are shown on all five databases. However, the results of face score boosting are shown only on the WVU Kinship database because the other four databases only contain a single image per person.

Fig. 7: Challenges of pose, illumination, and occlusion in multiple images of the same kin-pair.
(a) Cornell Kinship Database
(b) KinFace-I Database
(c) KinFace-II Database
(d) UB Kinship Database
(e) WVU Kinship Database
Fig. 8: Results of kinship verification using the proposed hierarchical KVRL framework.

V-B Implementation Details

Training the fcDBN algorithm to learn the face representation for kinship requires a large number of face images. For this purpose, about 600,000 face images are used. These images are obtained from various sources, including the CMU Multi-PIE [57] and YouTube Faces [56] databases. Note that existing algorithms do not use outside data; however, as mentioned previously, due to the nature of the deep learning paradigm, the proposed algorithm requires a large amount of data to learn face representations useful for kinship verification.

For face detection, all the images are aligned using an affine transformation and the Viola-Jones face detection algorithm [58]. Facial regions are extracted from each image, resized to a fixed size, converted to vectors, and given as input to the individual fcDBN deep learning algorithms in the first stage of Fig. 3(a). For every individual fcDBN, three filtered contractive RBMs are stacked together, and all of them are learned in a greedy layer-wise fashion where each layer receives the representation of the output from the previous layer. In the first stage, the numbers of nodes are 1024, 512, and 512, respectively. An output vector of size 512 is obtained from each deep belief network, and the outputs are concatenated to form a vector of size 1536. A compact representation is learned from the fcDBN in the second stage and is used for training the classifier. In the second stage of the deep belief network, the sizes of the three layers are 1536, 1024, and 512, respectively. Dropout is applied with a fixed probability on the hidden nodes and on the input vectors. The performance of the proposed KVRL-fcDBN algorithm is also evaluated when only the face is used or when all five facial regions (shown in Fig. 2) are used.
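The layer dimensions above can be checked with a small consistency sketch: three region encoders, each producing a 512-dimensional output, yield the 1536-dimensional input of the second-stage network.

```python
def stage2_input_size(num_regions, stage1_output_dim):
    """Size of the concatenated stage-1 feature vector fed to the
    second-stage fcDBN (feature fusion)."""
    return num_regions * stage1_output_dim

# Three regions (face, T, not-T), each mapped to a 512-d feature:
assert stage2_input_size(3, 512) == 1536  # matches the reported stage-2 input layer
```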

Algorithm Cornell UB KinFace-I KinFace-II WVU
KVRL-SDAE 82.0 85.9 92.3 92.7 78.7
KVRL-DBN 83.6 88.3 93.0 93.9 83.5
KVRL-fcDBN 89.5 91.8 96.1 96.2 90.8
TABLE IV: Kinship verification performance of the proposed KVRL framework on 5 different datasets
Algorithm FS FD MS MD
MNRML[24] 74.5 68.8 77.2 65.8
DMML[25] 76.0 70.5 77.5 71.0
KVRL using SDAE 85.0 80.0 85.0 75.0
KVRL using DBN 88.3 80.0 90.0 72.5
KVRL using c-DBN 90.0 84.8 90.0 78.9
KVRL using fcDBN 91.7 87.9 95.2 84.2
(a) Cornell Kinship Dataset
Algorithm Child-Young Parents Child-Old Parents
MNRML[24] 66.5 65.5
DMML[25] 74.5 70.0
KVRL using SDAE 85.9 84.8
KVRL using DBN 88.5 88.0
KVRL using c-DBN 90.0 89.5
KVRL using fcDBN 92.0 91.5
(b) UB Kinship Dataset
Algorithm FS FD MS MD
MNRML[24] 72.5 66.5 66.2 72.0
DMML[25] 74.5 69.5 69.5 75.5
Discriminative[26] 76.4 72.5 71.9 77.3
KVRL using SDAE 95.5 88.8 87.1 96.9
KVRL using DBN 96.2 89.6 87.9 97.6
KVRL using c-DBN 97.4 93.3 90.5 98.4
KVRL using fcDBN 98.1 96.3 90.5 98.4
(c) KinFace-I Dataset
Algorithm FS FD MS MD
MNRML[24] 76.9 74.3 77.4 77.6
DMML[25] 78.5 76.5 78.5 79.5
Discriminative[26] 83.9 76.7 83.4 84.8
KVRL using SDAE 94.0 89.2 93.6 94.0
KVRL using DBN 94.8 90.8 94.8 95.6
KVRL using c-DBN 96.0 92.4 96.4 96.8
KVRL using fcDBN 96.8 94.0 97.2 96.8
(d) KinFace-II Dataset
Algorithm FS FD MS MD BB BS SS
KVRL using SDAE 80.9 76.1 74.2 80.7 81.6 76.5 80.3
KVRL using DBN 85.9 79.3 76.0 84.8 85.0 79.9 85.7
KVRL using c-DBN 87.9 79.9 83.6 91.3 86.9 82.6 91.8
KVRL using fcDBN 90.8 84.4 90.6 95.2 90.9 87.5 95.7
(e) WVU Kinship Dataset
TABLE V: Comparing the kinship verification performance (%) of the proposed KVRL framework with existing kinship verification algorithms on multiple datasets.
(a) Verification performance with changing the number of filters.
(b) Kinship verification performance with respect to regions taken in the KVRL-fcDBN framework.
Fig. 9: Variations in the performance of KVRL-fcDBN with respect to number of filters and type of facial regions on the WVU kinship database.

V-C Experimental Protocol

V-C1 Kinship Verification

The performance of the proposed KVRL-fcDBN framework is evaluated using the same experimental protocol as described by Yan et al. [25], where five-fold cross-validation for kin verification is performed by keeping the number of images in each relation roughly equal across folds. This protocol is followed to ensure that the experimental results are directly comparable, even though the list of negative pairs may vary. In this protocol, a random negative pair for kinship is generated such that each image is used only once in the training phase. The performance of the proposed algorithm (KVRL-fcDBN) is compared with the baseline evaluations of the KVRL framework along with three state-of-the-art algorithms.

  • Multiview neighborhood repulsed metric learning (MNRML) [24] (since the experimental protocol is the same, results are directly reported from the respective papers),

  • Discriminative multi-metric learning (DMML) [25], and

  • Discriminative model [26].

Since the proposed architecture is flexible in nature, we also utilize Sparse Denoising Autoencoders (SDAE) and Deep Belief Networks (DBN) in the KVRL framework. We term these approaches KVRL-SDAE and KVRL-DBN. The proposed approach (KVRL-fcDBN) is compared with KVRL-SDAE, KVRL-DBN, and KVRL-cDBN (where contractive RBMs are utilized in the KVRL framework). We also analyze the effect of facial regions by providing different combinations of regions as input to the KVRL-fcDBN framework.
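The negative-pair construction described in this protocol can be sketched as follows; the bounded-retry loop and seed are implementation choices, not taken from the paper.

```python
import random

def make_negative_pairs(kin_pairs, seed=0, max_tries=1000):
    """Generate random non-kin pairs so that each image is used only once.

    kin_pairs: list of (image_a, image_b) true kin pairs. The second
    elements are shuffled and re-paired with the first elements; the
    shuffle is retried while any pairing reproduces a true kin pair.
    """
    rng = random.Random(seed)
    firsts = [a for a, _ in kin_pairs]
    seconds = [b for _, b in kin_pairs]
    for _ in range(max_tries):
        rng.shuffle(seconds)
        candidate = list(zip(firsts, seconds))
        if all(pair not in kin_pairs for pair in candidate):
            return candidate
    raise ValueError("could not find a valid non-kin pairing")
```

Each positive pair thus yields exactly one negative pair, keeping the training set balanced while using every image once.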

V-C2 Boosting Face Verification using Kinship as Context

The WVU Kinship database is divided into training and testing sets. Similar to the kinship verification experiments, the training partition consists of 60% of the dataset and the testing partition of the remaining 40%, where the subjects are mutually disjoint. In both sets, two images of an individual are used as probes, while the remaining images are used as gallery. Four images of the individual's kin are kept in the gallery, where the association between subjects and their kin in the gallery is known. The proposed KVRL-fcDBN framework is used to generate the kinship scores between the probes and kin images using the fcDBN deep learning algorithm.

(a) ROC using HOG descriptor
(b) ROC using LBP descriptor
Fig. 10: ROC curves summarizing the results of Kinship aided Face Verification using PLR and SVM.

V-D Results of Kinship Verification

Table IV and Fig. 8 show the results of the experiments conducted on multiple databases. It is observed that KVRL-fcDBN consistently performs better than the KVRL-SDAE and KVRL-DBN approaches on all the datasets. The transformation of the original input through the filters improves the learning of the underlying representations.

Table V also shows the results for different kin relations obtained using the proposed deep learning algorithms. Compared to existing algorithms, the KVRL-fcDBN framework consistently yields state-of-the-art results and shows an improvement of up to 21% across kin relations. It is observed that for the UB database, the algorithm performs better when the images belong to children and young parents (Set 1) than when there is a considerable gap between the ages of the kin (Set 2). A general trend appears for the KinFace-I, KinFace-II, and WVU Kinship databases, where images of kin of the same gender are verified more accurately than images of kin of different genders. Specifically, the Father-Son and Mother-Daughter relations have higher accuracy than Father-Daughter and Mother-Son. The same pattern is observed for Brother-Brother and Sister-Sister pairs as compared to Brother-Sister pairs in the WVU Kinship database.

The performance of the KVRL-fcDBN approach is also computed with respect to the number of filters, as shown in Fig. 9(a). It is observed that the accuracy increases as the number of filters increases, but no noticeable improvement is observed beyond six filters. From the human study, as mentioned previously, it is observed that the full face, T, and Not-T regions are more discriminatory and thus are utilized in the KVRL-fcDBN framework. For validation, experiments are performed by providing different regions as input to the KVRL framework, and the results are shown in Fig. 9(b). It is observed experimentally that the combination of face, T, and Not-T regions performs best in the proposed KVRL-fcDBN framework. This approach is also computationally less intensive than using all the regions in the framework.

We also compare the performance of the neural network classifier with an SVM classifier for kinship verification. Using an SVM with an RBF kernel yields slightly lower performance than the neural network across all databases, with a difference of 0.2-0.5%. Computationally, on a six-core Xeon processor with 64 GB RAM, the proposed framework requires 1 second for feature extraction and kinship verification.

V-E Results of Boosting Face Verification using Kinship as Context

The results from boosting the face verification performance using both PLR and SVM are shown in Fig. 10. It is observed that the HOG descriptor performs better than LBP for face verification on the WVU Kinship dataset. However, for both HOG and LBP, the face verification accuracy increases by over 20% when kinship scores obtained using the proposed KVRL-fcDBN framework are used to boost the face verification scores. At 0.01% FAR, a performance of 59.4% is observed using the HOG descriptor. This improves to 79.3% when kinship scores from fcDBN are utilized as context with the PLR algorithm, and to 80.0% when SVM is used along with fcDBN. The improvement is more pronounced in the true positive rate (TPR) at lower values of the false positive rate (FPR). Note that the proposed experiment can be performed with any face verification algorithm or feature descriptor, and these results suggest that incorporating kinship as soft biometric information improves face verification performance.

Vi Conclusion

The contributions of this research are fourfold: (1) evaluation of human performance in kinship verification, (2) a deep learning framework using the proposed filtered contractive DBN (fcDBN) for kinship verification, (3) utilizing kinship as soft biometric information for boosting face verification performance, and (4) a new kinship verification database where each subject has multiple images, suitable for evaluating both kinship verification and kinship-aided face verification. Kin pairs having at least one female subject are found to be more easily detected as kin, with the Mother-Son and Sister-Sister pairings having the two highest significance. Further, the proposed two-stage hierarchical representation learning framework (KVRL-fcDBN) utilizes the trained deep learning representations of faces to calculate a kinship similarity score and is shown to outperform recently reported results on multiple kinship datasets. Finally, we illustrate that the kinship score can be used as a soft biometric to boost the performance of any standard face verification algorithm. As a future research direction, the proposed algorithm can be extended to build family trees and evaluated on newer kinship databases such as Family in the Wild [59].


The authors would like to thank the associate editor and reviewers for their valuable feedback. The authors also thank Daksha Yadav for reviewing the paper. We gratefully acknowledge the support of NVIDIA Corporation for the donation of the Tesla K40 GPU utilized for this research.


  • [1] L. Hogben, “The genetic analysis of familial traits,” Journal of Genetics, vol. 25, no. 2, pp. 211–240, 1932.
  • [2] D. Lisa, B. Jones, A. Little, and D. Perrett, “Social perception of facial resemblance in humans.” Archives of sexual behavior, vol. 37, no. 1, pp. 64–77, 2008.
  • [3] M. Daly and M. I. Wilson, “Whom are newborn babies said to resemble?” Ethology and Sociobiology, vol. 3, no. 2, pp. 69–78, 1982.
  • [4] N. J. Christenfeld and E. A. Hill, “Whose baby are you?” Nature, vol. 378, no. 6558, p. 669, 1995.
  • [5] S. Bredart and R. M. French, “Do babies resemble their fathers more than their mothers? A failure to replicate Christenfeld and Hill (1995),” Evolution and Human Behavior, vol. 2, pp. 129–135, 1999.
  • [6] P. Bressan and M. F. Martello, “Talis pater, talis filius: perceived resemblance and the belief in genetic relatedness,” Psychological Science, vol. 13, no. 3, pp. 213–218, 2002.
  • [7] R. Burch and G. Gallup, “Perceptions of paternal resemblance predict family violence.” Evolutionary Human Behavior, vol. 21, no. 6, pp. 429–435, 2000.
  • [8] S. M. Platek, J. P. Keenan, G. G. Gallup, and F. B. Mohamed, “Where am I? The neurological correlates of self and other,” Brain research: Cognitive brain research, vol. 19, no. 2, pp. 114–122, 2004.
  • [9] G. Kaminski, S. Dridi, C. Graff, and E. Gentaz, “Human ability to detect kinship in strangers’ faces: effects of the degree of relatedness,” The Royal Society Biological Sciences, vol. 276, pp. 3193–3200, 2009.
  • [10] D. McLain, D. Setters, M. P. Moulton, and A. E. Pratt, “Ascription of resemblance of newborns by parents and nonrelatives,” Evolution and Human Behavior, vol. 21, no. 1, pp. 11–23, 2000.
  • [11] R. Oda, A. Matsumoto-Oda, and O. Kurashima, “Effects of belief in genetic relatedness on resemblance judgments by Japanese raters,” Evolution and Human Behavior, vol. 26, no. 5, pp. 441–450, 2005.
  • [12] L. T. Maloney and M. F. Dal Martello, “Kin recognition and the perceived facial similarity of children,” Journal of Vision, vol. 6, no. 10, 2006.
  • [13] M. F. Dal Martello and L. T. Maloney, “Where are kin recognition signals in the human face?” Journal of Vision, vol. 6, no. 12, 2006.
  • [14] ——, “Lateralization of kin recognition signals in the human face,” Journal of Vision, vol. 10, no. 8, 2010.
  • [15] R. Fang, K. D. Tang, N. Snavely, and T. Chen, “Towards computational models of kinship verification,” in International Conference on Image Processing, 2010, pp. 1577–1580.
  • [16] S. Xia, M. Shao, and Y. Fu, “Kinship verification through transfer learning,” in International Joint Conference on Artificial Intelligence, 2011, pp. 2539–2544.
  • [17] M. Shao, S. Xia, and Y. Fu, “Genealogical face recognition based on UB kinface database,” in Computer Vision and Pattern Recognition Workshops, June 2011, pp. 60–65.
  • [18] X. Zhou, J. Hu, J. Lu, Y. Shang, and Y. Guan, “Kinship verification from facial images under uncontrolled conditions,” in ACM Multimedia, 2011, pp. 953–956.
  • [19] S. Xia, M. Shao, J. Luo, and Y. Fu, “Understanding kin relationships in a photo,” IEEE Transactions on Multimedia, vol. 14, no. 4, pp. 1046–1056, 2012.
  • [20] N. Kohli, R. Singh, and M. Vatsa, “Self-similarity representation of weber faces for kinship classification,” in Biometrics: Theory, Applications and Systems, 2012, pp. 245–250.
  • [21] G. Guo and X. Wang, “Kinship measurement on salient facial features,” IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 8, pp. 2322–2325, 2012.
  • [22] X. Zhou, J. Lu, J. Hu, and Y. Shang, “Gabor-based gradient orientation pyramid for kinship verification under uncontrolled environments,” in ACM International Conference on Multimedia, 2012, pp. 725–728.
  • [23] H. Dibeklioglu, A. Salah, and T. Gevers, “Like father, like son: Facial expression dynamics for kinship verification,” in IEEE International Conference on Computer Vision, Dec 2013, pp. 1497–1504.
  • [24] J. Lu, X. Zhou, Y.-P. Tan, Y. Shang, and J. Zhou, “Neighborhood repulsed metric learning for kinship verification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 2, pp. 331–345, 2014.
  • [25] H. Yan, J. Lu, W. Deng, and X. Zhou, “Discriminative multimetric learning for kinship verification,” IEEE Transactions on Information Forensics and Security, vol. 9, no. 7, pp. 1169–1178, 2014.
  • [26] A. Dehghan, E. G. Ortiz, R. Villegas, and M. Shah, “Who do I look like? determining parent-offspring resemblance via gated autoencoders,” in Computer Vision and Pattern Recognition, 2014, pp. 1757–1764.
  • [27] H. Yan, J. Lu, and X. Zhou, “Prototype-based discriminative feature learning for kinship verification,” IEEE Transactions on Cybernetics, vol. PP, no. 99, pp. 1–1, 2014.
  • [28] Q. Liu, A. Puthenputhussery, and C. Liu, “Inheritable fisher vector feature for kinship verification,” in IEEE 7th International Conference on Biometrics Theory, Applications and Systems, Sept 2015, pp. 1–6.
  • [29] P. Alirezazadeh, A. Fathi, and F. Abdali-Mohammadi, “A genetic algorithm-based feature selection for kinship verification,” IEEE Signal Processing Letters, vol. 22, no. 12, pp. 2459–2463, Dec 2015.
  • [30] X. Zhou, Y. Shang, H. Yan, and G. Guo, “Ensemble similarity learning for kinship verification from facial images in the wild,” Information Fusion, pp. –, 2015.
  • [31] M. Buhrmester, T. Kwang, and S. D. Gosling, “Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data?” Perspectives on Psychological Science, vol. 6, no. 1, pp. 3–5, 2011.
  • [32] G. Somanath, M. V. Rohith, and C. Kambhamettu, “Vadana: A dense dataset for facial image analysis,” in International Conference on Computer Vision Workshops, 2011, pp. 2175–2182.
  • [33] G. Somanath and C. Kambhamettu, “Can faces verify blood-relations?” in Biometrics: Theory, Applications and Systems, 2012, pp. 105–112.
  • [34] J. Rehnman and A. Herlitz, “Women remember more faces than men do.” Acta Psychologica, vol. 124, no. 3, pp. 344–355, 2007.
  • [35] ——, “Higher face recognition ability in girls: Magnified by own-sex and own-ethnicity bias,” Memory, vol. 14, no. 3, pp. 289–296, 2006.
  • [36] A. Herlitz and J. Loven, “Sex differences and the own-gender bias in face recognition: A meta-analytic review,” Visual Cognition, vol. 21, no. 9-10, pp. 1306–1336, 2013.
  • [37] T. Susilo, L. Germine, and B. Duchaine, “Face recognition ability matures late: evidence from individual differences in young adults,” Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 5, pp. 1212–1217, 2013.
  • [38] J. Fleiss, B. Levin, and M. Paik, Statistical Methods for Rates and Proportions, ser. Wiley Series in Probability and Statistics.   Wiley, 2004.
  • [39] A. Herlitz, L.-G. Nilsson, and L. Backman, “Gender differences in episodic memory,” Memory and Cognition, vol. 25, no. 6, pp. 801–811, 1997.
  • [40] C. Lewin and A. Herlitz, “Sex differences in face recognition-Women’s faces make the difference,” Brain and Cognition, vol. 50, no. 1, pp. 121–128, 2002.
  • [41] D. B. Wright and B. Sladden, “An own gender bias and the importance of hair in face recognition,” Acta Psychologica, vol. 114, no. 1, pp. 101–114, 2003.
  • [42] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face recognition: A literature survey,” ACM Computational Survey, vol. 35, no. 4, pp. 399–458, 2003.
  • [43] M. S. Keil, “I look in your eyes, honey : Internal face features induce spatial frequency preference for human face processing,” PLoS Computational Biology, vol. 5, no. 3, 2009.
  • [44] G. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural computation, vol. 18, no. 7, pp. 1527–1554, 2006.
  • [45] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio, “Contractive auto-encoders: Explicit invariance during feature extraction,” in Proceedings of the 28th International Conference on Machine Learning, 2011, pp. 833–840.
  • [46] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng, “Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations,” in Proceedings of the 26th Annual International Conference on Machine Learning.   ACM, 2009, pp. 609–616.
  • [47] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res., vol. 15, no. 1, pp. 1929–1958, Jan. 2014.
  • [48] A. K. Jain, S. C. Dass, and K. Nandakumar, “Soft biometric traits for personal recognition systems,” in Biometric Authentication.   Springer, 2004, pp. 731–738.
  • [49] S. Bharadwaj, M. Vatsa, and R. Singh, “Aiding face recognition with social context association rule based re-ranking,” International Conference on Biometrics, 2014.
  • [50] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification (2nd Edition).   Wiley-Interscience, 2000.
  • [51] V. N. Vapnik and V. Vapnik, Statistical learning theory.   Wiley New York, 1998, vol. 1.
  • [52] K. Nandakumar, Y. Chen, S. Dass, and A. Jain, “Likelihood ratio-based biometric score fusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 342–347, 2008.
  • [53] T. Ahonen, A. Hadid, and M. Pietikainen, “Face description with local binary patterns: Application to face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037–2041, 2006.
  • [54] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Computer Vision and Pattern Recognition, vol. 1, 2005, pp. 886–893.
  • [55] M. B. Lopez, E. Boutellaa, and A. Hadid, “Comments on the ”Kinship Face in the Wild” data sets,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PP, no. 99, pp. 1–3, 2016.
  • [56] L. Wolf, T. Hassner, and I. Maoz, “Face recognition in unconstrained videos with matched background similarity,” in Computer Vision and Pattern Recognition, 2011, pp. 529–534.
  • [57] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, “Multi-PIE,” Image and Vision Computing, vol. 28, no. 5, pp. 807–813, 2010.
  • [58] P. Viola and M. J. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.
  • [59] J. P. Robinson, M. Shao, Y. Wu, and Y. Fu, “Family in the wild (FIW): A large-scale kinship recognition database,” CoRR, vol. abs/1604.02182, 2016. [Online]. Available: http://arxiv.org/abs/1604.02182