Spoofing and Anti-Spoofing with Wax Figure Faces

10/12/2019 · by Shan Jia, et al.

We have witnessed rapid advances in both face presentation attack models and presentation attack detection (PAD) in recent years. Compared with the widely studied 2D face presentation attacks (e.g., printed photos and video replays), 3D face presentation attacks are more challenging because face recognition systems (FRS) are more easily confused by the 3D characteristics of materials similar to real faces. Existing 3D face spoofing databases, mostly based on 3D facial masks, are restricted in size and suffer from poor authenticity due to the difficulty and expense of mask production. In this work, we introduce a wax figure face database (WFFD) as a novel and super-realistic source of 3D face presentation attacks. The database contains 2300 images, each pairing a real face with a wax figure face (4600 faces in total), from 745 subjects, collected from online resources with high diversity. On one hand, our experiments demonstrate the spoofing potential of WFFD on three popular FRSs. On the other hand, we develop a multi-feature voting scheme for wax figure face detection (anti-spoofing), which combines three discriminative features at the decision level. The proposed detection method was compared against several face PAD approaches and found to outperform the competing methods. Notably, our fusion-based detection method achieves an Average Classification Error Rate (ACER) of 11.73% on the WFFD database, which is even better than human-based detection.


I Introduction

Face has been one of the most widely used biometric modalities due to its accuracy and convenience for personal verification and identification. However, the increasing popularity and easy accessibility of face data also make face recognition systems (FRS) a major target of spoofing, such as presentation attacks [19]. This kind of security threat can be easily carried out by presenting the FRS with a face artifact, known as a presentation attack instrument (PAI) in the ISO standard [11]. A recent breach of the biometrics database BioStar exposed as many as 28 million records containing facial recognition and fingerprint data, which could easily be exploited as PAIs by malicious hackers.

Based on the way face artifacts are generated, face presentation attacks can be classified into 2D types (e.g., printed/digital photographs or videos replayed on mobile devices such as a tablet) and 3D types (e.g., wearing a mask or presenting a synthetic model). Existing systems and research on face recognition pay more attention to 2D face PAIs due to their simplicity, efficiency, and low cost. However, as material science and 3D printing advance, creating face-like 3D structures and materials has become easier and more affordable. Compared with the 2D modalities, 3D face presentation attacks are more realistic and therefore more difficult to detect. This class of attacks includes wearing facial masks [15], building 3D facial models [58], applying make-up [14], and undergoing plastic surgery. Fig. 1 shows several examples of 3D presentation attacks that have successfully fooled widely used FRSs, such as those in airports and phones.

Existing research on 3D face presentation attacks focuses mostly on easy-to-make facial masks. 3D facial mask spoofing was previously thought unlikely to become common practice [59] because 3D masks were deemed much more difficult and expensive to manufacture (e.g., requiring special 3D devices and materials). However, rapid advances in 3D printing technologies and services have made 3D masks easier and cheaper to produce. Several 3D mask attack databases have already been created, including the 3D Mask Attack Database (3DMAD) [15], the 3D-face spoofing database (3DFS-DB) [21], the HKBU 3D Mask Attack with Real World Variations Database (HKBU-MARs) [42], the Silicone Mask Attack Database (SMAD) [45], and the Wide Multi Channel Presentation Attack database (WMCA) [22].

Fig. 1: Examples of 3D presentation attack cases. (a) Airport security system fooled by a silicone mask (picture from https://chameleonassociates.com/security-breach/), (b) Android phones fooled by a 3D-printed head (picture from http://www.floridaforensicscience.com/broke-bunch-android-phones-3d-printed-head/), (c) iPhone X Face ID unlocked by a 3D mask (picture from https://boingboing.net/2010/11/05/young-asian-refugee.html).

These 3D face presentation attack databases collected their masks from third-party services [15, 42], self-manufacturing [21], or online resources [45]. However, the databases are restricted to small data sizes (mostly fewer than 30 subjects), low mask quality (some masks are not user customized [45, 2]), and low diversity in lighting conditions, facial poses, and recording devices. These restrictions greatly limit not only the attack ability of the fake faces but also the validity of research findings on detection performance against 3D presentation attacks.

To address these limitations, we propose to take advantage of the popularity and publicity of the numerous celebrity wax figure museums in the world, and to collect a large number of wax figure images to create a new Wax Figure Face Database (WFFD). These life-size wax figure faces are all carefully designed and made in clay with wax layers, silicone, or resin materials, so they are super-realistic and closely resemble real faces. With the development of wax figure manufacturing technologies and services, we believe easily obtainable and super-realistic wax figure faces will pose a threat to existing face recognition systems. In fact, wax figure faces have already been used for identity impersonation and fraud in real life. In 2012, using photos taken with wax figures at Hong Kong's Madame Tussauds museum (as shown in Fig. 2), six suspects swindled about 600,000 people out of nearly US$475 million in a pyramid sales scam by claiming that their company was supported by Hong Kong chief executives and business tycoons.

Fig. 2: Photos with wax figure faces used for fraud (picture from http://www.szdaily.com/content/2012-03/26/content_6594713.htm). (a) With the wax figure of Hong Kong chief executive Donald Tsang Yam-Kuen, (b) with the wax figure of Hong Kong business tycoon Li Ka-Shing.

In this paper, we introduce these wax figure faces as a more challenging type of 3D face presentation attack and analyze their impact on face recognition systems. The main contributions of this work are summarized below.

  • The new WFFD database is constructed. It consists of 2300 images acquired from 745 subjects (4600 faces in total, both real and wax figure), diversified in terms of age, ethnicity, pose, expression, environment, and camera. To the best of our knowledge, this is the first large-scale wax figure face database, and wax figure faces have not previously been proposed as super-realistic 3D face presentation attacks in the open literature.

  • Three classes of discriminative features, namely features learned by SqueezeNet and ResNet-50 and the multi-block LPQ (Local Phase Quantization) texture feature, are extracted for wax figure face presentation attack detection. In view of their complementary nature, different feature fusion schemes are explored and compared. In particular, a novel multi-feature voting framework based on decision-level fusion is proposed and its effectiveness verified.

  • We have conducted extensive experiments on the WFFD to demonstrate its strong attack (spoofing) ability against three popular face recognition systems and several face PAD methods. The effectiveness of the proposed wax figure face detection (anti-spoofing) method is also demonstrated in comparison with previous state-of-the-art methods and human-based detection.

The rest of this paper is organized as follows. In Section II, we briefly review related research on 3D face presentation attack databases and PAD methods. The new WFFD database and three newly designed protocols are introduced in Section III. Section IV presents the proposed multi-feature voting detection scheme, and experimental results are reported in Section V. Finally, we draw conclusions and discuss future research in Section VI.

II Related Work

II-A Spoofing: 3D face presentation attack databases

Existing 3D face presentation attack databases create attacks mainly with wearable 3D face masks made of materials whose facial characteristics are similar to real faces. 3DMAD [15] is the first publicly available 3D mask database. It used the services of ThatsMyFace (http://thatsmyface.com/) to manufacture 17 user-customized masks, and recorded 255 video sequences with the RGB-D camera of a Microsoft Kinect for both real access and presentation attacks. This database has been widely used since it provides not only color and depth images but also manually annotated eye positions for all face samples.

With the development of 3D modeling and printing technologies, more mask databases have been created since 2016. 3DFS-DB [21] is a self-manufactured and gender-balanced 3D face spoofing database, in which 26 printed models were made using two relatively low-cost 3D printers, the ShareBot Pro (https://www.sharebot.it) and the CubeX (http://www.cubify.com), worth about €1,000 and €2,000, respectively. HKBU-MARs [42] is another 3D mask spoofing database with more variations to simulate real-world scenarios. It generated 12 masks of different appearance qualities from two companies, ThatsMyFace and REAL-F (http://real-f.jp/en_the-realface.html). A total of 1008 videos were created with 7 camera types and 6 lighting settings. To include more subjects, the SMAD database [45] collected and compiled videos of people wearing silicone masks from online resources. It contains 65 genuine access videos of people auditioning, interviewing, or hosting shows, and 65 attack videos of people wearing a complete 3D (but not customized) mask that fits well, with proper holes for the eyes and mouth.

In addition, some 3D mask spoofing databases provide special lighting information for more effective detection. The BRSU Skin/Face/Spoof Database [53] provides multispectral SWIR (short-wave infrared) images in four wavebands and RGB color images, incorporating various types of masks and facial disguises. It contains 137 subjects and considers two face presentation attack scenarios: disguise of one's own identity and counterfeiting of a foreign identity with a mask made of silicone, plastic, latex, or hard resin.

The MLFP database [2] (Multispectral Latex Mask based Video Face Presentation Attack database) is another multispectral database for face presentation attacks, using latex and paper masks. It contains 1350 videos of 10 subjects in the visible, near infrared (NIR), and thermal spectra, captured at different indoor and outdoor locations in an unconstrained environment. The ERPA database [10] provides RGB and NIR images of both bona fide and 3D mask attack presentations captured with special cameras; depth information is also available. It is a small dataset with only 5 subjects, covering both rigid resin-coated masks and flexible silicone masks. Similarly, the recently released WMCA database [22] also uses multiple capture devices/channels, including color, depth, thermal, and infrared. It contains 1679 videos (347 bona fide and 1332 attacks) from 72 subjects and includes a variety of 2D and 3D presentation attacks; the 3D attacks use fake heads, rigid masks, flexible silicone masks, and paper masks, for 709 videos in total.

These databases have played a significant role in the design of detection schemes against 3D face presentation attacks. However, they suffer from small database sizes (mostly fewer than 30 subjects), poor authenticity (some are based on paper or non-customized masks [45, 2]), or low diversity in subjects and recording processes, which limits the development of effective and practical PAD schemes.

II-B Anti-spoofing: 3D face PAD methods

Detecting 3D fake faces is often more challenging than detecting fake faces with 2D planar surfaces. Existing PAD methods for 3D face presentation attacks are mainly based on differences between real facial skin and mask materials, and can be broadly classified into five categories: reflectance based, texture based, shape based, liveness based, and deep feature based.

Earlier studies [30, 60, 57, 53] in 3D mask spoofing detection were based on the reflectance difference between facial skin and mask materials. For example, the distribution of albedo values under illumination at various wavelengths was first analyzed in [30] to determine how facial skin and mask materials (silicon, latex, and skinjell) behave in terms of reflectance. A 2D feature vector was then constructed from two average radiance values, measured under 850nm illumination (to distinguish skin from mask materials) and 685nm illumination (to distinguish different facial skin colors). Using Fisher's linear discriminant (FLD) classifier, this method [30] achieved 97.78% accuracy in fake face detection on the authors' own experimental data.
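To make the reflectance-based recipe concrete, here is a minimal sketch of the two-wavelength radiance feature with an FLD classifier, assuming each sample has already been summarized by its average radiance under 850nm and 685nm illumination; all numbers and variable names are illustrative placeholders, not data from [30].

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training data: rows are [mean_radiance_850nm, mean_radiance_685nm]
X_train = np.array([[0.62, 0.41],   # real skin
                    [0.60, 0.45],   # real skin
                    [0.21, 0.38],   # silicon mask
                    [0.25, 0.35]])  # latex mask
y_train = np.array([1, 1, 0, 0])    # 1 = real face, 0 = mask material

fld = LinearDiscriminantAnalysis()  # Fisher's linear discriminant
fld.fit(X_train, y_train)

X_test = np.array([[0.59, 0.43]])
print(fld.predict(X_test))          # -> [1], classified as real skin
```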

Texture based methods explore the texture pattern differences between real faces and masks with the help of texture feature descriptors, such as the widely used Local Binary Patterns (LBP) [33, 36, 16, 15], Binarized Statistical Image Features (BSIF) [51, 48], and Haralick features [1]. These methods are easy to implement, but their robustness to different mask spoofing attacks calls for further investigation. For example, different LBP features were tested in [42] on the proposed HKBU-MARs database, and it was found that LBP based methods do not generalize well across different mask appearances.
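As an illustration of this family of descriptors, the following is a minimal LBP-histogram baseline using scikit-image; the parameter choices (P=8, R=1, uniform patterns) are common defaults rather than the exact settings of the cited works.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, P=8, R=1):
    """Return a normalized uniform-LBP histogram for a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    n_bins = P + 2  # uniform patterns yield P + 2 distinct labels
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist  # feed to an SVM or other classifier

# Example with a random "face" image
face = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(lbp_histogram(face).shape)  # (10,)
```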

Shape based 3D mask PAD methods use shape descriptors [34, 54, 24] or 3D reconstruction [56] to extract discriminative features from faces and 3D masks. Unlike reflectance-based or texture-based detection methods, these schemes only require standard color images, without the need for special sensors. However, their detection performance relies on the quality of the 3D mask attacks and may not be robust to super-realistic 3D face presentation attacks.

More recently, some methods explore liveness cues to detect 3D face presentation attacks, such as thermal signatures [10], gaze information [4, 6, 5], and pulse or heartbeat signals [43, 41, 26, 38]. Based on these intrinsic liveness signals, such methods achieve outstanding performance in distinguishing real faces from masks.

Instead of extracting hand-crafted features, deep feature based methods automatically learn features from face images. Two deep representation approaches were investigated in [47] for detecting spoofing in different biometric modalities. Image quality cues (Shearlet) and motion cues (dense optical flow) were fused in [18] using a hierarchical neural network for mask spoofing detection, which achieved a half total error rate (HTER) of 0% on the 3DMAD database. A network based on transfer learning with a pre-trained VGG-16 architecture was presented in [44] to recognize photo, video, and 3D mask attacks. Motivated by the importance of dynamic facial texture information, a deep convolutional neural network based approach was developed in [52]; both intra-dataset and cross-dataset evaluations on 3DMAD and a supplementary dataset indicated the efficiency and robustness of the method. Overall, deep learning based methods are not only effective in spoof detection but also capable of recognizing different face presentation attacks.

III The wax figure face database

To address the weaknesses of existing 3D face presentation attack databases, we introduce a novel super-realistic Wax Figure Face Database (WFFD) with large size and high diversity. In this section, we elaborate on the data collection process and the design of the evaluation protocols.

III-A Data collection

The WFFD is built from numerous celebrity wax figure images collected from online resources. These user-customized, life-size wax figure faces are carefully designed and made in clay with wax layers, silicone, or resin materials, making them super-realistic. We first downloaded as many celebrity wax figure faces as possible, and then collected the corresponding celebrity images as real access attempts. For each subject, the wax figure face and the real face were grouped into one image for clear comparison, as in the examples shown in Fig. 3(a). In total, 1000 images were collected from 462 subjects.

Fig. 3: Image examples in the WFFD database. (a) Grouped manually, (b) recorded in the same scenario.

Furthermore, we emphasize one particularly challenging scenario in which the wax figure face and the real face were recorded together. Such a scenario is only possible when celebrities attend the unveiling of their own wax figures, as shown in Fig. 3(b). Nevertheless, we have collected a total of 1300 such images from 409 subjects. With the same recording environment and near-identical facial poses and expressions, these images are difficult to distinguish even for humans.

III-B Protocol design

Overall, WFFD consists of 2300 images containing 4600 faces (both real and wax figure) from 745 subjects. Inspired by the fraud incident shown in Fig. 2, we further designed three situational protocols to evaluate the performance of face PAD methods on this database.

1) Protocol I: heterogeneous. This protocol contains images that were grouped manually. Since the wax figure face and the real face were recorded with different devices and under different environmental conditions (e.g., lighting), humans can exploit such subtle differences to distinguish the wax figure from the real face.

2) Protocol II: homogeneous. Images in this protocol record the wax figure face and the real face in the same environment with the same camera. In this situation, it is often challenging even for humans to tell the wax figure apart from the real person.

3) Protocol III: mixed test. The previous two protocols are combined to simulate real-world operational conditions. Note that with rapid advances in AI technology, the differing backgrounds in Protocol I could be altered via image matting [37] to make the images appear "homogeneous".

In each protocol, the images are divided into three non-overlapping subsets: training, validation, and testing. Detailed statistics for each protocol are shown in Table I.

Protocol       #Image                              #Face   #Subject
               Train    Valid    Test     Total
Protocol I     600      200      200      1000     2000    462
Protocol II    780      260      260      1300     2600    409
Protocol III   1380     460      460      2300     4600    745
  • Note that the train, validation, and test subsets are non-overlapping.

TABLE I: Details of each protocol in the WFFD

III-C Statistics

Statistics on subject gender, age, ethnicity (detected by Face++ [17]), and face resolution (cropped with the dlib face detector [31]) in the WFFD are shown in Fig. 4. The images are relatively gender balanced, with about 60% male and 40% female subjects in both protocols. The ethnicity distribution in Fig. 4(b) contains a majority of White subjects (around 60%), followed by about 20% Asian and 10% Black subjects, and a small percentage of Indian subjects (no more than 2%). Fig. 4(c) shows a wide age distribution; the two protocols have similar patterns, with half of the subjects between 30 and 50 years old.

Although the dimensions of most face regions lie within a common range, the distribution differs considerably between the two protocols: matched and grouped manually, the face regions in Protocol I are generally larger than those in Protocol II. Additionally, images in Protocol I are more diversified in terms of subject pose, facial expression, recording environment, and devices than those in Protocol II. In summary, compared with other 3D face presentation attack databases (see Table II), our WFFD enjoys several advantages, including large size, super-realism, and high diversity.
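For reference, face-resolution statistics of the kind shown in Fig. 4(d) could be gathered with the dlib frontal face detector roughly as follows; the file path is a hypothetical placeholder.

```python
import dlib
import cv2

detector = dlib.get_frontal_face_detector()

def face_sizes(image_path):
    """Return (width, height) of every detected face region in an image."""
    img = cv2.imread(image_path)
    rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return [(r.right() - r.left(), r.bottom() - r.top())
            for r in detector(rgb, 1)]  # 1 = upsample once to catch small faces

print(face_sizes("wffd/protocol1/example.jpg"))  # hypothetical file
```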

Fig. 4: Statistical distribution of the WFFD. (a) Gender, (b) ethnicity, (c) age, (d) face resolution.
Database         Year  #Subject  #Sample  Format  Material                         Description
3DMAD [15]       2013  17        255      video   paper, hard resin                2D color images + 2.5D depth maps
3DFS-DB [21]     2016  26        520      video   plastic                          2D, 2.5D images + 3D information
HKBU-MARs [42]   2016  12        1008     video   /                                color images
BRSU [53]        2016  137       141      image   silicone, plastic, resin, latex  multispectral SWIR, color images
SMAD [45]        2017  /         130      video   silicone                         color images, from online resources
MLFP [2]         2017  10        1350     video   latex, paper                     visible, NIR, thermal images
ERPA [10]        2017  5         86       image   resin, silicone                  RGB, thermal, NIR images + depth
WMCA [22]        2019  72        1679     video   rigid, silicone, paper           multiple channels of 2D, 3D attacks
WFFD (proposed)  2019  745       2300     image   wax figure                       color images, realistic, from online resources
TABLE II: Comparison of 3D face presentation attack datasets

IV Proposed method

Since the proposed WFFD is a color image database, detection methods relying on reflectance properties, shape analysis, or liveness cues are simply not applicable. To distinguish wax figure faces from real faces, we have developed a multi-feature voting scheme based on deep learning models and texture descriptors. The overall scheme consists of two steps, multi-feature detection and decision-level voting, which are elaborated next.

IV-A Multi-feature detection

Based on our previous work in [28], we have found that deep learning based methods are effective at discovering powerful feature representations, not only for 2D face presentation attacks but also for wax figure face detection. Based on this observation, two pre-trained deep neural networks are used to provide complementary and robust features for anti-spoofing. One is based on SqueezeNet [27, 12], which is mainly composed of Fire modules: a squeeze convolution layer with only 1x1 filters feeding into an expand layer with a mix of 1x1 and 3x3 convolution filters (as shown in Fig. 5(a)). Fire modules and several pooling layers are stacked to form a small network with reasonably high accuracy. The other is based on ResNet-50 [25]. Deep Residual Networks (ResNets) are constructed by stacking residual units (see Fig. 5(b)). Thanks to the identity function introduced into the network, gradients can flow more effectively in back-propagation, which helps alleviate the notorious vanishing gradient problem [8]. Indeed, ResNet-50 has an excellent capability of discovering discriminative features and distinguishing fake faces from real ones [55, 39, 40].

Fig. 5: Building blocks in the two network architectures. (a) Fire module in SqueezeNet, (b) Residual learning in ResNet-50.
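For concreteness, a minimal PyTorch version of the Fire module shown in Fig. 5(a) might look as follows; the channel sizes are illustrative rather than the exact SqueezeNet configuration.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

print(Fire(96, 16, 64, 64)(torch.randn(1, 96, 55, 55)).shape)  # (1, 128, 55, 55)
```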

Due to the limited size of face presentation attack databases, it is often difficult to train a deep architecture from scratch. Similar to previous works [22, 12, 44, 55], we transfer the two deep neural networks (pre-trained on the celebrated ImageNet dataset) to the target database, fine-tuning them on the training face images of the proposed WFFD to avoid model overfitting. We first detect all faces and resize them to the networks' standard input sizes (227×227 for SqueezeNet and 224×224 for ResNet-50). Formulating face PAD as a binary classification problem, we remove the networks' 1,000-class output layers and obtain 1,000-dimensional output features from SqueezeNet (2,048-dimensional for ResNet-50). These features are then fed into a Softmax classifier optimized with the cross-entropy loss.
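A hedged sketch of this transfer-learning setup is given below, using ImageNet-pretrained backbones from torchvision with their 1,000-class output layers replaced by 2-way heads; the exact fine-tuning hyperparameters are not specified in the text, so those shown are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# SqueezeNet: replace the final 1000-way classifier conv with a 2-way one
squeezenet = models.squeezenet1_1(weights="IMAGENET1K_V1")
squeezenet.classifier[1] = nn.Conv2d(512, 2, kernel_size=1)

# ResNet-50: replace the 1000-way fully connected layer (2048-d features)
resnet = models.resnet50(weights="IMAGENET1K_V1")
resnet.fc = nn.Linear(resnet.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()  # softmax + cross-entropy, as in the text
optimizer = torch.optim.SGD(       # learning rate is an assumption
    list(squeezenet.parameters()) + list(resnet.parameters()), lr=1e-3)

# One illustrative fine-tuning step on a dummy batch (227x227 / 224x224 inputs)
x_sq, x_rn = torch.randn(4, 3, 227, 227), torch.randn(4, 3, 224, 224)
y = torch.tensor([0, 1, 0, 1])  # 0 = wax figure, 1 = real face
loss = criterion(squeezenet(x_sq), y) + criterion(resnet(x_rn), y)
loss.backward(); optimizer.step()
```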

To further improve the detection accuracy, we include a traditional texture descriptor, multi-block local phase quantization (MB-LPQ), to characterize intrinsic disparities in the color space of faces. Thanks to the discriminative power of local texture description, MB-LPQ has shown good performance in distinguishing real faces from artifacts in the literature [28, 12, 9]. In this work, we suggest that MB-LPQ is complementary to the two deep learning based features for wax figure face detection (verified in the experiments in Section V). Taking the resized face images as inputs, this feature detector converts standard RGB images into the YCbCr color space and divides them into multiple blocks. The LPQ features extracted from each block are concatenated to form the MB-LPQ feature vector, which is fed into a Softmax classifier for the final prediction (real or fake).
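The following is a simplified sketch of the MB-LPQ pipeline (color conversion, block division, per-block LPQ histograms, concatenation); the LPQ filter here uses a basic STFT formulation without the decorrelation step found in full implementations, so details may differ from the descriptor actually used.

```python
import numpy as np
from scipy.signal import convolve2d
import cv2

def lpq_histogram(block, win=7):
    """256-bin LPQ histogram of a 2D grayscale block (simplified, no whitening)."""
    f = 1.0 / win
    x = np.arange(win) - (win - 1) / 2
    w0 = np.ones(win)                 # filter for frequency 0
    w1 = np.exp(-2j * np.pi * f * x)  # filter for the lowest non-zero frequency
    def conv(img, kr, kc):            # separable 2D convolution, rows then columns
        return convolve2d(convolve2d(img, kr[np.newaxis, :], mode='valid'),
                          kc[:, np.newaxis], mode='valid')
    # STFT responses at frequencies (f,0), (0,f), (f,f), (f,-f)
    resp = [conv(block, w1, w0), conv(block, w0, w1),
            conv(block, w1, w1), conv(block, w1, w1.conj())]
    # Quantize the signs of real and imaginary parts into an 8-bit code per pixel
    code = np.zeros(resp[0].shape, dtype=int)
    for i, r in enumerate(resp):
        code += (r.real > 0).astype(int) << (2 * i)
        code += (r.imag > 0).astype(int) << (2 * i + 1)
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)  # normalized histogram

def mb_lpq(bgr_face, blocks=3):
    """Concatenate per-block LPQ histograms over all channels of the converted face."""
    ycc = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2YCrCb)  # OpenCV channel order: Y, Cr, Cb
    h, w = ycc.shape[:2]
    feats = [lpq_histogram(ycc[i*h//blocks:(i+1)*h//blocks,
                               j*w//blocks:(j+1)*w//blocks, c].astype(float))
             for c in range(3) for i in range(blocks) for j in range(blocks)]
    return np.concatenate(feats)  # fed to a Softmax classifier in the paper
```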

Fig. 6: Block diagram of the proposed detection scheme. Dashed lines indicate the optional process, which is performed only when 'label compare' outputs 'different'.

IV-B Decision-level voting

The idea of combining classifiers dates back to [32]. There are several ways of fusing classification results, e.g., the sum rule and majority voting. To exploit the complementary features in our design, we propose a multi-feature voting scheme based on fusion at the decision level. The anti-spoofing labels predicted by the two deep learning models (SqueezeNet and ResNet-50) are compared first. If they agree, the consistent result is directly output as the final predicted label; otherwise, the differing predictions are combined with the MB-LPQ prediction for further voting, and the majority result of the three models is declared the final prediction. The overall framework of the proposed multi-feature detection and combination scheme is shown in Fig. 6.
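The voting rule itself reduces to a few lines; the sketch below also reflects the computational saving noted later, since the MB-LPQ prediction is computed only when the two deep models disagree. Function and argument names are illustrative.

```python
def multi_feature_vote(label_squeezenet, label_resnet, mb_lpq_predict):
    """Return the final label; mb_lpq_predict is a callable evaluated only on disagreement."""
    if label_squeezenet == label_resnet:
        return label_squeezenet              # consistent result, output directly
    votes = [label_squeezenet, label_resnet, mb_lpq_predict()]
    return max(set(votes), key=votes.count)  # majority of the three predictions

# e.g. multi_feature_vote(1, 0, lambda: 1) -> 1 (real face wins the vote)
```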

V Experimental Results

In this section, we first demonstrate the attack ability of the introduced WFFD database by investigating the vulnerability of three popular face recognition systems to super-realistic 3D presentation attacks. Then the proposed multi-feature voting detection scheme is evaluated and compared with both human-based detection and several popular face PAD methods. Finally, we explore and analyze the failure cases of different anti-spoofing detection schemes.

V-A Evaluation metrics

All experimental results are reported using the ISO/IEC 30107-3 metrics [11]. To evaluate the vulnerability of the different FRSs, the Impostor Attack Presentation Match Rate (IAPMR) is used. This metric has been widely adopted as an indicator of attack success probability when an FRS is evaluated against presentation attacks; it is defined as the proportion of impostor attack presentations using the same PAI species in which the target reference is matched, in a full-system evaluation of a verification system. For detection performance, we calculate three types of errors: the Attack Presentation Classification Error Rate (APCER), the Bona Fide Presentation Classification Error Rate (BPCER), and the Average Classification Error Rate (ACER).
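For clarity, these detection error rates can be computed as follows, assuming the label convention 1 = bona fide and 0 = attack.

```python
import numpy as np

def pad_error_rates(y_true, y_pred):
    """APCER, BPCER, ACER with 1 = bona fide and 0 = attack."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    apcer = np.mean(y_pred[y_true == 0] == 1)  # attacks accepted as bona fide
    bpcer = np.mean(y_pred[y_true == 1] == 0)  # bona fide rejected as attacks
    return apcer, bpcer, (apcer + bpcer) / 2   # ACER is their average

print(pad_error_rates([0, 0, 1, 1], [1, 0, 1, 0]))  # (0.5, 0.5, 0.5)
```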

V-B Vulnerabilities of face recognition systems

Three popular FRSs were considered in our experiments to demonstrate their vulnerability to the spoofing faces of the proposed WFFD. To the best of our knowledge, this is the first study of the attack ability of such a super-realistic database on these popular FRSs. They include two publicly available systems, OpenFace [7] and Face++ [17], and a commercial system, the Neurotechnology VeriLook SDK [49]. Using the thresholds recommended by these FRSs, we calculated the IAPMR values on the three protocols of the WFFD, as presented in Table III.

Protocol       OpenFace  Face++  VeriLook
Threshold      0.99*     1e-5†   36**
Protocol I     93.20%    91.79%  76.14%
Protocol II    96.85%    96.22%  87.96%
Protocol III   95.26%    94.30%  81.75%
  • * Using a squared L2 distance threshold; † using the confidence threshold at the 0.001% error rate; ** using the matching score when FAR = 0.1%.

TABLE III: IAPMR of three face recognition systems

As shown in Table III, over 91% of the images in the three protocols of the WFFD were successfully matched by OpenFace or Face++, which implies high attack success rates for the proposed WFFD. Meanwhile, lower IAPMR values can be observed for the VeriLook SDK. This is because some wax figure faces with low quality or unusual poses cannot be identified by the VeriLook SDK; however, we note that VeriLook also tends to produce fewer successful matches than the other two systems even for real faces (see Fig. 7).

In addition, comparing the results of Protocol I and Protocol II, higher matching rates were achieved for the images in Protocol II (homogeneous). This is because FRSs are more easily fooled when fake faces and real faces are recorded by the same camera, even with identical facial expressions and poses. Such high IAPMR values suggest that the super-realistic WFFD poses a severe threat to existing FRSs that do not take presentation attacks into account.

Fig. 7: Comparison of matching rates for different face recognition systems. (a) OpenFace, (b) Face++, (c) VeriLook SDK.

We have also compared the IAPMR values with the actual matching rates of real faces against real faces in Fig. 7. The matching rates of wax figure faces against real faces are close to those of real faces against real faces for the OpenFace and Face++ systems, which confirms the high fidelity of the proposed WFFD. The gap between wax-to-real and real-to-real matching is slightly larger for the VeriLook SDK, which suggests that anti-spoofing capability is an overlooked performance metric in existing FRSs.

V-C Detection performance of the proposed PAD scheme

Built upon the three discriminative features, we now demonstrate the effectiveness of fusion in distinguishing wax figure faces from real ones on the WFFD dataset. In our experiments, three fusion schemes (feature-level, score-level, and decision-level fusion) were compared along with the proposed multi-feature voting method (see Fig. 6). The overall comparison in terms of ACER is shown in Table IV.

Without any fusion, the features learned by the SqueezeNet model achieved the lowest single-feature ACER of 15.33% on Protocol III, indicating the best discriminative power in detecting wax figure faces. Combining the features learned by SqueezeNet and ResNet-50 at the feature or score level (based on the sum rule) slightly improves performance, and further combining them with the MB-LPQ features lowers the ACERs again. Overall, fusing the three features at the decision level gave the best results; in particular, compared with direct fusion at the decision level, the proposed scheme achieves the lowest ACERs on all three protocols (down to 11.73% on Protocol III) with lower computational cost thanks to the multi-voting strategy. Meanwhile, for most fusion schemes the error rates in Protocol II were higher than those in Protocol I, suggesting that detecting wax figure faces recorded with real faces in the homogeneous case is the most challenging.

Fusion method   Feature           Protocol I  Protocol II  Protocol III
Single feature  F1-SqueezeNet     16.25       14.61        15.33
                F2-ResNet-50      19.75       20.57        19.02
                F3-MB-LPQ         17.50       31.35        22.28
Feature level   F1 & F2           15.00       16.35        14.67
                F1 & F3           16.00       17.31        15.11
                F2 & F3           13.50       19.42        15.54
                F1 & F2 & F3      14.75       15.38        13.70
Score level     F1 & F2           16.25       15.38        15.00
                F1 & F3           16.00       20.38        18.26
                F2 & F3           16.75       22.50        19.67
                F1 & F2 & F3      12.75       15.96        12.83
Decision level  F1 & F2 & F3      11.28       14.23        12.00
Proposed        F1 & F2 (& F3)    11.25       13.65        11.73

TABLE IV: Comparison results (ACER, %) of different fusion schemes
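As a point of reference for the score-level rows in Table IV, sum-rule fusion of the three classifiers' softmax outputs amounts to the following sketch; the arrays are illustrative.

```python
import numpy as np

def sum_rule_fusion(prob_squeezenet, prob_resnet, prob_mblpq):
    """Each input is an (N, 2) array of class probabilities; returns fused labels."""
    fused = prob_squeezenet + prob_resnet + prob_mblpq  # sum rule
    return fused.argmax(axis=1)                         # 0 = wax figure, 1 = real face

p1, p2, p3 = np.array([[0.3, 0.7]]), np.array([[0.6, 0.4]]), np.array([[0.2, 0.8]])
print(sum_rule_fusion(p1, p2, p3))  # [1]
```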

Fig. 8 shows the APCER and BPCER of the different fusion schemes under the three protocols of the WFFD dataset. Fig. 8(a) shows that the detection performance (both APCER and BPCER) changed little across fusion schemes under Protocol I. However, under the more challenging Protocol II in Fig. 8(b), all competing fusion schemes achieved higher BPCER than APCER, meaning that more real faces were incorrectly classified as wax figure faces in the homogeneous setting. Further, feature-level fusion leads to lower BPCER (around 20%), while score-level and decision-level fusion lead to lower APCER values (around 10%). Similar trends can be observed in Fig. 8(c) under the comprehensive Protocol III. Overall, due to the high fidelity and large diversity of wax figure faces in WFFD, distinguishing wax faces from real faces is challenging, especially in the homogeneous situation (Protocol II). By exploiting the complementary properties of the different features, the proposed multi-voting fusion scheme achieved the lowest APCER of 7.8% and the lowest BPCER of around 15%.

Fig. 8: Comparison results of different fusion schemes under three protocols. (a) Protocol I, (b) Protocol II, (c) Protocol III.

V-D Comparison against other PAD methods

Several face PAD methods were evaluated and compared on the WFFD database to show how they perform against super-realistic 3D presentation attacks. These methods have achieved promising performance in detecting 2D or 3D mask presentation attacks based on different features. Our benchmark set includes multi-scale LBP [15], reflectance properties [35], image quality assessment [20], color LBP [13], Haralick features [1], a VGG-16 model [44], Chromatic Co-occurrence of LBP (CCoLBP) [50], and noise modeling [29]. The results of all benchmark methods were obtained using their publicly available code. In addition, we conducted a controlled experiment to test the ability of human eyes to distinguish wax figure faces from real ones: 20 volunteers (10 men and 10 women, aged between 23 and 55) were asked to determine whether each face is real or not using our self-developed program (shown in Fig. 9). The classification error rates were calculated and averaged as the final human-based detection result.

Fig. 9: The program for human-based detection.

Table V compares the results of the different detection schemes. For Protocol I, existing face PAD methods designed for 2D or 3D mask attacks suffered severe performance degradation on WFFD, with detection error rates ranging from 26% to 46%. We attribute this poor performance to the high diversity and super-realistic attacks in the introduced database. Human-based detection achieved a better result, with an ACER of 16%, but with a higher APCER than BPCER, suggesting that more wax figure faces were mistaken for real ones. Our proposed multi-voting scheme achieved the best result, with an ACER of 11.25%. Similar performance differences can be observed under Protocol II; however, most algorithms produced higher error rates on this protocol. This is reasonable since recording the real faces and wax figure faces in the same scenario with the same camera leaves fewer differences between real and fake faces, making the presentation attacks harder to detect in this homogeneous setting, consistent with the results in Table III.

                         Protocol I                   Protocol II                  Protocol III
Method                   EER    APCER  BPCER  ACER    EER    APCER  BPCER  ACER    EER    APCER  BPCER  ACER
Multi-scale LBP [15]     33.17  31.22  31.22  31.22   36.62  37.32  33.45  35.39   34.56  33.33  32.92  33.13
Reflectance [35]         41.95  40.00  52.19  46.10   44.37  50.70  44.37  47.53   44.78  46.01  46.22  46.11
Image quality [20]       35.50  30.50  39.50  35.00   38.85  39.23  43.46  41.34   41.30  36.96  43.26  40.11
Color LBP [13]           33.17  30.24  36.10  33.17   37.32  36.62  41.90  39.26   36.81  35.38  35.79  35.58
Haralick features [1]    32.19  25.85  37.07  31.46   38.38  41.55  24.65  33.10   36.81  36.40  32.92  34.66
VGG-16 based [44]        45.85  50.73  41.95  46.34   48.94  40.14  52.82  46.48   48.67  45.19  49.28  47.24
CCoLBP [50]              29.50  26.50  26.00  26.25   28.08  24.62  34.23  29.42   28.04  26.52  29.13  27.83
Noise modeling [29]      52.50  63.00  45.50  54.25   58.85  58.46  59.61  59.04   56.09  58.69  49.98  54.33
Human-based              /      20.14  11.86  16.00   /      32.97  17.97  25.47   /      27.39  15.31  21.35
The proposed             11.50  12.00  10.50  11.25   12.00  8.08   19.23  13.65   11.67  7.82   15.64  11.73

TABLE V: Detection error rates (%) on three protocols of the WFFD

The overall results of the different face PAD methods under Protocol III differ greatly, with error rates ranging from 7.82% to 48.67%. The best ACER was achieved by the proposed multi-voting fusion scheme thanks to its highly discriminative and complementary features, significantly outperforming the other algorithms and human-based detection. In terms of BPCER, human-based detection obtained the lowest value of 15.31%, slightly better than the proposed method. The CCoLBP features [50] also performed relatively well, with all error rates below 30%.

V-E Failure case analysis

Based on the detection results on the WFFD, we further analyze the failure cases to develop a deeper understanding of both the detection methods and the database.

Features used in the proposed method. Fig. 10 shows the Venn diagram of the failure cases of the three single features used in our method (SqueezeNet, ResNet-50, and MB-LPQ). The three features produced different failure cases when classifying the 920 faces in the testing subset of Protocol III, and only 33 faces were wrongly classified by all three, implying good complementarity among them. These 33 failure cases, also shown in Fig. 10, visually illustrate the challenge of distinguishing fake faces from real ones even for human eyes. Additionally, we note that more real faces were mistaken for fake ones (highlighted by the green dots in Fig. 10); by contrast, only around one third of the failure cases mistook wax faces for real ones (marked by the red dots in Fig. 10).

Fig. 10: Failure cases in feature-based anti-spoofing detection. The left shows the Venn diagram of failure cases associated with the three features; the right shows exemplar images from Protocols I and II. Note that images with green dots are real faces (mistaken for wax figures), while images with red dots are wax figure faces (mistaken for real faces).

Human-based method. We also analyzed the detection results of the 20 volunteers, as shown in Table V, and make two observations. First, human-based detection performs worse than machine-based detection on all three protocols, which implies that real vs. wax detection is nontrivial for laypersons. This is partially due to the volunteers' lack of training; with more experience, human observers tend to perform better at spoofing detection. Second, and more interestingly, compared with machine-based detection, the human-based method was more likely to mistake wax figure faces for real ones on both protocols, as shown in Fig. 11 (there are more red dots than green dots). This is in sharp contrast with the machine-based results in Fig. 10 (more green dots than red dots).

Fig. 11: Failure cases with high error probabilities in human-based anti-spoofing detection. As before, images with green dots are real faces (mistaken for wax figures), while images with red dots are wax figure faces (mistaken for real faces). Note that humans mistake more wax figure faces for real faces (red dots) than the other way around.

VI Conclusions

To address the limitations of existing 3D face presentation attack databases, we have constructed a new database (WFFD) composed of wax figure faces with high diversity and large size as super-realistic face presentation attacks. The database will be made publicly available to facilitate the development and evaluation of PAD algorithms. Extensive experimental results have demonstrated the vulnerability of popular face recognition systems to these attacks. In particular, we observed that several existing PAD methods fail to distinguish real faces from wax figure faces, demonstrating the challenge posed by wax figure faces used as 3D attacks. We developed a multi-voting fusion scheme based on three discriminative and complementary features, which significantly outperformed not only current state-of-the-art face PAD methods but also human-based detection. Through detailed analysis of the failure cases, we found that machine-based and human-based methods suffer from different types of errors.

It should be noted that the best performance achieved by the proposed multi-voting scheme still has an error rate of over 11%: super-realistic wax figure faces are indeed difficult to distinguish from real ones, even for humans. We envision that motion based (rather than appearance based) anti-spoofing methods, such as head movement or blink detection, deserve further study. In view of recent advances in generative adversarial network (GAN) based video synthesis (e.g., the talking Mona Lisa and DeepFake), even motion based anti-spoofing might be foiled by more intelligent spoofing, and there has already been a flurry of work on detecting DeepFakes [3], [23], [46]. As many believe, the arms race between spoofing and anti-spoofing will never end.

Acknowledgment

Xin Li’s work is partially supported by the DoJ/NIJ under grant NIJ 2018-75-CX-0032, NSF under grant OAC-1839909 and the WV Higher Education Policy Commission Grant (HEPC.dsr.18.5).

References

  • [1] A. Agarwal, R. Singh, and M. Vatsa (2016) Face anti-spoofing using haralick features. In Biometrics Theory, Applications and Systems (BTAS), 2016 IEEE 8th International Conference on, pp. 1–6. Cited by: §II-B, §V-D, TABLE V.
  • [2] A. Agarwal, D. Yadav, N. Kohli, R. Singh, M. Vatsa, and A. Noore (2017) Face presentation attack with latex masks in multispectral videos. In Computer Vision and Pattern Recognition Workshops, pp. 275–283. Cited by: §I, §II-A, §II-A, TABLE II.
  • [3] S. Agarwal, H. Farid, Y. Gu, M. He, K. Nagano, and H. Li (2019) Protecting world leaders against deep fakes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 38–45. Cited by: §VI.
  • [4] A. Ali, N. Alsufyani, S. Hoque, and F. Deravi (2017) Biometric counter-spoofing for mobile devices using gaze information. In International Conference on Pattern Recognition and Machine Intelligence, pp. 11–18. Cited by: §II-B.
  • [5] A. Ali, S. Hoque, and F. Deravi (2018) Gaze stability for liveness detection. Pattern Analysis and Applications 21 (2), pp. 437–449. Cited by: §II-B.
  • [6] N. Alsufyani, A. Ali, S. Hoque, and F. Deravi (2018) Biometrie presentation attack detection using gaze alignment. In Identity, Security, and Behavior Analysis (ISBA), 2018 IEEE 4th International Conference on, pp. 1–8. Cited by: §II-B.
  • [7] B. Amos, B. Ludwiczuk, M. Satyanarayanan, et al. (2016) Openface: a general-purpose face recognition library with mobile applications. CMU School of Computer Science. Cited by: §V-B.
  • [8] Y. Bengio, P. Simard, P. Frasconi, et al. (1994) Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks 5 (2), pp. 157–166. Cited by: §IV-A.
  • [9] A. Benlamoudi, D. Samai, A. Ouafi, S. Bekhouche, A. Taleb-Ahmed, and A. Hadid (2015) Face spoofing detection using multi-level local phase quantization (ml-lpq). In Proc. of the First Int. Conf. on Automatic Control, Telecommunication and signals ICATS15, Cited by: §IV-A.
  • [10] S. Bhattacharjee and S. Marcel (2017) What you can’t see can help you–extended-range imaging for 3d-mask presentation attack detection. In Proceedings of the 16th International Conference on Biometrics Special Interest Group., Cited by: §II-A, §II-B, TABLE II.
  • [11] ISO/IEC JTC 1/SC 37 Biometrics (2016) Information technology – biometric presentation attack detection. International Organization for Standardization. Cited by: §I, §V-A.
  • [12] Z. Boulkenafet, J. Komulainen, Z. Akhtar, A. Benlamoudi, D. Samai, S. E. Bekhouche, A. Ouafi, F. Dornaika, A. Taleb-Ahmed, L. Qin, et al. (2017) A competition on generalized software-based face presentation attack detection in mobile scenarios. In 2017 IEEE International Joint Conference on Biometrics (IJCB), pp. 688–696. Cited by: §IV-A, §IV-A, §IV-A.
  • [13] Z. Boulkenafet, J. Komulainen, and A. Hadid (2015) Face anti-spoofing based on color texture analysis. In Image Processing (ICIP), 2015 IEEE International Conference on, pp. 2636–2640. Cited by: §V-D, TABLE V.
  • [14] C. Chen, A. Dantcheva, T. Swearingen, and A. Ross (2017) Spoofing faces using makeup: an investigative study. In 2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), pp. 1–8. Cited by: §I.
  • [15] N. Erdogmus and S. Marcel (2013) Spoofing in 2d face recognition with 3d masks and anti-spoofing with kinect. In Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on, pp. 1–6. Cited by: §I, §I, §I, §II-A, §II-B, TABLE II, §V-D, TABLE V.
  • [16] N. Erdogmus and S. Marcel (2014) Spoofing face recognition with 3d masks. IEEE transactions on information forensics and security 9 (7), pp. 1084–1097. Cited by: §II-B.
  • [17] Face++ Face compare sdk. Note: https://www.faceplusplus.com/face-compare-sdk/ Cited by: §III-C, §V-B.
  • [18] L. Feng, L. Po, Y. Li, X. Xu, F. Yuan, T. C. Cheung, and K. Cheung (2016) Integration of image quality and motion cues for face anti-spoofing: a neural network approach. Journal of Visual Communication and Image Representation 38, pp. 451–460. Cited by: §II-B.
  • [19] International Organization for Standardization (2017) Information technology – biometric presentation attack detection – part 3: testing and reporting. ISO/IEC FDIS 30107-3:2017, JTC 1/SC 37, Geneva, Switzerland. Cited by: §I.
  • [20] J. Galbally and S. Marcel (2014) Face anti-spoofing based on general image quality assessment. In Pattern Recognition (ICPR), 2014 22nd International Conference on, pp. 1173–1178. Cited by: §V-D, TABLE V.
  • [21] J. Galbally and R. Satta (2016) Three-dimensional and two-and-a-half-dimensional face recognition spoofing using three-dimensional printed models. IET Biometrics 5 (2), pp. 83–91. Cited by: §I, §I, §II-A, TABLE II.
  • [22] A. George, Z. Mostaani, D. Geissenbuhler, O. Nikisins, A. Anjos, and S. Marcel (2019) Biometric face presentation attack detection with multi-channel convolutional neural network. IEEE Transactions on Information Forensics and Security. Cited by: §I, §II-A, TABLE II, §IV-A.
  • [23] D. Güera and E. J. Delp (2018) Deepfake video detection using recurrent neural networks. In 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6. Cited by: §VI.
  • [24] B. Hamdan and K. Mokhtar (2017) The detection of spoofing by 3d mask in a 2d identity recognition system. Egyptian Informatics Journal. Cited by: §II-B.
  • [25] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §IV-A.
  • [26] J. Hernandez-Ortega, J. Fierrez, A. Morales, and P. Tome (2018) Time analysis of pulse-based face anti-spoofing in visible and nir. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 544–552. Cited by: §II-B.
  • [27] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer (2016) SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360. Cited by: §IV-A.
  • [28] S. Jia, C. Hu, G. Guo, and Z. Xu (2019) A database for face presentation attack using wax figure faces. arXiv preprint arXiv:1906.11900. Cited by: §IV-A, §IV-A.
  • [29] A. Jourabloo, Y. Liu, and X. Liu (2018) Face de-spoofing: anti-spoofing via noise modeling. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 290–306. Cited by: §V-D, TABLE V.
  • [30] Y. Kim, J. Na, S. Yoon, and J. Yi (2009) Masked fake face detection using radiance measurements. JOSA A 26 (4), pp. 760–766. Cited by: §II-B.
  • [31] D. E. King (2009) Dlib-ml: a machine learning toolkit. Journal of Machine Learning Research 10 (Jul), pp. 1755–1758. Cited by: §III-C.
  • [32] J. Kittler, M. Hatef, R. P. Duin, and J. Matas (1998) On combining classifiers. IEEE transactions on pattern analysis and machine intelligence 20 (3), pp. 226–239. Cited by: §IV-B.
  • [33] N. Kose and J. Dugelay (2013) Countermeasure for the protection of face recognition systems against mask attacks. In Automatic Face and Gesture Recognition (FG), 2013 10th IEEE International Conference and Workshops on, pp. 1–6. Cited by: §II-B.
  • [34] N. Kose and J. Dugelay (2013) On the vulnerability of face recognition systems to spoofing mask attacks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 2357–2361. Cited by: §II-B.
  • [35] N. Kose and J. Dugelay (2013) Reflectance analysis based countermeasure technique to detect face mask attacks. In Digital Signal Processing (DSP), 2013 18th International Conference on, pp. 1–6. Cited by: §V-D, TABLE V.
  • [36] N. Kose and J. Dugelay (2013) Shape and texture based countermeasure to protect face recognition systems against mask attacks. In Computer Vision and Pattern Recognition Workshops, 2013 IEEE Conference On, pp. 111–116. Cited by: §II-B.
  • [37] A. Levin, D. Lischinski, and Y. Weiss (2007) A closed-form solution to natural image matting. IEEE transactions on pattern analysis and machine intelligence 30 (2), pp. 228–242. Cited by: §III-B.
  • [38] X. Li, J. Komulainen, G. Zhao, P. C. Yuen, and M. Pietikäinen (2017) Generalized face anti-spoofing by detecting pulse from face videos. In International Conference on Pattern Recognition, pp. 4244–4249. Cited by: §II-B.
  • [39] Y. Li and S. Lyu (2018) Exposing deepfake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656 2. Cited by: §IV-A.
  • [40] A. Liu, J. Wan, S. Escalera, H. Jair Escalante, Z. Tan, Q. Yuan, K. Wang, C. Lin, G. Guo, I. Guyon, et al. (2019) Multi-modal face anti-spoofing attack detection challenge at cvpr2019. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0. Cited by: §IV-A.
  • [41] S. Liu, X. Lan, and P. C. Yuen (2018) Remote photoplethysmography correspondence feature for 3d mask face presentation attack detection. In European Conference on Computer Vision (ECCV), pp. 558–573. Cited by: §II-B.
  • [42] S. Liu, B. Yang, P. C. Yuen, and G. Zhao (2016) A 3d mask face anti-spoofing database with real world variations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 100–106. Cited by: §I, §I, §II-A, §II-B, TABLE II.
  • [43] S. Liu, P. C. Yuen, S. Zhang, and G. Zhao (2016) 3d mask face anti-spoofing with remote photoplethysmography. In European Conference on Computer Vision, pp. 85–100. Cited by: §II-B.
  • [44] O. Lucena, A. Junior, V. Moia, R. Souza, E. Valle, and R. Lotufo (2017) Transfer learning using convolutional neural networks for face anti-spoofing. In International Conference Image Analysis and Recognition, pp. 27–34. Cited by: §II-B, §IV-A, §V-D, TABLE V.
  • [45] I. Manjani, S. Tariyal, M. Vatsa, R. Singh, and A. Majumdar (2017) Detecting silicone mask-based presentation attack via deep dictionary learning. IEEE Transactions on Information Forensics and Security 12 (7), pp. 1713–1723. Cited by: §I, §I, §II-A, §II-A, TABLE II.
  • [46] M. Maras and A. Alexandrou (2019) Determining authenticity of video evidence in the age of artificial intelligence and in the wake of deepfake videos. The International Journal of Evidence & Proof 23 (3), pp. 255–262. Cited by: §VI.
  • [47] D. Menotti, G. Chiachia, A. Pinto, W. R. Schwartz, H. Pedrini, A. X. Falcao, and A. Rocha (2015) Deep representations for iris, face, and fingerprint spoofing detection. IEEE Transactions on Information Forensics and Security 10 (4), pp. 864–879. Cited by: §II-B.
  • [48] S. Naveen, R. S. Fathima, and R. Moni (2016) Face recognition and authentication using lbp and bsif mask detection and elimination. In Communication Systems and Networks, International Conference on, pp. 99–102. Cited by: §II-B.
  • [49] Neurotechnology VeriLook SDK. Note: http://www.neurotechnology.com/verilook.html Cited by: §V-B.
  • [50] F. Peng, L. Qin, and M. Long (2018) CCoLBP: chromatic co-occurrence of local binary pattern for face presentation attack detection. In 2018 27th International Conference on Computer Communication and Networks (ICCCN), pp. 1–9. Cited by: §V-D, §V-D, TABLE V.
  • [51] R. Raghavendra and C. Busch (2014) Novel presentation attack detection algorithm for face recognition system: application to 3d face mask attack. In Image Processing (ICIP), 2014 IEEE International Conference on, pp. 323–327. Cited by: §II-B.
  • [52] R. Shao, X. Lan, and P. C. Yuen (2017) Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3d mask face anti-spoofing. In Biometrics (IJCB), 2017 IEEE International Joint Conference on, pp. 748–755. Cited by: §II-B.
  • [53] H. Steiner, A. Kolb, and N. Jung (2016) Reliable face anti-spoofing using multispectral swir imaging. In Biometrics (ICB), 2016 International Conference on, pp. 1–8. Cited by: §II-A, §II-B, TABLE II.
  • [54] Y. Tang and L. Chen (2017) 3D facial geometric attributes based anti-spoofing approach against mask attacks. In Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, pp. 589–595. Cited by: §II-B.
  • [55] X. Tu and Y. Fang (2017) Ultra-deep neural network for face anti-spoofing. In International Conference on Neural Information Processing, pp. 686–695. Cited by: §IV-A, §IV-A.
  • [56] Y. Wang, S. Chen, W. Li, D. Huang, and Y. Wang (2018) Face anti-spoofing to 3d masks by combining texture and geometry features. In Chinese Conference on Biometric Recognition, pp. 399–408. Cited by: §II-B.
  • [57] Y. Wang, X. Hao, Y. Hou, and C. Guo (2013) A new multispectral method for face liveness detection. In Pattern Recognition (ACPR), 2013 2nd IAPR Asian Conference on, pp. 922–926. Cited by: §II-B.
  • [58] Y. Xu, T. Price, J. Frahm, and F. Monrose (2016) Virtual u: defeating face liveness detection by building virtual models from your public photos. In 25th USENIX Security Symposium (USENIX Security 16), pp. 497–512. Cited by: §I.
  • [59] Z. Zhang, J. Yan, S. Liu, Z. Lei, D. Yi, and S. Z. Li (2012) A face antispoofing database with diverse attacks. In Biometrics (ICB), 2012 5th IAPR international conference on, pp. 26–31. Cited by: §I.
  • [60] Z. Zhang, D. Yi, Z. Lei, and S. Z. Li (2011) Face liveness detection by learning multispectral reflectance distributions. In Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pp. 436–441. Cited by: §II-B.