FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems

04/08/2021, by Liang Tong et al., University of Connecticut and Washington University in St. Louis

We present FACESEC, a framework for fine-grained robustness evaluation of face recognition systems. FACESEC evaluation is performed along four dimensions of adversarial modeling: the nature of perturbation (e.g., pixel-level or face accessories), the attacker's system knowledge (about training data and learning architecture), goals (dodging or impersonation), and capability (tailored to individual inputs or across sets of these). We use FACESEC to study five face recognition systems in both closed-set and open-set settings, and to evaluate the state-of-the-art approach for defending against physically realizable attacks on these. We find that accurate knowledge of neural architecture is significantly more important than knowledge of the training data in black-box attacks. Moreover, we observe that open-set face recognition systems are more vulnerable than closed-set systems under different types of attacks. The efficacy of attacks for other threat model variations, however, appears highly dependent on both the nature of perturbation and the neural network architecture. For example, attacks that involve adversarial face masks are usually more potent, even against adversarially trained models, and the ArcFace architecture tends to be more robust than the others.


1 Introduction

Face recognition has received much attention [jain2011handbook, parkhi2015deep, schroff2015facenet, wen2016discriminative, liu2017sphereface, wang2018cosface] in recent years. Empowered by deep convolutional neural networks (CNNs), it has become widely used in various areas, including security-sensitive applications such as airport check-in, online financial transactions, and mobile device login. The success of such deep face recognition is particularly striking, with 99% prediction accuracy on benchmark datasets [parkhi2015deep, liu2016large, liu2017sphereface, deng2019arcface].

Despite its widespread success in computer vision applications, recent studies have found that deep face recognition models are vulnerable to adversarial examples in both the digital space [madry2018towards, dong2019efficient, yang2020delving] and the physical space [sharif2016accessorize]. The former directly modifies an input face image by adding imperceptible perturbations to mislead face recognition (henceforth, digital attacks). The latter adds adversarial perturbations that can be realized on physical objects (e.g., wearing an adversarial eyeglass frame [sharif2016accessorize]), which are subsequently captured by a camera and fed into a face recognition model to fool its predictions (henceforth, physically realizable attacks). These vulnerabilities expose the domains above, especially critical domains such as security and finance, to the risk of attack. For example, in face recognition supported financial/banking services, an illegal user may bypass biometric verification and steal money from victims' accounts. Therefore, there is a vital need for methods that can comprehensively and systematically evaluate the robustness of face recognition systems in adversarial settings, which in turn can shed light on the design of robust models for downstream tasks.

The main challenges of comprehensive evaluation of the robustness of face recognition lie in dealing with the diversity of face recognition systems and adversarial environments. First, different face recognition systems consist of various key components (e.g., training data and neural architecture); such diversity results in different performance and robustness. To enable comprehensive and systematic evaluations, it is crucial to assess the robustness of every individual or a combination of face recognition components in adversarial settings. Second, adversarial example attacks can vary by the nature of perturbations (e.g., pixel-level or physical space), an attacker’s goal, knowledge, and capability. For a given face recognition system, its robustness against a specific type of attack may not generalize to other kinds [wu2019defending].

Despite recent advances in adversarial attacks [sharif2016accessorize, dong2019efficient, yang2020delving] that demonstrate the vulnerability of face recognition systems, most existing methods fail to address these challenges for two reasons. First, current efforts resort to either white-box or black-box attacks, which yield only a lower or upper bound on robustness. These bounds indicate the vulnerability of face recognition systems in adversarial settings, but provide little understanding of how each component of face recognition contributes to that vulnerability. Second, most existing approaches focus on a specific type of attack (e.g., digital attacks that add imperceptible noise [dong2019efficient, yang2020delving]) and fail to explore the different levels of robustness in response to other attacks (e.g., physically realizable attacks).

To bridge this gap, we propose FaceSec, a fine-grained robustness evaluation framework for face recognition systems. FaceSec incorporates four dimensions in evaluation: the nature of adversarial perturbations (pixel-level or face accessories), the attacker’s accurate knowledge about the target face recognition system (training data and neural architecture), goals (dodging or impersonation), and capability (individual or universal attacks). Specifically, we implement both digital and physically realizable attacks in FaceSec. We leverage the PGD attack [madry2018towards], the state-of-the-art digital attack paradigm, and the eyeglass frame attack [sharif2016accessorize] as the representative of physically realizable attacks. Additionally, we propose two novel physically realizable attacks: one involves pixel-level adversarial stickers on human faces, and the other adds color grids on face masks. Moreover, to facilitate universal attacks that produce image-agnostic perturbations, we propose a systematic approach that works on top of the attack paradigms described above.

In summary, this paper makes the following contributions:

  1. We propose FaceSec, the first robustness evaluation framework that enables researchers to (i) identify the vulnerability of each face recognition component to adversarial examples, and (ii) assess different levels of robustness under various adversarial circumstances.

  2. We propose two novel physically realizable attacks: the pixel-level sticker attack and the grid-level face mask attack. These allow us to explore adversarial robustness against different types of physically realizable perturbations. Particularly, the latter responds to the pressing needs for security analysis of face recognition systems, as face masks have become common face accessories during the COVID-19 pandemic.

  3. We propose a general approach to produce universal adversarial examples for a batch of face images. Compared to previous work, our method achieves a significant speedup, making large-scale evaluation more efficient.

  4. We perform a comprehensive evaluation on five publicly available face recognition systems in various settings to demonstrate the efficacy of FaceSec.

2 Background and Related Work

2.1 Face Recognition Systems

Figure 1: Closed-set and open-set face recognition systems.

Generally, deep face recognition systems aim to solve the following two tasks: 1) Face identification, which returns the predicted identity of a test face image; 2) Face verification, which indicates whether a test face image (also called probe face image) and the face image stored in the gallery belong to the same identity. Based on whether all testing identities are predefined in the training set, face recognition systems can be further categorized into closed-set systems and open-set systems [liu2017sphereface], as illustrated in Fig. 1.

In closed-set face recognition tasks, all the testing samples' identities are enrolled in the training set. Specifically, a face identification task is equivalent to a multi-class classification problem that uses the standard softmax loss function in the training phase [taigman2014deepface, sun2014deepa, sun2014deepb]. A face verification task is a natural extension of face identification: the classification is performed twice (once for the test image and once for the gallery image), and the predicted identities are compared to see if they are identical.

In contrast, for open-set tasks there is usually no overlap between the identities in the training and testing sets. In this setting, a face verification task is essentially a metric learning problem, which aims to minimize intra-class distance and maximize inter-class distance in a chosen metric space via two steps [schroff2015facenet, parkhi2015deep, wen2016discriminative, liu2016large, liu2017sphereface, deng2019arcface]. First, we train a feature extractor that maps a face image into a discriminative feature space using a carefully designed loss function; then, we measure the distance between the feature vectors of the test and gallery face images and compare it against a verification threshold. As an extension of face verification, the face identification task requires additional steps: compare the distances between the feature vectors of the test image and each gallery image, and choose the gallery identity corresponding to the shortest distance.
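To make these two tasks concrete, the following minimal sketch (our illustration, not the paper's code; the threshold value and function names are assumptions) verifies a probe image by thresholding the cosine distance between embeddings, and identifies a probe by choosing the gallery identity with the shortest distance:

import numpy as np

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity between two feature vectors.
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_feat, gallery_feat, threshold=0.6):
    # Open-set verification: same identity iff the distance is below the threshold.
    return cosine_distance(probe_feat, gallery_feat) < threshold

def identify(probe_feat, gallery_feats, gallery_ids):
    # Open-set identification: return the identity of the closest gallery embedding.
    distances = [cosine_distance(probe_feat, g) for g in gallery_feats]
    return gallery_ids[int(np.argmin(distances))]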

This paper focuses on face identification for closed-set systems, as face verification is just an extension of identification in this setting. Likewise, we focus on face verification for open-set systems.

2.2 Digital and Physical Adversarial Attacks

Recent studies have shown that deep neural networks are vulnerable to adversarial attacks. These attacks produce imperceptible perturbations on images in the digital space to mislead classification [szegedy14intriguing, goodfellow15explaining, carlini2017towards] (henceforth, digital attacks). While a number of attacks on face recognition fall into this category (e.g., adding small bounded noise over the entire input [dong2019efficient] or perceptible but semantically meaningful perturbations on a restricted area of the input [qiu2019semanticadv]), of particular interest in face recognition are attacks in the physical world (henceforth, physical attacks).

Generally, physical attacks have three characteristics [wu2019defending]. First, the attackers directly modify the actual entity rather than digital features. Second, the attacks can mislead state-of-the-art face recognition systems. Third, the attacks have low suspiciousness (i.e., by adding objects similar to common “noise” on a small part of human faces). For example, an attacker can fool a face recognition system by wearing an adversarial eyeglass frame [sharif2016accessorize], a standard face accessory in the real world.

In this paper, we focus on both digital attacks and the digital representation of physical attacks (henceforth, physically realizable attacks). Specifically, physically realizable attacks are digital attacks that produce adversarial perturbations with low suspiciousness which can be realized in the physical world using techniques such as 3-D printing (Fig. 2 illustrates one example of such attacks on face recognition systems). Compared to physical attacks, physically realizable attacks allow the robustness of face recognition systems to be evaluated more efficiently: on the one hand, realizable attacks let us iteratively modify digital images directly, so the evaluation can be significantly sped up compared to modifying real-world objects and then photographing them; on the other hand, robustness to physically realizable attacks provides a lower bound on robustness to physical attacks, as the former have fewer constraints and a larger solution space.

Figure 2: Sticker attack: an example of physically realizable attacks on face recognition systems. Left: original input image. Middle: adversarial sticker on the face. Right: predicted identity. In practice, the adversarial stickers can be printed and put on human faces.

Formally, both digital and physically realizable attacks can be performed by solving the following general optimization problem (e.g., for the closed-set identification task):

max_{δ ∈ Δ} ℓ(f(x + M ⊙ δ), y),    (1)

where f is the target face recognition model, ℓ is the adversary's utility function (e.g., the loss function used to train f), x is the original input face image, y is the associated identity, δ is the adversarial perturbation, and Δ is the feasible space of the perturbation. Here, M denotes the mask matrix that constrains the area of perturbation; it has the same dimension as x and contains 1s where perturbation is allowed and 0s where there is no perturbation.
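As a concrete illustration of this formulation (a sketch under our own assumptions, not the authors' implementation; the model, bound, and step size are placeholders), the PyTorch snippet below runs projected gradient ascent on a perturbation restricted to a binary mask M, which covers the whole image for digital attacks or only a face accessory for physically realizable ones:

import torch

def masked_attack(model, x, y, mask, eps=8/255, alpha=2/255, steps=40):
    # Maximize the training loss of `model` on x + mask * delta with ||delta||_inf <= eps.
    # `mask` contains 1s where perturbation is allowed and 0s elsewhere.
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(torch.clamp(x + mask * delta, 0, 1)), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascent step on the masked perturbation
            delta.clamp_(-eps, eps)              # project back into the feasible set
        delta.grad.zero_()
    return (x + mask * delta).detach()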

2.3 Adversarial Defense for Face Recognition

While numerous defenses have been proposed to make face recognition robust to adversarial attacks, many of them focus on digital attacks and have been shown to be broken under adaptive attacks [carlini2017towards, tramer2020adaptive]. Here, we describe one representative defense approach, adversarial training [madry2018towards], which is scalable, has not been defeated by adaptive attacks, and has been leveraged to defend against physically realizable attacks on face recognition systems.

The main idea of adversarial training is to minimize the prediction loss on training data that an attacker tries to maximize. In practice, this is done by iterating the following two steps: 1) use an attack method to produce adversarial examples of the training data; 2) use any optimizer to minimize the loss of predictions on these adversarial examples. Wu et al. [wu2019defending] propose DOA—adversarial training with rectangular occlusion attacks—to defend against physically realizable attacks on closed-set face recognition systems. Specifically, the rectangular occlusion attack in DOA first heuristically locates a rectangular area among a collection of possible regions in an input face image, then fixes the position and adds an adversarial occlusion inside the rectangle. It has been shown that DOA can improve robustness against the eyeglass frame attack [sharif2016accessorize] for the closed-set VGG-based face recognition system [parkhi2015deep] by 80%. However, as we will show in Section 4, DOA fails to defend against other types of attacks, such as the face mask attack proposed in Section 3.1.
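The two-step adversarial training procedure described above can be sketched as follows (a minimal illustration, not the DOA implementation; `attack_fn` stands in for whichever attack, e.g., a rectangular occlusion attack, is used in the inner step):

import torch

def adversarial_training_epoch(model, loader, optimizer, attack_fn):
    # One epoch of adversarial training: craft adversarial examples for each
    # mini-batch (inner maximization), then minimize the loss on them (outer minimization).
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        x_adv = attack_fn(model, x, y)      # step 1: produce adversarial examples
        optimizer.zero_grad()
        loss = loss_fn(model(x_adv), y)     # step 2: minimize loss on adversarial examples
        loss.backward()
        optimizer.step()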

3 Methodology

In this section, we introduce FaceSec for fine-grained robustness evaluation of face recognition systems. Our goal is twofold: 1) identify the vulnerability/robustness of each essential component that comprises a face recognition system, and 2) assess robustness in a variety of adversarial settings. Fig. 3 illustrates an overview of FaceSec. Let f be a face recognition system with a neural architecture A that is trained on a training set X by an algorithm (e.g., stochastic gradient descent). FaceSec evaluates the robustness of f via a quadruplet:

⟨P, K, G, C⟩,    (2)

where the quadruplet represents an attacker who tries to produce adversarial examples to fool f. P is the perturbation type, such as perturbations produced by pixel-level digital attacks or physically realizable attacks. K denotes the attacker's knowledge of the target system f, i.e., the information about which sub-components of f (the training set X and/or the neural architecture A) are leaked to the attacker. G is the goal of the attacker, such as the circumvention of detection or misrecognition as a target identity. C represents the attacker's capability. For example, an attacker can either perturb each input face image individually, or produce universal perturbations for a batch of images. Next, we describe each element of FaceSec in detail.

Figure 3: An overview of FaceSec.

3.1 Perturbation Type (P)

In FaceSec, we consider three categories of attacks with different perturbation types: digital attack, pixel-level physically realizable attack, and grid-level physically realizable attack, as shown in Fig. 4.

Figure 4: Perturbation types in FaceSec.

Digital Attack. A digital attack produces small perturbations over the entire input face image. We use the norm-bounded PGD attack [madry2018towards] as the representative of this category. (We also tried other digital attacks, e.g., CW [carlini2017towards] and JSMA [papernot2016limitations], but these were either less effective than PGD or could not be extended to universal attacks; see Section 3.4.)

Pixel-level Physically Realizable Attack. This category of attack features pixel-level perturbations that can be realized in the physical world (e.g., by printing them on glossy photo papers). In this case, the attacker adds large pixel-level perturbations on a small area of the input image (e.g., face accessories). In FaceSec, we use two attacks of this category: eyeglass frame attack [sharif2016accessorize] and sticker attack. The former allows large perturbations within an eyeglass frame, and it can successfully mislead VGG-based face recognition systems [parkhi2015deep]. We propose the latter to produce pixel-level perturbations that are added on less important face areas than the eyeglass frame, i.e., the two cheeks and forehead of human faces, as illustrated in Fig. 2 and 4. Typically, the stickers are rectangular occlusions, which cover a total of about area of an input face image.
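The sticker regions can be encoded as a binary mask and plugged into the general attack formulation of Eq. (1) with a large perturbation bound. The sketch below builds such a mask; the rectangle coordinates are purely illustrative placeholders, not the regions used in the paper:

import torch

def sticker_mask(height=224, width=224,
                 boxes=((30, 60, 60, 170),      # forehead (illustrative coordinates)
                        (120, 160, 35, 85),     # left cheek (illustrative)
                        (120, 160, 140, 190))): # right cheek (illustrative)
    # Binary mask with 1s inside each rectangular sticker region (top, bottom, left, right)
    # and 0s elsewhere; multiply it with the perturbation to restrict the attack area.
    mask = torch.zeros(1, 3, height, width)
    for top, bottom, left, right in boxes:
        mask[..., top:bottom, left:right] = 1.0
    return mask

Such a mask can be passed to the masked-attack sketch in Section 2.2, with a large perturbation bound, to emulate the sticker attack.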

Figure 5: Transformations for the grid-level face mask attack.

Grid-level Physically Realizable Attack. In practice, pixel-level perturbations are not printable on face accessories made of coarse materials, such as face masks made of cloth or non-woven fabric. To address this issue, we propose the grid-level physically realizable face mask attack, which adds a color grid on face masks, as shown in Fig. 4. Formally, the face mask attack on closed-set systems is formulated as the following variation of Eq. (1) (formulations for other settings are presented in Appendix A):

max_δ ℓ(f(x + M ⊙ 𝒯(δ)), y),    (3)

where δ is a color matrix; each element of δ represents an RGB color. M is the mask matrix that constrains the area of perturbations. 𝒯 is a sequence of transformations that converts δ into a face mask with a color grid in digital space by the following steps, as shown in Fig. 5. First, we use the interpolation transformation to scale up the color matrix δ into a color grid in a background image, which has the same dimensions as x and all pixel values set to 0. Then, we split the color grid into left and right parts, each of which has four corner points. Afterward, we apply a perspective transformation to each part of the grid for 2-D alignment, based on the positions of its source and destination corner points. Finally, we add the aligned color grid onto the input face image x. Details of the perspective transformation and the algorithm for solving the optimization problem in Eq. (3) can be found in Appendix A.

3.2 Attacker’s System Knowledge (K)

The key components of a face recognition system f are its training set X and neural architecture A. It is natural to ask how these two components contribute to robustness against adversarial attacks. From the attacker's perspective, we propose several evaluation scenarios in FaceSec, which represent adversarial attacks performed under different levels of knowledge about X and A.

Zero Knowledge. Both X and A are invisible to the attacker, i.e., K = ∅. This is the weakest adversarial setting, as no critical information about f is leaked. Thus, it provides an upper bound for robustness evaluation on f. In this scenario, the attacks are referred to as black-box attacks, where the attacker needs no internal details of f to compromise it.

There are two general approaches to black-box attacks: query-based attacks [chen2017zoo, nitin2018practical] and transfer-based attacks [papernot2016transferability]. We employ the latter because the former requires a large number of online probes to repeatedly estimate the loss gradients of f on adversarial examples, which is less practical than fully offline attacks when access to prediction decisions is unavailable. The transfer-based approach builds on the transferability of adversarial examples [papernot2016transferability, dong2018boosting]. Specifically, the attacker first collects a sufficient number of training samples to build a surrogate training set X′. Then, a surrogate system f′ is constructed by training a surrogate neural architecture A′ on X′ for the same task as f. Afterward, the attacker obtains a set of adversarial examples by performing white-box attacks on the surrogate system f′; these constitute the transferable adversarial examples used to evaluate the robustness of f.
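The transfer-based evaluation loop can be sketched as follows (an illustration under our assumptions; `attack_fn` follows the signature of the masked-attack sketch in Section 2.2, and the surrogate and target models are placeholders):

import torch

def transfer_attack_success_rate(target_model, surrogate_model, loader, attack_fn, mask):
    # Craft adversarial examples in a white-box fashion against the surrogate,
    # then measure how often they also fool the unseen target model (dodging).
    fooled, total = 0, 0
    target_model.eval()
    for x, y in loader:
        x_adv = attack_fn(surrogate_model, x, y, mask)   # white-box attack on the surrogate
        with torch.no_grad():
            preds = target_model(x_adv).argmax(dim=1)    # evaluate on the target
        fooled += (preds != y).sum().item()
        total += y.numel()
    return fooled / total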

Training Set. This scenario enables the assessment of the robustness of the training set of f in adversarial settings. Here, only the training set X is visible to the attacker, i.e., K = {X}. Without knowing A, the attacker constructs a surrogate system f′ by training a surrogate neural architecture A′ on X. Then, the attacker performs the transfer-based attack described above on f′ and evaluates f using the transferred adversarial examples.

Neural Architecture. Similarly, the attacker may only know the neural architecture A of f but have no access to the training set X, i.e., K = {A}. This enables us to evaluate the robustness of the neural architecture of f. Without knowing X, the attacker builds a surrogate system f′ by training A on a surrogate training set X′ and conducts the transfer-based attack to evaluate f.

Full Knowledge. In the worst case, the attacker has accurate knowledge of both the training set and the neural architecture, i.e., K = {X, A}. Thus, it provides a lower bound for robustness evaluation on f. In this scenario, the attacker can fully reproduce f offline and then perform white-box attacks on it.

The evaluation method described above assumes that adversarial examples produced against a surrogate system f′ can also mislead the target system f. However, there is no theoretical guarantee of this, and recent studies show that some transferred adversarial examples fool the target system only at a low success rate [liu2016delving].

To boost the transferability of adversarial examples produced on the surrogate system, we leverage two techniques: the momentum-based attack [dong2018boosting] and the ensemble-based attack [liu2016delving, dong2018boosting]. First, inspired by the momentum-based attack, we integrate a momentum term into the iterative process of the white-box attacks on the surrogate system to stabilize the update directions and avoid local optima; the resulting adversarial examples are more transferable. Second, when the neural architecture of the target system is unavailable, we construct the surrogate system using an ensemble of models with different neural architectures rather than a single model, i.e., f′ = {f′_1, …, f′_K}, an ensemble of K models. Specifically, we aggregate the output logits of the ensemble in a similar way to [dong2018boosting]. The rationale is that if an adversarial example can fool multiple models, it is more likely to mislead other models.
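A single update step of this momentum-plus-ensemble scheme might look as follows (a sketch in the spirit of MI-FGSM [dong2018boosting], with assumed logit-averaging weights and step size; not the exact aggregation used in FaceSec):

import torch
import torch.nn.functional as F

def ensemble_momentum_step(models, weights, x, y, delta, momentum, mask,
                           alpha=2/255, mu=1.0):
    # One iteration: fuse the ensemble's logits, accumulate a normalized gradient
    # into the momentum buffer, and step the masked perturbation along its sign.
    delta = delta.clone().requires_grad_(True)
    logits = sum(w * m(x + mask * delta) for w, m in zip(weights, models))
    loss = F.cross_entropy(logits, y)
    grad, = torch.autograd.grad(loss, delta)
    momentum = mu * momentum + grad / grad.abs().sum().clamp(min=1e-12)  # stabilized direction
    delta = (delta + alpha * momentum.sign()).detach()
    return delta, momentum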

Target System | Attacker's Goal | Formulation
Closed-set | Dodging | max_δ ℓ(f(x + M ⊙ δ), y)  s.t.  ‖δ‖_p ≤ ε
Closed-set | Impersonation | min_δ ℓ(f(x + M ⊙ δ), y_t)  s.t.  ‖δ‖_p ≤ ε
Open-set | Dodging | max_δ d(f(x + M ⊙ δ), f(x_g))  s.t.  ‖δ‖_p ≤ ε
Open-set | Impersonation | min_δ d(f(x + M ⊙ δ), f(x_g^t))  s.t.  ‖δ‖_p ≤ ε
Table 1: Optimization formulations by the attacker's goal.

3.3 Attacker’s Goal (G)

In addition to the attacker’s system knowledge about , adversarial attacks can differ in specific goals. In FaceSec, we are interested in the following two types of attacks with different goals:

Dodging/Non-targeted. In a dodging attack, an attacker aims to have his/her face misidentified as any other arbitrary identity; for example, the attacker may be a terrorist who wants to bypass a face recognition system used for biometric security checks. Since a dodging attack does not aim at a specific predicted identity, it is also called a non-targeted attack.

Impersonation/Targeted. In an impersonation/targeted attack, an attacker seeks to produce an adversarial example that is misrecognized as a target identity. For example, the attacker may try to camouflage his/her face to be identified as an authorized user of a laptop, which uses face recognition for authentication.

In FaceSec, we formulate the dodging and impersonation attacks as constrained optimization problems, corresponding to different face recognition systems and attacker's goals, as shown in Table 1. Here, ℓ denotes the softmax cross-entropy loss used in closed-set systems, d represents the distance metric for open-set systems (e.g., the cosine distance obtained by subtracting cosine similarity from one), x is the input face image and y the associated identity, δ is the adversarial perturbation, f represents a face recognition system built on either a single model or an ensemble of models with different neural architectures, M denotes the mask matrix that constrains the area of perturbation (as in Eq. (1)), and ε is the p-norm bound of δ. For closed-set systems, we use y_t to represent the target identity of impersonation attacks. For open-set systems, we use x_g to denote the gallery face image that belongs to the same identity as x, and x_g^t as the gallery image for the target identity of impersonation.

Note that the formulations listed in Table 1 cover both digital attacks and physically realizable attacks: for the former, we use a small ε and let M be an all-one matrix to ensure imperceptible perturbations over the entire image; for the latter, we use a large ε and let M constrain δ to a small area of x.
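The four objectives in Table 1 differ only in the loss used (cross-entropy vs. cosine distance) and in the sign of the objective. A compact sketch of the quantity the attacker maximizes (our illustration, with hypothetical argument names; gallery features are assumed to be precomputed) is:

import torch.nn.functional as F

def attack_objective(model, x_adv, closed_set, dodging,
                     y=None, y_target=None, gallery_feat=None, target_gallery_feat=None):
    # Value the attacker maximizes, for the four system/goal combinations in Table 1.
    if closed_set:
        logits = model(x_adv)
        if dodging:
            return F.cross_entropy(logits, y)            # push away from the true identity
        return -F.cross_entropy(logits, y_target)        # pull toward the target identity
    feat = model(x_adv)                                  # open-set model returns an embedding
    if dodging:
        return (1.0 - F.cosine_similarity(feat, gallery_feat)).mean()        # increase distance
    return -(1.0 - F.cosine_similarity(feat, target_gallery_feat)).mean()    # decrease distance

Maximizing this value with a small ε and an all-one mask yields a digital attack; a large ε with an accessory-shaped mask yields a physically realizable one.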

3.4 Attacker’s Capability (C)

In practice, even when attackers share the same system knowledge and goal, their capabilities can still differ due to time and/or budget constraints, such as the budget for printing adversarial eyeglass frames [sharif2016accessorize]. Thus, in FaceSec, we consider two types of attacks corresponding to different attacker capabilities: individual attacks and universal attacks.

Individual Attack. The attacker has a strong capability with enough time and budget to produce a specific perturbation for each input face image. In this case, the optimization formulations are the same as those shown in Table 1.

Universal Attack. The attacker has a time/budget constraint such that he/she can only generate a face-agnostic perturbation that fools a face recognition system on a batch of face images, rather than a tailored perturbation for each input.

One common way to compute a universal perturbation is to sequentially find the minimum perturbation of each data point in the batch and then aggregate these perturbations [moosavi2017universal]. However, this method requires orders of magnitude more running time: it processes only one image at each iteration, so a large number of iterations are needed to obtain a satisfactory universal perturbation. Moreover, it only focuses on digital attacks and cannot be generalized to physically realizable attacks, which seek large perturbations in a restricted area rather than minimum perturbations.

To address these issues, we formulate the universal attack as a max-min optimization (using the dodging attack on closed-set systems as an example):

max_δ min_{1 ≤ i ≤ N} ℓ(f(x_i + M ⊙ δ), y_i)  s.t.  ‖δ‖_p ≤ ε,    (4)

where {(x_i, y_i)}_{i=1}^N is a batch of N input images that share the universal perturbation δ. Compared to [moosavi2017universal], our approach has several advantages. First, we significantly improve efficiency by processing images batch-wise. Second, our formulation can explicitly control the universality of the perturbation by setting different values of N. Third, our method generalizes to both digital attacks and physically realizable attacks. Details of our algorithm for solving the optimization problem in Eq. (4) and the formulations for other settings can be found in Appendix B.
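A batched sketch of Eq. (4) for closed-set dodging (our illustration; the bound and iteration count are placeholder values, and a gradient-ascent step on the minimum per-image loss is used for the outer maximization):

import torch
import torch.nn.functional as F

def universal_dodging_perturbation(model, xs, ys, mask, eps=8/255, alpha=2/255, steps=200):
    # One shared perturbation for the whole batch, updated along the gradient of the
    # worst-case (minimum) per-image loss, i.e., the inner "min" of the max-min problem.
    delta = torch.zeros_like(xs[:1], requires_grad=True)   # broadcast over the batch
    for _ in range(steps):
        logits = model(torch.clamp(xs + mask * delta, 0, 1))
        per_image_loss = F.cross_entropy(logits, ys, reduction='none')
        per_image_loss.min().backward()          # ascend on the hardest image in the batch
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return delta.detach()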

4 Experiments

In this section, we evaluate a variety of face recognition systems using FaceSec on both closed-set and open-set tasks under different adversarial settings.

4.1 Experimental Setup

Target Model Training Set Neural Architecture Loss
VGGFace [parkhi2015deep] VGGFace [parkhi2015deep] VGGFace [parkhi2015deep] Triplet [parkhi2015deep]
FaceNet [facenetpytorch] CASIA-WebFace [yi2014learning] InceptionResNet [szegedy2016inception] Triplet [schroff2015facenet]
ArcFace18 [arcfacepytorch] MS-Celeb-1M [guo2016ms] IResNet18 [Liang2018Learning] ArcFace [deng2019arcface]
ArcFace50 [arcfacepytorch] MS-Celeb-1M [guo2016ms] IResNet50 [Liang2018Learning] ArcFace [deng2019arcface]
ArcFace101 [arcfacepytorch] MS-Celeb-1M [guo2016ms] IResNet101 [Liang2018Learning] ArcFace [deng2019arcface]
Table 2: Open-set face recognition systems in our experiments.

Datasets. For closed-set systems, we use a subset of the VGGFace2 dataset [cao2018vggface2]. Specifically, we select 100 classes, each of which has 181 face images. For open-set systems, we employ the VGGFace2, MS-Celeb-1M [guo2016ms], CASIA-WebFace [yi2014learning] datasets for training surrogate models, and the LFW dataset [LFWTech] for testing.

Neural Architectures. The face recognition systems with five different neural networks are evaluated in our experiments: VGGFace [parkhi2015deep], InceptionResNet [szegedy2016inception], IResNet18 [Liang2018Learning], IResNet50 [Liang2018Learning], and IResNet101 [Liang2018Learning].

Evaluation Metric. We use the attack success rate (i.e., 1 − accuracy) as the evaluation metric. A higher attack success rate indicates that a face recognition system is more fragile in adversarial settings, while a lower rate indicates higher robustness against adversarial attacks.

Implementation. For open-set face recognition, we directly applied five publicly available pre-trained face recognition models as the target models for attacks, as summarized in Table 2. At the prediction stage, we used photos randomly selected from frontal images in the LFW dataset [LFWTech], each of which is aligned using MTCNN [zhang2016joint] and corresponds to one identity. We used another set of photos of the same identities as the test gallery. We computed the cosine similarity between the feature vectors of the test and gallery photos: if the score is above a threshold chosen to meet a fixed False Acceptance Rate, the test photo is predicted to have the same identity as the gallery photo.
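The verification threshold can be calibrated from impostor pairs; a small sketch (our illustration, with an assumed target FAR value) is shown below:

import numpy as np

def threshold_at_far(impostor_similarities, far=1e-3):
    # Choose the cosine-similarity threshold so that roughly a fraction `far` of
    # impostor (different-identity) pairs would be falsely accepted.
    return float(np.quantile(np.asarray(impostor_similarities), 1.0 - far))

# Example: accept a probe/gallery pair iff its cosine similarity >= threshold_at_far(scores).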

For closed-set face recognition, we randomly split each class of the VGGFace2 subset into three parts for training, validation, and testing. To train closed-set models, we used standard transfer learning with the open-set models listed in Table 2. Specifically, we initialized each closed-set model with the corresponding open-set model and then added a final fully connected layer with one neuron per class. Unless otherwise specified, each model was trained for a fixed number of epochs with a fixed batch size, using the Adam optimizer [kingma2015AdamAM]; the learning rate was decayed at the 20th and 35th epochs.
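A sketch of this transfer-learning setup in PyTorch (illustrative only; the feature dimension, learning-rate values, and decay factor are assumptions, and `load_pretrained_backbone` is a hypothetical helper):

import torch
import torch.nn as nn

def make_closed_set_model(open_set_backbone: nn.Module, feat_dim: int, num_classes: int = 100):
    # Reuse a pre-trained open-set feature extractor and append a final fully
    # connected layer with one output per enrolled identity.
    return nn.Sequential(open_set_backbone, nn.Linear(feat_dim, num_classes))

# Hypothetical usage:
# model = make_closed_set_model(load_pretrained_backbone(), feat_dim=512)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)            # assumed initial rate
# scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 35], gamma=0.1)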

For each physically realizable attack in FaceSec, we used a large norm bound on the allowed perturbations and ran each attack for a fixed number of iterations; for the PGD attack [madry2018towards], we used a small norm bound and a fixed number of iterations. The dimension of the color grid for face mask attacks is fixed. The mask matrices that constrain the areas of perturbations for physically realizable attacks are visualized in Fig. 6.

Figure 6: Mask matrices for physically realizable attacks in FaceSec.

4.2 Robustness of Face Recognition Components

We begin by using FaceSec to assess the robustness of face recognition components in various adversarial settings. For a given target face recognition system f and a perturbation type P, we evaluate the training set X and neural architecture A of f under the four evaluation scenarios presented in Section 3.2. Specifically, when A is invisible to the attacker, we construct the surrogate system by ensembling models built on the other four neural architectures shown in Table 2. In the scenarios where the attacker has no access to X, we build the surrogate training set with another VGGFace2 subset that has the same classes as X in closed-set settings, and use the other four training sets listed in Table 2 for open-set tasks. We present the experimental results for dodging attacks on closed-set face recognition systems in Table 3, and the results for zero-knowledge dodging attacks on open-set VGGFace and FaceNet in Table 4. The other results can be found in Appendix C. Additionally, we evaluate the efficacy of using momentum and ensemble methods to improve the transferability of adversarial examples, as detailed in Appendix D.

Target System Attack Type Attacker’s System Knowledge
Z T A F
VGGFace PGD 0.40 0.51 0.93 0.94
Eyeglass Frame 0.23 0.28 0.70 0.99
Sticker 0.05 0.06 0.47 0.98
Face Mask 0.26 0.32 0.63 1.00
FaceNet PGD 0.83 0.83 1.00 1.00
Eyeglass Frame 0.13 0.16 0.90 1.00
Sticker 0.01 0.01 0.92 1.00
Face Mask 0.30 0.42 0.83 1.00
ArcFace18 PGD 0.87 0.92 0.97 1.00
Eyeglass Frame 0.06 0.06 0.44 1.00
Sticker 0.01 0.01 0.37 1.00
Face Mask 0.27 0.33 0.71 1.00
ArcFace50 PGD 0.87 0.90 0.81 0.99
Eyeglass Frame 0.09 0.12 0.44 0.99
Sticker 0.00 0.01 0.14 0.94
Face Mask 0.29 0.36 0.67 0.99
ArcFace101 PGD 0.81 0.78 0.86 0.96
Eyeglass Frame 0.03 0.03 0.26 0.98
Sticker 0.04 0.04 0.08 0.95
Face Mask 0.26 0.36 0.54 0.99
Table 3: Attack success rate of dodging attacks on closed-set face recognition systems by the attacker’s system knowledge. Z represents zero knowledge, T is training set, A is neural architecture, and F represents full knowledge.
Target Model Attack Type
PGD Sticker Eyeglass Frame Face Mask
VGGFace 0.26 0.56 0.79 0.67
FaceNet 0.55 0.13 0.54 0.62
Table 4: Attack success rate of dodging attacks on open-set face recognition systems with zero knowledge.

Table 3 shows that the neural architecture is significantly more fragile than the training set in most adversarial settings. For example, when only the neural architecture is exposed to the attacker, the sticker attack has a high success rate of 0.92 on FaceNet; in contrast, when the attacker only knows the training set, the success rate drops to 0.01. In addition, by comparing the rows of Table 3 that correspond to the same target system, we observe that digital attacks (PGD) are considerably more potent than their physically realizable counterparts on closed-set systems, while grid-level perturbations on face masks are noticeably more effective than pixel-level physically realizable perturbations (i.e., the eyeglass frame attack and the sticker attack). Moreover, comparing the zero-knowledge attacks in Tables 3 and 4, we find that open-set face recognition systems are more vulnerable than closed-set systems: nearly all perturbation types (even the black-box sticker attack, which often fails in the closed-set setting) transfer more successfully across open-set systems, i.e., open-set systems are more susceptible to black-box attacks, which should raise additional concerns about their security.

4.3 Robustness Under Universal Attacks

Target System Attack Type Attacker’s Capability
N=1 N=5 N=10 N=20
VGGFace PGD 0.94 0.86 0.31 0.15
Eyeglass Frame 0.99 0.91 0.52 0.23
Sticker 0.98 0.66 0.34 0.09
Face Mask 1.00 1.00 0.88 0.56
FaceNet PGD 1.00 1.00 0.80 0.21
Eyeglass Frame 1.00 1.00 1.00 0.62
Sticker 1.00 1.00 0.98 0.61
Face Mask 1.00 1.00 1.00 0.91
ArcFace18 PGD 1.00 1.00 0.64 0.08
Eyeglass Frame 1.00 0.96 0.44 0.08
Sticker 1.00 0.56 0.09 0.00
Face Mask 0.99 0.92 0.90 0.67
ArcFace50 PGD 1.00 0.80 0.37 0.05
Eyeglass Frame 0.99 0.81 0.38 0.07
Sticker 0.91 0.28 0.06 0.00
Face Mask 0.99 0.98 0.81 0.72
ArcFace101 PGD 0.96 0.91 0.24 0.03
Eyeglass Frame 0.98 0.71 0.19 0.02
Sticker 0.93 0.15 0.03 0.00
Face Mask 0.99 0.92 0.90 0.67
Table 5: Attack success rate of dodging attacks on closed-set face recognition systems by the universality of adversarial examples. Here, N represents the batch size of face images that share a universal perturbation.

Next, we use FaceSec to evaluate the robustness of face recognition systems under various degrees of adversarial universality by setting the parameter N in Eq. (4) to different values. For a given N, we split the testing set into mini-batches of size N and produce a specific perturbation for each batch. Note that when N = 1, a universal attack reduces to an individual attack. Table 5 shows the experimental results for universal dodging attacks on closed-set systems. The other results are presented in Appendix E.

Our first observation is that face recognition systems are significantly more vulnerable to universal face masks than to other types of universal perturbations. Under a large degree of universality (e.g., when N = 20), face mask attacks retain high success rates. Particularly noteworthy are the universal face mask attacks on FaceNet, which achieve a success rate as high as 0.91. In contrast, other universal attacks can have relatively low success rates (e.g., 0.08 for the eyeglass frame attack on ArcFace18 when N = 20). The second observation is that the robustness of a face recognition system against different types of universal perturbations is highly dependent on its neural architecture. For example, the ArcFace101 architecture is more robust than the others in most settings, while FaceNet tends to be the most fragile.

4.4 Is “Robust” Face Recognition Really Robust?

While numerous approaches have been proposed for making deep neural networks more robust to adversarial examples, only a few [wu2019defending] focus on defending against physically realizable attacks on face recognition systems. These defense approaches have achieved good performance for certain types of realizable attacks and neural architectures, but their effectiveness for other types of attacks and face recognition systems remains unknown. In this section, we apply FaceSec to evaluate the state-of-the-art defense paradigm. Specifically, we first use DOA [wu2019defending], a method originally proposed to defend the closed-set VGGFace system against eyeglass frame attacks [sharif2016accessorize], to retrain each closed-set system. We then evaluate the retrained systems using the three physically realizable attacks included in FaceSec. Fig. 7 shows the experimental results for dodging attacks.

Figure 7: Attack success rate of dodging physically realizable attacks on closed-set systems with DOA retraining.

Our first observation is that the state-of-the-art defense, DOA, fails to defend against grid-level perturbations on face masks for most neural architectures: face mask attacks still achieve high success rates on four of the five face recognition systems retrained with DOA. Moreover, we observe that adversarial robustness against one type of perturbation does not generalize to other types. For example, while VGGFace-DOA exhibits a relatively high level of robustness against pixel-level perturbations (i.e., stickers and eyeglass frames), it is very vulnerable to grid-level perturbations (i.e., face masks). In contrast, using DOA on FaceNet can successfully defend against face mask perturbations, with the attack success rate dropping significantly, but it is considerably less effective against eyeglass frames and stickers. In summary, these results show that the effectiveness of a defense is highly dependent on the nature of the perturbation and the neural architecture, which in turn indicates that it is critical to consider different types of attacks and neural architectures when evaluating a defense for face recognition systems.

5 Conclusion

We present FaceSec, a fine-grained robustness evaluation framework for face recognition systems. FaceSec incorporates four evaluation dimensions and works on both face identification and verification for open-set and closed-set systems. To the best of our knowledge, FaceSec is the first platform of its kind that supports evaluating the risks of different components of face recognition systems along multiple dimensions and under various adversarial settings. The comprehensive and systematic evaluations on five state-of-the-art face recognition systems demonstrate that FaceSec can greatly help in understanding the robustness of these systems against both digital and physically realizable attacks. We envision that FaceSec can serve as a useful framework to advance future research on adversarial learning for face recognition.

References

Appendix A Grid-level Face Mask Attack

A.1 Formulation

The optimization formulations of the proposed grid-level face mask attacks under different settings are presented in Table 6. Here, f is the target face recognition model and x is the original input face image. δ is a color matrix; each element of δ represents an RGB color. M denotes the mask matrix that constrains the area of perturbation; it contains 1s where perturbation is allowed and 0s where there is no perturbation. For closed-set systems, ℓ denotes the softmax cross-entropy loss function, y is the identity of x, and y_t is the target identity for impersonation attacks. For open-set settings, d is the cosine distance (obtained by subtracting cosine similarity from one), x_g is the gallery image of x, and x_g^t is the target gallery image for impersonation. 𝒯 represents a set of transformations that converts the color matrix δ into a face mask with a color grid in digital space. Specifically, 𝒯 consists of two transformations, an interpolation transformation and a perspective transformation, which are detailed below.

A.2 Interpolation Transformation

The interpolation transformation starts from the color matrix δ and uses the following two steps to scale it into a face image, as illustrated in Fig. 8. First, it resizes the color matrix δ to a rectangle whose size reflects that of a face mask in a digital face image, while preserving the layout of the color grid represented by δ. Then, we put the face mask into a background image with the same dimensions as the input face image, such that the pixels in the rectangular area take the values of the resized δ, and those outside the face mask area are set to 0.

Figure 8: Transformations for the grid-level face mask attack.

A.3 Perspective Transformation

Once the rectangle is embedded into the background image, we perform a 2-D alignment based on the perspective transformation, as follows. First, we divide the rectangle into a left half and a right half; each half is a quadrilateral with four corner points. Then, we apply a perspective transformation to project each half onto aligned coordinates, such that the new coordinates match the position of a face mask worn on a human face, as shown in Fig. 8. Let (x_i, y_i) and (x'_i, y'_i), i = 1, …, 4, denote the source and destination corner points of one half. The perspective transformation finds a 3×3 matrix P for that half such that, for each corner,

[x'_i, y'_i, 1]^T ∝ P [x_i, y_i, 1]^T.

Finally, we merge the two aligned halves to obtain the aligned grid-level face mask.
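For reference, the same 2-D alignment can be reproduced with OpenCV's homography utilities (a sketch for visualization only; the attack requires gradients to flow through the transform, so in practice a differentiable implementation of the same warp would be used, and the corner coordinates are placeholders derived from face landmarks):

import cv2
import numpy as np

def warp_half_grid(half_grid, src_corners, dst_corners, out_size):
    # Perspective-transform one half of the scaled color grid so that its four source
    # corners land on the destination corners of the face-mask region.
    src = np.float32(src_corners)              # 4 x 2 source corner points
    dst = np.float32(dst_corners)              # 4 x 2 destination corner points
    P = cv2.getPerspectiveTransform(src, dst)  # 3 x 3 perspective matrix
    return cv2.warpPerspective(half_grid, P, out_size)

# The left and right halves are warped separately and then merged to form the
# aligned grid-level face mask, which is finally added onto the input face image.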

Require: Target system f; input face image x and its identity y; number of iterations T; step size α; momentum parameter μ.
Ensure: The color matrix δ of the adversarial face mask.
1:  Initialize the color matrix δ_0 and the momentum g_0 = 0;
2:  Precompute the interpolation and perspective transformations 𝒯 that map δ onto the face mask region of x;
3:  for each t = 0, …, T − 1 do
4:     ∇ = ∇_δ ℓ(f(x + M ⊙ 𝒯(δ_t)), y);
5:     g_{t+1} = μ · g_t + ∇ / ‖∇‖_1;
6:     δ_{t+1} = δ_t + α · sign(g_{t+1});
7:     Clip δ_{t+1} such that its entries are valid RGB pixel values;
8:  end for
9:  return δ_T.
Algorithm 1: Computing the adversarial face mask.

A.4 Computing Adversarial Face Masks

The algorithm for computing the color grid of the adversarial face mask attack is outlined in Algorithm 1. Here, we use the dodging attack on closed-set systems as an example; the algorithms for other settings are similar. Note that δ_T is the resulting color matrix, and the corresponding adversarial example is x + M ⊙ 𝒯(δ_T).

Appendix B Universal Attack

B.1 Optimization Formulation

The formulations of universal perturbations are presented in Table 7. In FaceSec, we mainly focus on universal dodging attacks; effective universal impersonation attacks remain an open problem, and we leave them for future work.

B.2 Computing Universal Perturbations

Require: Target system f; batch of input face images {(x_i, y_i)}_{i=1}^N; number of iterations T; step size α; momentum parameter μ.
Ensure: The universal perturbation δ for the batch.
1:  Initialize δ_0 and the momentum g_0 = 0;
2:  for each t = 0, …, T − 1 do
3:     for each i = 1, …, N do
4:        ℓ_i = ℓ(f(x_i + M ⊙ δ_t), y_i);
5:     end for
6:     Compute the minimum loss over the batch: ℓ_min = min_{1 ≤ i ≤ N} ℓ_i;
7:     g_{t+1} = μ · g_t + ∇_δ ℓ_min / ‖∇_δ ℓ_min‖_1;
8:     δ_{t+1} = δ_t + α · sign(g_{t+1});
9:     Clip δ_{t+1} such that the pixel values of every x_i + M ⊙ δ_{t+1} are in the valid range;
10:  end for
11:  return δ_T.
Algorithm 2: Finding universal perturbations.
Figure 9: Attack success rate of dodging attacks with different open-set targets and surrogate models. Upper left: PGD attack. Upper right: Eyeglass frame attack. Lower left: Sticker attack. Lower right: Face mask attack.

The algorithm for finding universal perturbations is presented in Algorithm 2. Here, we use the dodging attack on closed-set systems as an example; the algorithms for other settings are similar. Note that in practice, Lines 3–6 of Algorithm 2 can be executed in parallel on GPUs. Therefore, compared to traditional methods that iterate over every data point to find a universal perturbation [moosavi2017universal], our approach achieves a significant speedup.
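For instance, Lines 3–6 reduce to a single batched forward pass (a sketch consistent with the universal-attack snippet in Section 3.4; names are ours):

import torch
import torch.nn.functional as F

def min_loss_over_batch(model, xs, ys, delta, mask):
    # Lines 3-6 of Algorithm 2 as one batched forward pass: compute all per-image
    # losses at once and keep the minimum (the inner "min" of the max-min problem).
    logits = model(torch.clamp(xs + mask * delta, 0, 1))
    losses = F.cross_entropy(logits, ys, reduction='none')
    return losses.min()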

Appendix C Robustness of Face Recognition Components

C.1 Open-set Systems Under Dodging Attacks

To study the robustness of open-set system components under dodging attacks, we employ six different face recognition systems and then evaluate the attack success rates of dodging attacks corresponding to different target and surrogate face recognition models. Specifically, besides the five systems (VGGFace, FaceNet, ArcFace18, ArcFace50, and ArcFace101) presented in Table 2 of the main paper, we build a face recognition model by training FaceNet [schroff2015facenet] using the VGGFace2 dataset [cao2018vggface2] (henceforth, FaceNet+). Here, FaceNet and FaceNet+ are trained using the same neural architecture but different training sets, while the ArcFace variations share the same training data but with different architectures. The results are presented in Fig. 9.

We have two observations, similar to those for dodging attacks on closed-set systems in the main paper. First, in most cases, an open-set system's neural architecture is more fragile than its training set. For example, under the PGD attack, adversarial examples crafted against FaceNet+ transfer to FaceNet (which is trained with the same architecture but different training data) with a considerably higher success rate than they transfer among the ArcFace systems (which are built with the same training set but different neural architectures). However, there are also cases where the neural architecture exhibits similar robustness to the training set: when the black-box attack is too weak (e.g., the sticker attack), both components are robust; when the attack is very strong (e.g., the face mask attack), both exhibit similar levels of vulnerability. Second, the grid-level face mask attack is considerably more effective than the PGD attack, and significantly more potent than the other physically realizable attacks. As with dodging attacks in closed-set settings, most black-box pixel-level physically realizable attacks have relatively low transferability to open-set face recognition systems.

Target System Attack Type Attacker’s System Knowledge
Z T A F
VGGFace PGD 0.11 0.21 0.35 1.00
Eyeglass Frame 0.01 0.01 0.03 0.95
Sticker 0.00 0.00 0.00 1.00
Face Mask 0.00 0.01 0.02 1.00
FaceNet PGD 0.23 0.32 1.00 1.00
Eyeglass Frame 0.00 0.00 0.28 0.99
Sticker 0.01 0.00 0.21 1.00
Face Mask 0.00 0.00 0.26 0.99
ArcFace18 PGD 0.18 0.25 0.69 1.00
Eyeglass Frame 0.01 0.01 0.05 0.89
Sticker 0.00 0.00 0.01 0.94
Face Mask 0.01 0.01 0.03 0.77
ArcFace50 PGD 0.13 0.15 0.45 0.87
Eyeglass Frame 0.02 0.02 0.03 0.67
Sticker 0.00 0.00 0.00 0.58
Face Mask 0.01 0.01 0.01 0.60
ArcFace101 PGD 0.14 0.16 0.42 0.96
Eyeglass Frame 0.00 0.00 0.03 0.58
Sticker 0.00 0.00 0.00 0.50
Face Mask 0.01 0.01 0.04 0.73
Table 8: Attack success rate of impersonation attacks on closed-set face recognition systems by the attacker’s system knowledge. Z represents zero knowledge, T is training set, A is neural architecture, and F represents full knowledge.
Figure 10: Attack success rate of impersonation attacks with different open-set targets and surrogate models. Upper left: PGD attack. Upper right: Eyeglass frame attack. Lower left: Sticker attack. Lower right: Face mask attack.

C.2 Closed-set Systems Under Impersonation Attacks

Here, we use impersonation attacks to evaluate the robustness of closed-set systems. In our experiments, all the closed-set models are 100-class classifiers, as introduced in Section 4.1 of the main paper. For each input face image x with identity y, we assign a target identity y_t different from y; an impersonation attack is successful only when the resulting adversarial example is misclassified as the target identity y_t. The results are shown in Table 8.

We have two key findings. First, compared to Table 3 of the main paper, closed-set systems are significantly more robust to impersonation attacks than to dodging attacks. In particular, when an attacker has no accurate knowledge of the target system, the success rate of physically realizable attacks can be as low as 0.00. Second, closed-set systems exhibit only moderate robustness against digital impersonation attacks. In such attacks, knowledge of the neural architecture is significantly more important than knowledge of the training set. For example, by knowing the neural architecture of ArcFace18, a PGD attack achieves a success rate of 0.69; in contrast, this rate drops to 0.25 when only the training set is visible to the attacker.

C.3 Open-set Systems Under Impersonation Attacks

To evaluate impersonation attacks on open-set systems, we randomly select 100 pairs from the LFW dataset [LFWTech] in a way similar to Section 4.1 of the main paper. Each pair contains two face images corresponding to different identities. We use one image as the input x and the other as the target gallery image x_g^t. An impersonation attack is successful only when the resulting adversarial example and x_g^t are verified as the same identity. The experimental results are presented in Fig. 10.

Similar to impersonation attacks on closed-set systems, we have the following observations, consistent with our previous summary. First, open-set systems are very robust to black-box physically realizable impersonation attacks; in most cases, these attacks achieve only very low success rates. In contrast, the PGD attack is significantly more potent, and under this attack the neural architecture is considerably more vulnerable than the training set (e.g., comparing the FaceNet variations to the ArcFace models).

Appendix D Efficacy of Momentum and Ensemble Models in Transfer-based Attacks

Next, we evaluate the efficacy of using momentum and ensemble-based surrogate models in transfer-based dodging attacks. For a given closed-set target face recognition system, we first train a surrogate model using the same training data. Specifically, we use both a single surrogate trained on a different architecture (for a given target model, we trained four single surrogates corresponding to the other four architectures; below, we only report the surrogate with the highest attack success rate), and an ensemble surrogate obtained by ensembling the other four systems as described in Section 3.2 of the main paper. We then produce white-box dodging attacks on the surrogate and evaluate the resulting examples' attack success rate on the target model. For each attack, we compare the momentum method (w/ mmt) with the conventional gradient-based approach (w/o mmt). The results are shown in Tables 9, 10, 11, and 12.

We have two key observations. First, both the ensemble and momentum contribute to stronger transferability, although in most cases the ensemble contributes more. For example, the ensemble method boosts the transferability of PGD attacks on FaceNet by about 0.3 (from 0.42 to 0.73 in Table 9), while the improvement from momentum alone is only about 0.1. Second, the efficacy of momentum and ensemble models is highly dependent on the nature of the perturbation. For digital attacks, the two methods combined improve transferability by up to 0.55 (e.g., ArcFace50 in Table 9). For grid-level face mask attacks, the improvement is as much as 0.16 (Table 12). However, both methods only marginally boost the transferability of pixel-level physically realizable attacks; for the sticker attack in particular, the improvement is nearly negligible. We leave effective transfer-based pixel-level physically realizable attacks as an open problem for future research.

Target System Surrogate System
Single Ensemble
w/o mmt w/ mmt w/o mmt w/ mmt
VGGFace 0.08 0.16 0.43 0.51
FaceNet 0.42 0.52 0.73 0.83
ArcFace18 0.42 0.51 0.87 0.92
ArcFace50 0.35 0.55 0.86 0.90
ArcFace101 0.32 0.39 0.71 0.78
Table 9: Attack success rate of dodging PGD attacks on closed-set face recognition systems. Here, only the target system’s training data is visible to the attacker, and we use different surrogate models.
Target System Surrogate System
Single Ensemble
w/o mmt w/ mmt w/o mmt w/ mmt
VGGFace 0.17 0.22 0.26 0.28
FaceNet 0.08 0.09 0.14 0.16
ArcFace18 0.02 0.03 0.05 0.06
ArcFace50 0.05 0.05 0.10 0.12
ArcFace101 0.02 0.03 0.02 0.03
Table 10: Attack success rate of dodging eyeglass frame attacks on closed-set face recognition systems. Here, only the target system’s training data is visible to the attacker, and we use different surrogate models.
Target System Surrogate System
Single Ensemble
w/o mmt w/ mmt w/o mmt w/ mmt
VGGFace 0.02 0.02 0.06 0.06
FaceNet 0.00 0.00 0.01 0.01
ArcFace18 0.00 0.00 0.01 0.01
ArcFace50 0.00 0.00 0.00 0.01
ArcFace101 0.00 0.01 0.04 0.04
Table 11: Attack success rate of dodging sticker attacks on closed-set face recognition systems. Here, only the target system’s training data is visible to the attacker, and we use different surrogate models.
Target System Surrogate System
Single Ensemble
w/o mmt w/ mmt w/o mmt w/ mmt
VGGFace 0.18 0.26 0.20 0.32
FaceNet 0.26 0.38 0.42 0.42
ArcFace18 0.21 0.33 0.21 0.33
ArcFace50 0.28 0.34 0.36 0.36
ArcFace101 0.22 0.34 0.30 0.36
Table 12: Attack success rate of dodging face mask attacks on closed-set face recognition systems. Here, only the target system’s training data is visible to the attacker, and we use different surrogate models.
Target System Attack Type Attacker’s Capability
N=1 N=5 N=10 N=20
VGGFace PGD 1.00 0.89 0.81 0.53
Eyeglass Frame 1.00 1.00 1.00 1.00
Sticker 1.00 1.00 1.00 1.00
Face Mask 1.00 1.00 1.00 1.00
FaceNet PGD 1.00 0.02 0.02 0.02
Eyeglass Frame 1.00 1.00 1.00 1.00
Sticker 1.00 1.00 0.99 0.90
Face Mask 1.00 1.00 0.99 0.98
ArcFace18 PGD 1.00 0.96 0.79 0.46
Eyeglass Frame 0.99 0.86 0.70 0.67
Sticker 1.00 1.00 1.00 0.99
Face Mask 0.98 0.98 0.93 0.92
ArcFace50 PGD 1.00 0.91 0.75 0.47
Eyeglass Frame 0.99 0.78 0.67 0.62
Sticker 1.00 1.00 1.00 0.00
Face Mask 0.99 0.99 0.99 0.94
ArcFace101 PGD 1.00 0.68 0.68 0.41
Eyeglass Frame 1.00 0.85 0.73 0.65
Sticker 0.99 0.98 0.97 0.97
Face Mask 1.00 1.00 1.00 1.00
Table 13: Attack success rate of dodging attacks on open-set face recognition systems by the universality of adversarial examples. Here, N represents the batch size of face images that share a universal perturbation.

Appendix E Universal Attacks

Finally, we evaluate open-set systems under universal dodging attacks. The results are shown in Table 13. Compared to Table 5 of the main paper, open-set systems are significantly more fragile to universal perturbations of all types than their closed-set counterparts. For example, when N = 20, the open-set ArcFace101 is susceptible to all four types of universal attacks, whereas in the closed-set setting it is only vulnerable to the universal face mask attack. Moreover, we again observe that the universal grid-level face mask attack is more effective than the other perturbation types. We also find that, in open-set settings, the sticker attack is as potent as the face mask attack.