Robust Physical-World Attacks on Face Recognition

Face recognition has been greatly facilitated by the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications. However, recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns about the security of real-world face recognition. In this work, we study sticker-based physical attacks on face recognition to better understand its adversarial robustness. To this end, we first analyze in depth the complicated physical-world conditions confronted when attacking face recognition, including the different variations of stickers, faces, and environmental conditions. Then, we propose a novel robust physical attack framework, dubbed PadvFace, to specifically model these challenging variations. Furthermore, considering the difference in attack complexity, we propose an efficient Curriculum Adversarial Attack (CAA) algorithm that gradually adapts adversarial stickers to environmental variations from easy to complex. Finally, we construct a standardized testing protocol to facilitate the fair evaluation of physical attacks on face recognition, and extensive experiments on both dodging and impersonation attacks demonstrate the superior performance of the proposed method.



1 Introduction

Face recognition has achieved substantial success with the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications, such as video surveillance and face authentication huang2020curricularface; deng2019arcface; wang2018cosface. However, recent works demonstrate that DNN-based face recognition models are very vulnerable to adversarial examples, where even small, malicious perturbations can cause incorrect predictions sharif2016accessorize; sharif2019general; dong2019efficient; komkov2019advhat; xiao2021improving. For instance, when wearing an adversarial eyeglass frame, an attacker can deceive face recognition into recognizing him or her as another identity sharif2016accessorize. Such an adversarial phenomenon has raised serious concerns about the security of face recognition, and it is imperative to understand its adversarial robustness.

Adversarial attack has been the most commonly adopted surrogate for the evaluation of adversarial robustness. Existing attack methods on face recognition can be categorized into two types: (1) digital attacks, where an attacker can perturb the input images of face recognition directly in the digital domain goodfellow2014explaining; dong2019efficient; qiu2020semanticadv, and (2) physical attacks, realized by imposing adversarial perturbations on real faces in the physical world, e.g., wearable adversarial stickers sharif2016accessorize; sharif2019general; komkov2019advhat. As attackers usually cannot access and modify the digital input of physical-world face recognition systems, physical attacks are more practical for evaluating their adversarial robustness. However, in contrast to the plethora of digital attack methods, few works have been proposed to address physical attacks on face recognition, which remain challenging due to the complicated physical-world condition variations.

In this work, we study sticker-based physical attacks that aim to generate wearable adversarial stickers to deceive state-of-the-art face recognition, for a better understanding of its adversarial robustness. For robust physical attacks, an adversarial sticker should survive complicated physical-world conditions. To this end, we first provide an in-depth analysis of the different physical-world conditions encountered when attacking face recognition, including sticker and face variations, as well as environmental variations such as lighting conditions and camera angles. Then, we propose a novel robust physical attack framework, dubbed PadvFace, that explicitly considers and models these physical-world condition variations. Though some prior works show the possibility of performing physical attacks on face recognition sharif2016accessorize; sharif2019general; komkov2019advhat, their performance is still unsatisfactory since only partial environmental variations are considered. For instance, the work of komkov2019advhat adopted an adversarial hat (Advhat) to attack physical-world face recognition systems, yet it did not address the chromatic aberration of stickers, facial variations, or adequate environmental conditions. In this work, we demonstrate that these physical-world variations also influence attack performance significantly.

In terms of the optimization in physical attacks, Expectation Over Transformation (EOT) athalye2018synthesizing is a standard optimizer that aggregates different physical-world condition variations to generate robust perturbations, but it simply treats each of them equally. However, we have the following observations: (1) the attack complexity of an adversarial sticker varies with different physical-world conditions, and (2) the optimization of physical attacks generally leads to a non-convex problem due to the high non-linearity of DNNs. Thus, simply adapting the adversarial sticker to all kinds of physical-world variations equally could make the optimization difficult and lead to inferior solutions.

To alleviate this, we propose a novel Curriculum Adversarial Attack (CAA) algorithm that explores the difference in attack complexity across physical-world conditions and gradually aggregates these conditions from easy to complex during the optimization. CAA adheres to the principles of curriculum learning bengio2009curriculum, which has shown benefits in obtaining better local minima and superior generalization for non-convex optimization. Finally, we build a standardized testing protocol for physical attacks on face recognition and conduct a comprehensive experimental study on the adversarial robustness of state-of-the-art face recognition models under both dodging and impersonation attacks. Extensive experimental results demonstrate the superior performance of the proposed method.

The contributions of our work are four-fold:

  • We propose a novel physical attack method, dubbed PadvFace, that models complicated physical-world condition variations in attacking face recognition.

  • We explore the attack complexity with various physical-world conditions and propose an efficient curriculum adversarial attack (CAA) algorithm.

  • We build a standardized testing protocol for facilitating the fair evaluation of physical attacks on face recognition.

  • We conduct a comprehensive experimental study and demonstrate the superior performance of the proposed method on physical attacks.

2 Related Work

Physical-world attacks aim to deceive deep neural networks by perturbing objects in the physical world athalye2018synthesizing; wang2019advpattern; xu2020adversarial. They are usually realized by first generating adversarial perturbations in the digital space and then fabricating them and attacking in the physical world. Due to the complicated physical-world variations, there are inevitable distortions when directly imposing digital perturbations in the physical world. Hence, the research focus of current physical attacks lies in how to efficiently model and incorporate complicated physical-world conditions.

Currently, most physical attacks focus on image classification duan2020adversarial; eykholt2018robust; zhao2020defenses; jan2019connecting; li2019adversarial and object detection huang2020universal; zhang2018camou; chen2018shapeshifter; zhao2019seeing; zolfi2021translucent. However, face recognition is a quite different task with different model properties and objectives huang2020curricularface; deng2019arcface; wang2018cosface. This makes attacking face recognition in the physical world distinct, since more facial variations need to be involved beyond general environmental conditions. Recently, there have been some attempts at physical attacks on face recognition sharif2016accessorize; sharif2019general; komkov2019advhat; pautov2019adversarial; yin2021adv. Due to their better reproducibility and harmlessness to human beings, sticker-based attacks have become the mainstream approach.

For sticker-based adversarial attacks, Sharif et al. sharif2016accessorize; sharif2019general explored adversarial eyeglass frames for physical attacks. They demonstrated that it was possible to deceive face recognition models by wearing an adversarial eyeglass frame. However, they did not involve illumination or face variations. Furthermore, they only considered limited-scale face recognition models that were trained to recognize up to 143 identities, and conducted attacks by perturbing face classification scores. In contrast, state-of-the-art (SOTA) face recognition models, such as ArcFace deng2019arcface and CosFace wang2018cosface, are based on pair-wise (cosine) similarity and are usually trained on tens of thousands of identities and millions of training images. Thus, such eyeglass-based attack methods fail to fool SOTA face recognition models, as verified by Komkov and Petiushko komkov2019advhat. The work of komkov2019advhat proposed Advhat to deceive the ArcFace model trained on the large-scale dataset MS1MV2 deng2019arcface. They verified that the adversarial hat could reduce the cosine similarity of two facial images from the same person. However, they did not consider the facial variations of attackers, nor the chromatic aberration of adversarial stickers induced by printers and cameras.

Meanwhile, existing physical attacks commonly adopt the EOT optimizer athalye2018synthesizing, which treats different environmental variations equally during the optimization. However, we demonstrate in this work that the attack complexity varies with physical-world conditions and that it is better to account for this characteristic for robust attacks.

Figure 1: Overall framework of the proposed robust PadvFace, where 'D2P' denotes the Digital-to-Physical module, '$T_s$' denotes the sticker transformation module, and '$T_f$' conducts transformations on adversarial faces.

3 Proposed Method

3.1 Preliminary

Let $x$ be an input facial image and $x^{a}$ be an anchor facial image, let $f$ be the face recognition model, and let $f(x)$ denote the learned feature embedding of $x$. State-of-the-art face recognition models generally make decisions based on the similarity between $f(x)$ and $f(x^{a})$. There are two types of adversarial attacks on face recognition: dodging attacks, which aim to reduce the similarity between facial images from the same identity, and impersonation attacks, which aim to increase the similarity between facial images from different identities. For dodging attacks, where $x$ and $x^{a}$ are captured from the same identity, the optimization of the adversarial sticker $\delta$ can be formulated as

$\min_{\delta}\ \mathcal{L}\big(f(\hat{x}(\delta)),\, f(x^{a})\big)$   (1)

where $\hat{x}(\delta)$ denotes the adversarial facial image of $x$ wearing the sticker $\delta$ and $\mathcal{L}$ denotes the attack loss, e.g., the cosine loss $\mathcal{L} = \cos\big(f(\hat{x}(\delta)), f(x^{a})\big)$. In contrast, impersonation attacks, where $x$ and $x^{a}$ are sampled from two different identities, can be optimized by minimizing $-\mathcal{L}$.
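To make the two attack objectives concrete, the following minimal sketch computes the cosine-similarity attack loss for dodging and impersonation attacks. The random embeddings stand in for the outputs of the face recognition model $f$ and are placeholders, not the paper's implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two feature embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def attack_loss(emb_adv: np.ndarray, emb_anchor: np.ndarray, mode: str = "dodging") -> float:
    # Dodging: minimize the similarity to the same identity's anchor.
    # Impersonation: maximize the similarity to a different identity's anchor,
    # i.e., minimize the negated cosine similarity.
    cos = cosine_similarity(emb_adv, emb_anchor)
    return cos if mode == "dodging" else -cos

# Toy usage with random 512-d embeddings (placeholders for f(x_adv) and f(x_a)).
rng = np.random.default_rng(0)
emb_adv, emb_anchor = rng.normal(size=512), rng.normal(size=512)
print(attack_loss(emb_adv, emb_anchor, "dodging"))
print(attack_loss(emb_adv, emb_anchor, "impersonation"))
```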

3.2 Physical Attack Challenges

The physical-world condition variations are challenging when attacking face recognition. Firstly, there are physical-world variations w.r.t. the sticker: 1) spatial constraints, since the sticker cannot cover all parts of the face, such as the facial organs; 2) inevitable deformation and position disturbance when wearing the sticker and fitting it to a real face; 3) chromatic aberration of the sticker caused by printers and cameras. The sticker is first fabricated by a printer and then worn and photographed by a camera when attacking face recognition. Due to the limitation of printer resolution and the different shooting conditions, there is chromatic aberration of the sticker between the digital space and the physical world.

Secondly, there are physical-world variations w.r.t. the adversarial face: 1) photographing variations, including camera angles, head poses, and lighting conditions; 2) internal facial variations of attackers, such as different facial expressions and movements.

3.3 Robust PadvFace Framework

In this section, we propose a robust physical attack framework on face recognition, dubbed PadvFace, which considers and models the challenging physical-world conditions. Specifically, we adopt a rectangular sticker pasted on the forehead of an attacker without covering facial organs. The overall framework of the proposed PadvFace is illustrated in Fig. 1.

The rectangular sticker $\delta$ is first fed into a Digital-to-Physical (D2P) module to model the chromatic aberration induced by printers and cameras. Then, a sticker transformation module $T_s$ is introduced to simulate the variations of the sticker when it is pasted on a real-world face. In the meantime, an initial mask $M_0$ is also fed into $T_s$, sharing the same transformation applied to the sticker, which generates the blending mask $M$. After that, the transformed sticker is blended with a randomly selected facial image $x$ according to the blending mask $M$, resulting in an initial adversarial image. This initial adversarial image is further passed through a face transformation module $T_f$ that simulates environmental variations such as different poses and lighting conditions, leading to the ultimate adversarial facial image $\hat{x}$ for deceiving face recognition, i.e.,

$\hat{x} = T_f\big( (1 - M) \odot x + M \odot T_s(\mathrm{D2P}(\delta)) + n \big)$   (2)

where $n$ is a random Gaussian noise, $\odot$ denotes element-wise multiplication, and $x$ is a facial image randomly selected from the captured set $X$. More details of each module are introduced as follows.
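A minimal sketch of how the adversarial image in Eq. (2) can be composed is given below. The callables `d2p`, `sticker_transform`, and `face_transform` are hypothetical stand-ins for the D2P, $T_s$, and $T_f$ modules, and the placement of the Gaussian noise follows the reconstruction of Eq. (2) above rather than a confirmed implementation detail.

```python
import numpy as np

def compose_adversarial_face(face, sticker, mask0, d2p, sticker_transform, face_transform,
                             noise_std=0.02, rng=np.random.default_rng()):
    """Compose an adversarial face following the structure of Eq. (2).

    face:    HxWx3 facial image in [0, 1]
    sticker: sticker image in [0, 1], returned by sticker_transform at face resolution
    mask0:   HxWx1 initial mask marking the sticker region on the face
    d2p, sticker_transform, face_transform: callables standing in for the D2P module,
        the sticker transformation T_s, and the face transformation T_f.
    """
    printed = d2p(sticker)                                     # printer/camera chromatic aberration
    warped_sticker, mask = sticker_transform(printed, mask0)   # same spatial transform for sticker and mask
    noise = rng.normal(0.0, noise_std, size=face.shape)        # random Gaussian noise
    blended = (1.0 - mask) * face + mask * warped_sticker + noise
    return face_transform(np.clip(blended, 0.0, 1.0))          # pose / lighting variations

# Toy usage with identity stand-ins for the three modules.
H = W = 112
face = np.zeros((H, W, 3)); sticker = np.ones((H, W, 3))
mask0 = np.zeros((H, W, 1)); mask0[10:40, 20:90] = 1.0
adv = compose_adversarial_face(face, sticker, mask0,
                               d2p=lambda s: s,
                               sticker_transform=lambda s, m: (s, m),
                               face_transform=lambda x: x)
```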

Digital-to-Physical (D2P) Module. There are two types of chromatic aberration: 1) the fabrication error induced by printers, which mainly refers to the color deviation between a digital color and its printed version due to the limitation of printer resolution; 2) the photographing error caused by cameras, such as sensor noise and different lighting conditions. To alleviate these issues, we develop a Digital-to-Physical (D2P) module to simulate the chromatic aberration of the sticker, inspired by xu2020adversarial. Specifically, the proposed D2P module is realized by training a multi-layer perceptron (MLP) to learn a 1:1 mapping from a customized digital color palette to its physically printed and photographed version, as illustrated in Fig. 2 (a-b), where Fig. 2 (c) presents the learned color palette from the proposed D2P module. Note that only parts of these color palettes are shown for brevity; the full color palettes and details of D2P are provided in the Appendix.

Figure 2: Examples of color palettes.

Sticker Transformation Module. The sticker transformation module $T_s$ contains: 1) sticker deformations when pasting the sticker on a real-world face, including off-plane bending and 3D rotations; following komkov2019advhat, we adopt a parabolic transformation operator to simulate the off-plane bending and a 3D transformation for rotations; 2) position disturbances, since it is hard to paste the sticker precisely at the designed position, modeled by random rotations and translations.
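The off-plane bending can be illustrated with a small sketch: below, a flat sticker is bent onto a parabolic cylinder $z = a\,x^2$ and viewed under a frontal orthographic projection, so each output column samples the sticker at the corresponding arc-length position and the sticker appears compressed toward its edges. This is only one plausible parameterization assumed for illustration; the actual parabolic transformation follows komkov2019advhat and may differ.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def parabolic_bend(sticker: np.ndarray, rate: float) -> np.ndarray:
    """Bend a flat sticker (H, W, C) around a vertical parabola z = rate * x^2."""
    h, w = sticker.shape[:2]
    half = (w - 1) / 2.0
    # Arc length of z = rate * x^2 from 0 to x: integral of sqrt(1 + (2*rate*t)^2) dt.
    xs = np.linspace(0.0, half, 4096)
    integrand = np.sqrt(1.0 + (2.0 * rate * xs) ** 2)
    arclen = np.concatenate(
        [[0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs))])

    cols = np.arange(w) - half                          # projected (output) column offsets
    src_offsets = np.sign(cols) * np.interp(np.abs(cols), xs, arclen)
    src_cols = src_offsets + half                       # source columns in the flat sticker

    rr, cc = np.meshgrid(np.arange(h), src_cols, indexing="ij")
    # Columns whose source falls outside the sticker material become empty (cval=0).
    return np.stack([map_coordinates(sticker[..., ch], [rr, cc], order=1,
                                     mode="constant", cval=0.0)
                     for ch in range(sticker.shape[2])], axis=-1)

# Toy usage: bend a gradient-striped sticker with a mild bending rate.
sticker = np.tile(np.linspace(0, 1, 900)[None, :, None], (400, 1, 3))
bent = parabolic_bend(sticker, rate=2.0 / 900)
```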

Face Transformation Module. As analyzed above, the attacker also undergoes a set of physical-world variations induced by different poses, lighting conditions, and internal facial variations. To model these variations, we sample facial images from both physical and synthetic transformations. Specifically, for internal facial variations, we capture real-world facial images with different facial expressions and movements, leading to a set of facial images $X$. To simulate the variations induced by different camera angles, poses, and lighting conditions, we consider a synthetic transformation module $T_f$ that applies random rotation, scaling, translation, contrast, and brightness changes to the adversarial facial images.
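A minimal sketch of sampling one synthetic face transformation is given below. The scaling, translation, contrast, and brightness ranges follow the spirit of Table A9 in the Appendix, while the rotation range (elided there) is replaced by a placeholder of ±5 degrees; all ranges should be treated as illustrative.

```python
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def random_face_transform(image: np.ndarray, rng=np.random.default_rng()):
    """Apply one randomly sampled geometric + photometric transformation.

    image: HxWx3 adversarial face in [0, 1]. Parameter ranges are illustrative only.
    """
    angle = rng.uniform(-5.0, 5.0)              # rotation in degrees (placeholder range)
    scale = rng.uniform(0.94, 1.06)             # isotropic scaling
    dy, dx = rng.uniform(-2.0, 2.0, size=2)     # translation in pixels
    contrast = rng.uniform(0.5, 1.1)            # multiplicative contrast
    brightness = rng.uniform(0.05, 0.1)         # additive brightness

    out = rotate(image, angle, axes=(0, 1), reshape=False, order=1, mode="nearest")
    out = zoom(out, (scale, scale, 1.0), order=1)   # note: changes spatial size;
                                                    # a real pipeline would crop/pad back
    out = shift(out, (dy, dx, 0.0), order=1, mode="nearest")
    return np.clip(out * contrast + brightness, 0.0, 1.0)
```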

Module Variations
D2P Chromatic aberration from printers and cameras
$T_s$ Parabolic transformation, rotation, translation
$T_f$ Rotation, scaling, translation, contrast, brightness
$X$ Facial expressions, facial movements
$n$ Random Gaussian noise
Table 1: Physical-world variations in the proposed PadvFace.

The overall physical-world variations considered by the proposed PadvFace are summarized in Table 1. To address these challenging variations, the EOT algorithm athalye2018synthesizing is commonly used to implement robust physical attacks. Let $\mathcal{T}$ denote the set of physical-world transformations in Table 1. Specifically, EOT first samples $N$ transformations $\{t_i\}_{i=1}^{N}$ from $\mathcal{T}$ and then optimizes the following objective:

$\min_{\delta}\ \frac{1}{N}\sum_{i=1}^{N} \mathcal{L}_i(\delta) + \lambda\, \mathcal{L}_{TV}(\delta)$   (3)

where we write $\mathcal{L}_i(\delta) = \mathcal{L}\big(f(\hat{x}_i), f(x^{a})\big)$ for shorthand and $\hat{x}_i$ is the adversarial image generated by Eq. (2) under the sampled transformation $t_i$. $\mathcal{L}_{TV}$ is the total-variation loss introduced to enhance the smoothness of the sticker and $\lambda$ is a regularization parameter. The D2P module and the synthetic transformations in $T_s$ and $T_f$ are all differentiable, so model (3) can be solved by a stochastic gradient descent algorithm.
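As a reference for Eq. (3), here is a minimal sketch of an EOT-style optimization loop. The routine `loss_and_grad` is a hypothetical stand-in for backpropagation through the differentiable transformations and the face recognition model, and the update rule is plain (sub)gradient descent rather than the paper's exact optimizer.

```python
import numpy as np

def tv_subgradient(s: np.ndarray) -> np.ndarray:
    # Subgradient of the anisotropic total-variation loss sum |s[i+1]-s[i]| + sum |s[:,j+1]-s[:,j]|.
    g = np.zeros_like(s)
    dh = np.sign(np.diff(s, axis=0)); dw = np.sign(np.diff(s, axis=1))
    g[1:, :] += dh; g[:-1, :] -= dh
    g[:, 1:] += dw; g[:, :-1] -= dw
    return g

def eot_attack(sticker, transforms, loss_and_grad, lam=1e-4, lr=0.01, steps=1000,
               batch=32, rng=np.random.default_rng()):
    """EOT: average the attack loss over randomly sampled physical transformations.

    loss_and_grad(sticker, t) -> (attack loss, gradient w.r.t. sticker) under
    transformation t; it is assumed to include the composition of Eq. (2).
    """
    for _ in range(steps):
        idx = rng.choice(len(transforms), size=min(batch, len(transforms)), replace=False)
        grad = np.zeros_like(sticker)
        for i in idx:
            _, g = loss_and_grad(sticker, transforms[i])
            grad += g / len(idx)                     # equally weighted average over conditions
        grad += lam * tv_subgradient(sticker)        # smoothness regularization
        sticker = np.clip(sticker - lr * grad, 0.0, 1.0)
    return sticker
```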

1: Input: attacked face recognition model $f$, physical transformations $\mathcal{T}$, anchor image $x^{a}$, initial sticker $\delta$, curriculum parameters $\{\gamma_k\}_{k=1}^{K}$ with $\gamma_1 < \gamma_2 < \cdots < \gamma_K$.
2: for $k = 1, \dots, K$ do
3:    for $t = 1, \dots, T_k$ do
4:       Fix $\delta$, update $\mathbf{w}$ via the closed-form solution of model (6) with $\gamma_k$.
5:       Fix $\mathbf{w}$, update $\delta$ via gradient descent on model (5).
6:    end for
7: end for
8: Output: robust adversarial sticker $\delta$.
Algorithm 1 Curriculum Adversarial Attack

3.4 Curriculum Adversarial Attack

We make the following observations about model (3). Firstly, for generating an adversarial sticker, the attack complexity varies with different physical-world conditions. In Fig. 3, we show the dodging attack performance of a fixed sticker under various facial variations, illumination variations, and 100 randomly sampled transformations, where a higher adversarial cosine similarity indicates lower attack performance and thus higher attack complexity. The results show that the difficulty of physical attacks varies with different conditions. Secondly, due to the high non-linearity of DNNs, model (3) generally leads to a non-convex optimization problem. Hence, directly fitting the adversarial sticker to all kinds of physical-world conditions in model (3) could make the optimization difficult, resulting in inferior solutions.

Figure 3: Dodging attack difficulties under different physical conditions, where the metric is the adversarial cosine similarity.

In light of these observations, we propose an efficient curriculum adversarial attack (CAA) algorithm to gradually optimize adversarial stickers from easy to complex physical-world conditions. Given an adversarial sticker $\delta$, a larger attack loss $\mathcal{L}_i(\delta)$ indicates higher attack complexity under the condition $t_i$. Thus, $\mathcal{L}_i(\delta)$ can serve as an appropriate surrogate for measuring the complexity of $t_i$. Based on this, we assign a learnable weight $w_i$ to each transformation $t_i$ in $\mathcal{T}$ and formulate the objective of CAA as

$\min_{\delta,\, \mathbf{w}}\ \sum_{i=1}^{N} w_i\, \mathcal{L}_i(\delta) + r(\mathbf{w}; \gamma) + \lambda\, \mathcal{L}_{TV}(\delta)$   (4)

where $r(\mathbf{w}; \gamma)$ is a regularizer on the weights and $\gamma$ is a curriculum parameter.

Let $\mathbf{w} = [w_1, \dots, w_N]$. Model (4) can be solved by alternately optimizing $\delta$ and $\mathbf{w}$ while keeping the other fixed. Firstly, given $\mathbf{w}$, the optimization w.r.t. $\delta$ reduces to

$\min_{\delta}\ \sum_{i=1}^{N} w_i\, \mathcal{L}_i(\delta) + \lambda\, \mathcal{L}_{TV}(\delta)$   (5)

which can be solved by a stochastic gradient descent method. Secondly, given $\delta$, the optimal $\mathbf{w}$ is determined by

$\min_{\mathbf{w}}\ \sum_{i=1}^{N} w_i\, \mathcal{L}_i(\delta) + r(\mathbf{w}; \gamma)$   (6)

which admits a closed-form solution for $\mathbf{w}$. Thus, an easier transformation with a lower attack loss is assigned a larger weight and dominates the update of $\delta$ in the following step. On the other side, the value of $\gamma$ is monotonically increased to involve more and more complex transformations during the optimization. As a result, the proposed CAA algorithm adapts the sticker to transformations gradually from easy to complex. The overall algorithm of CAA is reported in Algorithm 1.
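Below is a minimal sketch of the CAA alternation in Algorithm 1. Since the closed-form weight update is not reproduced above, the sketch assumes a negative-entropy regularizer $r(\mathbf{w};\gamma) = \gamma \sum_i (w_i \log w_i - w_i)$, whose minimizer is $w_i = \exp(-\mathcal{L}_i/\gamma)$; this choice matches the described behavior (lower loss yields a larger weight, and a larger $\gamma$ flattens the weights) but is an assumption, not the paper's formula. The curriculum parameters and step counts are likewise illustrative.

```python
import numpy as np

def curriculum_attack(sticker, transforms, loss_and_grad, gammas=(0.1, 0.3, 1.0),
                      inner_steps=(2000, 2000, 3000), lr=0.02, batch=32,
                      rng=np.random.default_rng()):
    """Curriculum Adversarial Attack (sketch of the alternation in Algorithm 1).

    loss_and_grad(sticker, t) -> (attack loss, gradient w.r.t. sticker) under transformation t.
    gammas: monotonically increasing curriculum parameters, one per stage (illustrative values).
    """
    for gamma, steps in zip(gammas, inner_steps):
        for _ in range(steps):
            idx = rng.choice(len(transforms), size=min(batch, len(transforms)), replace=False)
            losses, grads = zip(*(loss_and_grad(sticker, transforms[i]) for i in idx))
            # w-step: closed form under the assumed negative-entropy regularizer,
            # normalized here only to keep the gradient step size stable.
            w = np.exp(-np.asarray(losses) / gamma)
            w = w / (w.sum() + 1e-12)
            # delta-step: weighted gradient descent (TV term omitted for brevity).
            grad = sum(wi * gi for wi, gi in zip(w, grads))
            sticker = np.clip(sticker - lr * grad, 0.0, 1.0)
    return sticker
```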

CAA adheres to the principle of curriculum learning bengio2009curriculum, which learns from easy to complex tasks and has shown benefits in obtaining better local minima and superior generalization for many non-convex problems huang2020curricularface; kumar2010self; fan2017self; cai2018curriculum. To the best of our knowledge, we are the first to explore the complexity of physical-world conditions in adversarial attacks and to aggregate these conditions via curriculum learning for robust attack performance.

4 Experiments

4.1 Testing Protocol

The constructed testing protocol is shown in Fig. 4. It contains two stages: (1) an attack launching stage for collecting facial images and generating adversarial stickers, and (2) an attack evaluation stage for evaluating the attack performance under various physical conditions.

Methods Experimenters D I Illus FaceVars Poses Images
Advhat 10 3 1 8 128
Ours 10 3 4 35 5880
Table 2: Statistic comparison of physical evaluation cases between Advhat komkov2019advhat and our PadvFace. (D: dodging attacks; I: impersonation attacks; Illus/FaceVars/Poses: number of illumination/facial/pose variations; Images: number of evaluated images.)

Attack Launching Stage. For each experimenter, we first take 4 videos under normal light with 4 different facial variations: happy, sad, neutral, and mouth-open. We use an iPhone 12 to take 1920×1080 resolution videos and each video lasts about 3 to 5 seconds. The camera is placed in front of the experimenter at a distance of 50 cm and the experimenter is asked to sit steadily without pose variations. Then, we randomly sample a single frame from each video to form the facial image set $X$, which is taken as the input of the attack models for generating robust adversarial stickers.

Attack Evaluation Stage. The stickers generated in the launching stage are first printed by a Canon Generic Plus PCL6 printer and then worn to deceive face recognition systems. We consider complicated environmental variations in this stage, including: (1) Different head poses. Existing methods usually acquire these by asking the experimenter to make certain pose changes, which is hard to control and cannot support a fair comparison between methods due to poor repeatability. To alleviate this issue, we customize a cruciform rail to make accurate movements, reducing the potential effects of uncontrolled experimenter movements. The experimenter is asked to sit in front of the cruciform rail at a distance of 50 cm, and the rail carries the camera and moves in four directions (up, down, left, and right) sequentially to capture facial images with different poses. As a result, we obtain facial images with finely controlled pose variations. (2) Different lighting conditions. To imitate illumination variations, we select a room without external windows and use a KN-18C annular light as the light source, which is also placed in front of the experimenter at a distance of 50 cm. We select three different light intensities, referred to as dark/normal/light, respectively; examples of face images under different lighting conditions are provided in the Appendix. (3) Facial variations. As in the attack launching stage, we also involve four facial variations: happy/sad/neutral/mouth-open.

For each experimenter with a certain sticker, we take six videos: (a) three videos under the dark/normal/light lighting conditions with the neutral expression, and (b) three videos with the happy/sad/mouth-open variations under normal illumination. The movement range of the cruciform rail, in terms of the left-right and up-down angles, is kept the same for each video. Afterwards, about 35 frames are captured from each video as the adversarial images. For a better evaluation, we also collect six videos for each experimenter without any sticker, i.e., benign faces.

We take the 'neutral' facial image captured in the attack launching stage as the anchor $x^{a}$, and calculate the benign cosine similarity (between a benign face and the anchor) and the adversarial cosine similarity (between an adversarial face and the anchor). Compared with the attack success rate, taking the cosine similarity as the metric leads to a more precise evaluation that does not depend on the choice of decision threshold.

The statistics of the physical evaluation cases are presented in Table 2, where we also tabulate those of Advhat komkov2019advhat for comparison. It is worth noting that the evaluation of physical attacks is much more laborious than that of digital attacks, which keeps the number of experimenters and attack cases relatively small. Nevertheless, our evaluation is already the largest in scale among existing works.

Figure 4: The whole pipeline of the proposed testing protocol.

Experimental Setting. We take the state-of-the-art ArcFace model (https://github.com/deepinsight/insightface/wiki/Model-Zoo) trained on the large-scale dataset MS1MV2 as our attacked face recognition model, which has 99.77% benign recognition accuracy on the LFW benchmark huang2008labeled. The default sticker size is 400×900 pixels, corresponding to 13 cm × 5.8 cm in the physical world. All experiments are conducted with the TensorFlow platform and an NVIDIA Tesla P40 GPU. The detailed setting of the proposed CAA algorithm is provided in the Appendix.

In the following, we denote the proposed method that considers all physical variations with the CAA optimizer as PadvFace-F and introduce its two variants: PadvFace-B, which does not involve the facial variations in $X$ or the illumination transformations in $T_f$, and PadvFace-S, which is realized by further substituting the standard EOT optimizer for the CAA optimizer in PadvFace-B.

4.2 Evaluations of Physical Attacks

In this section, we evaluate the proposed PadvFace with dodging and impersonation attacks in the physical world. We take Advhat komkov2019advhat as the main baseline, which is the most relevant sticker-based attack method for our evaluations on large-scale face recognition. Since Advhat does not capture internal facial variations or illumination variations during the optimization, we align with this setting and adopt PadvFace-B for a fair comparison. Nevertheless, PadvFace-B still keeps the D2P module and the CAA optimization algorithm, which are the two main differences from Advhat. We evaluate both methods with the 'neutral' expression under the 'normal' light, and Fig. 5 provides some attack examples. The numerical results of dodging and impersonation attacks on 10 experimenters are reported in Table 3, where 'benign' denotes the benign cosine similarity without any sticker and the 'Advhat' and 'Ours' columns report the adversarial cosine similarity achieved by each method. For dodging attacks, a lower adversarial cosine similarity denotes better attack performance, while for impersonation attacks, a higher one denotes better performance.

Dodging Attack (lower is better) | Impersonation Attack (higher is better)
ID  benign  Advhat  Ours | ID pair  benign  Advhat  Ours
01  0.91  0.29  0.27 | 01→02  0.16  0.28  0.46
02  0.94  0.43  0.33 | 02→03  0.09  0.26  0.35
03  0.91  0.33  0.33 | 03→04  -0.08  0.17  0.21
04  0.88  0.42  0.25 | 04→09  0.12  0.19  0.21
05  0.94  0.60  0.51 | 05→04  0.04  0.21  0.24
06  0.93  0.38  0.32 | 06→07  0.10  0.26  0.29
07  0.93  0.34  0.31 | 07→08  0.20  0.33  0.37
08  0.95  0.32  0.26 | 08→09  0.12  0.24  0.33
09  0.93  0.37  0.28 | 09→01  0.09  0.19  0.19
10  0.89  0.28  0.20 | 10→03  0.03  0.32  0.41
Average  0.92  0.38  0.31 | Average  0.09  0.24  0.30
Table 3: Results of Advhat and Ours (PadvFace-B) on dodging and impersonation attacks. Best results are in bold. Each ID pair in impersonation attacks indicates 'attacker→victim'.

The results of dodging attacks are shown in the left part of Table 3. Compared with the benign similarity without any sticker, our proposed method achieves significantly lower adversarial cosine similarities, demonstrating its superior dodging attack performance. For example, for ID=01, after wearing the adversarial sticker, the cosine similarity drops from 0.91 to 0.27, posing a serious threat to the physical-world face recognition system. Furthermore, compared with Advhat, we also obtain significant performance improvements in most cases. Averaged over all 10 experimenters, our method achieves an adversarial cosine similarity of 0.31 while that of Advhat is only 0.38, an 18.4% relative improvement.

As for impersonation attacks, we randomly select a single victim anchor from the 10 experimenters for each attacker, obtaining 10 'attacker→victim' pairs. The evaluations are shown in the right part of Table 3. Our proposed method significantly increases the cosine similarity between two different identities by wearing the adversarial sticker in all cases. For instance, for the attacking pair (10→03), the adversarial cosine similarity increases from 0.03 to 0.41. Furthermore, we also obtain better attack performance than Advhat on most attacking pairs. Averaged over all 10 cases, our method achieves a cosine similarity of 0.30 while that of Advhat is 0.24, a 25% relative improvement. In addition, compared with dodging attacks, impersonation attacks, which aim to deceive a specific identity, are generally more difficult.

We further conduct a paired t-test and observe that our PadvFace-B is significantly better than Advhat with p=0.005 for both dodging and impersonation attacks. In summary, the superior performance of PadvFace-B over Advhat on both dodging and impersonation attacks demonstrates the effectiveness of the developed D2P module and CAA optimizer.

Figure 5: Face examples of physical dodging and impersonation attacks on Advhat vs. Ours (PadvFace-B).
Dodging Attack (lower is better)
ID 04 07 09
Dark 0.81 / 0.48 / 0.38  0.92 / 0.38 / 0.34  0.91 / 0.56 / 0.48
Normal 0.83 / 0.50 / 0.38  0.95 / 0.44 / 0.38  0.93 / 0.50 / 0.46
Light 0.81 / 0.49 / 0.36  0.96 / 0.39 / 0.29  0.90 / 0.49 / 0.43
Average 0.82 / 0.49 / 0.37  0.94 / 0.40 / 0.34  0.92 / 0.52 / 0.45
Impersonation Attack (higher is better)
ID pair 07→08 08→09 09→10
Dark 0.21 / 0.36 / 0.37  0.10 / 0.33 / 0.39  0.03 / 0.32 / 0.40
Normal 0.20 / 0.37 / 0.40  0.12 / 0.33 / 0.37  0.03 / 0.33 / 0.41
Light 0.22 / 0.36 / 0.39  0.14 / 0.37 / 0.42  0.04 / 0.37 / 0.41
Average 0.21 / 0.36 / 0.39  0.12 / 0.35 / 0.40  0.03 / 0.34 / 0.41
Table 4: Comparison under illumination variations with the neutral expression. Metric is 'benign / PadvFace-B / PadvFace-F' cosine similarity. Best results are in bold.

4.3 Experiments of Environmental Variations

With the proposed standardized testing protocol, we can make more quantitative analyses of environmental variations. In this section, we analyze in depth the impact of internal facial variations and illumination variations, and compare the performance of PadvFace-B and PadvFace-F under both dodging and impersonation attacks.

The evaluation results under different lighting conditions are given in Table 4, where all experimenters are asked to keep the neutral expression during the attack evaluation stage. We have the following observations: (1) the attack performance varies under different illumination conditions, and (2) PadvFace-F, which incorporates the illumination transformations in $T_f$, consistently outperforms PadvFace-B, which is learned with a fixed normal illumination, in all cases. Specifically, for ID=09, PadvFace-F yields a 12.5% performance improvement over PadvFace-B (0.52 for PadvFace-B vs. 0.45 for PadvFace-F) for dodging attacks, and a 19.9% improvement (0.34 vs. 0.41) for impersonation attacks.

The evaluations under internal facial variations are presented in Table 5, where all adversarial images are collected under normal illumination. It can be observed that the attack performance varies as the attacker makes different facial variations. This is further verified by the superior performance of PadvFace-F over PadvFace-B, where PadvFace-F involves facial variations during the optimization while PadvFace-B learns only from the neutral expression. Specifically, for ID=09, PadvFace-F achieves a 9.4% dodging attack performance improvement (0.53 vs. 0.48) and a 13.0% impersonation attack performance improvement (0.31 vs. 0.35) compared with PadvFace-B. These experimental results demonstrate that considering internal facial variations and illumination variations during the attack process can largely boost the robustness of the learned adversarial stickers.

Dodging Attack (lower is better)
ID 04 07 09
Happy 0.80 / 0.41 / 0.32  0.94 / 0.40 / 0.34  0.90 / 0.51 / 0.49
Sad 0.66 / 0.29 / 0.18  0.93 / 0.43 / 0.35  0.79 / 0.47 / 0.43
Neutral 0.83 / 0.50 / 0.38  0.95 / 0.44 / 0.38  0.93 / 0.50 / 0.46
Mouth-open 0.78 / 0.47 / 0.32  0.91 / 0.45 / 0.35  0.82 / 0.65 / 0.54
Average 0.77 / 0.42 / 0.30  0.93 / 0.43 / 0.35  0.86 / 0.53 / 0.48
Impersonation Attack (higher is better)
ID pair 07→08 08→09 09→10
Happy 0.22 / 0.36 / 0.39  0.10 / 0.32 / 0.39  0.02 / 0.34 / 0.37
Sad 0.18 / 0.22 / 0.31  0.12 / 0.36 / 0.40  0.07 / 0.32 / 0.35
Neutral 0.20 / 0.37 / 0.40  0.12 / 0.33 / 0.37  0.03 / 0.33 / 0.41
Mouth-open 0.21 / 0.34 / 0.38  0.15 / 0.34 / 0.39  0.06 / 0.26 / 0.29
Average 0.20 / 0.32 / 0.37  0.12 / 0.34 / 0.39  0.04 / 0.31 / 0.35
Table 5: Comparison under internal facial variations with normal illumination. Metric is 'benign / PadvFace-B / PadvFace-F' cosine similarity. Best results are in bold.

4.4 Ablation Study

Figure 6: Convergence of dodging attack performance with the proposed CAA vs. existing EOT optimization algorithms.

D2P Module. We utilize PadvFace-S to evaluate the effectiveness of the proposed D2P module. We randomly select three experimenters and conduct dodging attacks in the physical world. The evaluation results are presented in Table 6, where 'w/ D2P' refers to PadvFace-S and 'w/o D2P' refers to the variant of PadvFace-S without the D2P module. The benign and adversarial images are captured under the 'neutral' expression and 'normal' illumination. As can be observed, the D2P module significantly benefits the physical attack performance for all three experimenters, demonstrating the effectiveness of modeling the chromatic aberration induced by printers and cameras.

ID benign w/o D2P w/ D2P
01 0.87 0.29 0.22
02 0.95 0.38 0.32
10 0.89 0.19 0.16
Table 6: Ablation study of D2P module.

CAA Algorithm. In Fig. 6, we plot the tendency curves of the cosine similarity loss for a certain experimenter to explore the difference in the optimization process between CAA and EOT. 'CAA' refers to PadvFace-F and 'EOT' refers to the variant of PadvFace-F obtained by replacing the CAA optimizer with the EOT optimizer. The cosine similarity loss at each iteration in Fig. 6 is calculated as the average of the adversarial cosine losses over 400 randomly sampled transformations. As expected, starting from easy physical-world conditions leads to a relatively slower convergence rate in the early optimization stage compared with the EOT optimizer. However, as the iterations increase, more and more complex physical-world conditions are involved, and the proposed CAA algorithm achieves better performance with a lower cosine similarity loss at the end of the learning process.

Target Models benign Advhat Ours
CosFace yang2020delving 0.30 0.34 0.38
MobileFace yang2020delving 0.17 0.26 0.29
CurricularFace cai2018curriculum 0.06 0.14 0.16
Table 7: Transferability of impersonation attacks.

4.5 Discussions

Inconspicuousness. For physical attacks, inconspicuousness aims to make adversarial perturbations unnoticeable. However, based on our experiments, attacking physical-world face recognition is intrinsically hard, and the current attack performance is far from satisfactory even without the inconspicuousness constraint. Thus, in this paper, we primarily focus on efficiently modeling the complicated physical-world conditions in attacking face recognition and leave the inconspicuousness of the adversarial stickers to future work.

Transferability. We evaluate the attack robustness of the proposed PadvFace when transferring to other face recognition models. The average cosine similarities over 10 experimenters are provided in Table 7, with ArcFace as the source model. As can be observed, our proposed method achieves consistently better performance than Advhat on all target models. Note that we do not specifically impose constraints on the transferability of PadvFace. Nevertheless, we believe that advances from another branch of adversarial attacks, i.e., transfer attacks, could be introduced to further improve the transferability.

5 Conclusion

In this work, we study the adversarial vulnerability of physical-world face recognition by sticker-based adversarial attacks. For robust adversarial attacks, we analyze in detail the complicated physical-world condition variations in attacking face recognition and propose a novel physical attack method that considers and models these variations. We further propose an efficient curriculum adversarial attack algorithm that gradually learns the sticker from easy to complex physical-world variations. We construct a standardized testing protocol for facilitating the fair evaluation of physical attacks on face recognition. Extensive experimental results demonstrate the effectiveness of the proposed method for dodging and impersonation physical attacks.

References

Appendix A Appendix

A.1  Details of Digital-to-Physical (D2P) Module

As analyzed in Sec. 3.3, the D2P module aims to imitate the chromatic aberration induced by printers and cameras during physical attacks. It is realized by a two-layer MLP with 100 hidden nodes to learn a 1:1 color mapping from the digital space to the physical space. In the following, we first report the details of producing color palettes and then provide the training details of the MLP. Furthermore, we provide the evaluation results of the D2P module on adversarial stickers.

Figure A7: Color palettes.

Color Palettes. For the learning of the MLP, we need to capture both the RGB colors in the digital space and their corresponding values after printing and photographing in the physical world. To this end, we first construct a set of color anchors in the digital space. Since we cannot enumerate all colors in RGB space, we only use a subset of it. Specifically, our color anchors consist of 512 colors generated by uniformly sampling the Red, Green, and Blue channels, respectively.

To better capture the corresponding colors after printing and photographing in the physical world, we reshape the 512 color anchors into a 16×32 grid and replicate each color anchor into a 40×40 pixel square, leading to the digital color palette of 640×1280 pixels in Fig. A7 (a). Then, we print the digital color palette and photograph it under normal illumination at a distance of 50 cm in our testing environment (see Sec. 4.1), obtaining the physical color palette in Fig. A7 (b). In addition, we average each photographed pixel square of (b), resulting in a 16×32 grid of colors that serves as the physical-world counterpart of the digital anchors.
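A small sketch of constructing such a digital color palette follows. The text above does not state the per-channel sampling count, so the 8 evenly spaced values per channel used below (8³ = 512 anchors) are an assumption made only to match the stated total of 512 colors.

```python
import numpy as np

def build_color_palette(levels: int = 8, grid=(16, 32), cell: int = 40) -> np.ndarray:
    """Build a digital color palette image from sampled RGB anchors.

    levels: values sampled per channel (levels**3 anchors; 8**3 = 512 assumed here).
    grid:   layout of the anchors (16 x 32 = 512).
    cell:   each anchor is replicated into a cell x cell pixel square.
    """
    vals = np.linspace(0.0, 1.0, levels)
    anchors = np.stack(np.meshgrid(vals, vals, vals, indexing="ij"), axis=-1).reshape(-1, 3)
    assert anchors.shape[0] == grid[0] * grid[1]
    palette = anchors.reshape(grid[0], grid[1], 3)
    # Replicate each anchor into a cell x cell square -> (16*40) x (32*40) = 640 x 1280 pixels.
    return np.kron(palette, np.ones((cell, cell, 1)))

def average_cells(photo: np.ndarray, grid=(16, 32)) -> np.ndarray:
    # Recover per-anchor physical colors by averaging each printed-and-photographed cell.
    h, w, _ = photo.shape
    ch, cw = h // grid[0], w // grid[1]
    return photo.reshape(grid[0], ch, grid[1], cw, 3).mean(axis=(1, 3))

palette = build_color_palette()            # 640 x 1280 x 3 digital palette
physical_anchors = average_cells(palette)  # 16 x 32 x 3 (trivially equal to the anchors here)
```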

MLP Training Details. Taking the digital color anchors as the input and their physical counterparts as the ground truth, we train the MLP with the Adam optimizer for 100,000 epochs. The initial learning rate is 0.01 and decays by a factor of 10 at epochs 50,000 and 70,000, respectively. As a result, we obtain the learned color palette of the D2P module in Fig. A7 (c), which has a Mean Square Error (MSE) of only 0.0001 with respect to the ground truth in (b).
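For illustration, the D2P mapping can be sketched as a small two-layer network trained to map digital anchor colors to their photographed counterparts. The sketch below uses plain NumPy gradient descent rather than the paper's TensorFlow/Adam setup, and all details beyond "two-layer MLP with 100 hidden nodes" (activation, initialization, epoch count, toy color shift) are assumptions.

```python
import numpy as np

def train_d2p_mlp(digital, physical, hidden=100, lr=0.01, epochs=20000, seed=0):
    """Fit a 3 -> hidden -> 3 MLP (ReLU) mapping digital RGB to printed-and-photographed RGB.

    digital, physical: (N, 3) arrays of corresponding color anchors in [0, 1].
    """
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0, 0.1, (3, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0, 0.1, (hidden, 3)); b2 = np.zeros(3)
    n = digital.shape[0]
    for _ in range(epochs):
        h = np.maximum(digital @ w1 + b1, 0.0)         # hidden ReLU layer
        pred = h @ w2 + b2
        err = pred - physical                           # MSE gradient (up to a constant factor)
        gw2 = h.T @ err / n; gb2 = err.mean(axis=0)
        dh = (err @ w2.T) * (h > 0)
        gw1 = digital.T @ dh / n; gb1 = dh.mean(axis=0)
        w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2
    return lambda rgb: np.maximum(rgb @ w1 + b1, 0.0) @ w2 + b2

# Toy usage: pretend the printer/camera darkens and shifts colors by a simple affine map.
digital = np.random.default_rng(1).uniform(0, 1, (512, 3))
physical = 0.8 * digital + 0.05
d2p = train_d2p_mlp(digital, physical)
print(float(((d2p(digital) - physical) ** 2).mean()))   # final MSE on the anchors
```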

Figure A8: Examples of adversarial stickers.

Further Evaluation. This part further evaluates the performance of the learned D2P module on adversarial stickers. As shown in Fig. A8, given an arbitrary digital adversarial sticker in (a), we first print and photograph it in our testing environment, obtaining the physical sticker in (b). In the meantime, the digital sticker is also fed into the D2P module to obtain the learned D2P sticker in (c). To evaluate the trained MLP, we compute the Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity (MSSIM), and MSE between the 'Digital-to-Physical' sticker pair, i.e., (a) and (b), as well as the 'D2P-to-Physical' pair, i.e., (c) and (b), in Fig. A8. The corresponding results are reported in Table A8. Note that higher PSNR and MSSIM and lower MSE denote closer agreement. As can be observed, the adversarial sticker learned by our D2P module is closer to the physical adversarial sticker than the digital one. Therefore, the proposed D2P module can effectively address the chromatic aberration induced by printers and cameras during physical attacks.
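For reference, these three metrics can be computed with scikit-image as sketched below; the file names are placeholders for the digital, D2P-mapped, and photographed stickers of Fig. A8, and the images are assumed to be aligned and of equal size.

```python
import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity, mean_squared_error

def sticker_similarity(img_a_path: str, img_b_path: str) -> dict:
    """Compare two sticker images with PSNR, mean SSIM, and MSE (values scaled to [0, 1])."""
    a = imread(img_a_path).astype(np.float64) / 255.0
    b = imread(img_b_path).astype(np.float64) / 255.0
    return {
        "PSNR": peak_signal_noise_ratio(b, a, data_range=1.0),
        "MSSIM": structural_similarity(b, a, channel_axis=-1, data_range=1.0),
        "MSE": mean_squared_error(b, a),
    }

# Placeholder file names for the sticker pairs in Fig. A8.
print(sticker_similarity("digital_sticker.png", "physical_sticker.png"))   # (a) vs (b)
print(sticker_similarity("d2p_sticker.png", "physical_sticker.png"))       # (c) vs (b)
```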

Sticker Metrics PSNR (dB) MSSIM MSE
Digital-to-Physical 18.27 0.55 0.014
D2P-to-Physical 21.58 0.62 0.006
Table A8: Performance of the D2P module over adversarial sticker pairs. Higher PSNR and MSSIM and lower MSE denote better results. Best results are shown in bold.
Module Variations Min Max
$T_s$ Parabolic angle - +
$T_s$ Parabolic rate -2 +2
$T_s$ Rotation - +
$T_s$ Translation -1 +1
$T_f$ Rotation - +
$T_f$ Scaling 0.94 1.06
$T_f$ Translation -2 +2
$T_f$ Contrast 0.5 1.1
$T_f$ Brightness 0.05 0.1
Table A9: Transformation parameters, where each parameter is randomly sampled at equal intervals from the specified range. ‘Parabolic angle’ and ‘Parabolic rate’ are integrated as ‘Parabolic transformations’.

A.2  Details of Curriculum Adversarial Attack (CAA) Algorithm

For the proposed curriculum adversarial attack method, the weight $\lambda$ of the TV loss in model (4) is set to a fixed value. More specifically, in Algorithm 1, for the updating of the sticker $\delta$, the learning rate is set to 0.02 with a momentum of 0.95. We adopt three curriculum learning stages, i.e., $K=3$ in Algorithm 1, and the number of inner iterations is set to 2000/2000/3000 for the three curriculum stages, respectively. Moreover, at the beginning of each curriculum learning stage $k$, we determine the curriculum parameter $\gamma_k$ from a curriculum proportion $p_k \in (0, 1]$, which is increased over the stages and set to 0.8 and 1.0 for the last two stages, respectively. Besides, each inner update (the $\mathbf{w}$ and $\delta$ updates in Algorithm 1) is performed on a randomly sampled mini-batch of 32 transformations from $\mathcal{T}$ for training efficiency.
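One plausible way to derive the stage-wise curriculum parameter from a curriculum proportion, assumed here purely for illustration, is to set $\gamma_k$ to the $p_k$-quantile of the current per-transformation attack losses, so that roughly a proportion $p_k$ of the transformations receives substantial weight; the first proportion value in the example is a placeholder, since it is not given in the text above.

```python
import numpy as np

def curriculum_parameter(losses, proportion: float) -> float:
    """Set gamma as the `proportion`-quantile of the current per-transformation losses.

    losses:     attack losses of the sampled transformations under the current sticker.
    proportion: fraction of transformations intended to dominate this stage (e.g., 0.8, 1.0).
    """
    return float(np.quantile(np.asarray(losses), proportion))

# Example: with proportions increasing over the stages, gamma grows and progressively
# admits harder transformations into the optimization (first proportion is a placeholder).
losses = np.random.default_rng(0).uniform(0.1, 0.9, size=400)
for p in (0.5, 0.8, 1.0):
    print(p, curriculum_parameter(losses, p))
```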

In addition, we provide the settings of the transformation parameters. Note that $X$ contains four facial variations (i.e., happy/neutral/sad/mouth-open) and $n$ is a random Gaussian noise with a fixed mean and standard deviation. The transformation parameters of $T_s$ and $T_f$ are given in Table A9.

A.3  Examples of Three Light Conditions

We provide examples of adversarial faces under three real-world light conditions (i.e., dark/normal/light) used in our experiments in Fig. A9. All facial images come from our collected dataset following the proposed testing protocol.

Figure A9: Examples of adversarial faces under three real-world light conditions.