Mask-invariant Face Recognition through Template-level Knowledge Distillation

by Marco Huber, et al.

The emergence of the global COVID-19 pandemic poses new challenges for biometrics. Not only are contactless biometric identification options becoming more important, but face recognition has also recently been confronted with the frequent wearing of masks. These masks affect the performance of previous face recognition systems, as they hide important identity information. In this paper, we propose a mask-invariant face recognition solution (MaskInv) that utilizes template-level knowledge distillation within a training paradigm that aims at producing embeddings of masked faces that are similar to those of non-masked faces of the same identities. In addition to the distilled knowledge, the student network benefits from additional guidance by a margin-based identity classification loss, ElasticFace, using masked and non-masked faces. In a step-wise ablation study on two real masked face databases and five mainstream databases with synthetic masks, we demonstrate the rationale behind our MaskInv approach. Our proposed solution outperforms previous state-of-the-art (SOTA) academic solutions in the recent MFRC-21 challenge in both scenarios, masked vs masked and masked vs non-masked, and also outperforms the previous solution on the MFR2 dataset. Furthermore, we demonstrate that the proposed model can still perform well on unmasked faces with only a minor loss in verification performance. The code, the trained models, as well as the evaluation protocol on the synthetically masked data are made publicly available.




I Introduction

The current COVID-19 pandemic presented new challenges to biometric technologies [19]. To reduce the risk of spreading the virus, the use of contactless biometrics has become increasingly important. Face recognition (FR) in particular had already established itself as a contactless biometric modality before the pandemic due to its performance [5] and its passive, universal, and non-intrusive nature [25]. However, FR systems have also been confronted with new realities, most notably the wearing of masks by the general public to prevent the spread of the contagious virus. The presence of a mask that hides facial features can weaken an FR system [35, 9, 10] and thus reduce confidence in the system's decisions [19]. A study by the National Institute of Standards and Technology (NIST) [35] as well as a study by the Department of Homeland Security [2] concluded that the wearing of masks has a significant negative effect on the accuracy of FR systems. Further studies from the scientific community confirmed this [9, 10]. It is important to note that the negative effect of masks concerns not only the recognition performance of automated FR systems but also the performance of human operators [8], presentation attack detection [13], and quality estimation [15]. Therefore, it is important to find technical solutions that make FR systems robust to the wearing of masks in response to the given circumstances. The importance of this topic has recently led to the organization of several competitions addressing masked face recognition [48, 11, 5].

The challenge of masked faces for automatic FR came into scientific focus with the onset of the COVID-19 pandemic. Previously, some works dealt generally with the problem of occluded faces, including sunglasses, other facial accessories, and other face occlusions [26, 43, 39]. In the same direction, the detection (not the recognition) of occluded or masked faces [16, 2, 29] was also discussed in the literature. Such works addressed the problem of detecting faces that are largely occluded or partially covered by masks. Only a few works addressed enhancing the recognition performance on masked faces. Li et al. [28] proposed a cropping and attention-based approach to train a face recognition model on the periocular area of masked faces. Neto et al. [34] also focused on the upper part of the face; for this, they used a constrained triplet loss to obtain optimized embeddings for masked face recognition. Another approach that has been followed is to synthetically replace the area hidden by the mask using Generative Adversarial Networks (GANs) [41, 27, 18]. Li et al. [27] proposed a solution to in-paint the masked area while trying to maintain the identity information. A different approach was taken by Boutros et al. [4], where the authors proposed a self-restrained triplet loss to train an on-the-top solution that learns to transfer the templates of masked faces into templates that possess the properties of non-masked face templates. Several approaches were proposed as part of the MFRC-21 challenge [5]. Most of the submissions used a ResNet [21] architecture and the ArcFace [12] loss as a foundation. Anwar and Raychowdhury [1] trained a model on synthetically masked images using the Inception-ResNet v1 [40] with the triplet-based FaceNet loss [37]. Moreover, Zheng et al. [47] used a large-scale web-collected database and corresponding tags without manual annotations, and used frequency-domain information to train an FR model that is more robust to masked faces.

Fig. 1: Overview of the proposed MaskInv approach for masked face recognition: The pre-trained teacher network and the student network to be trained are forwarded the same images. With a probability of 0.5, a synthetic mask is added to the input image of the student network. During training, the mean squared error ($L_{mse}$) is calculated between the face embeddings of the teacher and the student network. The calculated error then contributes to the overall loss of the student network, which is additionally trained with the ElasticFace-Arc loss ($L_{EFA}$) [3].

Knowledge Distillation (KD) is a technique commonly used to improve the performance and generalizability of lightweight models. This is achieved by transferring knowledge learned by a teacher model to a (usually) smaller student model. The student model is guided by the teacher model to learn additional relationships discovered by the teacher that go beyond the information stored in the ground-truth labels [22]. For face recognition, KD has already been used to reduce the complexity of FR models to produce well-performing lightweight models [6, 33], or to counter the problem of low-resolution FR [42, 17]. Li et al. [27] used KD between a teacher and a student model to distill the feature distribution of unmasked faces to recover identity information on inpainted masked faces.

In this work, we propose a mask-invariant face recognition solution, namely MaskInv. MaskInv utilizes knowledge from a pre-trained face recognition model using KD on the template level while also being guided by a margin-based identity classification loss (ElasticFace [3]). The student model is trained with masked and non-masked face images so that it can deal with both cases, while the KD process ensures that the model produces embeddings of masked faces that are similar to those of non-masked faces of the same identities. We investigated single-stage and two-stage training paradigms, where the latter puts more emphasis on the KD at later training stages, which proved to enhance the masked FR performance. We additionally compare our solution to a baseline trained with the same paradigm, including face images augmented with masks, but without optimizing the embeddings using KD.

Our experiments demonstrate, in a step-wise ablation study, the accuracy gains of our MaskInv solution in different scenarios of masked face recognition. This study utilizes seven different benchmarks, two with real and five with simulated masked face images. We further prove the superiority of our MaskInv solution by comparing the achieved results to the top academic performers in the MFR 2021 masked face recognition challenge [5], where MaskInv outperformed all the academic solutions in both the masked vs masked and the masked vs non-masked scenarios.

The paper is structured as follows: in Section II we detail and rationalize our proposed approach. The experimental setup, the databases used for training and evaluation, and the evaluation criteria are described in Section III. Subsequently, we present and discuss our results in Section IV, both in terms of a detailed ablation study and comparison with SOTA. In Section V we conclude our work.

II Methodology

In this section, we present and rationalize our proposed methodology to create mask-invariant face embeddings through our MaskInv solution. Our approach achieves this by jointly learning the correct identity classification of masked and non-masked face images and ensuring that the embeddings of masked faces are similar to those of non-masked images of the same identities through embedding-level KD. The KD teaches a student network to process a masked face in a manner that produces an embedding similar to the non-masked face embedding produced by the teacher, and thus to neglect the non-identity-related information introduced by the mask.
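The training step described above can be sketched in a few lines. All callables here (`teacher`, `student`, `id_loss`, `kd_loss`, `apply_mask`) are placeholders for illustration, not the authors' actual implementation:

```python
import random

def maskinv_step(image, label, teacher, student, id_loss, kd_loss,
                 apply_mask, lam=100.0, p=0.5, rng=random):
    # The student sees a synthetically masked face with probability p,
    # while the teacher always receives the original, unmasked image.
    student_input = apply_mask(image) if rng.random() < p else image
    phi_t = teacher(image)          # non-masked reference embedding
    phi_s = student(student_input)  # embedding to be made mask-invariant
    # Combined objective: identity classification plus weighted template-level KD.
    return id_loss(phi_s, label) + lam * kd_loss(phi_s, phi_t)
```

The teacher is kept frozen; only the student receives gradients from this combined loss.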

A well-performing face recognition model trained on non-masked faces acts as the teacher model in our knowledge distillation architecture. A second FR model acts as the student network and is trained in interaction with the teacher to be mask-invariant, and thus to produce embeddings from masked faces that are similar to those produced from non-masked faces. A schematic overview of the proposed learning scenario is presented in Figure 1. During training, the same images are simultaneously fed to both the teacher and the student model. To the images forwarded to the student model, we apply a synthetic mask with a probability $p = 0.5$, while the images forwarded to the teacher network remain unaltered. The synthetic mask is created by placing a mask template on the face, depending on the landmarks used for face alignment during image pre-processing. The synthetic mask is applied only to a proportion of the images fed to the student model (probability $p$) to ensure that it still deals optimally with non-masked faces and to enable a more stable training process. For the teacher network, we use a pre-trained auxiliary network that guides the newly trained student network. To achieve our goal, the student network is not only trained to produce a correct classification decision but additionally to optimize the produced embedding to be similar to that of the teacher network (on non-masked faces). Thus, the student network is trained using a combined loss $L_{total}$, consisting of two different losses. Formally, we define the total loss as:



$$L_{total} = L_{EFA} + \lambda \cdot L_{mse}$$

Here, $L_{EFA}$ refers to the recently published SOTA FR loss function ElasticFace-Arc [3], and $L_{mse}$ to the mean squared error loss in the template-level KD process, weighted by $\lambda$. The ElasticFace-Arc loss function relaxes the fixed margin constraint of similarly high-performing FR loss functions and therefore provides space for flexible identity separability. It outperforms several other SOTA FR loss functions, especially on hard cross-pose benchmarks [3]. Formally, it can be defined as [3]:


$$L_{EFA} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{s\,\cos(\theta_{y_i} + E(m,\sigma))}}{e^{s\,\cos(\theta_{y_i} + E(m,\sigma))} + \sum_{j=1, j \neq y_i}^{C} e^{s\,\cos\theta_j}}$$

where $N$ denotes the batch size, $C$ the number of identities, $s$ the scale parameter, $m$ the margin parameter, and $\sigma$ the standard deviation. The function $E(m, \sigma)$ returns a random value from a Gaussian distribution with the mean $m$ and the standard deviation $\sigma$. All the parameters are set as defined in [3]. The $L_{mse}$ is calculated as part of the KD between the feature embeddings of the teacher network and the feature embeddings of the student network to optimize the embedding itself rather than the network classification behavior. This ensures that the embedding distortions caused by masks are kept to a minimum and therefore guides the KD process to produce a mask-invariant student network. The used loss, mean squared error, can be formalized as:


$$L_{mse} = \frac{1}{q}\sum_{i=1}^{q}\left(\Phi_s^{(i)} - \Phi_t^{(i)}\right)^2$$

where $\Phi_s$ and $\Phi_t$ are the feature representations obtained from the embedding layer of the student and teacher model, respectively, and $q$ is the size of the embeddings.

Since the learned feature embeddings are normalized, the range of $L_{mse}$ is rather small and thus, as proposed by [6], we weight it with $\lambda = 100$ during the first training step. This enables the knowledge transfer by allowing the $L_{mse}$ to contribute to the overall loss while keeping the emphasis on learning the identity classification through the $L_{EFA}$.
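A pure-Python sketch of the combined objective follows; the per-sample loop and helper names are our own illustration, with the hyperparameter values ($s$, $m$, $\sigma$, $\lambda$) taken from the settings reported in this paper and in [3]:

```python
import math
import random

def elasticface_arc(cos_rows, labels, s=64.0, m=0.5, sigma=0.0175, rng=random):
    """ElasticFace-Arc (sketch): ArcFace with a margin drawn from N(m, sigma).

    cos_rows[i][j] is the cosine similarity of sample i to class center j."""
    total = 0.0
    for cos_j, y in zip(cos_rows, labels):
        margin = rng.gauss(m, sigma)  # E(m, sigma) in the paper's notation
        theta_y = math.acos(max(-1.0, min(1.0, cos_j[y])))
        logits = [s * c for c in cos_j]
        logits[y] = s * math.cos(theta_y + margin)  # margin-penalized target
        mx = max(logits)
        log_denom = mx + math.log(sum(math.exp(l - mx) for l in logits))
        total += log_denom - logits[y]  # cross-entropy on the target class
    return total / len(cos_rows)

def mse_embedding(phi_s, phi_t):
    # Template-level KD loss between student and teacher embeddings (L_mse).
    return sum((a - b) ** 2 for a, b in zip(phi_s, phi_t)) / len(phi_s)

def maskinv_total(cos_rows, labels, phi_s, phi_t, lam=100.0, **loss_kw):
    # Combined objective: L_total = L_EFA + lambda * L_mse.
    return elasticface_arc(cos_rows, labels, **loss_kw) \
        + lam * mse_embedding(phi_s, phi_t)
```

Note how the random margin only penalizes the target class logit, so a larger drawn margin makes the classification objective strictly harder.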

We propose two different paradigms based on the methodology described above. In the MaskInv-HG (Mask-invariant High Guidance) approach, we increase the weighting to $\lambda = 3000$ once the loss has stabilized, to further emphasize the adaptation of the network to the masked data. In the MaskInv-LG (Mask-invariant Low Guidance) approach, the weighting remains unchanged throughout the training process. For a detailed ablation study, we additionally present the results of a third solution, where the $\lambda$ is set to zero and thus no KD is applied; instead, the student network is trained independently with faces augmented with a synthetic mask with the probability $p$. This solution will be referred to as ElasticFace-Arc-Aug. All the training parameters will be introduced in the next section.
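The difference between the two paradigms amounts to a simple schedule of the KD weight; a sketch, with the switch iteration being the one used in the training setup:

```python
def kd_weight(iteration, paradigm="HG", base=100.0, high=3000.0,
              switch_at=227_000):
    # MaskInv-LG keeps the KD weight fixed throughout training, while
    # MaskInv-HG raises it once the loss has stabilized (after 227k iterations).
    if paradigm == "HG" and iteration >= switch_at:
        return high
    return base
```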

III Experimental Setup

III-A Training Setup

As the teacher network, we use a pre-trained FR model based on the ResNet-100 [21] architecture, trained on the MS1MV2 dataset [12] with the ElasticFace-Arc loss [3], which has been made publicly available by the authors (for the training details of the teacher model, we refer to the original ElasticFace paper [3]). We chose this model because it advanced the SOTA (such as ArcFace [12] and MagFace [31]) on six difficult mainstream benchmarks and is publicly available.

For the student model, we also use the ResNet-100 [21] architecture, as it is widely used in SOTA FR approaches [31, 12, 3]. For the ElasticFace-Arc loss, we set the scale parameter $s$ to 64, similar to [12, 3, 24], the margin parameter $m$ to 0.5, and the standard deviation $\sigma$ to 0.0175, following [3]. The mini-batch size is set to 512. The model is trained with the Stochastic Gradient Descent (SGD) optimizer with an initial learning rate of 0.1. The momentum is set to 0.9 and the weight decay to 5e-4. The learning rate is divided by 10 at 80k, 140k, and 210k training iterations. The total number of training iterations is set to 295k. With these parameters and this training process, we follow the training procedure defined in [3]. For the MaskInv-HG variant, we increase the weighting $\lambda$ from 100 to 3000 after 227k iterations to further sensitize the model to the masked data. For the MaskInv-LG variant, the weighting remains unchanged throughout the training. In the experiments, we investigate the performance of both the student model with low guidance (MaskInv-LG) and the student model with high guidance (MaskInv-HG) in a detailed ablation study. We also evaluate our ElasticFace-Arc-Aug solution, where the $\lambda$ is set to zero (no KD). Here, the student network is trained independently (with the same training procedure as the teacher) on faces augmented with a synthetic mask with the probability $p$.
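The step-wise learning rate decay can be written as a small helper; a sketch, where the base learning rate and milestones are the values stated in this section:

```python
def sgd_learning_rate(iteration, base_lr=0.1):
    # The learning rate is divided by 10 at 80k, 140k, and 210k iterations;
    # training runs for 295k iterations in total.
    lr = base_lr
    for milestone in (80_000, 140_000, 210_000):
        if iteration >= milestone:
            lr /= 10.0
    return lr
```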

Following recent trends [12, 3, 31, 24], the model is trained on the MS1MV2 dataset [12], which is the same data used to train the teacher model. MS1MV2 is a refined version of MS-Celeb-1M [20] and contains 5.8M images of 85k identities. For the teacher network, the images are used unmodified, while for the student network, synthetic masks with random colors and random small deviations in shape are added with a probability of 0.5. The synthetic masked images were created by mapping a template mask image onto the landmarks extracted for pre-processing, with small variations in the mapped key points. All the images are aligned and cropped to 112x112x3 using MTCNN [44] and then normalized to have pixel values between -1 and 1. The simulated mask approach will be made publicly available to ensure comparability and reproducibility.
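For illustration, the pixel normalization step might look as follows; the exact linear scaling is our assumption, as the text only specifies the target range of [-1, 1]:

```python
def normalize(pixel_values):
    # Map 8-bit pixel values in [0, 255] of the aligned 112x112x3 crop
    # to the range [-1, 1] via linear scaling.
    return [p / 127.5 - 1.0 for p in pixel_values]
```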

Fig. 2: (a, e) Images from the MS1MV2 dataset used for training, (b, f) images from the LFW benchmark, (c, g) images from the MFR2 dataset, (d, h) images from the MFRC-21 dataset. The masks in (e) and (f) were synthetically added for training or evaluation purposes on masked images. The images in each column are from the same identity.

III-B Evaluation Dataset & Benchmarks

To demonstrate the effect of our proposed masked face recognition approach, we investigate its behavior on both real and simulated masked face datasets. Furthermore, we observe the performance on non-masked benchmarks to show that our MaskInv solution can still operate well on non-masked data.

Regarding the performance on real masked data, we use the MFRC-21 database of the MFR competition [5] as well as MFR2 [1]. The MFRC-21 dataset consists of images of 47 individuals taken in a collaborative but varying scenario. The authors of MFRC-21 provided two different evaluation scenarios: non-masked vs masked and masked vs masked. The evaluation on this database is essential, as it enables a wide-ranging comparison to the SOTA solutions presented in the competition [5]. In addition to the MFRC-21, the MFR2 dataset [1] is also used in this work. MFR2 consists of in-the-wild masked images of 53 identities with a total of 269 images. The authors provide a list of 848 pairs of images to be compared.

We additionally opted to investigate the performance of our MaskInv solution on large-scale databases. To do that, we rely on simulated masks to adapt established large-scale face recognition benchmarks to evaluating masked face recognition performance. To generate the simulated masks, we follow the same synthetic mask-creation process detailed in Subsection III-A. However, we do not use the random shift in the mask position, to maintain a realistic appearance, but we retain the random color of the mask. Following the proposed scenarios of the MFRC-21 competition, we evaluate both masked vs masked and non-masked vs masked comparison pairs, where we apply the simulated mask either on both images or on only one image of the comparison pair. As a basis for our simulated large-scale masked dataset, we use five mainstream datasets that also cover various additional challenges: LFW [23], CFP-FP [38], AgeDB-30 [32], CALFW [46], and CPLFW [45].

LFW [23] is an unconstrained face verification benchmark and contains 13,233 images of 5,749 different identities. We follow the standard protocol and applied the masks on the defined 6,000 comparison pairs. The CFP-FP dataset [38] addresses the comparison between frontal and profile faces, and its evaluation protocol contains 3,500 genuine and 3,500 imposter pairs. AgeDB [32] is an in-the-wild dataset for age-invariant face verification. We use the most reported and most challenging scenario, AgeDB-30, with an age gap of 30 years between the images of the individuals. Both the CALFW [46] and the CPLFW [45] datasets are based on the LFW dataset. While the CALFW dataset focuses on cross-age evaluation, the CPLFW dataset focuses on cross-pose evaluation. Both dataset protocols provide 3,000 genuine and 3,000 imposter comparison pairs. The imposter pairs of CALFW and CPLFW are selected from the same gender and ethnicity to reduce the effect of these attributes on the recognition performance.

In summary, we use seven databases to evaluate our MaskInv solution: two with real masks and five common face recognition benchmarks with simulated masks. Sample images from the used datasets are shown in Figure 2.

III-C Evaluation Criteria

For the evaluation, we consider several metrics. For the sake of comparability and reproducibility, we follow the evaluation metrics used and proposed in the utilized benchmarks and datasets. We nevertheless acknowledge the evaluation metrics defined in the ISO/IEC 19795-1 [30] standard.

The verification performance on the MFRC-21 dataset [5] is evaluated and reported as the false non-match rate (FNMR) at two operation points. The operation points are denoted FMR1000 and FMR100 and refer to the points which provide the lowest FNMR for a false match rate (FMR) of at most 0.1% and 1%, respectively. As the FMR1000 has been proposed as the best practice evaluation operation point for high-security scenarios, e.g. automatic border control systems, by the European Border and Coast Guard Agency (Frontex) [14], we rate it higher than FMR100. To get an indication of generalizability, we also report the Fisher Discriminant Ratio (FDR) [36] between the genuine and imposter distributions as a separability measure, similar to [5].
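A minimal sketch of how these two measures can be computed from genuine and imposter comparison scores (our own illustration; it assumes similarity scores, where a higher score means a more likely match):

```python
def fnmr_at_fmr(genuine, imposter, target_fmr):
    """Lowest FNMR over decision thresholds whose FMR does not exceed target_fmr.

    A pair is declared a match when its similarity score is >= the threshold."""
    candidates = sorted(set(genuine) | set(imposter))
    candidates.append(candidates[-1] + 1e-6)  # threshold that rejects all pairs
    best_fnmr = 1.0
    for t in candidates:
        fmr = sum(s >= t for s in imposter) / len(imposter)
        if fmr <= target_fmr:
            fnmr = sum(s < t for s in genuine) / len(genuine)
            best_fnmr = min(best_fnmr, fnmr)
    return best_fnmr

def fisher_discriminant_ratio(genuine, imposter):
    # FDR = (mu_g - mu_i)^2 / (var_g + var_i); higher means better separability.
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        mu = mean(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)
    return (mean(genuine) - mean(imposter)) ** 2 / (var(genuine) + var(imposter))
```

FMR1000 and FMR100 then correspond to `fnmr_at_fmr(genuine, imposter, 0.001)` and `fnmr_at_fmr(genuine, imposter, 0.01)`.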

On the MFR2 dataset [1], we follow the evaluation metrics reported in the original work. This includes the true positive rate (TPR) at a false acceptance rate (FAR) of 1/2000 (FAR2000), the accuracy achieved at this operation point (ACC2000), and the maximum accuracy of the network (ACC) [1]. For the benchmarks LFW [23], CFP-FP [38], AgeDB-30 [32], CALFW [46], and CPLFW [45], we follow the original metric of the respective benchmarks, where the verification performance is reported as the verification accuracy in percentage points, as defined in the corresponding benchmark references.

Mask vs No-Mask FMR1000 FMR100 FDR
ArcFace [12] 0.07154 0.06009 8.6899
MagFace [31] 0.07641 0.05906 8.4147
ElasticFace-Arc (teacher) [3] 0.08112 0.06004 8.2282
ElasticFace-Arc-Aug (ours) 0.05792 0.05692 10.8983
MaskInv-LG (ours) 0.05951 0.05696 10.6944
MaskInv-HG (ours) 0.05849 0.05637 11.3271
TABLE I: Ablation Study on the MFRC-21 dataset - Mask vs No-Mask: With the KD and the high guidance (MaskInv-HG), the separability and the performance at FMR100 increased.
Mask vs Mask FMR1000 FMR100 FDR
ArcFace [12] 0.06504 0.05925 9.6864
MagFace [31] 0.06694 0.05794 9.8259
ElasticFace-Arc (teacher) [3] 0.07109 0.05795 9.5872
ElasticFace-Arc-Aug (ours) 0.05808 0.05606 10.9477
MaskInv-LG (ours) 0.05923 0.05681 10.9671
MaskInv-HG (ours) 0.05886 0.05654 11.5337
TABLE II: Ablation Study on the MFRC-21 dataset - Mask vs Mask: Similar to the Mask vs Non-Mask scenario, the KD, and the high guidance were beneficial to the separability.
Solution FAR2000 ACC2000 ACC
ArcFace [12] 85.61% 92.81% 93.51%
MagFace [31] 88.92% 94.46% 95.17%
ElasticFace-Arc (teacher) [3] 83.25% 91.63% 95.05%
ElasticFace-Arc-Aug (ours) 92.45% 96.11% 96.93%
MaskInv-LG (ours) 91.98% 95.99% 96.46%
MaskInv-HG (ours) 92.21% 96.11% 96.34%
TABLE III: Ablation Study on the MFR2 dataset: The FAR2000 and ACC2000 increased with the KD and the high guidance.

IV Results

In this section, we present the evaluation results achieved by our MaskInv solution. We start with an extended ablation study to investigate the influence of our proposed solution in its two training paradigms MaskInv-LG and MaskInv-HG in comparison to the teacher baseline (ElasticFace-Arc) and the ElasticFace-Arc-Aug (trained without KD). Later on, we take a closer look at the performance of our approach in comparison to results published in the literature.

IV-A Ablation Study

Fig. 3: The ElasticFace-Arc loss (a) and the KD loss (b) for the MaskInv-HG and MaskInv-LG training paradigms. For the MaskInv-HG, the $\lambda$ (see Equation 1) is increased from 100 to 3000 after 227k iterations. This change results in only a minimal change in the ElasticFace-Arc loss, but a large drop in the embedding optimization loss (KD loss), compared to the MaskInv-LG training paradigm where the $\lambda$ is unchanged.

In the following ablation study, we investigate step by step the impact of our two-stage training paradigm by looking into (1) the teacher baseline, (2) the model trained with mask-augmented images but no KD (ElasticFace-Arc-Aug, $\lambda = 0$), (3) the single-stage MaskInv-LG, and (4) the two-stage MaskInv-HG solutions. We additionally present our ablation study results in perspective of two additional (to ElasticFace-Arc) top-performing face recognition solutions (ArcFace [12] and MagFace [31], both also based on ResNet-100 and trained on MS1MV2 [12]) to motivate mask-specific solutions, while we leave the comparison to SOTA solutions that specifically target masked face recognition to the next section. The results of the different models on the MFRC-21 dataset are shown in Table I and Table II for the masked vs non-masked and the masked vs masked scenarios, respectively. We focus our discussion on the verification performance at FMR1000, as this has been proposed by Frontex as a best practice operation point for processes such as automatic border control [14]. Table I and Table II show that the KD is beneficial to the verification performance in comparison to the teacher model and that the adapted models outperform the traditional FR models on masked faces. While the ElasticFace-Arc-Aug shows in a few cases a better performance than MaskInv-HG, it lacks the same level of separability, measured in FDR, and thus the expected generalizability of the performance. Table III presents a similar ablation study on the MFR2 database, where again the proposed MaskInv-HG showed performance superior to the teacher and the MaskInv-LG model by scoring a FAR2000 of 92.21%, in comparison to 91.98% and 83.25% for the MaskInv-LG and the teacher network, respectively. A similar trend can be seen for the ACC2000 metric.
The maximum accuracy ACC shows mixed results; however, such a metric is not measured at a comparable operation point between different solutions and thus cannot lead to a fair comparison, which is also the reason why such metrics are not included in the verification performance metrics of ISO/IEC 19795-1 [30]. The poorer performance of the two well-known MagFace- and ArcFace-based models on the real masked datasets (MFRC-21 and MFR2) shows that a tailored solution for MFR, such as the one we present here, is necessary.

In Table IV we present the results on five face recognition benchmarks. The experiments were performed on three different versions of the datasets: 1) on the unaltered data (no mask), 2) on the datasets with synthetic masks applied to one of the images of each pair (masked vs non-masked), and 3) on the datasets with synthetic masks on both images of each pair (masked vs masked). The results in Table IV show that all three models are better at handling unmasked data than masked data and outperform conventional FR models in the latter case. The proposed MaskInv solutions guided by the KD outperform the teacher model as well as the ElasticFace-Arc-Aug model on most synthetic mask benchmarks. However, the MaskInv-LG and MaskInv-HG alternate in the top performance spot. For example, in the masked vs non-masked setting of the CFP-FP benchmark, the MaskInv-HG comes first with an accuracy of 96.94%, followed by 96.83% and 95.91% for the MaskInv-LG and ElasticFace-Arc-Aug, respectively; however, all are far ahead of the teacher and the other baseline methods. Similar outcomes can be seen in the masked vs masked setting, with the ElasticFace-Arc-Aug edging closer to the MaskInv solutions. This is the case because the masked vs masked setting benefits less from the main goal of the MaskInv solution, which is to create face representations that are similar between masked and unmasked faces, while the masked vs masked setting requires only similarity between masked faces. The lower masked face recognition performance of the solutions based on MagFace, ArcFace, and ElasticFace-Arc (teacher) again indicates the need for specifically designed solutions such as our MaskInv.

To examine the influence of the proposed training paradigm in detail, we plot the two losses along the training iterations in Figure 3. Compared to the MaskInv-LG, the $\lambda$ (see Equation 1) is increased for the MaskInv-HG from 100 to 3000 after 227k iterations, resulting in a minimal change in the ElasticFace-Arc loss but a large drop in the embedding optimization loss (KD loss). This corresponds to the targeted effect of not affecting the identity classification performance of the model while enhancing the similarity of the embedding (whether masked or not) to that of the teacher model.

In summary, the ablation study showed that our proposed MaskInv solution benefits from learning to produce similar embeddings for masked and non-masked faces through knowledge distillation. This is specifically beneficial when comparing masked to non-masked faces, as demonstrated and rationalized earlier, which is the more practical comparison scenario (e.g. a non-masked reference in a passport). The experiments performed on benchmarks addressing special challenges such as cross-pose and cross-age show that the proposed approach also transfers to these cases. Furthermore, the performance of traditional non-MFR models shows that a separate solution for masked faces is needed to achieve competitive results on masked face images.

LFW CFP-FP AgeDB-30 CALFW CPLFW
No Mask
ArcFace [12] 99.82 98.27 98.15 95.45 92.08
MagFace [31] 99.83 98.46 98.17 96.15 92.87
ElasticFace-Arc (teacher) [3] 99.80 98.67 98.35 96.17 93.27
ElasticFace-Arc-Aug (ours) 99.80 97.61 97.68 96.08 92.40
MaskInv-LG (ours) 99.82 97.53 97.83 96.05 92.65
MaskInv-HG (ours) 99.82 97.57 97.83 96.10 92.82
Masked vs Non-Masked
ArcFace [12] 99.45 95.50 94.68 94.76 89.83
MagFace [31] 99.33 94.00 94.12 94.15 88.67
ElasticFace-Arc (teacher) [3] 99.40 95.29 95.38 94.42 90.40
ElasticFace-Arc-Aug (ours) 99.58 95.91 96.73 95.58 91.08
MaskInv-LG (ours) 99.67 96.83 97.07 95.90 91.98
MaskInv-HG (ours) 99.73 96.94 96.69 95.75 91.67
Masked vs Masked
ArcFace [12] 99.00 92.01 92.37 93.32 87.07
MagFace [31] 98.47 89.26 91.82 92.62 84.72
ElasticFace-Arc (teacher) [3] 98.90 92.01 93.47 93.73 87.55
ElasticFace-Arc-Aug (ours) 99.55 96.07 95.08 95.25 90.10
MaskInv-LG (ours) 99.57 96.29 96.51 95.45 91.45
MaskInv-HG (ours) 99.57 96.29 96.47 95.42 91.32
TABLE IV: Ablation Study on Face Recognition Benchmarks in three different scenarios, Accuracy in %: The proposed MaskInv models are capable of performing well on unmasked data and suffer only a small drop in accuracy. The MaskInv-HG and MaskInv-LG are the top two performers on most benchmarks for both the masked vs non-masked and masked vs masked scenarios.

IV-B Comparison with SOTA

Solution FMR1000 FMR100 FDR
MaskInv-HG (ours) 0.05849 0.05637 11.3271
MTArcFace [5] 0.05860 0.05860 10.7497
MaskedArcFace [5] 0.05963 0.05687 10.4484
SMT-MFR-1 [5] 0.06003 0.05704 10.6824
LMI-SMT-MFR-1 [5] 0.06205 0.05722 9.7384
SMT-MFR-2 [5] 0.06268 0.05584 11.2025
VIPLFACE-M [5] 0.06279 0.05681 8.2371
LMI-SMT-MFR-2 [5] 0.07096 0.05848 8.5278
VIPLFACE-G [5] 0.07269 0.05750 8.1693
MFR-NMRE-B [5] 0.08344 0.05819 7.9504
MFR-NMRE-F [5] 0.17660 0.08125 5.3876
Neto et al. [34] - 0.28252 -
TABLE V: Comparison with SOTA on the MFRC-21 Dataset - Mask vs No-Mask. Our MaskInv-HG model outperforms the top-performing academic solutions on the FMR1000 and the FDR metrics. On the FMR100, our model is placed second.
Solution FMR1000 FMR100 FDR
MaskInv-HG (ours) 0.05886 0.05654 11.5337
SMT-MFR-1 [5] 0.06012 0.05825 11.0344
LMI-SMT-MFR-1 [5] 0.06061 0.05856 9.9091
SMT-MFR-2 [5] 0.06172 0.05792 11.3090
MaskedArcFace [5] 0.06245 0.05825 10.5731
VIPLFACE-G [5] 0.06359 0.05843 9.4147
MTArcFace [5] 0.06390 0.05850 10.1700
LMI-SMT-MFR-2 [5] 0.06586 0.05916 8.8742
VIPLFACE-M [5] 0.06788 0.05759 8.9859
MFR-NMRE-B [5] 0.12903 0.05970 8.1196
MFR-NMRE-F [5] 0.19890 0.09630 4.7322
Neto et al. [34] - 0.23507 -
TABLE VI: Comparison with SOTA on the MFRC-21 Dataset - Mask vs Mask. Our MaskInv-HG model outperforms all top-performing academic solutions on all evaluation metrics.

In this subsection, we compare our proposed approach with previously published results on both the MFRC-21 [5] and MFR2 [1] datasets. Since the ablation study demonstrated the benefit of our proposed MaskInv-HG solution, especially in terms of generalization, we compare the results of this solution to SOTA results. The comparison to SOTA is limited to these two benchmarks (out of the seven used) because the remaining benchmarks have no comparable results in the literature. For the MFRC-21 dataset, we limit our evaluation to verification performance and, unlike the challenge organizers, do not consider model compactness, as we do not propose a novel architecture in this work. In addition, we only consider the top-ranked academic submissions to the challenge, as they provide detailed descriptions of their proposed approaches. We compare our solution to the ten top-ranked academic approaches in the MFRC-21 challenge [5] in the masked vs non-masked (Table V) and the masked vs masked (Table VI) scenarios. A more detailed description of the different solutions in the MFRC-21 competition can be found in the competition paper [5].
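The FMR1000 and FMR100 operating points report the false non-match rate (FNMR) at a fixed false match rate (FMR) of 0.1% and 1%, respectively [30]. As a minimal sketch of how such operating points can be computed from comparison scores (the score distributions below are synthetic and purely illustrative):

```python
import numpy as np

def fnmr_at_fmr(genuine_scores, impostor_scores, fmr_target):
    """False non-match rate at the decision threshold where the
    false match rate equals fmr_target (higher score = more similar)."""
    imp_sorted = np.sort(np.asarray(impostor_scores))[::-1]  # descending
    # threshold at which roughly fmr_target of impostor scores are accepted
    k = max(int(np.floor(fmr_target * len(imp_sorted))) - 1, 0)
    threshold = imp_sorted[k]
    # fraction of genuine comparisons falling below the threshold
    return float(np.mean(np.asarray(genuine_scores) < threshold))

# synthetic, purely illustrative score distributions
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 10000)
impostor = rng.normal(0.2, 0.1, 10000)
fmr1000 = fnmr_at_fmr(genuine, impostor, 0.001)  # FNMR at FMR = 0.1%
fmr100 = fnmr_at_fmr(genuine, impostor, 0.01)    # FNMR at FMR = 1%
```

Note that the stricter FMR1000 operating point necessarily yields an FNMR at least as high as FMR100, which is why lower FMR1000 values in the tables indicate stronger performance.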

In both scenarios, our proposed MaskInv-HG outperformed the academic solutions submitted to the MFRC-21 challenge. In the masked vs non-masked scenario (Table V), our model achieved an FMR1000 of 0.05849, beating the best-performing academic solution, MTArcFace, which achieved an FMR1000 of 0.05860. As mentioned earlier, we focus our evaluation on the FMR1000, as it has been proposed as a best-practice operating point for automatic border control by Frontex [14]. On the FMR100, only the SMT-MFR-2 model achieved a better verification performance than our proposed MaskInv-HG model. In addition, our model achieves the highest FDR, which indicates a better separability of the impostor and genuine distributions than the other approaches and thus suggests higher generalizability. In the masked vs masked scenario (Table VI), our MaskInv-HG model outperforms all solutions submitted to the challenge on all evaluation metrics. In contrast to the other top-3 solutions in the masked vs non-masked scenario, our model's FMR1000 on masked vs masked (0.05886) was only slightly worse than on masked vs non-masked (0.05849). This demonstrates the flexibility of our proposed MaskInv-HG model in handling both scenarios well by creating similar face representations for masked and non-masked faces.
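The FDR reported in the tables is the Fisher Discriminant Ratio [36], which measures how well-separated the genuine and impostor score distributions are. A minimal sketch of its computation, again on synthetic, purely illustrative score distributions:

```python
import numpy as np

def fdr(genuine_scores, impostor_scores):
    """Fisher Discriminant Ratio between the genuine and impostor score
    distributions; larger values indicate better separability."""
    mu_g, mu_i = np.mean(genuine_scores), np.mean(impostor_scores)
    var_g, var_i = np.var(genuine_scores), np.var(impostor_scores)
    return float((mu_g - mu_i) ** 2 / (var_g + var_i))

# synthetic, purely illustrative score distributions
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 10000)
impostor = rng.normal(0.2, 0.1, 10000)
separability = fdr(genuine, impostor)  # ~12.5 in expectation here
```

Unlike FMR1000 and FMR100, the FDR is threshold-free: it summarizes the entire score distributions in one number, which is why it serves as an indicator of generalizability.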

Solution FAR2000 ACC2000 ACC
MaskInv-HG (ours) 91.21% 96.11% 96.34%
MTF Retrained [1] 82.78% 91.18% 95.99%
TABLE VII: Comparison with SOTA on the MFR2 Dataset: Our proposed MaskInv-HG model outperforms the model proposed by the authors of the dataset in each evaluation metric.

In Table VII, we compare our results on the MFR2 dataset with the solution "MTF Retrained" proposed by the authors of the dataset [1]. The "MTF Retrained" model is based on the Inception-ResNet v1 [40] and is trained with the FaceNet triplet loss [37] on a subset of the VGGFace2 dataset [7], to whose images different synthetic masks were added. In all three evaluation metrics, our MaskInv-HG model outperformed the MTF-Retrained model: it improved the FAR2000 by 8.43 percentage points (from 82.78% to 91.21%), the corresponding accuracy at FAR2000 (ACC2000) by 4.93 percentage points (from 91.18% to 96.11%), and the maximum accuracy from 95.99% to 96.34%.
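The FaceNet triplet loss used to train the MTF Retrained baseline optimizes embeddings so that an anchor is closer to a positive sample of the same identity than to a negative sample of another identity by at least a margin [37]. A minimal NumPy sketch (the margin value of 0.2 follows the FaceNet paper but is illustrative here):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss on embeddings: the anchor-positive
    squared distance should undercut the anchor-negative squared
    distance by at least `margin`."""
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(d_pos - d_neg + margin, 0.0)

# well-separated triplet: the constraint is satisfied, the loss is zero
a = np.array([1.0, 0.0])
loss = triplet_loss(a, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```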

In summary, our proposed approach outperformed the previous SOTA on both the MFRC-21 challenge and the MFR2 evaluation protocol. This demonstrates the ability of the proposed MaskInv solution to adapt to masked faces and provides a step forward toward accurate masked face recognition in both scenarios, masked vs masked and masked vs non-masked.

V Conclusion

In this paper, we proposed a mask-invariant face recognition solution that not only aims at building discriminative face embeddings, but extends this goal to building embeddings that maintain their intra-identity similarity whether or not a mask is worn. This novel approach, MaskInv, is based on jointly learning to separate identities despite masks through identity classification learning, and learning to produce similar embeddings for masked and non-masked faces of the same identity through embedding-level knowledge distillation from a teacher network. In a detailed ablation study on two real masked datasets as well as on five mainstream face verification benchmarks, the different stages of our proposed approach were shown to enhance masked face recognition performance. The proposed solution outperformed current SOTA approaches when compared to the academic solutions submitted to the recent masked face recognition competition MFRC-21. Moreover, our proposed solution maintains a high level of accuracy when verifying non-masked faces, as we demonstrated on five widely used benchmarks addressing different variations, including cross-pose and cross-age verification. As the full effect of this and future pandemics on the world is still unknown, masked face recognition, which has been brought into focus by the global COVID-19 pandemic, will remain of interest to society and will require novel solutions. Additionally, the potential of the presented mask-invariant face representation learning may not be limited to masked face recognition, but could also prove useful for the general problem of occluded face recognition, which remains to be studied.
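The combined objective summarized above can be sketched as a classification term plus an embedding-level distillation term. Note that the MSE form of the distillation term and the equal weighting below are illustrative assumptions, not the exact formulation of the paper:

```python
import numpy as np

def maskinv_style_loss(student_masked_emb, teacher_unmasked_emb,
                       classification_loss, kd_weight=1.0):
    """Sketch of a combined objective in the spirit of MaskInv: a
    margin-based identity classification loss plus an embedding-level
    KD term pulling the student's embedding of a masked face towards
    the teacher's embedding of the same, non-masked face. The MSE form
    and the weighting are illustrative assumptions."""
    kd_term = float(np.mean((student_masked_emb - teacher_unmasked_emb) ** 2))
    return classification_loss + kd_weight * kd_term
```

When the student's embedding of a masked face matches the teacher's embedding of the non-masked face, the distillation term vanishes and only the identity classification loss remains.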


  • [1] A. Anwar and A. Raychowdhury (2020) Masked face recognition for secure authentication. CoRR abs/2008.11104. Cited by: §I, §III-B, §III-C, §IV-B, §IV-B, TABLE VII.
  • [2] B. Batagelj, P. Peer, V. Štruc, and S. Dobrišek (2021) How to correctly detect face-masks for covid-19 from visual information?. Applied Sciences 11 (5). External Links: Link, ISSN 2076-3417, Document Cited by: §I, §I.
  • [3] F. Boutros, N. Damer, F. Kirchbuchner, and A. Kuijper (2021) ElasticFace: elastic margin loss for deep face recognition. CoRR abs/2109.09416. Cited by: Fig. 1, §I, §II, §III-A, §III-A, §III-A, TABLE I, TABLE II, TABLE III, TABLE IV.
  • [4] F. Boutros, N. Damer, F. Kirchbuchner, and A. Kuijper (2021) Self-restrained triplet loss for accurate masked face recognition. External Links: 2103.01716 Cited by: §I.
  • [5] F. Boutros, N. Damer, J. N. Kolf, K. Raja, F. Kirchbuchner, R. Ramachandra, A. Kuijper, P. Fang, C. Zhang, F. Wang, D. Montero, N. Aginako, B. Sierra, M. Nieto, M. E. Erakin, U. Demir, H. K. Ekenel, A. Kataoka, K. Ichikawa, S. Kubo, J. Zhang, M. He, D. Han, S. Shan, K. Grm, V. Struc, S. Seneviratne, N. Kasthuriarachchi, S. Rasnayaka, P. C. Neto, A. F. Sequeira, J. R. Pinto, M. Saffari, and J. S. Cardoso (2021) MFR 2021: masked face recognition competition. In IJCB, pp. 1–10. Cited by: §I, §I, §I, §III-B, §III-C, §IV-B, TABLE V, TABLE VI.
  • [6] F. Boutros, P. Siebke, M. Klemt, N. Damer, F. Kirchbuchner, and A. Kuijper (2021) PocketNet: extreme lightweight face recognition network using neural architecture search and multi-step knowledge distillation. CoRR abs/2108.10710. Cited by: §I, §II.
  • [7] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman (2018) VGGFace2: A dataset for recognising faces across pose and age. In FG, pp. 67–74. Cited by: §IV-B.
  • [8] N. Damer, F. Boutros, M. Süßmilch, M. Fang, F. Kirchbuchner, and A. Kuijper (2021) Masked face recognition: human vs. machine. CoRR abs/2103.01924. Cited by: §I.
  • [9] N. Damer, F. Boutros, M. Süßmilch, F. Kirchbuchner, and A. Kuijper (2021) Extended evaluation of the effect of real and simulated masks on face recognition performance. IET Biometrics 10 (5), pp. 548–561. Cited by: §I.
  • [10] N. Damer, J. H. Grebe, C. Chen, F. Boutros, F. Kirchbuchner, and A. Kuijper (2020) The effect of wearing a mask on face recognition performance: an exploratory study. In BIOSIG, LNI, Vol. P-306, pp. 1–10. Cited by: §I.
  • [11] J. Deng, J. Guo, X. An, Z. Zhu, and S. Zafeiriou (2021) Masked face recognition challenge: the insightface track report. CoRR abs/2108.08191. Cited by: §I.
  • [12] J. Deng, J. Guo, N. Xue, and S. Zafeiriou (2019) ArcFace: additive angular margin loss for deep face recognition. In CVPR, pp. 4690–4699. Cited by: §I, §III-A, §III-A, §III-A, TABLE I, TABLE II, TABLE III, §IV-A, TABLE IV.
  • [13] M. Fang, N. Damer, F. Kirchbuchner, and A. Kuijper (2021) Real masks and spoof faces: on the masked face presentation attack detection. Pattern Recognition, pp. 108398. External Links: ISSN 0031-3203 Cited by: §I.
  • [14] Frontex (2015) Best practice technical guidelines for automated border control (abc) systems. Cited by: §III-C, §IV-A, §IV-B.
  • [15] B. Fu, F. Kirchbuchner, and N. Damer (2021) The effect of wearing a face mask on face image quality. CoRR abs/2110.11283. External Links: 2110.11283 Cited by: §I.
  • [16] S. Ge, J. Li, Q. Ye, and Z. Luo (2017) Detecting masked faces in the wild with lle-cnns. In CVPR, pp. 426–434. Cited by: §I.
  • [17] S. Ge, S. Zhao, C. Li, and J. Li (2019) Low-resolution face recognition in the wild via selective knowledge distillation. IEEE Transactions on Image Processing 28 (4). External Links: Document Cited by: §I.
  • [18] M. Geng, P. Peng, Y. Huang, and Y. Tian (2020) Masked face recognition with generative data augmentation and domain constrained ranking. In ACM Multimedia, pp. 2246–2254. Cited by: §I.
  • [19] M. Gomez-Barrero, P. Drozdowski, C. Rathgeb, J. Patino, M. Todisco, A. Nautsch, N. Damer, J. Priesnitz, N. W. D. Evans, and C. Busch (2021) Biometrics in the era of COVID-19: challenges and opportunities. CoRR abs/2102.09258. Cited by: §I.
  • [20] Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao (2016) MS-celeb-1m: A dataset and benchmark for large-scale face recognition. In ECCV (3), Lecture Notes in Computer Science, Vol. 9907. Cited by: §III-A.
  • [21] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778. Cited by: §I, §III-A, §III-A.
  • [22] G. E. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. CoRR abs/1503.02531. Cited by: §I.
  • [23] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller (2007-10) Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Technical report Technical Report 07-49, University of Massachusetts, Amherst. Cited by: §III-B, §III-B, §III-C.
  • [24] Y. Huang, Y. Wang, Y. Tai, X. Liu, P. Shen, S. Li, J. Li, and F. Huang (2020) CurricularFace: adaptive curriculum learning loss for deep face recognition. In CVPR, pp. 5900–5909. Cited by: §III-A, §III-A.
  • [25] A. K. Jain, A. Ross, and S. Prabhakar (2004) An introduction to biometric recognition. IEEE Trans. Circuits Syst. Video Technol. 14 (1), pp. 4–20. Cited by: §I.
  • [26] H. Jia and A. M. Martínez (2009) Support vector machines in face recognition with occlusions. In CVPR, pp. 136–141. Cited by: §I.
  • [27] C. Li, S. Ge, D. Zhang, and J. Li (2020) Look through masks: towards masked face recognition with de-occlusion distillation. In ACM Multimedia, pp. 3016–3024. Cited by: §I, §I.
  • [28] Y. Li, K. Guo, Y. Lu, and L. Liu (2021) Cropping and attention based approach for masked face recognition. Appl. Intell. 51 (5), pp. 3012–3025. Cited by: §I.
  • [29] M. Loey, G. Manogaran, M. H. N. Taha, and N. E. M. Khalifa (2021) A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the covid-19 pandemic. Measurement 167. External Links: ISSN 0263-2241 Cited by: §I.
  • [30] A. Mansfield (2006) Information technology–biometric performance testing and reporting–part 1: principles and framework. ISO/IEC. Cited by: §III-C, §IV-A.
  • [31] Q. Meng, S. Zhao, Z. Huang, and F. Zhou (2021) MagFace: A universal representation for face recognition and quality assessment. In CVPR, pp. 14225–14234. Cited by: §III-A, §III-A, §III-A, TABLE I, TABLE II, TABLE III, §IV-A, TABLE IV.
  • [32] S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, and S. Zafeiriou (2017) AgeDB: the first manually collected, in-the-wild age database. In CVPR Workshops, pp. 1997–2005. Cited by: §III-B, §III-B, §III-C.
  • [33] D. Nekhaev, S. Milyaev, and I. Laptev (2019) Margin based knowledge distillation for mobile face recognition. In ICMV, SPIE Proceedings. Cited by: §I.
  • [34] P. C. Neto, F. Boutros, J. R. Pinto, M. Saffari, N. Damer, A. F. Sequeira, and J. S. Cardoso (2021) My eyes are up here: promoting focus on uncovered regions in masked face recognition. BIOSIG abs/2108.00996. Cited by: §I, TABLE V, TABLE VI.
  • [35] M. L. Ngan, P. J. Grother, and K. K. Hanaoka (2020) Ongoing face recognition vendor test (frvt) part 6b: face recognition accuracy with face masks using post-covid-19 algorithms. Cited by: §I.
  • [36] N. Poh and S. Bengio (2004) A study of the effects of score normalisation prior to fusion in biometric authentication tasks. Technical report IDIAP. Cited by: §III-C.
  • [37] F. Schroff, D. Kalenichenko, and J. Philbin (2015) FaceNet: A unified embedding for face recognition and clustering. In CVPR, pp. 815–823. Cited by: §I, §IV-B.
  • [38] S. Sengupta, J. Chen, C. D. Castillo, V. M. Patel, R. Chellappa, and D. W. Jacobs (2016) Frontal to profile face verification in the wild. In WACV, pp. 1–9. Cited by: §III-B, §III-B, §III-C.
  • [39] V. Štruc, S. Dobrišek, and N. Pavešić (2010) Confidence weighted subspace projection techniques for robust face recognition in the presence of partial occlusions. In 2010 20th ICPR. External Links: Document Cited by: §I.
  • [40] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi (2017) Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, pp. 4278–4284. Cited by: §I, §IV-B.
  • [41] N. Ud Din, K. Javed, S. Bae, and J. Yi (2020) A novel gan-based network for unmasking of masked face. IEEE Access 8. External Links: Document Cited by: §I.
  • [42] M. Wang, R. Liu, H. Nada, N. Abe, H. Uchida, and T. Matsunami (2019) Improved knowledge distillation for training fast low resolution face recognition model. In ICCV Workshops, pp. 2655–2661. Cited by: §I.
  • [43] C. Wu and J. Ding (2018) Occluded face recognition using low-rank regression with generalized gradient direction. Pattern Recognit. 80, pp. 256–268. Cited by: §I.
  • [44] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23 (10). Cited by: §III-A.
  • [45] T. Zheng and W. Deng (2018-02) Cross-pose lfw: a database for studying cross-pose face recognition in unconstrained environments. Technical report Technical Report 18-01, Beijing University of Posts and Telecommunications. Cited by: §III-B, §III-B, §III-C.
  • [46] T. Zheng, W. Deng, and J. Hu (2017) Cross-age LFW: A database for studying cross-age face recognition in unconstrained environments. CoRR abs/1708.08197. Cited by: §III-B, §III-B, §III-C.
  • [47] W. Zheng, L. Yan, F. Wang, and C. Gou (2021) Learning from the web: webly supervised meta-learning for masked face recognition. In CVPR Workshops, pp. 4304–4313. Cited by: §I.
  • [48] Z. Zhu, G. Huang, J. Deng, Y. Ye, J. Huang, X. Chen, J. Zhu, T. Yang, J. Guo, J. Lu, D. Du, and J. Zhou (2021) Masked face recognition challenge: the webface260m track report. CoRR abs/2108.07189. Cited by: §I.