Heterogeneous Face Recognition via Face Synthesis with Identity-Attribute Disentanglement

06/10/2022
by   Ziming Yang, et al.

Heterogeneous Face Recognition (HFR) aims to match faces across different domains (e.g., visible to near-infrared images), which has been widely applied in authentication and forensics scenarios. However, HFR is a challenging problem because of the large cross-domain discrepancy, limited heterogeneous data pairs, and large variation of facial attributes. To address these challenges, we propose a new HFR method from the perspective of heterogeneous data augmentation, named Face Synthesis with Identity-Attribute Disentanglement (FSIAD). Firstly, the identity-attribute disentanglement (IAD) decouples face images into identity-related representations and identity-unrelated representations (called attributes), and then decreases the correlation between identities and attributes. Secondly, we devise a face synthesis module (FSM) to generate a large number of images with stochastic combinations of disentangled identities and attributes, enriching the attribute diversity of synthetic images. Both the original images and the synthetic ones are utilized to train the HFR network, tackling the challenges above and improving the performance of HFR. Extensive experiments on five HFR databases validate that FSIAD achieves superior performance over previous HFR approaches. In particular, FSIAD obtains a 4.8% improvement over state-of-the-art methods on LAMP-HQ, the largest HFR database so far.



I Introduction

In recent years, face recognition has made significant progress with deep convolutional neural networks (CNNs) and has been widely applied in real-world scenarios like surveillance and payment [49, 64, 56, 33, 29, 54]. Face recognition methods always assume that face images are captured with visible imaging (VIS) devices [16]. However, this assumption does not hold in many realistic scenarios where face images are captured by different sensors. For instance, near-infrared (NIR) sensors are universally adopted in authentication systems and video surveillance cameras. The large discrepancy between different domains degrades face recognition performance. This gives rise to the demand for heterogeneous face recognition (HFR), which refers to identifying faces across different domains, such as NIR-VIS, Sketch-Photo, and Thermal-VIS. Generally, HFR is confronted with three major challenges: (i) Large cross-domain discrepancy. The large discrepancy between faces in different domains enlarges intra-class distance and thus worsens the performance of HFR [7]. (ii) Lack of heterogeneous face data. It is time-consuming and expensive to collect large-scale heterogeneous face images. The limited number of subjects and the small scale of HFR datasets are prone to result in over-fitting [20]. (iii) Large variation in facial attributes. Face images have various facial attributes including pose, complexion, expression, and illumination, which further increase intra-class distance and make face matching difficult [76].

Fig. 1: The pipeline of FSIAD. FSIAD consists of two main components, Identity-Attribute Disentanglement (IAD) and Face Synthesis Module (FSM). IAD: Given paired images, an identity encoder extracts identity representations, while attribute encoders learn facial attribute representations from each domain. Solid and dotted arrows indicate the two processing flows. FSM: A generator reconstructs and synthesizes images with different combinations of representations. In the training procedure, a discriminator supervises the generator to produce high-fidelity images. The disentanglement loss makes identities uncorrelated with attributes. The reconstruction loss reduces the difference between reconstructed images and the corresponding inputs. The identity preserving loss keeps the identities of synthesized images consistent with those of the source images. The attribute similarity loss and the structural similarity loss minimize the distance between attribute representations and between structural features, respectively.

Over the last decade, HFR has attracted considerable attention from researchers developing effective approaches [52, 30]. Existing HFR methods can be divided into three main categories: domain-invariant feature based methods, common subspace learning based methods, and image synthesis based methods. The domain-invariant feature based methods seek to extract the discriminative feature of each subject that is invariant across heterogeneous domains [39, 5, 73]. The common subspace learning based methods map heterogeneous face images into a shared subspace for face recognition [32, 7, 22]. The image synthesis based methods transform images from one modality into another so that faces can be recognized within the same modality [41, 42, 78]. Previous HFR methods mainly focus on reducing the domain discrepancy and have achieved promising performance. Nevertheless, they have neglected the variations of facial attributes that arise in real-world applications.

To further tackle this issue, it is desirable to utilize image synthesis based methods to synthesize large numbers of images with diverse facial attributes. These synthetic images are used to train HFR models to promote the generalization of face recognition [40]. Recently, [48] employed data augmentation to generate images with various face deformations, alleviating the negative impact of facial attribute variations on face recognition. Inspired by [48], we propose a novel method called Face Synthesis with Identity-Attribute Disentanglement (FSIAD) to synthesize abundant heterogeneous face images with diverse facial attributes. Rather than randomly sampling from the distributions of heterogeneous data [11], FSIAD disentangles facial identities and attributes, and then generates a large number of images with stochastic combinations of disentangled identities and attributes to augment the raw HFR databases. As depicted in Fig. 1, the scheme of FSIAD contains two components: Identity-Attribute Disentanglement (IAD) and Face Synthesis Module (FSM). First, IAD introduces an identity encoder and attribute encoders to extract identity-related representations and identity-unrelated representations from images, respectively. These two representations are denoted as facial identities and attributes. To disentangle attributes and identities, we regularize facial attributes to be orthogonal to identities. This decreases the correlation between facial attributes and identities and facilitates attribute representation learning. Second, FSIAD utilizes FSM to synthesize images with combinations of disentangled identities and attributes. Massive combinations of identities and attributes significantly increase the diversity in facial attributes of synthetic images. We propose an identity preserving constraint to ensure that the identities of synthetic images are consistent with those of the original images.
In addition, a discriminator is applied to promote FSIAD to generate high-fidelity images by discriminating between synthetic images and real ones. The large-scale heterogeneous faces synthesized by FSIAD are used for training HFR models to fundamentally supply sufficient heterogeneous faces, learn various facial attributes and reduce the cross-domain discrepancy.

To sum up, the main contributions of this work are as follows:

  • To improve the performance of HFR, we propose a Face Synthesis with Identity-Attribute Disentanglement (FSIAD) framework that naturally tackles three major challenges of HFR through data augmentation.

  • We devise an Identity-Attribute Disentanglement (IAD) module to decouple facial attributes and identities from face images. The facial attributes are constrained to be orthogonal to identities for decreasing their correlation.

  • We introduce a Face Synthesis Module (FSM) that generates large-scale images with the integration of disentangled facial identities and attributes to augment the HFR databases and enrich the diversity of facial attributes.

II Related Work

Heterogeneous face recognition methods can be mainly divided into three categories, including domain-invariant feature learning, common subspace learning, and image synthesis. In this section, we review previous HFR methods and the representative generative models for image synthesis.

II-A Heterogeneous Face Recognition

II-A1 Domain-invariant Feature Learning Methods

Domain-invariant feature learning methods aim to extract the identity-related features that are invariant across spectral domains. Conventional methods are mainly supported by hand-crafted features, such as Local Binary Patterns (LBP) [1], Difference-of-Gaussian (DoG) [45], Histograms of Oriented Gradients (HoG) [39], and Local Binary Pattern Histogram (LBPH) [15].

Deep learning has achieved great success in feature learning, and many efforts are devoted to extracting domain-invariant features with deep learning algorithms. The center loss [69] and triplet loss [47] are utilized to reduce the NIR-VIS discrepancy. [20] proposes a Wasserstein CNN (W-CNN) to capture invariant deep features by minimizing the Wasserstein distance between NIR and VIS features. Based on [20], [71] designs a Disentangled Variational Representation (DVR) framework to disentangle the identity information and within-person variations from heterogeneous faces. [5] and [4] learn global relationships between local features of heterogeneous faces to represent domain-invariant identity information.

Domain-invariant feature learning methods provide solutions to extract representations that are solely related to facial identities. However, they are prone to suffer from over-fitting due to the small scale of heterogeneous face datasets [12].

II-A2 Common Subspace Learning Methods

Common subspace learning methods project faces of different domains into a compact latent space, where the distance between faces of the same subject is short. Many dimensionality reduction approaches are used to map heterogeneous features to a low-dimensional space, including Linear Discriminant Analysis (LDA) [75, 60], Principal Component Analysis (PCA) [74], Canonical Correlation Analysis (CCA) [41, 55], and Partial Least Squares (PLS) [59]. The emergence of deep learning has significantly promoted subspace learning. Invariant Deep Representation (IDR) [19] and Coupled Deep Learning (CDL) [72] introduce an orthogonal constraint and a relevance constraint to learn a shared feature space, respectively. [22] disentangles identity-related, modality-related, and residual features from heterogeneous faces to reduce variations of modality and irrelevant factors (e.g., pose and expression). Extending [22], [23] further improves disentanglement with an orthogonal constraint and then learns residual-invariant representations by aligning high-level features of non-neutral and neutral faces.

The common subspace learning methods intuitively project heterogeneous faces into a shared space by increasing intra-class compactness and inter-class separability. Unfortunately, since these methods require pairs of faces in different modalities for training, their performance is limited by the lack of paired heterogeneous faces.

II-A3 Image Synthesis Methods

Image synthesis methods mainly include conditional and unconditional approaches. In conditional approaches, images are transformed from one modality to another for matching faces in the same modality. Face photo-sketch synthesis [63, 66] offers the first insight into face recognition via generation. [65, 26, 31] perform cross-spectral image reconstruction with coupled or joint dictionary learning. Advanced deep generative models [14, 38] have made rapid progress in image synthesis. [2] and [61] extend CycleGAN [82] to handle heterogeneous image transformation. Later, [18] improves [61] by decomposing synthesis into texture inpainting and pose correction. Pose-preserving Cross-spectral Face Hallucination (PCFH) [77] and Pose Aligned Cross-spectral Hallucination (PACH) [10] align the poses and expressions of NIR faces to those of VIS faces to produce paired NIR-VIS faces.

Synthetic images produced by unconditional approaches are allowed to be inconsistent with the original images. [11] and [12] propose a Dual Variational Generation (DVG) framework to learn a joint distribution of paired heterogeneous faces and then generate diverse heterogeneous images from noise. [83] and [24] employ knowledge distillation, using teacher networks pre-trained on abundant VIS images to train generators with insufficient paired heterogeneous faces. Heterogeneous Face Interpretable Disentangled Representation (HFIDR) [46] explicitly interprets semantic information of face representations to synthesize cross-modality face images. [9] and [8] synthesize multimodal faces from predetermined visual descriptions of facial attributes.

Unlike DVR [71], which only disentangles the identity information and within-person variations, we perform disentanglement and image synthesis in a unified framework. Contrary to DVG [11] and DVG-Face [12], which generate entire faces from noise and introduce external identity information from large-scale VIS faces to enrich the identity diversity of generated faces, FSIAD enriches the diversity in facial attributes of generated faces with combinations of disentangled representations instead of external information, and simultaneously tackles the challenges of HFR. Compared with HFIDR [46], which only interprets disentangled representations of identity and modality, FSIAD can separate representations of identity, modality, and facial attributes. Different from [9] and [8], which synthesize images from limited descriptions of facial attributes, FSIAD automatically extracts facial attributes for synthesis without manual descriptions. Rather than decreasing variations in domains and facial attributes to learn discriminative features [23], we increase the variation in facial attributes of synthetic faces to improve the generalization of HFR models to diverse attributes.

In contrast to the other categories, the greatest advantage of image synthesis based methods is that they yield sufficient heterogeneous faces. These methods intuitively solve the lack of paired HFR data, but two challenges of HFR still remain: large cross-domain discrepancy and large variation of facial attributes. The performance of these methods is strongly related to the quality of the synthetic images. Meanwhile, face synthesis is an ill-posed problem in which multiple solutions exist for each input [76]. Our method tackles this problem by introducing comprehensive constraints to control the identities and facial attributes of synthetic faces, thereby avoiding the uncertainty of face synthesis.

II-B Face Generation

Powered by deep generative models including Generative Adversarial Networks (GANs) [14] and Variational Auto-Encoders (VAEs) [38], many works have achieved outstanding performance on face generation. MUNIT [28] learns content features and style features from images for multimodal image generation with unsupervised learning. Nirkin et al. [50] use 3D Morphable face Models (3DMM) for face segmentation and manipulation. [43] introduces the FaceShifter algorithm for face swapping and tackles the occlusion challenge to generate high-fidelity images. [34, 35, 36] propose StyleGANs to automatically separate facial styles and stochastic effects, improving the controllability of face synthesis. Based on [35], Nitzan et al. [51] propose a face identity disentanglement framework that disentangles identity and attributes with weakly supervised learning. Zhu et al. [81] formulate face swapping as an optimal transport problem and propose an Appearance Optimal Transport (AOT) algorithm to synthesize realistic face images with large appearance discrepancies. Daniel and Tamar [6] introduce Soft-IntroVAE to synthesize high-resolution face images through introspective variational inference. Zhang et al. [79] design FReeNet, a multi-identity face reenactment network, to transfer expressions among different subjects. InfoSwap [13] utilizes the information bottleneck to extract representations of identity and perceptual features for subject-agnostic face swapping.

III Method

In this section, we describe the proposed method, FSIAD, in detail. As shown in Fig. 1, FSIAD contains two components: Identity-Attribute Disentanglement (IAD) and Face Synthesis Module (FSM). First, IAD is designed to disentangle identity representations and facial attribute representations from face images; the identity representations are supposed to be independent of the facial attribute representations. Second, FSM produces large-scale heterogeneous faces from the disentangled representations to improve the performance of HFR.

Our method takes two pairs of heterogeneous face images as input, drawn from two different spectral domains. It is applicable to NIR-VIS, Thermal-VIS, and Sketch-Photo heterogeneous face synthesis. On the one hand, the source pair shares the same identity, which is expected to be preserved during face synthesis. On the other hand, reference images are randomly sampled from HFR datasets and provide external attribute features for enriching the diversity in attributes of generated faces.

III-A Identity-Attribute Disentanglement (IAD)

The key idea of IAD is to decouple identities and attributes from faces. To extract identity features from face images, we employ a pre-trained face recognition model, LightCNN [70], as the identity encoder. The resulting latent vector is an L2-normalized feature embedding that contains only identity-relevant information. In addition, we design facial attribute encoders based on VAEs, as illustrated in Table I. The Conv1 layer comprises a convolution layer, an instance normalization layer, and a Leaky ReLU activation layer. The FC layer denotes a fully connected layer. Taking a NIR image as input, the attribute encoder computes the mean and the standard deviation that parameterize the approximate posterior distribution. The attribute representation is then sampled via the re-parameterization trick, i.e., scaling standard normal noise by the standard deviation and shifting by the mean. The VIS attribute representation is sampled from its posterior distribution in the same way.
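The re-parameterization trick mentioned above keeps sampling differentiable by expressing the random draw as a deterministic function of the encoder outputs plus external noise. A minimal pure-Python sketch (variable names are illustrative, not the paper's):

```python
import random

def reparameterize(mu, sigma, eps=None):
    """Sample an attribute code z = mu + sigma * eps, with eps ~ N(0, I).

    Passing eps explicitly makes the draw deterministic, which is handy
    for testing; by default Gaussian noise is drawn per dimension.
    """
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in mu]
    return [m + s * e for m, s, e in zip(mu, sigma, eps)]

mu, sigma = [0.5, -1.0, 2.0], [0.1, 0.2, 0.3]
z_mean = reparameterize(mu, sigma, eps=[0.0, 0.0, 0.0])  # collapses to mu
z_random = reparameterize(mu, sigma)
```

Because the noise is sampled outside the network, gradients can flow through the mean and standard deviation during training.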

Input   Layer   Kernel/Stride/Padding   Output size
image   Conv1   5 / 1 / 2               32×128×128
        Conv1   3 / 2 / 1               64×64×64
        Conv1   3 / 2 / 1               128×32×32
        Conv1   3 / 2 / 1               256×16×16
        Conv1   3 / 2 / 1               512×8×8
        Conv1   3 / 2 / 1               512×4×4
        FC      -                       256, 256 (mean and standard deviation)
TABLE I: The structure of the attribute encoders.

The training of IAD is divided into two steps: feature disentanglement and distribution learning. Feature disentanglement aims to decrease the correlation between identity and attribute representations. Writing $\mathbf{z}_{id}$ for the identity representation and $\mathbf{z}^{N}_{att}$, $\mathbf{z}^{V}_{att}$ for the NIR and VIS attribute representations, we introduce a disentanglement objective:

$$\mathcal{L}_{dis} = \left|\cos\left(\mathbf{z}_{id}, \mathbf{z}^{N}_{att}\right)\right| + \left|\cos\left(\mathbf{z}_{id}, \mathbf{z}^{V}_{att}\right)\right| \qquad (1)$$

where $\cos(\cdot,\cdot)$ is the cosine similarity function. The identity representation $\mathbf{z}_{id}$ is orthogonal to the attribute representations when $\mathcal{L}_{dis}$ descends to 0. In other words, the learned facial attributes become independent of identity.
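Such an orthogonality penalty can be written as the absolute cosine similarity between the identity embedding and each attribute embedding, which reaches zero exactly when the representations are orthogonal. A pure-Python sketch with illustrative names:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def disentangle_loss(z_id, z_att_nir, z_att_vis):
    """Penalize any correlation (positive or negative) between the
    identity embedding and either attribute embedding."""
    return abs(cosine(z_id, z_att_nir)) + abs(cosine(z_id, z_att_vis))

orthogonal = disentangle_loss([1.0, 0.0], [0.0, 1.0], [0.0, -2.0])  # -> 0
aligned = disentangle_loss([1.0, 0.0], [2.0, 0.0], [-3.0, 0.0])     # -> 2
```

Taking the absolute value matters: a strongly anti-correlated attribute code still leaks identity information, so it is penalized as heavily as a correlated one.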

The second step is designed for attribute distribution learning. Motivated by VAEs [38], which are optimized by maximizing the evidence lower bound objective (ELBO), we adopt its formulation:

$$\log p(x^{N}) \geq \mathbb{E}_{q(\mathbf{z}^{N}_{att}|x^{N})}\left[\log p(x^{N}|\mathbf{z}^{N}_{att})\right] - D_{KL}\left(q(\mathbf{z}^{N}_{att}|x^{N}) \,\|\, p(\mathbf{z})\right) \qquad (2)$$

where the first term denotes the reconstruction objective of $x^{N}$ and the second term is the Kullback-Leibler divergence between the approximated posterior distribution and the prior distribution $p(\mathbf{z})$. The ELBO of $x^{V}$ can be formed by simply replacing $x^{N}$ with $x^{V}$ in Eq. (2). Therefore, we define the distribution learning objective to minimize the Kullback-Leibler divergences:

$$\mathcal{L}_{kl} = D_{KL}\left(q(\mathbf{z}^{N}_{att}|x^{N}) \,\|\, p(\mathbf{z})\right) + D_{KL}\left(q(\mathbf{z}^{V}_{att}|x^{V}) \,\|\, p(\mathbf{z})\right) \qquad (3)$$

where the prior distribution $p(\mathbf{z})$ is assumed to obey the multivariate standard normal distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$.
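For a diagonal Gaussian posterior and a standard normal prior, the KL term above has a well-known closed form, which is the standard VAE regularizer. A pure-Python sketch:

```python
import math

def kl_to_standard_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ),
    summed over latent dimensions."""
    return 0.5 * sum(m * m + s * s - math.log(s * s) - 1.0
                     for m, s in zip(mu, sigma))

kl_match = kl_to_standard_normal([0.0, 0.0], [1.0, 1.0])  # posterior == prior
kl_shift = kl_to_standard_normal([1.0], [1.0])            # costs 0.5 * mu^2
```

The divergence vanishes only when the posterior matches the prior exactly, which is what pulls the attribute codes toward a standard normal distribution.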

In brief, the loss function of IAD is formulated as:

$$\mathcal{L}_{IAD} = \mathcal{L}_{dis} + \lambda_{1}\mathcal{L}_{kl} \qquad (4)$$

where $\lambda_{1}$ is a trade-off parameter.

III-B Face Synthesis Module (FSM)

As described in the previous section, the disentangled identities and attributes are obtained from IAD. We design FSM to generate images with combinations of disentangled identities and attributes. Apart from the encoders shared with IAD, FSM also contains a generator and a discriminator, whose structures are shown in TABLE II and TABLE III, respectively. The TransConv layer contains a transposed convolution layer, an Adaptive Instance Normalization (AdaIN) [27] layer, a Leaky ReLU activation layer, and a residual block. Tanh denotes the Tanh activation layer. RefPad is the reflection padding operation. The Conv2 layer comprises a convolution layer, a batch normalization layer, and a ReLU activation layer. The Skip Connection layer adds the block's result to its input.

Input                 Layer       Kernel/Stride/Padding   Output size
identity and
attribute codes       FC          -                       8192
                      TransConv   4 / 2 / 1               256×8×8
                      TransConv   4 / 2 / 1               128×16×16
                      TransConv   4 / 2 / 1               64×32×32
                      TransConv   4 / 2 / 1               32×64×64
                      TransConv   4 / 2 / 1               32×128×128
                      Conv1       3 / 1 / 1               32×128×128
                      Conv1       3 / 1 / 1               3×128×128
                      Tanh        -                       3×128×128 (image)
TABLE II: The structure of the generator.
Input   Layer             Kernel/Stride/Padding   Output size
image   RefPad            -                       3×134×134
        Conv2             7 / 1 / 0               64×128×128
        Conv2             3 / 2 / 1               128×64×64
        Conv2             3 / 2 / 1               256×32×32
        ResBlock          -                       256×32×32
        ResBlock          -                       256×32×32
        ResBlock          -                       256×32×32
        Sigmoid           -                       1
(a) The structure of the discriminator.
Input   Layer             Kernel/Stride/Padding   Output size
        RefPad            -                       256×34×34
        Conv2             3 / 1 / 0               256×32×32
        RefPad            -                       256×34×34
        Conv2             3 / 1 / 0               256×32×32
        Skip Connection   -                       256×32×32
(b) The structure of ResBlock.
TABLE III: The network architecture of the discriminator.

The training procedure of FSM is conducted simultaneously through two branches: reconstruction and integration. The first branch reconstructs images using identities and attributes derived from the same subjects. The second branch integrates the identities of the source pair with the attributes of the reference pair to synthesize new images.

Reconstruction. The reconstructed images are supposed to be identical to the original images. We use a reconstruction loss to reduce the squared Euclidean distances between the original images and the reconstructed ones:

$$\mathcal{L}_{rec} = \left\| \hat{x}^{N} - x^{N} \right\|^{2}_{2} + \left\| \hat{x}^{V} - x^{V} \right\|^{2}_{2} \qquad (5)$$

where $\hat{x}^{N}$ and $\hat{x}^{V}$ denote the reconstructed NIR and VIS images.

Integration. In order to preserve the identities of synthetic images, we extract the identity representations from the synthesized images. Then we utilize an identity preserving objective to increase the similarity between the identity representations of the synthesized images and those of the source images. It is formulated as:

(6)

In addition, the facial attributes of the synthesized images are expected to be similar to those of the reference images. We take both latent representations and image contents into consideration in terms of similarity. From the perspective of latent representations, the distances between the attribute representations of the synthesized images and those of the reference images ought to be short. We therefore define the attribute similarity objective as:

(7)

where the compared attribute representations are extracted from the synthesized images.

Attribute-related contents, including pose, complexion, and illumination, are transferred from the input images to the synthesized ones. The similarity of contents can be measured from two aspects: error metrics [68] and structural similarity. Error metrics are prevailing approaches to evaluate the difference between images and are calculated pixel by pixel. Owing to its advantage in color and illumination consistency, the L1-norm loss is a predominant error metric for regularizing image generation networks. However, networks tend to generate a complete copy of the targeted image when constrained only by error metrics. Besides, error metrics are not sensitive to the pixel dependencies that contain critical structural features of objects in the visual scene [67], which results in low perceptual quality of synthetic images. Therefore, we employ the Multi-Scale Structural Similarity index (MS-SSIM) [67] as a structural similarity loss to increase the structural similarity between input images and synthesized images. Specifically, MS-SSIM measures the difference between the structural information of images, which is related to the attributes of objects in the view and independent of extrinsic factors such as illumination, contrast, and noise [68]. MS-SSIM is highly adapted to human visual perception and is widely used for image synthesis; a higher MS-SSIM means a smaller difference between objects in different images. We propose a similarity objective combining an L1-norm loss and MS-SSIM (implemented with https://github.com/VainF/pytorch-msssim):

(8)

where the synthesized images are concatenated before the losses are computed, and the trade-off hyperparameter balancing the MS-SSIM term against the L1-norm term is set to 0.84 according to the empirical investigation in [80].
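The paper uses the multi-scale variant (MS-SSIM) from the pytorch-msssim package linked above; the single-scale SSIM below, computed on flat pixel lists, is only meant to illustrate how the structural term is blended with the L1 term at the 0.84 ratio:

```python
def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale SSIM over two equally sized flat pixel lists in [0, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))

def mixed_loss(x, y, alpha=0.84):
    """Blend the structural term with a per-pixel L1 term at ratio alpha."""
    l1 = sum(abs(a - b) for a, b in zip(x, y)) / len(x)
    return alpha * (1.0 - ssim(x, y)) + (1.0 - alpha) * l1

img = [0.1, 0.5, 0.9, 0.3]
identical = mixed_loss(img, img)  # a perfect match incurs zero loss
```

The constants c1 and c2 stabilize the ratio when the means or variances are near zero; in practice SSIM is computed over local windows rather than the whole image.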

According to Eqs. (6)-(8), we summarize the loss function of integration as follows:

(9)

To further improve image synthesis, the generator G is optimized in an adversarial manner. We impose a discriminator D to differentiate real images from the synthetic images generated by G. Meanwhile, the generator G seeks to produce photo-realistic images that confuse D. The adversarial loss is formulated as:

(10)
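An adversarial objective of this kind is commonly written in binary cross-entropy form; whether the paper uses this exact variant is an assumption, but the sketch below shows the standard two-player setup:

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy form: push D(real) toward 1, D(fake) toward 0."""
    return -math.log(d_real) - math.log(1.0 - d_fake)

def generator_loss(d_fake):
    """Non-saturating form: push D(G(z)) toward 1 to fool the discriminator."""
    return -math.log(d_fake)

confident = discriminator_loss(0.99, 0.01)  # near-perfect discriminator
guessing = discriminator_loss(0.5, 0.5)     # coin-flip discriminator
```

During training the two losses are minimized alternately, as in steps 10-13 of Algorithm 1: D is updated with G fixed, then G is updated with D fixed.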

Hereby, we define the loss function of FSM as:

(11)

where and are trade-off parameters. According to Eqs. (4) and (11), the overall loss of FSIAD is summarized as:

(12)

III-C Heterogeneous Face Recognition

We utilize FSIAD to synthesize a large number of heterogeneous faces for data augmentation. The HFR network learns abundant features of facial attributes from synthetic faces and alleviates the degradation caused by variations in pose, expression, and other extrinsic factors. Both real images and augmented images are jointly used to train the HFR network.

Basically, the HFR network is trained on pairs of real images. We define the HFR objective with the cross-entropy loss to measure the classification error:

(13)

where the loss is averaged over the number of paired real images and each pair is supervised by its identity label.
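The classification error above is the usual softmax cross-entropy over subject identities. A minimal single-sample sketch:

```python
import math

def cross_entropy(logits, label):
    """Softmax cross-entropy for one sample: -log softmax(logits)[label]."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    return -math.log(exps[label] / sum(exps))

ce_uniform = cross_entropy([0.0, 0.0, 0.0, 0.0], 2)  # log(4) for 4 classes
ce_peaked = cross_entropy([10.0, 0.0, 0.0, 0.0], 0)  # near zero
```

In the paper's setting, the logits would come from the HFR network's classification head and the label is the subject identity shared by a NIR-VIS pair.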

In addition, the HFR network is supposed to diminish intra-class discrepancies and extract discriminative representations. The augmented images are applied to reduce discrepancies in domains and attributes. Since paired augmented images are generated from the same subject, their identities are expected to be close to each other [11]. To shorten the intra-class distance, we formulate the intra-class loss as:

(14)
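One common form of such an intra-class constraint is the mean squared distance between paired identity embeddings; the exact distance used in Eq. (14) may differ, so this is a sketch with illustrative names:

```python
def intra_class_loss(feats_nir, feats_vis):
    """Mean squared L2 distance between paired identity embeddings.

    Each pair comes from the same (synthetic) subject, so the HFR network
    is pushed to embed both modalities at the same point.
    """
    total = 0.0
    for fn, fv in zip(feats_nir, feats_vis):
        total += sum((a - b) ** 2 for a, b in zip(fn, fv))
    return total / len(feats_nir)

same = intra_class_loss([[1.0, 2.0]], [[1.0, 2.0]])   # identical -> 0.0
apart = intra_class_loss([[1.0, 0.0]], [[0.0, 1.0]])  # unit shift -> 2.0
```

Driving this term to zero collapses the NIR and VIS embeddings of each synthetic subject together, which directly shrinks the cross-domain gap.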

On the whole, the loss function for training the HFR network is defined as:

(15)

where is a trade-off parameter.

For a comprehensive elaboration of our work, we introduce the generic training strategy in Algorithm 1.

0:  Source images =. Reference images =. A pre-trained identity encoder . Attribute encoders . A generator . A discriminator . A heterogeneous face recognition network . The number of synthesized images .
0:  The parameters of , , and : , , , and .
1:  for  to T do
2:     // IAD component.
3:     .
4:     ; .
5:     ; .
6:     Update , by Eq. (4).
7:     // FSM component.
8:     Reconstruction: ; .
9:     Integration: ; .
10:     Fix , , :
11:        Update by Eq. (10).
12:     Fix :
13:        Update , , by Eq. (9).
14:  end for
15:  // HFR network.
16:  Initialize by a pre-trained model.
17:  for  to  do
18:     Randomly sample = and =.
19:     .
20:     =; =.
21:     ; .
22:     Update by Eq. (15).
23:  end for
24:  return  , , , , .
Algorithm 1 Training strategy of FSIAD.

IV Experiments

In this section, we perform extensive experiments to evaluate our proposed FSIAD qualitatively and quantitatively on five heterogeneous face recognition databases: CASIA NIR-VIS 2.0 [44], BUAA-VisNir [25], Oulu-CASIA NIR-VIS [3], Tufts Face [53], and LAMP-HQ [76]. To demonstrate the advantages of our proposed FSIAD, we compare it with state-of-the-art methods. Finally, we conduct ablation studies on the effectiveness of the different loss functions.

IV-A Databases and Protocols

IV-A1 CASIA NIR-VIS 2.0

The most challenging public HFR database. It contains 725 subjects, and each subject has 5-50 NIR and 1-22 VIS images with large within-class variations including pose, expression, and illumination. We conduct experiments with 10-fold cross-validation. For each fold, the training set consists of about 6,100 NIR and 2,500 VIS images from 360 identities. For evaluation, the testing set is composed of over 6,000 NIR and 358 VIS images from 358 identities excluded from the training set.

IV-A2 BUAA-VisNir

A standard HFR database. It contains 150 subjects, and each subject has 9 pairs of NIR-VIS images: one frontal view, four other views, and four expressions. Since the NIR and VIS images are captured simultaneously with a multi-spectral sensor, paired NIR-VIS images are identical except for the spectral domain. The training set consists of 900 images from 50 subjects, and the testing set consists of 1,800 images from the remaining 100 subjects.

IV-A3 Oulu-CASIA NIR-VIS

Contains 80 subjects. The paired NIR-VIS images of each subject are captured with six different expressions (anger, disgust, fear, happiness, sadness, and surprise) and three different illuminations (normal indoor, weak, and dark). Following the protocol introduced in [58], we select 40 subjects, each with 48 paired NIR-VIS images; the training set and the testing set each comprise 20 of these subjects.

IV-A4 Tufts Face

A Thermal-VIS HFR database. It contains 1,582 paired Thermal-VIS images of 113 subjects. Each subject has 14 pairs of Thermal-VIS images with 9 different yaw angles and 5 different expressions. Since this database does not provide an evaluation protocol, we divide it into a training set with 85 subjects and a testing set with the remaining 28 subjects. The training set is composed of 1,190 pairs of Thermal-VIS images, while the testing set has 28 VIS gallery images and 275 thermal probe images.

IV-A5 LAMP-HQ

The latest and largest HFR database. It features large scale, high resolution, and wide diversity (e.g., in age, race, and accessories), containing 56,788 NIR and 16,828 VIS images from 573 subjects. Each subject has 66 paired NIR-VIS images captured with three distinct expressions and three yaw angles in five illumination scenes. The evaluation is conducted with 10-fold experiments. For each fold, the training set consists of approximately 29,000 NIR and 8,800 VIS images from about 300 subjects. For testing, the gallery set contains the remaining 273 individuals, each with one VIS image, while the probe set has about 27,000 NIR images from the same individuals.

IV-B Experimental Settings

Implementation details. We first resize the input images to a fixed resolution. A face recognition model, LightCNN [70] pre-trained on the MS-Celeb-1M dataset [17], is adopted as the identity encoder to extract identity features from face images. The normalized latent vector output by the penultimate fully connected layer of LightCNN serves as the identity representation. The attribute encoders, the generator (decoder), and the discriminator are built with the architectures described in TABLEs I, II, and III, respectively. The proposed network is implemented in the PyTorch deep learning framework, and one Nvidia RTX GPU is used for acceleration. For training, we employ an Adam [37] optimizer configured with a learning rate of 2e-4 and coefficients beta1=0.5 and beta2=0.99, which control the running averages of the gradient and its square. The four trade-off parameters in Eqs. (4), (11), and (15) are set to 2, 5, 1, and 0.001, respectively.
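The Adam configuration above can be illustrated with a single update step; the two running averages correspond to the stated coefficients beta1 and beta2. This is a minimal NumPy sketch of the standard Adam rule with the paper's hyper-parameters, not the actual training code:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=2e-4, beta1=0.5, beta2=0.99, eps=1e-8):
    """One Adam update: m and v are running averages of the gradient and its square."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

With these settings, beta1=0.5 gives the first moment a short memory, while beta2=0.99 averages the squared gradient over a much longer horizon.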

For fair comparisons, we follow the protocols of [46, 11, 71, 10, 20] and use LightCNN [70] as the HFR model. The pre-trained LightCNN model is imported to initialize the network parameters. In the experiments, we use FSIAD to synthesize 100,000 pairs of heterogeneous faces as augmented data, and then fine-tune the LightCNN network on the combination of the original HFR dataset and the augmented data. For fine-tuning, we use Stochastic Gradient Descent (SGD) with a momentum factor of 0.9, an initial learning rate of 1e-3, and a weight decay of 1e-4. To tune the hyper-parameters of the proposed loss functions, we split the training data of each HFR database into a training set and a validation set at a ratio of 9:1.
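The fine-tuning configuration and the 9:1 split can be sketched as follows. This is a schematic, dependency-free illustration of a PyTorch-style SGD step with momentum and weight decay, plus the train/validation split; the helper names and seed are our own:

```python
import random

def sgd_momentum_step(param, grad, velocity, lr=1e-3, momentum=0.9, weight_decay=1e-4):
    """One SGD step with momentum and L2 weight decay (PyTorch-style update)."""
    grad = grad + weight_decay * param        # add the weight-decay gradient
    velocity = momentum * velocity + grad     # momentum buffer
    return param - lr * velocity, velocity

def split_train_val(samples, ratio=0.9, seed=0):
    """Shuffle and split a dataset 9:1 into training and validation subsets."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(len(samples) * ratio)
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```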

IV-C Qualitative Analyses

In order to analyze the effectiveness of face augmentation, we conduct qualitative experiments on the CASIA NIR-VIS 2.0 database [44] to compare our proposed FSIAD with the state-of-the-art methods MUNIT [28] and FaceShifter [43]. Both the disentanglement and the generation stages of face augmentation are evaluated. We first decouple face images into identity representations and attribute representations, and then produce images from stochastic combinations of identities and attributes. The results generated by these methods are displayed in Fig. 2. Intuitively, although MUNIT produces clear results, it fails to disentangle identities from attributes: its synthetic images are almost identical to the source images and bear little attribute similarity to the reference images. FaceShifter, by contrast, can transfer the identity of the source subject onto the reference image, but its results are mixed with artifacts and lack smooth facial textures. Although FaceShifter [43] has achieved outstanding face generation on large-scale VIS databases, the insufficient HFR data hinders it from generating high-quality heterogeneous faces. In contrast to these methods, FSIAD achieves better disentanglement and generation. On the one hand, the identities of the source images are disentangled from the other facial attributes and well preserved in the synthetic images. On the other hand, our method blends identities with facial attributes smoothly and avoids artifacts and rough textures. Moreover, facial attributes such as expression, pose, and complexion are transferred from the reference images to the generated images.
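The recombination underlying these experiments can be summarized schematically: the generator consumes the identity code of the source and the attribute code of the reference. The toy encoders and generator below are placeholders, only meant to show the data flow:

```python
def synthesize(source, reference, id_encoder, attr_encoder, generator):
    """Schematic FSIAD recombination: the generated face carries the
    identity of the source and the attributes of the reference."""
    z_id = id_encoder(source)         # identity-related representation
    z_attr = attr_encoder(reference)  # identity-unrelated (attribute) representation
    return generator(z_id, z_attr)
```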

Fig. 2: Experimental results on feature disentanglement and generation with FSIAD and state-of-the-art methods MUNIT and FaceShifter.

For further analysis, we visualize the reconstruction and integration results to demonstrate the ability of our method. Fig. 3 illustrates that the reconstructed images are high-quality and realistic. Besides, we synthesize images by integrating identity representations with attribute representations. As shown in Fig. 4, the synthesized results maintain the identities of the source images and share similar facial attributes with the reference images. In the meantime, we explore cross-domain synthesis, in which the source and reference images are drawn from different spectral domains. Fig. 5 shows that FSIAD is also applicable to cross-domain synthesis and retains its strong feature disentanglement and face generation abilities. Note that the source images differ from the reference images in both spectral domain and subject, which raises the difficulty of face synthesis.

Fig. 3: An example of face reconstruction on the CASIA NIR-VIS 2.0 database.
Fig. 4: A grid of synthetic faces. The images in the top row and leftmost column are source images and reference images, respectively. The remaining images are synthesized by FSIAD with the identities of the source images and the attributes of the reference images.
Fig. 5: The visual results of cross-domain synthesis. In contrast to Fig. 4, the source images and reference images are not only derived from different subjects, but also belong to different spectral domains.

IV-D Quantitative Analyses

Quantitative analyses are carried out to evaluate the performance of our proposed method in comparison with the state-of-the-art methods. They are divided into three parts: quantitative evaluations on image synthesis, heterogeneous face recognition experiments, and ablation studies.

IV-D1 Quantitative evaluations on image synthesis

We conduct quantitative evaluations to compare the effectiveness and efficiency of MUNIT [28], FaceShifter [43], and our proposed FSIAD. Three metrics are employed: Fréchet Inception Distance (FID) [21], Structural Similarity Index (SSIM) [68], and inference speed. FID measures the distance between the distributions of real and synthetic images; a lower FID indicates that the model synthesizes higher-fidelity images closer to the real ones. In the experiments, 10,000 pairs of NIR-VIS images are randomly sampled from the CASIA NIR-VIS 2.0 dataset, each pair consisting of a source image and a reference image. Then 10,000 pairs of NIR-VIS synthetic images are generated by each of the above methods. Furthermore, we utilize a pre-trained Inception V3 network [62] to map the real and synthetic images into 192-dimensional feature vectors for computing FID [57]. As illustrated in TABLE IV, our proposed method achieves the best FID values compared with MUNIT and FaceShifter; in other words, it is better suited to generating realistic, high-quality images for face augmentation.

Method            FID (NIR)  FID (VIS)
MUNIT [28]        17.4735    17.5492
FaceShifter [43]  11.5227    9.0366
FSIAD             3.2995     2.9839
TABLE IV: The Fréchet Inception Distances between the distributions of real images in the CASIA NIR-VIS 2.0 dataset and the synthetic images generated by different methods.
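The FID values above follow the standard Fréchet formula between Gaussians fitted to the two feature sets. A minimal NumPy sketch, using an eigendecomposition in place of SciPy's matrix square root, might look like:

```python
import numpy as np

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between Gaussians fitted to two feature sets:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    # Principal square root of c1 @ c2 via eigendecomposition
    # (scipy.linalg.sqrtm is the usual choice; this avoids the dependency).
    vals, vecs = np.linalg.eig(c1 @ c2)
    sqrt_cc = (vecs * np.sqrt(vals.astype(complex))) @ np.linalg.inv(vecs)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * sqrt_cc).real)
```

Two identical feature sets give a distance of zero; shifting all features by a constant adds the squared mean shift.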

Apart from FID, we quantitatively evaluate the performance of feature disentanglement. To assess the similarity of facial attributes between references and synthetic images, we employ SSIM to measure the similarity of structural information between images. Structural information is defined by the attributes of objects in the scene and is highly adapted to human visual perception [68]. We measure the SSIM score between the references and the synthetic images; a higher score indicates that the facial attributes of the synthetic images are more similar to those of the references. As depicted in TABLE V, FSIAD performs better than the state-of-the-art methods and demonstrates the strongest feature disentanglement.

Method            SSIM (NIR)  SSIM (VIS)
MUNIT [28]        0.2572      0.1310
FaceShifter [43]  0.2294      0.1623
FSIAD             0.4139      0.3617
TABLE V: The quantitative results of attribute similarity on the CASIA NIR-VIS 2.0 dataset. SSIM denotes the Structural Similarity index.
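The SSIM scores above use the standard luminance-contrast-structure formula [68]. A simplified global (single-window) version can be sketched as follows; the reference metric averages this quantity over local sliding windows:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM over whole images in [0, 1].
    c1 and c2 are the standard stabilizing constants."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images score 1; uncorrelated images score close to 0.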

To further assess the efficiency of the above methods, we measure the time consumed to synthesize 12,480 pairs of images with each of them. Inference speed is reported in frames per second (FPS), i.e., the number of synthesized images divided by the time spent on synthesis. As shown in TABLE VI, the proposed FSIAD is the most efficient method and in particular runs more than 11 times faster than FaceShifter.

Method            Time (s)  Speed (FPS)
MUNIT [28]        65.4009   190.823
FaceShifter [43]  415.9640  30.002
FSIAD             36.7743   339.367
TABLE VI: The inference speeds of different methods. The number of synthesized images is 12,480.
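The speeds in TABLE VI are simple throughput ratios. A sketch of the measurement (the harness and its names are illustrative, not the paper's benchmarking code):

```python
import time

def throughput_fps(num_images, elapsed_seconds):
    """FPS = number of synthesized images / wall-clock seconds."""
    return num_images / elapsed_seconds

def time_synthesis(synthesize_batch, num_batches, batch_size):
    """Hypothetical harness: time `synthesize_batch` over `num_batches`
    calls and report the resulting throughput."""
    start = time.perf_counter()
    for _ in range(num_batches):
        synthesize_batch(batch_size)
    return throughput_fps(num_batches * batch_size, time.perf_counter() - start)
```

For example, 12,480 images in 36.7743 s gives 339.367 FPS, and the ratio 415.964 / 36.774 of roughly 11.3 supports the 11x speed-up over FaceShifter.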

IV-D2 Heterogeneous face recognition experiments

In order to validate the effectiveness of FSIAD on heterogeneous face recognition, we conduct extensive experiments on five HFR databases: CASIA NIR-VIS 2.0 [44], BUAA-VisNir [25], Oulu-CASIA NIR-VIS [3], Tufts Face [53], and LAMP-HQ [76]. Our method is compared with the state-of-the-art methods TRIVET [47], IDR [19], ADFL [61], W-CNN [20], PACH [10], DVR [71], DVG [11], HFIDR [46], and OMDRA [23]. Since our method utilizes LightCNN [70] as the HFR model, LightCNN trained only on the original HFR dataset is selected as the baseline. In addition, we train LightCNN [70] with the synthetic face images generated by MUNIT [28] and FaceShifter [43] to explore the effectiveness of these two methods; the resulting models are denoted as LightCNN+MUNIT and LightCNN+FS, respectively. The experiments follow the protocols described in Section IV-A. The specific analyses are provided as follows.

(a) CASIA-NIR-VIS 2.0
(b) BUAA-VisNir
(c) Oulu-CASIA NIR-VIS
Fig. 6: The ROC curves of different methods on the CASIA-NIR-VIS 2.0, BUAA-VisNir, and Oulu-CASIA NIR-VIS datasets.

CASIA NIR-VIS 2.0 database. We conduct 10-fold cross-validation on the CASIA NIR-VIS 2.0 database. The Rank-1 accuracy, Verification Rate (VR)@False Accept Rate (FAR)=0.1%, and VR@FAR=0.01% are employed for testing. TABLE VII and Fig. 6(a) show that most methods achieve satisfactory HFR performance, with Rank-1 accuracy over 90%. Although HFIDR (LightCNN-9) [46] has the lowest Rank-1 accuracy, it improves dramatically by simply replacing the backbone from LightCNN-9 with LightCNN-29. Compared with TRIVET [47], IDR [19], LightCNN [70], and ADFL [61], more advanced methods including W-CNN [20], DVR [71], HFIDR [46], OMDRA [23], PACH [10], and DVG [11] effectively address the over-fitting problem and obtain better performance. Our proposed FSIAD outperforms the state-of-the-art methods, demonstrating its superiority. FSIAD surpasses PACH by 1.6% in VR@FAR=0.1%, which suggests that unconditional face generation provides sufficient heterogeneous faces and fundamentally addresses a key obstacle of HFR. Furthermore, FSIAD outperforms DVG [11] by 0.4% in VR@FAR=0.01%, showing that enriching the diversity of facial attributes is conducive to improving HFR. We also find that the synthetic images generated by MUNIT [28] and FaceShifter [43] improve the performance of LightCNN beyond the state-of-the-art methods OMDRA [23] and HFIDR [46]. However, LightCNN+MUNIT and LightCNN+FS fail to surpass DVG owing to their weaker face generation and feature disentanglement. Moreover, FSIAD has the smallest standard deviation in HFR performance among the state-of-the-art methods. The experimental results substantiate that our method effectively tackles the challenges of HFR and improves its performance.

Method                    Rank-1 (%)  VR@FAR=0.1% (%)  VR@FAR=0.01% (%)
TRIVET [47]               95.7±0.5    91.0±1.3         74.5±0.7
IDR [19]                  97.3±0.4    95.7±0.7         -
ADFL [61]                 98.2±0.3    97.2±0.5         -
W-CNN [20]                98.7±0.3    98.4±0.4         94.3±0.4
PACH [10]                 98.9±0.2    98.3±0.2         -
DVR [71]                  99.7±0.1    99.6±0.3         98.6±0.3
DVG [11]                  99.8±0.1    99.8±0.1         98.8±0.2
HFIDR (LightCNN-9) [46]   87.5±0.0    -                -
HFIDR (LightCNN-29) [46]  98.6±0.0    -                -
OMDRA [23]                99.6±0.1    99.4±0.2         -
LightCNN [70]             96.7±0.2    94.8±0.4         88.5±0.2
LightCNN+MUNIT [28]       99.6±0.0    99.5±0.0         98.1±0.1
LightCNN+FS [43]          99.7±0.0    99.6±0.0         98.1±0.2
FSIAD                     99.9±0.0    99.9±0.0         99.2±0.1
TABLE VII: The 10-fold experimental results of recognition on the CASIA NIR-VIS 2.0 database.
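The VR@FAR metric reported throughout these tables fixes a decision threshold from the impostor scores and then measures the fraction of genuine pairs accepted. A minimal sketch (the thresholding convention below is one common choice):

```python
import numpy as np

def vr_at_far(genuine, impostor, far=0.001):
    """Verification rate (true accept rate) at a fixed false accept rate.
    The threshold is the k-th highest impostor similarity, k = far * #impostors."""
    imp = np.sort(np.asarray(impostor))[::-1]   # impostor scores, descending
    k = max(int(far * len(imp)), 1)
    threshold = imp[k - 1]
    return float(np.mean(np.asarray(genuine) > threshold))
```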

BUAA-VisNir database. The Rank-1 accuracy, VR@FAR=1%, and VR@FAR=0.1% are used for testing on the BUAA-VisNir database. As illustrated in TABLE VIII and Fig. 6(b), the results of most state-of-the-art methods, including TRIVET [47], IDR [19], ADFL [61], W-CNN [20], PACH [10], DVR [71], and DVG [11], fall below 99% in terms of VR@FAR=1%. A likely reason is that BUAA-VisNir is a small-scale HFR dataset containing images of a limited number of subjects; these methods, except DVG, suffer from over-fitting and yield unsatisfactory performance. Although DVG generates sufficient heterogeneous faces as auxiliary training data, it does not take varied facial appearances into consideration, and variations in pose and expression on this dataset degrade its performance. LightCNN+MUNIT [28], LightCNN+FS [43], and the proposed FSIAD enrich the diversity of facial attributes in the generated images and achieve outstanding performance, with results of at least 99.5% in terms of VR@FAR=1%. OMDRA [23] learns residual-independent identity representations and obtains the best results. The results of FSIAD are close to those of OMDRA: FSIAD reaches 99.7% in terms of VR@FAR=1%, second only to OMDRA (99.9%).

Method               Rank-1 (%)  VR@FAR=1% (%)  VR@FAR=0.1% (%)
TRIVET [47]          93.9        93.0           80.9
IDR [19]             94.3        93.4           84.7
ADFL [61]            95.2        95.3           88.0
W-CNN [20]           97.4        96.0           91.9
PACH [10]            98.6        98.0           93.5
DVR [71]             99.2        98.5           96.9
DVG [11]             99.3        98.5           97.3
OMDRA [23]           100.0       99.9           99.7
LightCNN [70]        96.5        95.4           86.7
LightCNN+MUNIT [28]  99.4        99.5           98.7
LightCNN+FS [43]     99.5        99.5           98.8
FSIAD                99.8        99.7           99.1
TABLE VIII: The results of recognition on the BUAA-VisNir database.

Oulu-CASIA NIR-VIS database. The Rank-1 accuracy, VR@FAR=1%, and VR@FAR=0.1% are used for testing. TABLE IX and Fig. 6(c) report that all methods suffer a performance drop on the Oulu-CASIA NIR-VIS dataset compared with the CASIA NIR-VIS 2.0 and BUAA-VisNir datasets. Since this dataset contains more types of expressions and illuminations but fewer subjects than the former two, HFR becomes more difficult. The results of TRIVET [47], IDR [19], ADFL [61], and W-CNN [20] are below 80% in terms of VR@FAR=0.1%; these methods generalize poorly for HFR and fail to overcome the scarcity of subjects in this dataset. Besides, DVR [71], PACH [10], and DVG [11] do not consider feature disentanglement and cannot adapt to variations in facial attributes. HFIDR [46], OMDRA [23], LightCNN+MUNIT [28], LightCNN+FS [43], and FSIAD achieve prominent results and reach 100% Rank-1 accuracy. It is notable that LightCNN+MUNIT and LightCNN+FS improve LightCNN [70] from 65.1% to 81.0% and 87.6% in terms of VR@FAR=0.1%, respectively. This significant improvement indicates that the diverse facial attributes of the augmented heterogeneous faces exert a positive influence on HFR. OMDRA achieves the best performance among these methods; FSIAD obtains comparable results, with a VR@FAR=0.1% gap of only 0.2%.

Method                    Rank-1 (%)  VR@FAR=1% (%)  VR@FAR=0.1% (%)
TRIVET [47]               92.2        67.9           33.6
IDR [19]                  94.3        73.4           46.2
ADFL [61]                 95.5        83.0           60.7
W-CNN [20]                98.0        81.5           54.6
DVR [71]                  100.0       97.2           84.9
PACH [10]                 100.0       97.9           88.2
DVG [11]                  100.0       97.5           90.6
HFIDR (LightCNN-9) [46]   100.0       -              -
HFIDR (LightCNN-29) [46]  100.0       -              -
OMDRA [23]                100.0       98.5           92.2
LightCNN [70]             96.7        92.4           65.1
LightCNN+MUNIT [28]       100.0       97.2           81.0
LightCNN+FS [43]          100.0       97.0           87.6
FSIAD                     100.0       98.1           92.0
TABLE IX: The results of recognition on the Oulu-CASIA NIR-VIS database.

Tufts Face database. We compare our method with LightCNN and DVG on the Tufts Face database. TABLE X demonstrates that matching faces between the thermal and VIS domains is challenging due to the lack of facial textures and geometries and the large cross-domain discrepancy. Owing to the small-scale training data, LightCNN [70] performs poorly on the Tufts Face dataset, reaching only 15.3% Rank-1 accuracy. Benefiting from synthetic faces, DVG [11] raises the Rank-1 accuracy to 51.6%. Note that the dataset contains variations in expression, viewing angle, and accessories such as sunglasses. FSIAD further improves the performance and reaches 57.1% Rank-1 accuracy. These results reveal that augmenting facial attributes is conducive to Thermal-VIS face recognition.

Method         Rank-1 (%)  VR@FAR=1% (%)
LightCNN [70]  15.3        6.1
DVG [11]       51.6        31.6
FSIAD          57.1        35.7
TABLE X: The results of recognition on the Tufts Face database.

LAMP-HQ. We conduct 1-fold and 10-fold experiments on the LAMP-HQ database to validate the effectiveness of our method. Four metrics are used for evaluation: Rank-1 accuracy, VR@FAR=1%, VR@FAR=0.1%, and VR@FAR=0.01%. The results of the 1-fold experiment are reported in TABLE XI and Fig. 7. The conditional-synthesis-based methods ADFL [61] and PACH [10] generalize poorly to varied facial attributes and obtain results close to those of the baseline LightCNN [70]. DVG [11] takes advantage of unconditional face generation to synthesize sufficient heterogeneous faces as training data; unfortunately, since DVG cannot generate faces with diverse facial attributes, large attribute variations limit its gains. DVG exceeds LightCNN by 2.1% and 2.7% in terms of Rank-1 accuracy and VR@FAR=1%, respectively. FSIAD makes significant progress and outperforms all the state-of-the-art methods. In particular, FSIAD improves Rank-1 accuracy from 98.3% to 98.8% and VR@FAR=0.01% from 88.2% to 93.0%. These results show that the diversity of facial attributes in the training data has a critical impact on the improvement of HFR.

Method         Rank-1 (%)  VR@FAR=1% (%)  VR@FAR=0.1% (%)  VR@FAR=0.01% (%)
LightCNN [70]  96.2        96.1           85.3             69.3
ADFL [61]      95.8        91.5           71.0             -
PACH [10]      96.9        93.9           78.7             -
DVG [11]       98.3        98.8           96.0             88.2
FSIAD          98.8        99.1           97.9             93.0
TABLE XI: The 1-fold experimental results on the LAMP-HQ database. Results of ADFL [61] and PACH [10] are cited from [76].
Fig. 7: The ROC curves of LightCNN, DVG, the proposed FSIAD and its five variants on the LAMP-HQ database.

The results of the 10-fold experiments are presented in TABLE XII and Fig. 8. We observe that the conditional-synthesis-based methods ADFL [61] and PACH [10] perform similarly to LightCNN [70], which reveals that the limited number of paired heterogeneous faces hinders the improvement of HFR. Compared with ADFL and PACH, DVG makes progress and improves VR@FAR=1% from 95.5% to 99.0%. The results of DVG suggest that increasing the number of heterogeneous faces via unconditional synthesis boosts HFR performance. FSIAD exceeds the state-of-the-art methods, further improving Rank-1 accuracy, VR@FAR=1%, VR@FAR=0.1%, and VR@FAR=0.01% by 0.4%, 0.2%, 0.9%, and 4.0%, respectively. Moreover, Fig. 8 illustrates that FSIAD also has stable performance and achieves the smallest standard deviation. The experimental results indicate that enriching the attribute diversity of synthetic heterogeneous faces is fundamental to facilitating HFR. FSIAD produces sufficient heterogeneous faces with abundant facial attributes, which improve the generalization ability of the HFR network and substantially alleviate the performance degradation caused by large attribute variations.

Method         Rank-1 (%)  VR@FAR=1% (%)  VR@FAR=0.1% (%)  VR@FAR=0.01% (%)
LightCNN [70]  95.8±0.1    95.5±0.3       82.4±2.3         62.5±10.4
ADFL [61]      95.1±0.5    92.1±0.9       73.3±2.2         -
PACH [10]      95.4±0.5    93.1±0.4       75.3±1.7         -
DVG [11]       98.3±0.1    99.0±0.1       96.4±0.2         88.6±1.5
FSIAD          98.7±0.1    99.2±0.1       97.3±0.2         92.6±1.2
TABLE XII: The 10-fold experimental results on the LAMP-HQ database. Results of ADFL [61] and PACH [10] are cited from [76].
Fig. 8: The box plots of LightCNN, DVG and the proposed FSIAD on the LAMP-HQ database.

IV-E Ablation Studies

To investigate the effectiveness of the proposed loss functions in FSIAD, we conduct ablation studies on the LAMP-HQ database. Since we follow the experimental settings of [46, 11, 71, 10, 20] and adopt LightCNN [70] as the HFR network, we initialize its parameters with a model pre-trained on the MS-Celeb-1M dataset [17] as the baseline (BL). The baseline model fine-tuned on the LAMP-HQ dataset is denoted as LightCNN in TABLE XIII. For the ablation studies, we construct five variants, each trained without one specific loss function.
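Constructing such variants amounts to zeroing one trade-off weight in the total objective. A schematic sketch (the loss names are descriptive labels, not the paper's symbols, and the values are arbitrary):

```python
def total_loss(terms, weights):
    """Weighted sum of named loss terms; the weights play the role of the
    trade-off parameters in the total objective."""
    return sum(weights[name] * value for name, value in terms.items())

def ablate(weights, name):
    """Build an ablation variant by zeroing one trade-off weight, so the
    corresponding loss term no longer influences training."""
    w = dict(weights)
    w[name] = 0.0
    return w
```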

Qualitative and quantitative evaluations are conducted. We first train the variants on the LAMP-HQ dataset and then synthesize faces from the disentangled identity and attribute representations. Fig. 9 illustrates the faces generated by the five variants. The faces generated without the identity-preserving loss are clearly the worst and lose most of the identity information: without this loss, it is difficult for FSIAD to synthesize faces whose identities are consistent with those of the source faces. The inferior quality of these results indicates that the identity-preserving loss plays a fundamental role in face synthesis. As for facial attributes, we find that they become dissimilar to those of the reference images when the attribute loss is removed; for example, the complexions of the synthetic faces in the first and second rows differ from those of their corresponding reference faces. The attribute loss therefore supervises FSIAD to align the facial attributes of the synthetic faces with those of the references. However, the attribute loss alone is not a sufficient constraint on attribute similarity. We observe that the facial textures become blurry and ragged when FSIAD is trained without the structural similarity loss. As defined in Eq. (8), this loss is composed of a pixel-wise norm loss and a multi-scale structural similarity (MS-SSIM) loss. On the one hand, the norm loss enforces pixel consistency and makes global features such as the color and illumination of the synthetic faces similar to those of the reference faces. On the other hand, the MS-SSIM loss further regularizes local features such as the eyes, mouth, texture, and contour to stay close to those of the references. Consequently, neither the attribute-representation similarity nor the structural similarity can be dispensed with. The faces generated without the adversarial loss are only prototypes of the synthetic results: they are full of artifacts, since the generator lacks the supervision of the adversarial discriminator. This indicates that the adversarial loss drives FSIAD to refine the synthetic images and improve their fidelity. Compared with the previous results, the variant without the disentanglement loss produces images of better quality; however, these synthetic faces exhibit blending boundaries and incoherent textures, especially in the first and second images. This phenomenon is caused by the lack of feature disentanglement: without this loss, the attribute representations learned by the attribute encoders are mixed with the identity features of the reference faces, and it is hard for the generator to produce faces from such entangled representations, which results in fuzzy synthetic faces. In contrast to the variants, FSIAD takes advantage of all the proposed loss functions and successfully produces high-quality faces from combinations of identity and attribute representations. The synthetic faces of FSIAD have smooth textures and coherent contours, and are free of noise and artifacts. The qualitative ablation studies reveal that all the proposed loss functions are essential for FSIAD to produce high-quality faces and perform heterogeneous face augmentation.
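The combined structural term discussed above, a pixel-wise norm loss plus a multi-scale SSIM loss, can be sketched as follows. The weighting alpha and the 3-scale average-pooling pyramid are illustrative choices, not the paper's exact Eq. (8):

```python
import numpy as np

def ssim_single(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global single-window SSIM on images in [0, 1]."""
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def avg_pool2(x):
    """2x2 average pooling, dropping odd borders."""
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def ms_ssim(x, y, scales=3):
    """Average SSIM over a small image pyramid (a crude multi-scale variant)."""
    vals = []
    for _ in range(scales):
        vals.append(ssim_single(x, y))
        x, y = avg_pool2(x), avg_pool2(y)
    return float(np.mean(vals))

def structural_loss(x, y, alpha=0.5):
    """Pixel-wise norm term plus (1 - MS-SSIM); alpha is illustrative."""
    return alpha * np.abs(x - y).mean() + (1 - alpha) * (1.0 - ms_ssim(x, y))
```

Identical images incur (near) zero loss, while unrelated images are penalized by both terms.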

Fig. 9: The qualitative results of ablation studies on the LAMP-HQ database. The first and second columns are source images and reference images. The images in the rightmost column are synthesized by FSIAD, while the remaining columns are synthesized by the ablation variants, each trained without one of the proposed loss functions.

Apart from the qualitative analyses, we also conduct quantitative ablation studies on the HFR performance of these variants. As demonstrated in TABLE XIII and Fig. 7, the baseline (BL) performs worst on the LAMP-HQ dataset because it is only pre-trained on VIS faces. After BL is fine-tuned on LAMP-HQ, it achieves better performance and improves VR@FAR=1% from 92.5% to 96.1%. To isolate the effect of the synthetic faces, we construct a variant without augmented data by setting the corresponding trade-off weight in Eq. (15) to 0; this variant is equivalent to LightCNN, hence its results are identical. Owing to the enriched facial attributes, all the other variants of FSIAD outperform LightCNN and DVG. However, the variant without the identity-preserving loss neglects identity preservation and causes identity inconsistency between the source and synthetic faces, which directly harms recognition: it improves VR@FAR=0.1% only slightly, to 96.5%. Lacking the disentanglement of identity and facial attributes, the variant without the disentanglement loss produces incoherent textures in the generated faces, which limits the gains for HFR; its results are close to those of the variant without the identity-preserving loss. Without the structural similarity loss, FSIAD loses the structural information of the reference faces and synthesizes irregular faces, resulting in unsatisfactory HFR performance. In comparison, the variant without the attribute loss has a stronger synthesis ability and improves VR@FAR=0.1% to 97.0%. The variant without the adversarial loss can basically accomplish the goal of FSIAD, producing faces from the identities of sources and the attributes of references, and thus achieves the best VR@FAR=1% among the variants; but its synthetic faces are of low quality and filled with artifacts, which leads to only medium HFR performance. FSIAD outperforms all the variants and remarkably improves VR@FAR=0.01% from 90.9% to 93.0%. Only when all the proposed loss functions are active can FSIAD achieve its best results. In summary, the quantitative and qualitative analyses indicate that all the proposed loss functions in FSIAD are indispensable; they jointly supervise FSIAD to produce high-quality heterogeneous faces and improve the performance of HFR.

Method                           Rank-1 (%)  VR@FAR=1% (%)  VR@FAR=0.1% (%)  VR@FAR=0.01% (%)
BL                               94.5        92.5           77.3             60.3
LightCNN [70]                    96.2        96.1           85.3             69.3
DVG [11]                         98.3        98.8           96.0             88.2
FSIAD w/o synthetic faces        96.2        96.1           85.3             69.3
FSIAD w/o identity-preserving    98.5        98.8           96.5             89.9
FSIAD w/o disentanglement        98.5        98.9           96.6             89.6
FSIAD w/o structural similarity  98.5        98.9           96.6             89.0
FSIAD w/o attribute              98.6        98.9           97.0             90.9
FSIAD w/o adversarial            98.6        99.0           96.8             90.7
FSIAD                            98.8        99.1           97.9             93.0
TABLE XIII: The quantitative results of ablation studies on the LAMP-HQ database.

V Conclusions

In this paper, we propose a novel method, Face Synthesis with Identity-Attribute Disentanglement (FSIAD), to augment heterogeneous face images and improve the performance of HFR. FSIAD consists of an Identity-Attribute Disentanglement (IAD) component and a Face Synthesis Module (FSM). The IAD component decouples faces into representations of identities and attributes, where the attribute representations are regularized to be uncorrelated with identities. The FSM then synthesizes abundant heterogeneous faces from combinations of the disentangled identity and attribute representations, which augments the insufficient HFR training data and increases the diversity of facial attributes. We train the HFR network on both the original HFR data and the synthetic heterogeneous faces to improve its recognition performance. Through disentanglement and synthesis, FSIAD naturally tackles the three major challenges of HFR. Extensive experiments on five HFR databases reveal that our method is superior to previous methods and yields state-of-the-art performance.

Acknowledgment

The authors sincerely thank the associate editor and the reviewers for their professional comments and suggestions.

References

  • [1] T. Ahonen, A. Hadid, and M. Pietikainen (2006) Face description with local binary patterns: application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (12), pp. 2037–2041. External Links: Document Cited by: §II-A1.
  • [2] H. B. Bae, T. Jeon, Y. Lee, S. Jang, and S. Lee (2020) Non-visual to visual translation for cross-domain face recognition. IEEE Access 8, pp. 50452–50464. Cited by: §II-A3.
  • [3] J. Chen, D. Yi, J. Yang, G. Zhao, S. Z. Li, and M. Pietikainen (2009) Learning mappings for face synthesis from near infrared to visual light images. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 156–163. Cited by: §IV-D2, §IV.
  • [4] M. Cho, T. Chung, T. Kim, and S. Lee (2019) NIR-to-vis face recognition via embedding relations and coordinates of the pairwise features. In International Conference on Biometrics, Vol. , pp. 1–8. External Links: Document Cited by: §II-A1.
  • [5] M. Cho, T. Kim, I. Kim, K. Lee, and S. Lee (2021) Relational deep feature learning for heterogeneous face recognition. IEEE Transactions on Information Forensics and Security 16 (), pp. 376–388. External Links: Document Cited by: §I, §II-A1.
  • [6] T. Daniel and A. Tamar (2021) Soft-IntroVAE: analyzing and improving the introspective variational autoencoder. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4391–4400. Cited by: §II-B.
  • [7] T. de Freitas Pereira, A. Anjos, and S. Marcel (2019) Heterogeneous face recognition using domain specific units. IEEE Transactions on Information Forensics and Security 14 (7), pp. 1803–1816. External Links: Document Cited by: §I, §I.
  • [8] X. Di and V. M. Patel (2020) Facial synthesis from visual attributes via sketch using multiscale generators. IEEE Transactions on Biometrics, Behavior, and Identity Science 2 (1), pp. 55–67. External Links: Document Cited by: §II-A3, §II-A3.
  • [9] X. Di and V. M. Patel (2021) Multimodal face synthesis from visual attributes. IEEE Transactions on Biometrics, Behavior, and Identity Science 3 (3), pp. 427–439. External Links: Document Cited by: §II-A3, §II-A3.
  • [10] B. Duan, C. Fu, Y. Li, X. Song, and R. He (2020) Cross-spectral face hallucination via disentangling independent factors. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vol. , pp. 7927–7935. External Links: Document Cited by: §II-A3, §IV-B, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-E, TABLE XI, TABLE XII, TABLE VII, TABLE VIII, TABLE IX.
  • [11] C. Fu, X. Wu, Y. Hu, H. Huang, and R. He (2019) Dual variational generation for low shot heterogeneous face recognition. In Advances in Neural Information Processing Systems, pp. 2674–2683. Cited by: §I, §II-A3, §II-A3, §III-C, §IV-B, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-E, TABLE X, TABLE XI, TABLE XII, TABLE XIII, TABLE VII, TABLE VIII, TABLE IX.
  • [12] C. Fu, X. Wu, Y. Hu, H. Huang, and R. He (2021) DVG-face: dual variational generation for heterogeneous face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (), pp. 1–1. External Links: Document Cited by: §II-A1, §II-A3, §II-A3.
  • [13] G. Gao, H. Huang, C. Fu, Z. Li, and R. He (2021) Information bottleneck disentanglement for identity swapping. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3404–3413. Cited by: §II-B.
  • [14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680. Cited by: §II-A3, §II-B.
  • [15] D. Goswami, C. H. Chan, D. Windridge, and J. Kittler (2011) Evaluation of face recognition system in heterogeneous environments (visible vs nir). In IEEE International Conference on Computer Vision Workshops, Vol. , pp. 2160–2167. External Links: Document Cited by: §II-A1.
  • [16] G. Guo and N. Zhang (2019) A survey on deep learning based face recognition. Computer Vision and Image Understanding 189, pp. 102805. External Links: ISSN 1077-3142 Cited by: §I.
  • [17] Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao (2016) MS-celeb-1m: a dataset and benchmark for large-scale face recognition. In Proceedings of the European Conference on Computer Vision, pp. 87–102. Cited by: §IV-B, §IV-E.
  • [18] R. He, J. Cao, L. Song, Z. Sun, and T. Tan (2020) Adversarial cross-spectral face completion for nir-vis face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (5), pp. 1025–1037. External Links: Document Cited by: §II-A3.
  • [19] R. He, X. Wu, Z. Sun, and T. Tan (2017) Learning invariant deep representation for nir-vis face recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2000–2006. Cited by: §II-A2, §IV-D2, §IV-D2, §IV-D2, §IV-D2, TABLE VII, TABLE VIII, TABLE IX.
  • [20] R. He, X. Wu, Z. Sun, and T. Tan (2019) Wasserstein cnn: learning invariant features for nir-vis face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (7), pp. 1761–1773. External Links: Document Cited by: §I, §II-A1, §IV-B, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-E, TABLE VII, TABLE VIII, TABLE IX.
  • [21] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6629–6640. Cited by: §IV-D1.
  • [22] W. Hu and H. Hu (2021) Dual adversarial disentanglement and deep representation decorrelation for nir-vis face recognition. IEEE Transactions on Information Forensics and Security 16, pp. 70–85. External Links: Document Cited by: §I, §II-A2.
  • [23] W. Hu and H. Hu (2021) Orthogonal modality disentanglement and representation alignment network for nir-vis face recognition. IEEE Transactions on Circuits and Systems for Video Technology, pp. 1–1. Cited by: §II-A2, §II-A3, §IV-D2, §IV-D2, §IV-D2, §IV-D2, TABLE VII, TABLE VIII, TABLE IX.
  • [24] W. Hu, W. Yan, and H. Hua (2021) Dual face alignment learning network for nir-vis face recognition. IEEE Transactions on Circuits and Systems for Video Technology, pp. 1–1. External Links: Document Cited by: §II-A3.
  • [25] D. Huang, J. Sun, and Y. Wang (2012) The buaa-visnir face database instructions. School Comput. Sci. Eng., Beihang Univ., Beijing, China, Tech. Rep. IRIP-TR-12-FR-001 3. Cited by: §IV-D2, §IV.
  • [26] D. Huang and Y. F. Wang (2013) Coupled dictionary and feature space learning with applications to cross-domain image synthesis and recognition. In IEEE International Conference on Computer Vision, pp. 2496–2503. Cited by: §II-A3.
  • [27] X. Huang and S. Belongie (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In IEEE International Conference on Computer Vision, pp. 1501–1510. Cited by: §III-B.
  • [28] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision, pp. 172–189. Cited by: §II-B, §IV-C, §IV-D1, §IV-D2, §IV-D2, §IV-D2, §IV-D2, TABLE IV, TABLE V, TABLE VI, TABLE VII, TABLE VIII, TABLE IX.
  • [29] J. Huo, Y. Gao, Y. Shi, W. Yang, and H. Yin (2018) Heterogeneous face recognition by margin-based cross-modality metric learning. IEEE Transactions on Cybernetics 48 (6), pp. 1814–1826. External Links: Document Cited by: §I.
  • [30] Y. Jin, J. Lu, and Q. Ruan (2015) Coupled discriminative feature learning for heterogeneous face recognition. IEEE Transactions on Information Forensics and Security 10 (3), pp. 640–652. External Links: Document Cited by: §I.
  • [31] F. Juefei-Xu, D. K. Pal, and M. Savvides (2015) NIR-vis heterogeneous face recognition via cross-spectral joint dictionary learning and reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 141–150. Cited by: §II-A3.
  • [32] M. Kan, S. Shan, H. Zhang, S. Lao, and X. Chen (2016) Multi-view discriminant analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (1), pp. 188–194. External Links: Document Cited by: §I.
  • [33] A. Kantarcı and H. K. Ekenel (2019) Thermal to visible face recognition using deep autoencoders. In International Conference of the Biometrics Special Interest Group, pp. 1–5. External Links: Document Cited by: §I.
  • [34] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4396–4405. External Links: Document Cited by: §II-B.
  • [35] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8107–8116. External Links: Document Cited by: §II-B.
  • [36] T. Karras et al. (2021) Alias-free generative adversarial networks. arXiv preprint arXiv:2106.12423. Cited by: §II-B.
  • [37] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §IV-B.
  • [38] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §II-A3, §II-B, §III-A.
  • [39] B. Klare, Z. Li, and A. K. Jain (2011) Matching forensic sketches to mug shot photos. IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (3), pp. 639–646. External Links: Document Cited by: §I, §II-A1.
  • [40] A. Kortylewski, A. Schneider, T. Gerig, B. Egger, A. Morel-Forster, and T. Vetter (2018) Training deep face recognition systems with synthetic data. arXiv preprint arXiv:1802.05891. Cited by: §I.
  • [41] Z. Lei, Q. Bai, R. He, and S. Z. Li (2008) Face shape recovery from a single image using cca mapping between tensor spaces. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–7. External Links: Document Cited by: §I, §II-A2.
  • [42] J. Lezama, Q. Qiu, and G. Sapiro (2017) Not afraid of the dark: nir-vis face recognition via cross-spectral hallucination and low-rank embedding. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 6628–6637. Cited by: §I.
  • [43] L. Li, J. Bao, H. Yang, D. Chen, and F. Wen (2020) Advancing high fidelity identity swapping for forgery detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5074–5083. Cited by: §II-B, §IV-C, §IV-D1, §IV-D2, §IV-D2, §IV-D2, §IV-D2, TABLE IV, TABLE V, TABLE VI, TABLE VII, TABLE VIII, TABLE IX.
  • [44] S. Z. Li, D. Yi, Z. Lei, and S. Liao (2013) The casia nir-vis 2.0 face database. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 348–353. Cited by: §IV-C, §IV-D2, §IV.
  • [45] S. Liao, D. Yi, Z. Lei, R. Qin, and S. Z. Li (2009) Heterogeneous face recognition from local structures of normalized appearance. In Advances in Biometrics, pp. 209–218. External Links: ISBN 978-3-642-01793-3 Cited by: §II-A1.
  • [46] D. Liu, X. Gao, C. Peng, N. Wang, and J. Li (2021) Heterogeneous face interpretable disentangled representation for joint face recognition and synthesis. IEEE Transactions on Neural Networks and Learning Systems, pp. 1–15. External Links: Document Cited by: §II-A3, §II-A3, §IV-B, §IV-D2, §IV-D2, §IV-D2, §IV-E, TABLE VII, TABLE IX.
  • [47] X. Liu, L. Song, X. Wu, and T. Tan (2016) Transferring deep representation for nir-vis heterogeneous face recognition. In International Conference on Biometrics, pp. 1–8. External Links: Document Cited by: §II-A1, §IV-D2, §IV-D2, §IV-D2, §IV-D2, TABLE VII, TABLE VIII, TABLE IX.
  • [48] M. Luo, J. Cao, X. Ma, X. Zhang, and R. He (2021) FA-gan: face augmentation gan for deformation-invariant face recognition. IEEE Transactions on Information Forensics and Security 16, pp. 2341–2355. External Links: Document Cited by: §I.
  • [49] I. Masi, Y. Wu, T. Hassner, and P. Natarajan (2018) Deep face recognition: a survey. In SIBGRAPI Conference on Graphics, Patterns and Images, Vol. , pp. 471–478. External Links: Document Cited by: §I.
  • [50] Y. Nirkin, I. Masi, A. Tran Tuan, T. Hassner, and G. Medioni (2018) On face segmentation, face swapping, and face perception. In IEEE International Conference on Automatic Face & Gesture Recognition, pp. 98–105. External Links: Document Cited by: §II-B.
  • [51] Y. Nitzan, A. Bermano, Y. Li, and D. Cohen-Or (2020) Face identity disentanglement via latent space mapping. ACM Transactions on Graphics 39 (6), pp. 1–14. Cited by: §II-B.
  • [52] S. Ouyang, T. Hospedales, Y. Song, X. Li, C. C. Loy, and X. Wang (2016) A survey on heterogeneous face recognition: sketch, infra-red, 3d and low-resolution. Image and Vision Computing 56, pp. 28–48. Cited by: §I.
  • [53] K. Panetta, Q. Wan, S. Agaian, S. Rajeev, S. Kamath, R. Rajendran, S. P. Rao, A. Kaszowska, H. A. Taylor, A. Samani, and X. Yuan (2018) A comprehensive database for benchmarking imaging systems. IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (3), pp. 509–520. External Links: Document Cited by: §IV-D2, §IV.
  • [54] C. Peng, N. Wang, J. Li, and X. Gao (2019) Re-ranking high-dimensional deep local representation for nir-vis face recognition. IEEE Transactions on Image Processing 28 (9), pp. 4553–4565. External Links: Document Cited by: §I.
  • [55] N. Rasiwasia, J. Costa Pereira, E. Coviello, G. Doyle, G. R. G. Lanckriet, R. Levy, and N. Vasconcelos (2010) A new approach to cross-modal multimedia retrieval. In Proceedings of the ACM International Conference on Multimedia, pp. 251–260. Cited by: §II-A2.
  • [56] B. S. Riggan, N. J. Short, M. S. Sarfraz, S. Hu, H. Zhang, V. M. Patel, S. Rasnayaka, J. Li, T. Sim, S. M. Iranmanesh, and N. M. Nasrabadi (2018) ICME grand challenge results on heterogeneous face recognition: polarimetric thermal-to-visible matching. In IEEE International Conference on Multimedia & Expo Workshops, pp. 1–4. External Links: Document Cited by: §I.
  • [57] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training gans. In Advances in Neural Information Processing Systems, pp. 2234–2242. Cited by: §IV-D1.
  • [58] M. Shao and Y. Fu (2017) Cross-modality feature learning through generic hierarchical hyperlingual-words. IEEE Transactions on Neural Networks and Learning Systems 28 (2), pp. 451–463. External Links: Document Cited by: §IV-A3.
  • [59] A. Sharma and D. Jacobs (2011) Bypassing synthesis: pls for face recognition with pose, low-resolution and sketch. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 593–600. Cited by: §II-A2.
  • [60] A. Sharma, A. Kumar, H. Daume, and D. W. Jacobs (2012) Generalized multiview analysis: a discriminative latent space. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2160–2167. Cited by: §II-A2.
  • [61] L. Song, M. Zhang, X. Wu, and R. He (2018) Adversarial discriminative heterogeneous face recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 7355–7363. Cited by: §II-A3, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-D2, TABLE XI, TABLE XII, TABLE VII, TABLE VIII, TABLE IX.
  • [62] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826. External Links: Document Cited by: §IV-D1.
  • [63] X. Tang and X. Wang (2003) Face sketch synthesis and recognition. In IEEE International Conference on Computer Vision, pp. 687–694. Cited by: §II-A3.
  • [64] Y. Tsai, H. Hsu, C. Hou, and Y. F. Wang (2014) Person-specific domain adaptation with applications to heterogeneous face recognition. In IEEE International Conference on Image Processing, pp. 338–342. External Links: Document Cited by: §I.
  • [65] S. Wang, L. Zhang, Y. Liang, and Q. Pan (2012) Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2216–2223. Cited by: §II-A3.
  • [66] X. Wang and X. Tang (2009) Face photo-sketch synthesis and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (11), pp. 1955–1967. External Links: Document Cited by: §II-A3.
  • [67] Z. Wang, E. P. Simoncelli, and A. C. Bovik (2003) Multiscale structural similarity for image quality assessment. In The Asilomar Conference on Signals, Systems and Computers, pp. 1398–1402. Cited by: §III-B.
  • [68] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. Cited by: §III-B, §IV-D1, §IV-D1.
  • [69] Y. Wen, K. Zhang, Z. Li, and Y. Qiao (2016) A discriminative feature learning approach for deep face recognition. In Proceedings of the European Conference on Computer Vision, pp. 499–515. Cited by: §II-A1.
  • [70] X. Wu, R. He, Z. Sun, and T. Tan (2018) A light cnn for deep face representation with noisy labels. IEEE Transactions on Information Forensics and Security 13 (11), pp. 2884–2896. Cited by: §III-A, §IV-B, §IV-B, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-E, TABLE X, TABLE XI, TABLE XII, TABLE XIII, TABLE VII, TABLE VIII, TABLE IX.
  • [71] X. Wu, H. Huang, V. M. Patel, R. He, and Z. Sun (2019) Disentangled variational representation for heterogeneous face recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 9005–9012. Cited by: §II-A1, §II-A3, §IV-B, §IV-D2, §IV-D2, §IV-D2, §IV-D2, §IV-E, TABLE VII, TABLE VIII, TABLE IX.
  • [72] X. Wu, L. Song, R. He, and T. Tan (2018) Coupled deep learning for heterogeneous face recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, Cited by: §II-A2.
  • [73] S. Yang, K. Fu, X. Yang, Y. Lin, J. Zhang, and C. Peng (2020) Learning domain-invariant discriminative features for heterogeneous face recognition. IEEE Access 8, pp. 209790–209801. External Links: Document Cited by: §I.
  • [74] D. Yi, Z. Lei, and S. Z. Li (2015) Shared representation learning for heterogeneous face recognition. In IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, pp. 1–7. External Links: Document Cited by: §II-A2.
  • [75] D. Yi, R. Liu, R. Chu, Z. Lei, and S. Z. Li (2007) Face matching between near infrared and visible light images. In International Conference on Biometrics, pp. 523–530. Cited by: §II-A2.
  • [76] A. Yu, H. Wu, H. Huang, Z. Lei, and R. He (2021) LAMP-hq: a large-scale multi-pose high-quality database and benchmark for nir-vis face recognition. International Journal of Computer Vision, pp. 1–17. Cited by: §I, §II-A3, §IV-D2, TABLE XI, TABLE XII, §IV.
  • [77] J. Yu, J. Cao, Y. Li, X. Jia, and R. He (2019) Pose-preserving cross spectral face hallucination. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 1018–1024. Cited by: §II-A3.
  • [78] H. Zhang, B. S. Riggan, S. Hu, N. J. Short, and V. M. Patel (2019) Synthesis of high-quality visible faces from polarimetric thermal faces using generative adversarial networks. International Journal of Computer Vision 127 (6), pp. 845–862. Cited by: §I.
  • [79] J. Zhang, X. Zeng, M. Wang, Y. Pan, L. Liu, Y. Liu, Y. Ding, and C. Fan (2020) FReeNet: multi-identity face reenactment. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5326–5335. Cited by: §II-B.
  • [80] H. Zhao, O. Gallo, I. Frosio, and J. Kautz (2017) Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging 3 (1), pp. 47–57. Cited by: §III-B.
  • [81] H. Zhu, C. Fu, Q. Wu, W. Wu, C. Qian, and R. He (2020) AOT: appearance optimal transport based identity swapping for forgery detection. In Advances in Neural Information Processing Systems, pp. 21699–21712. Cited by: §II-B.
  • [82] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision, pp. 2223–2232. Cited by: §II-A3.
  • [83] M. Zhu, J. Li, N. Wang, and X. Gao (2020) Knowledge distillation for face photo-sketch synthesis. IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14. External Links: Document Cited by: §II-A3.