Facial Expression Restoration Based on Improved Graph Convolutional Networks

10/23/2019 ∙ by Zhilei Liu, et al.

Facial expression analysis in the wild is challenging when the facial image is of low resolution or partially occluded. Considering the correlations among different facial local regions under different facial expressions, this paper proposes a novel facial expression restoration method based on a generative adversarial network that integrates an improved graph convolutional network (IGCN) and a region relation modeling block (RRMB). Unlike conventional graph convolutional networks, which take vectors as input features, IGCN can use tensors of face patches as inputs, which better preserves the structural information of face patches. The proposed RRMB is designed to address facial generative tasks, including inpainting and super-resolution, with facial action unit detection, aiming to restore the facial expression to match the ground truth. Extensive experiments conducted on the BP4D and DISFA benchmarks demonstrate the effectiveness of our proposed method through quantitative and qualitative evaluations.




1 Introduction

Facial restoration aims to recover valuable missing information of faces caused by low resolution, occlusion, large pose, etc., and has gained increasing attention in the field of face recognition, especially with the emergence of convolutional neural networks (CNNs) [21, 11] and generative adversarial networks (GANs) [6]. Many sub-tasks of facial restoration have achieved great breakthroughs, including face completion [15, 27], face super-resolution or hallucination [1, 12], and face frontal view synthesis [8]. Most previous works conduct these face restoration tasks independently; moreover, only facial identity restoration is considered, without taking the restoration of facial expression information into account. Recently, some studies [22] have tried to jointly deal with these degraded situations with the help of deep learning and GANs. Three kinds of degraded facial images are shown in Fig. 1, including faces that are both low-resolution and occluded, which motivates addressing low resolution and occlusion jointly. During face super-resolution, a model mainly exploits intra-patch relations, whereas face inpainting relies more on inter-patch relations. Since a graph can encode both intra-patch and inter-patch relationships through its edges, we attempt to jointly address low resolution and partial occlusion with a graph-based model.

Facial expression restoration is beneficial to the study of facial emotion analysis, which is easily affected by challenging environments, e.g., low resolution and occlusion. In the field of facial expression analysis, facial action units (AUs) refer to a unique set of basic facial muscle actions at certain locations defined by the Facial Action Coding System (FACS) [3], one of the most comprehensive and objective systems for describing facial expressions. Considering that facial structure and the patterns of facial expressions are relatively fixed, facial expression restoration should benefit from taking the relations among different AUs into consideration under occlusion and low-resolution conditions. However, studies of facial expression restoration that explore the relations of different facial regions under different facial expressions are rare in the literature.

Figure 1: Example face images in the wild with low resolution, partial occlusion, and both.

In this paper, we propose a novel facial expression restoration framework that exploits the correlations among different facial AUs. Our idea is to first restore the whole face and then detect AUs to verify whether the facial expression is preserved, which we call facial expression restoration. In order to learn the features of occluded facial patches from unoccluded patches, we propose an improved graph convolutional network (IGCN), whose structure is shown in Fig. 3. IGCN can also help improve the resolution of unoccluded face patches from other visible face patches by exploring the correlations among different facial components. Next, with the help of the proposed IGCN, a Region Relation Modeling Block (RRMB) is built to capture facial features at different scales for face restoration. Given a finer facial division, more accurate relations between face patches can be built within the proposed framework. With a more accurate adjacency matrix in the proposed IGCN, our model can restore feature maps in deep networks from the features of visible patches. We also use IGCN to build an AU detector that exploits the correlations among AUs to help the generator restore more accurate facial expressions. Last but not least, a discriminator is designed to push the generator toward realistic faces, and an additional perceptual loss further improves the quality of the generated faces.

The contributions of this paper are threefold. First, a novel end-to-end facial expression restoration framework is proposed that jointly addresses face inpainting and face super-resolution. Second, an IGCN is proposed for facial patch relation modeling, and an RRMB is built with the aid of the proposed IGCN. Third, a facial action unit detector is exploited in the generative model to improve the facial expression restoration capability of the generator.

2 Related Works

Our proposed framework is closely related to existing image restoration methods, facial action units detection methods, and graph convolutional network related studies, since we aim to study facial expression restoration by modeling AU relations with the aid of GCN based methods.

2.1 Image restoration

Recently, image restoration has attracted increasing attention due to the emergence of the generative adversarial network (GAN) [6], which generates an image from a random vector and uses a discriminator to distinguish real images from generated ones. To improve the quality of generated images, many works use a perceptual loss to supervise model learning [2]. The conditional GAN was proposed to constrain the distribution of generated images [17]. Image restoration includes image super-resolution or hallucination [1, 12], image completion [15, 27], face frontal view synthesis [8], image denoising [18], image deraining [25], image dehazing [4], image deblurring [13], and shadow removal [24].

Many of these tasks lack real paired datasets. Image super-resolution usually uses bicubic interpolation to synthesize low-resolution images, as in [12]. Image completion usually uses a binary mask to synthesize masked images, as in [27]. We follow these settings for degraded image synthesis in this paper. Also, [22] tries to jointly deal with face hallucination and deblurring; the co-occurrence of both problems is common in the wild, as depicted in Fig. 1. Here, we jointly address face inpainting and super-resolution for facial expression restoration, focusing on inter-patch and intra-patch relations. Considering that face images carry structural information, [1] achieves impressive results with the aid of face parsing and facial landmark information, which motivates us to restore faces with facial action unit information.

2.2 Action units detection

Automatic facial AU detection plays an important role in describing facial actions. To recognize facial action units in complex environments, many works have been devoted to exploring various features and classifiers. [26] jointly detects facial action units and landmarks, recognizing AUs with the help of landmarks. [9] uses convolutional networks to capture deep representations for AU recognition. [23] designs AU-related regions, Zhao et al. [29] exploit patches centered at facial landmarks, Li et al. [14] exploit hand-crafted heatmaps centered at facial landmarks based on the Manhattan distance, and [30] proposes a deep region layer to help detect AUs. These predefined regions help the model attend to AU-related areas during learning. [19] also uses AU co-existence relations to help recognize AUs. These methods achieve good AU detection performance, which motivates us to exploit the relations among AU-related face patches. With these relations, the improved graph convolutional network fuses the features of different face patches to detect AUs, which serves as supervisory information for restoring facial expressions.

2.3 Graph convolutional networks

Recently, there has been a rich line of research on graph neural networks [5]. Kipf and Welling [10] propose the graph convolutional network (GCN), inspired by first-order graph Laplacian methods. GCN mainly achieves promising performance on graph node classification tasks. The adjacency matrix is defined by the links between nodes of the graph, and the transformation on nodes is a linear transformation rather than a learned convolutional filter. Using the relations among nodes, a graph convolutional network can embed the features of each node together with those of its related nodes. We improve the conventional graph convolutional network by taking tensors as inputs and replacing the linear transformation with a standard convolutional layer.
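The conventional GCN propagation rule from [10], $H' = \sigma(\hat{A} H W)$ with $\hat{A} = D^{-1/2}(A + I)D^{-1/2}$, can be sketched in a few lines of numpy; the toy graph, features, and weights below are illustrative and not from the paper.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(H, A, W):
    """One GCN layer: ReLU(A_norm @ H @ W). H: (nodes, in_dim), W: (in_dim, out_dim)."""
    return np.maximum(0.0, normalize_adjacency(A) @ H @ W)

# toy graph: node 0 linked to nodes 1 and 2
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
H = np.ones((3, 4))           # 4-d vector feature per node (non-Euclidean data)
W = np.full((4, 2), 0.5)      # linear transformation, not a learned conv filter
out = gcn_layer(H, A, W)
print(out.shape)  # (3, 2)
```

Note that the node transform `H @ W` is purely linear; the IGCN in Sec. 3.2 replaces it with a shared convolution over patch tensors.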

3 Facial Expression Restoration based on IGCN

In this section, the proposed model is first introduced in Sec. 3.1. Then, the details of the improved graph convolutional network (IGCN) are explained in Sec. 3.2, and the region relation modeling block (RRMB) is explained in Sec. 3.3.

3.1 Proposed Model

Figure 2: Framework of the proposed facial expression restoration network. During training, generator generates restored facial expression images with the supervision of pre-trained AU classifier and the adversarial loss from discriminator. During testing, only the generator is adopted to restore the ill facial images.

The structure of our proposed method is shown in Fig. 2. It consists of a generator to restore the whole face image, a discriminator to judge whether the generated face is real or fake, and a classifier to recognize the facial action units (AUs) that supervise the generated face image. To make full use of unoccluded face patches, we jointly deal with the face completion and super-resolution problems. For face completion, the model captures the relations between unoccluded and occluded face patches; for face super-resolution, it captures the relations among different unoccluded face patches. This ensures the global harmony of the generated face images.

The losses of our proposed model consist of three parts: the loss for generator learning, the loss for discriminator learning, and the loss for AU classifier learning. For the generator, we use a pixel loss $\mathcal{L}_{pixel}$, defined as

$$\mathcal{L}_{pixel} = \lVert \hat{I} - I \rVert_1,$$

where $\hat{I}$ is the facial expression image produced by the generator and $I$ is the ground-truth facial image. Besides, we use a pre-trained 19-layer VGG [21] to compute a perceptual loss to recover more facial details. The perceptual loss is defined as

$$\mathcal{L}_{percep} = \lVert \phi_{i,j}(I) - \phi_{i,j}(\hat{I}) \rVert_2^2,$$

where $\phi_{i,j}(I)$ is the ground truth's feature map obtained by the $j$-th convolution layer before the $i$-th max-pooling layer in VGG-19, and $\phi_{i,j}(\hat{I})$ is the generated face's. An adversarial loss is used to improve the quality and realism of the restored face image; the loss of the discriminator is defined as

$$\mathcal{L}_{adv} = \mathbb{E}_{I}\big[\log D(I)\big] + \mathbb{E}_{\hat{I}}\big[\log\big(1 - D(\hat{I})\big)\big],$$

where $D$ is the discriminator that distinguishes the ground-truth face from the generated face. In order to retain the facial expression, AUs are one way to convey facial actions, such as the six basic facial expressions. The AU classifier is used to help the generator learn the distribution of facial action units. The loss of the AU classifier is defined as

$$\mathcal{L}_{AU} = \lVert C(I) - C(\hat{I}) \rVert_2^2,$$

where $C$ is the AU classifier: $C(I)$ are the ground truth's logits from the last fully connected (FC) layer before activation, and $C(\hat{I})$ are the generated face's logits from the same layer. The overall loss of the proposed facial expression restoration framework is

$$\mathcal{L} = \mathcal{L}_{pixel} + \lambda_1 \mathcal{L}_{percep} + \lambda_2 \mathcal{L}_{adv} + \lambda_3 \mathcal{L}_{AU},$$

where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are trade-off parameters. A general GAN loss is used for learning the discriminator and a cross-entropy loss for learning the classifier. This min-max game helps generate realistic face images. Note that since AU detection is a multi-label task, the cross-entropy loss is computed separately for each AU, with the sigmoid function as the activation.
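As a minimal sketch of how the four generator-side terms combine, the numpy snippet below mirrors the overall objective. The L1/L2 norm choices, the stand-in arrays for VGG features and AU logits, and the lambda values are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_loss(fake, real):
    """L1 distance between restored and ground-truth images (one common choice)."""
    return np.abs(fake - real).mean()

def perceptual_loss(fake_feat, real_feat):
    """L2 distance between VGG-19 feature maps (plain arrays stand in here)."""
    return ((fake_feat - real_feat) ** 2).mean()

def au_logit_loss(fake_logits, real_logits):
    """Match pre-activation AU-classifier logits on fake vs. real faces."""
    return ((fake_logits - real_logits) ** 2).mean()

def generator_loss(fake, real, fake_feat, real_feat, fake_logits, real_logits,
                   d_fake_prob, lam1=1.0, lam2=0.01, lam3=0.1):
    """Total generator objective; the lambdas are illustrative trade-off weights."""
    adv = -np.log(d_fake_prob + 1e-8)   # non-saturating adversarial term
    return (pixel_loss(fake, real)
            + lam1 * perceptual_loss(fake_feat, real_feat)
            + lam2 * adv
            + lam3 * au_logit_loss(fake_logits, real_logits))

fake, real = rng.random((64, 64, 3)), rng.random((64, 64, 3))
loss = generator_loss(fake, real,
                      rng.random((8, 8, 128)), rng.random((8, 8, 128)),
                      rng.random(12), rng.random(12),   # 12 AUs, as on BP4D
                      d_fake_prob=0.4)
print(round(float(loss), 4))
```

When the restored image, features, and logits all match the ground truth and the discriminator is fooled, this loss approaches zero, which is the intended training signal.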

3.2 Improved Graph Convolutional Networks

Figure 3: Structure of the IGCN (left). $a_{ij}$ represents the link between the $i$-th patch and the $j$-th patch, and $A$ represents the adjacency matrix. On the right is the structure of the RRMB, which consists of IGCN $1\times1$, $2\times2$, and $8\times8$: $1\times1$ splits the input into only 1 patch, $2\times2$ splits it into 4 patches, and $8\times8$ splits it into 64 patches.

In a conventional graph convolutional network [10], the features of the nodes are vectors, i.e., non-Euclidean data. For a face image, every patch is associated with other patches and is Euclidean data. In order to use face patches directly as nodes while modeling the relations among different facial patches, an improved graph convolutional network is proposed, whose overall structure is shown in Fig. 3. With a graph convolutional network, we can use unoccluded regions to complete occluded regions via predefined relations, such as using an unoccluded left eye to restore an occluded right eye, and can also use unoccluded regions to enhance the quality of other unoccluded regions.

First, we split the feature map of the face image into face patches with position IDs in a fixed order. For each face patch, we use a convolutional layer to transform its representation. Note that conventional graph convolutional networks use vectors as features and a linear transformation layer to capture representations. In contrast, our proposed IGCN uses 4-D tensors of face patches as input features and a convolutional layer to capture representations. The weights of the convolutional layers for all patches are shared within one layer of IGCN. Using the symmetric adjacency matrix, we obtain each face patch feature after a sum operation. Finally, we convert the face patch features back into a feature map according to the original position IDs. A deconvolutional layer can also be used in IGCN. The adjacency matrix is predefined via the facial structure. IGCN can be defined as

$$F' = \sigma\big(\hat{A} \otimes \mathrm{Conv}(F)\big),$$

where $F$ is the stacked patch features, $\hat{A}$ is the normalized adjacency matrix, $\otimes$ represents the tensor product, and $\sigma$ is the activation function. The adjacency matrix is defined by the correlations of the facial structure, such as the symmetry of the left and right face and the correlations among AUs. If two patches are related, the link between them is set to 1, and to 0 otherwise. Here we define the relations by the cosine similarity between two patches and by the co-existence and exclusion relations between two patches containing two AUs, located according to facial landmarks.
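A minimal numpy sketch of one IGCN layer follows: split a feature map into a patch grid, apply a shared transform to every patch (a 1x1 convolution, implemented as a matrix product, stands in for the paper's standard convolutional layer), mix patch features through the normalized adjacency matrix, and reassemble by position ID. The grid size, adjacency, and weights are illustrative.

```python
import numpy as np

def split_patches(feat, n):
    """Split an (H, W, C) feature map into an n*n grid of patches, row-major."""
    H, W, C = feat.shape
    ph, pw = H // n, W // n
    return np.stack([feat[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
                     for i in range(n) for j in range(n)])

def merge_patches(patches, n):
    """Inverse of split_patches: reassemble patches by their position IDs."""
    ph, pw, C = patches.shape[1:]
    out = np.zeros((n * ph, n * pw, C))
    for k, p in enumerate(patches):
        i, j = divmod(k, n)
        out[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = p
    return out

def igcn_layer(feat, A, W_conv, n=2):
    """IGCN sketch: shared 1x1 conv per patch, then patch mixing with A_norm."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))          # D^-1/2 (A + I) D^-1/2
    P = split_patches(feat, n)                         # (n*n, ph, pw, C_in)
    P = P @ W_conv                                     # shared 1x1 conv -> C_out
    mixed = np.einsum('ij,jhwc->ihwc', A_norm, P)      # tensor product with A_norm
    return merge_patches(np.maximum(0.0, mixed), n)    # ReLU, then reassemble

rng = np.random.default_rng(1)
feat = rng.random((8, 8, 4))
# symmetric links, e.g. left/right mirror patches in a 2x2 grid
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
out = igcn_layer(feat, A, rng.random((4, 6)))
print(out.shape)  # (8, 8, 6)
```

Each output patch is thus a weighted sum of the transformed features of its related patches and itself, which is how an occluded patch can borrow information from visible ones.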

3.3 Region Relation Modeling Block

The Region Relation Modeling Block (RRMB) is designed to model the relations of different face patches. Such multi-scale structures are popular in image feature representation learning. In order to capture features at different scales, we use three scales: splitting the input into 1 patch, 2×2 patches, and 8×8 patches. With a single patch, IGCN 1×1 is the same as a standard convolutional layer; this scale captures global image-level features. The second scale splits the input into 2×2 patches; IGCN 2×2 ensures stable features under flipping and captures object-level features. The third scale splits the input into 8×8 patches; IGCN 8×8 constructs associations between related spatial patches, such as eyes and mouth, and captures patch-level features. The features of all scales are summed pixel-wise to obtain the final output features.
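The three-branch pixel-wise sum can be sketched as below. The fully connected, row-normalized adjacency used at each scale is only a placeholder for the paper's facial-structure adjacency, and a plain patch-mixing step replaces the learned IGCN transform.

```python
import numpy as np

def patch_mix(feat, n, A):
    """Split feat into an n*n patch grid, mix patches with normalized A, merge."""
    H, W, C = feat.shape
    ph, pw = H // n, W // n
    P = np.stack([feat[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
                  for i in range(n) for j in range(n)])   # (n*n, ph, pw, C)
    A_hat = A + np.eye(n * n)                             # add self-loops
    A_norm = A_hat / A_hat.sum(1, keepdims=True)          # row-normalize
    P = np.einsum('ij,jhwc->ihwc', A_norm, P)
    out = np.zeros_like(feat)
    for k in range(n * n):
        i, j = divmod(k, n)
        out[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = P[k]
    return out

def rrmb(feat):
    """RRMB sketch: image-level (1 patch), object-level (2x2) and patch-level
    (8x8) branches, summed pixel-wise."""
    out = np.zeros_like(feat)
    for n in (1, 2, 8):
        A = np.ones((n * n, n * n)) - np.eye(n * n)       # placeholder links
        out += patch_mix(feat, n, A)
    return out

feat = np.random.default_rng(2).random((16, 16, 3))
out = rrmb(feat)
print(out.shape)  # (16, 16, 3)
```

With one patch the mixing reduces to an identity on the feature map, matching the claim that IGCN 1×1 behaves like a standard convolutional layer at the global scale.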

Figure 4: Sample distributions of BP4D dataset (left) and DISFA dataset (right). 1 represents AU appearance, 0 represents AU disappearance.

4 Experiments

4.1 Datasets and Settings

Datasets: Our proposed facial expression restoration network is evaluated on two widely used datasets for facial expression analysis: BP4D [28] and DISFA [16]. The settings of the two datasets are similar to [20]. For BP4D, we split the dataset into training and testing sets by subject, with 28 subjects in the training set and 13 subjects in the testing set. Each set contains 12 AUs labeled with occurrence or absence. A total of 100760 frames are used for training and 45809 frames for testing. For DISFA, the processing is the same as for BP4D; there are 18 subjects in the training set and 9 subjects in the testing set. Each set contains 8 AUs labeled with occurrence or absence. A total of 87209 frames are adopted for training and 43605 frames for testing. Note that the color and background of face images vary greatly in the DISFA dataset, which makes it difficult for the model to learn good results. The sample distributions of the two datasets are shown in Fig. 4, which illustrates the extremely unbalanced label distribution.

Preprocessing: For each face image, we perform a similarity transformation, including rotation, uniform scaling, and translation, to obtain an aligned face. This transformation is shape-preserving and brings no change to the expression. The input degraded face images are produced by resizing the high-resolution face image to a low resolution via bicubic interpolation and adding a random binary mask whose size is one fourth of the input size.
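A rough numpy sketch of this degradation pipeline follows. Average pooling stands in for the paper's bicubic interpolation (which would need an image library), and the 4x scale factor and the square mask placement are illustrative assumptions.

```python
import numpy as np

def synthesize_ill_face(img, scale=4, mask_frac=4, rng=None):
    """Make a degraded input: downsample the (H, W, C) image (average pooling
    as a stand-in for bicubic interpolation), then zero out a random square
    mask whose side is 1/mask_frac of the low-resolution size."""
    rng = rng or np.random.default_rng()
    H, W, C = img.shape
    h, w = H // scale, W // scale
    low = img.reshape(h, scale, w, scale, C).mean(axis=(1, 3))  # downsample
    m = h // mask_frac
    y, x = rng.integers(0, h - m), rng.integers(0, w - m)
    low[y:y+m, x:x+m] = 0.0                                     # random occlusion
    return low

face = np.random.default_rng(3).random((128, 128, 3))
ill = synthesize_ill_face(face, rng=np.random.default_rng(3))
print(ill.shape)  # (32, 32, 3)
```

This mirrors the common practice cited from [12] and [27]: low resolution is synthesized by interpolation, occlusion by a binary mask.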

Implementation details: We first pre-train the AU classifier, then jointly learn the generator and discriminator, updating the discriminator roughly once for every three generator updates. The pre-trained AU classifier achieves good metrics, slightly below the state of the art. The trade-off parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$ are set empirically. We use Adam for optimization.
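The alternating update scheme described above (one discriminator step per three generator steps) can be written as a simple schedule; the helper name and step counts below are illustrative, not from the paper.

```python
def training_schedule(n_steps, d_every=3):
    """Return which network to update at each point in training: the
    discriminator ('D') is updated once per `d_every` generator ('G') updates."""
    plan = []
    for step in range(1, n_steps + 1):
        plan.append('G')
        if step % d_every == 0:
            plan.append('D')
    return plan

print(training_schedule(6))  # ['G', 'G', 'G', 'D', 'G', 'G', 'G', 'D']
```

Updating the generator more often than the discriminator is a common way to keep the discriminator from overpowering the generator early in adversarial training.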

4.2 Visual results

Figure 5: Facial expression restoration results on test datasets, top four rows show the comparison on BP4D and others on DISFA. Zoom in for better view on details.

We aim to jointly address the face inpainting and super-resolution problems for facial expression restoration. General face inpainting methods are not suited to this case, because their numbers of downsampling and upsampling layers are equal. Here we verify the effectiveness of our proposed method on the two datasets against SRGAN [12], a notable model in image super-resolution; when IGCN 1×1 is used, the proposed model is similar to a residual block [7], the main component of SRGAN. A comparison among SRGAN and the proposed model with and without AU detection is shown in Fig. 5. To highlight the differences between methods, we emphasize the eye and mouth areas. In the first row, SRGAN generates teeth while our method generates a closed mouth similar to the ground truth, corresponding to AU 25; in the third row, SRGAN generates closed eyes while our method generates open eyes, corresponding to AU 43. Regarding quality and realism, the results of SRGAN show artificial streaks and blur, as in the first row. Note that the shown results come from the 13 test subjects of BP4D and the 9 test subjects of DISFA, respectively. Our proposed method outperforms SRGAN in both realism and quality, and it better retains the facial action and expression after restoration.

Table 1: F1-score and accuracy for 12 AUs on BP4D (columns: SRGAN, Ours-, Ours, Ground-Truth for each metric). Ours uses the total loss to learn the proposed model; Ours- lacks the loss of the AU classifier.
Table 2: F1-score and accuracy for 8 AUs on DISFA (columns: SRGAN, Ours-, Ours, Ground-Truth for each metric). Ours uses the total loss to learn the proposed model; Ours- lacks the loss of the AU classifier.

4.3 Quantitative results

To investigate the effectiveness of the AU classifier in our framework for facial expression restoration, Table 1 and Table 2 present AU detection results on our restored facial expression images compared with the corresponding ground-truth images in terms of F1-score and accuracy, where "Ours-" is the proposed framework without the AU classifier. Note that the AU classifier is trained on the training set, while the quantitative results are compared across models on the test set.
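The per-AU F1-score used in these comparisons can be computed as below for binary multi-label predictions; the toy prediction and label arrays are illustrative.

```python
import numpy as np

def per_au_f1(pred, true):
    """F1-score per AU for multi-label 0/1 predictions.
    pred, true: (n_frames, n_aus) binary arrays; returns one F1 per AU."""
    tp = ((pred == 1) & (true == 1)).sum(0)
    fp = ((pred == 1) & (true == 0)).sum(0)
    fn = ((pred == 0) & (true == 1)).sum(0)
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    return 2 * precision * recall / np.maximum(precision + recall, 1e-8)

# 4 frames, 2 AUs
true = np.array([[1, 0], [1, 1], [0, 0], [1, 0]])
pred = np.array([[1, 0], [0, 1], [0, 1], [1, 0]])
print(per_au_f1(pred, true))  # [0.8, 0.6667]
```

Unlike plain accuracy, F1 is sensitive to the rare '1' (active) status, which matters under the unbalanced AU distributions shown in Fig. 4.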

The results in Table 1 demonstrate that our proposed method outperforms SRGAN on the BP4D dataset; even "Ours-" brings significant improvements in average F1-score and accuracy over SRGAN, and learning the framework with the AU classifier gains further improvements in both metrics. The large gaps between our proposed method and SRGAN are associated with the distribution of BP4D: due to the unbalanced distribution of statuses for each AU, as shown in Fig. 4, our facial expression restoration method is inclined to learn the '0' status for each AU. In terms of accuracy, our result is similar to the ground truth, and even slightly better: the AU classifier pushes the logits of the generated face images toward the AU distribution, which leads our model to slightly higher accuracy than the ground truth.

Similar results on the DISFA dataset are shown in Table 2, from which it can be observed that our proposed method, with or without the AU classifier, outperforms SRGAN in average F1-score and accuracy. The accuracy improvements are modest, which is also associated with the AU occurrence distribution in the dataset: for most face images in DISFA, AUs do not occur, as shown in Fig. 4, so the AU classifier tends to always predict the '0' status. The lower F1-score likewise indicates that the model learns the neutral status more easily than the active status for each AU.

Table 3: SSIM and PSNR on BP4D and DISFA (columns: SRGAN, Ours-, Ours for each dataset).

For image restoration tasks, many works compare the structural similarity (SSIM) and the peak signal-to-noise ratio (PSNR). The results are shown in Table 3. Our proposed method achieves the best SSIM and PSNR on both the BP4D and DISFA datasets, which demonstrates its effectiveness for facial expression restoration. Note that the improvements on DISFA are small due to its extremely unbalanced distribution.
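For reference, PSNR and a single-window SSIM can be computed as below; practical SSIM implementations average the statistic over local windows, so the `ssim_global` helper here is a simplified sketch.

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = ((a - b) ** 2).mean()
    return 10 * np.log10(peak ** 2 / mse)

def ssim_global(a, b, peak=1.0):
    """Single-window SSIM over the whole image (a rough, global variant)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

rng = np.random.default_rng(4)
gt = rng.random((64, 64))
restored = np.clip(gt + 0.05 * rng.standard_normal((64, 64)), 0, 1)
print(round(float(psnr(restored, gt)), 1))
```

Higher is better for both metrics; an image compared against itself yields an SSIM of exactly 1 and an unbounded PSNR.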

5 Conclusion

In this paper, we have proposed a novel facial expression restoration method that integrates a region relation modeling block, built with the aid of an improved graph convolutional network, to model the relations among different facial regions. The proposed method is beneficial to facial expression analysis under challenging conditions, e.g., low resolution and occlusion. Extensive qualitative and quantitative evaluations on BP4D and DISFA have demonstrated the effectiveness of our method for facial expression restoration. The proposed framework is also promising for other face restoration tasks and multi-task problems, e.g., face recognition and facial attribute analysis.


This work is supported by the National Natural Science Foundation of China under Grants of 41806116 and 61503277. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research.


  • [1] Y. Chen, Y. Tai, X. Liu, C. Shen, and J. Yang (2018) Fsrnet: end-to-end learning face super-resolution with facial priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2492–2501. Cited by: §1, §2.1.
  • [2] M. Cheon, J. Kim, J. Choi, and J. Lee (2018) Generative adversarial network-based image super-resolution using perceptual content losses. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 0–0. Cited by: §2.1.
  • [3] R. Ekman (1997) What the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (facs). Oxford University Press, USA. Cited by: §1.
  • [4] D. Engin, A. Genç, and H. Kemal Ekenel (2018) Cycle-dehaze: enhanced cyclegan for single image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 825–833. Cited by: §2.1.
  • [5] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263–1272. Cited by: §2.3.
  • [6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §1, §2.1.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §4.2.
  • [8] R. Huang, S. Zhang, T. Li, and R. He (2017) Beyond face rotation: global and local perception gan for photorealistic and identity preserving frontal view synthesis. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2439–2448. Cited by: §1, §2.1.
  • [9] P. Khorrami, T. Paine, and T. Huang (2015) Do deep neural networks learn facial action units when doing expression recognition?. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 19–27. Cited by: §2.2.
  • [10] T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §2.3, §3.2.
  • [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1.
  • [12] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681–4690. Cited by: §1, §2.1, §4.2.
  • [13] L. Li, J. Pan, W. Lai, C. Gao, N. Sang, and M. Yang (2018) Learning a discriminative prior for blind image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6616–6625. Cited by: §2.1.
  • [14] W. Li, F. Abtahi, Z. Zhu, and L. Yin (2017) Eac-net: a region-based deep enhancing and cropping approach for facial action unit detection. In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pp. 103–110. Cited by: §2.2.
  • [15] Y. Li, S. Liu, J. Yang, and M. Yang (2017) Generative face completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3911–3919. Cited by: §1, §2.1.
  • [16] S. M. Mavadati, M. H. Mahoor, K. Bartlett, P. Trinh, and J. F. Cohn (2013) Disfa: a spontaneous facial action intensity database. IEEE Transactions on Affective Computing 4 (2), pp. 151–160. Cited by: §4.1.
  • [17] M. Mirza and S. Osindero (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: §2.1.
  • [18] N. Muhammad, N. Bibi, A. Jahangir, and Z. Mahmood (2018) Image denoising with norm weighted fusion estimators. Pattern Analysis and Applications 21 (4), pp. 1013–1022. Cited by: §2.1.
  • [19] G. Peng and S. Wang (2018) Weakly supervised facial action unit recognition through adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2188–2196. Cited by: §2.2.
  • [20] Z. Shao, Z. Liu, J. Cai, and L. Ma (2018) Deep adaptive attention for joint facial action unit detection and face alignment. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 705–720. Cited by: §4.1.
  • [21] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §1, §3.1.
  • [22] Y. Song, J. Zhang, L. Gong, S. He, L. Bao, J. Pan, Q. Yang, and M. Yang (2018) Joint face hallucination and deblurring via structure generation and detail enhancement. International Journal of Computer Vision, pp. 1–16. Cited by: §1, §2.1.
  • [23] S. Taheri, Q. Qiu, and R. Chellappa (2014) Structure-preserving sparse decomposition for facial expression analysis. IEEE Transactions on Image Processing 23 (8), pp. 3590–3603. Cited by: §2.2.
  • [24] J. Wang, X. Li, and J. Yang (2018) Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1788–1797. Cited by: §2.1.
  • [25] Y. Wang, X. Zhao, T. Jiang, L. Deng, Y. Chang, and T. Huang (2018) Rain streak removal for single image via kernel guided cnn. arXiv preprint arXiv:1808.08545. Cited by: §2.1.
  • [26] Y. Wu and Q. Ji (2016) Constrained joint cascade regression framework for simultaneous facial action unit recognition and facial landmark detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3400–3408. Cited by: §2.2.
  • [27] R. A. Yeh, C. Chen, T. Yian Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do (2017) Semantic image inpainting with deep generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5485–5493. Cited by: §1, §2.1.
  • [28] X. Zhang, L. Yin, J. F. Cohn, S. Canavan, M. Reale, A. Horowitz, and P. Liu (2013) A high-resolution spontaneous 3d dynamic facial expression database. In 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–6. Cited by: §4.1.
  • [29] K. Zhao, W. Chu, F. De la Torre, J. F. Cohn, and H. Zhang (2015) Joint patch and multi-label learning for facial action unit detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2207–2216. Cited by: §2.2.
  • [30] K. Zhao, W. Chu, and H. Zhang (2016) Deep region and multi-label learning for facial action unit detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3391–3399. Cited by: §2.2.