In this work, we address the problem of registering a detailed 3D mesh template to a human face in an image. The registered mesh can be used for virtual try-on of lipstick or for puppeteering virtual avatars, where the accuracy of lip and eye contours is critical to realism.
In contrast to methods that use a parametric model of the human face, we directly predict the positions of the face mesh vertices in 3D. We base our architecture on earlier efforts in this field [5] that use a two-stage architecture consisting of a face detector followed by a landmark regression network. However, using a single regression network for the entire face leads to degraded quality in regions that are perceptually more significant (e.g. the lips and eyes).
One possible way to alleviate this issue is a cascaded approach: use the initial mesh prediction to produce tight crops around these regions and pass them to specialized networks that produce higher quality landmarks. While this directly addresses the accuracy problem, it introduces performance issues: the relatively large separate models each consume the original image as input, and the additional synchronization steps between the GPU and CPU are very costly on mobile phones. In this paper, we show that a single model can achieve the same quality as the cascaded approach by employing region-specific heads that transform the feature maps with spatial transformers [4], while being up to 30% faster during inference. We term this architecture the attention mesh. An added benefit is that it is easier to train and distribute, since it is internally consistent, compared to multiple disparate networks chained together.
We use an architecture similar to the one described in [7], where the authors build a network that is robust to the initialization provided by different face detectors. Despite the differing goals of the two papers, it is interesting to note that both suggest that combining spatial transformers with heads corresponding to salient face regions produces marked improvements over a single large network. We provide the details of our implementation for producing landmarks corresponding to the eyes, irises, and lips, together with quality and inference performance benchmarks.
2 Attention mesh
The model accepts an image as input. This image is provided either by the face detector or via tracking from a previous frame. After extracting a feature map, the model splits into several submodels (Figure 2). One submodel predicts all 478 face mesh landmarks in 3D and defines crop bounds for each region of interest. The remaining submodels predict region landmarks from the corresponding feature maps that are obtained via the attention mechanism.
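This split into a base mesh submodel and region submodels can be sketched as follows. The sketch below is purely illustrative: the linear stubs stand in for convolutional layers, and the feature-map size (64×64×16) and landmark index set are assumptions, not the actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(image):
    """Stand-in feature extractor: image -> spatial feature map.
    A real model would be a CNN; here we return a dummy 64x64x16 map."""
    return rng.standard_normal((64, 64, 16))

def mesh_head(features):
    """Predicts all 478 face mesh landmarks in 3D from pooled features."""
    pooled = features.mean(axis=(0, 1))            # global average pool
    w = rng.standard_normal((16, 478 * 3))         # stand-in linear layer
    return (pooled @ w).reshape(478, 3)

def crop_bounds(landmarks, idx):
    """Derive a region crop (center, size) from a subset of the mesh."""
    pts = landmarks[idx, :2]
    center = pts.mean(axis=0)
    size = (pts.max(axis=0) - pts.min(axis=0)).max()
    return center, size

image = rng.standard_normal((256, 256, 3))         # input size assumed
features = backbone(image)
mesh = mesh_head(features)

# A region submodel would then run on the feature-map crop defined by
# these bounds; LIPS_IDX is a placeholder index set, not the real topology.
LIPS_IDX = np.arange(0, 40)
lips_center, lips_size = crop_bounds(mesh, LIPS_IDX)
```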
We concentrate on three facial regions with key contours: the lips and the two eyes (Figure 1). Each eye submodel predicts the iris as a separate output once its features reach a sufficiently high spatial resolution. This allows eye features to be reused while keeping the more dynamic iris landmarks independent of the more static eye landmarks.
Individual submodels allow us to control the network capacity dedicated to each region and boost quality where necessary. To further improve the accuracy of the predictions, we apply a set of normalizations to ensure that the eyes and lips are aligned with the horizontal and are of uniform size.
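One way to implement such a normalization is to build an affine transform from a region's two corner landmarks, rotating the segment between them to the horizontal and rescaling it to a fixed length. A minimal NumPy sketch (an illustration of the idea, not the paper's implementation):

```python
import numpy as np

def alignment_transform(left, right, target_size=1.0):
    """Return a 2x3 affine that maps the segment (left, right) onto a
    horizontal segment of length target_size centered at the origin."""
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    d = right - left
    angle = np.arctan2(d[1], d[0])                 # current inclination
    scale = target_size / np.linalg.norm(d)        # uniform size factor
    c = np.cos(-angle) * scale
    s = np.sin(-angle) * scale
    R = np.array([[c, -s],
                  [s,  c]])                        # rotate + scale
    center = (left + right) / 2.0
    t = -R @ center                                # move center to origin
    return np.hstack([R, t[:, None]])              # 2x3 affine matrix

M = alignment_transform([2.0, 1.0], [4.0, 3.0], target_size=2.0)
# Applying M to the right corner (homogeneous coords) yields
# (target_size / 2, 0), up to floating-point error.
mapped_right = M @ np.array([4.0, 3.0, 1.0])
```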
We train the attention mesh network in two phases. First, we employ ideal crops from the ground truth with slight augmentations and train all submodels independently. Then, we obtain crop locations from the model itself and train again to adapt the region submodels to them.
Several attention mechanisms, both soft and hard, have been developed for visual feature extraction [2, 4]. These mechanisms sample a grid of 2D points in feature space and extract the features under the sampled points in a differentiable manner (e.g. using 2D Gaussian kernels, or affine transformations combined with differentiable interpolation). This allows architectures to be trained end-to-end and enriches the features used by the attention mechanism. Specifically, we use a spatial transformer module to extract region features from the feature map. The spatial transformer is controlled by an affine transformation matrix (Equation 1) and allows us to zoom, rotate, translate, and skew the sampled grid of points.
This affine transformation can be constructed either via supervised prediction of matrix parameters, or by computing them from the output of the face mesh submodel.
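The sampling step of such a spatial transformer can be sketched as an affine-transformed grid followed by bilinear interpolation. This is a generic NumPy reimplementation of the mechanism from the spatial transformer literature, not the paper's code; grid sizes are arbitrary.

```python
import numpy as np

def affine_grid(theta, out_h, out_w):
    """Sampling locations for a 2x3 affine theta over normalized
    coordinates in [-1, 1]. Returns an (out_h, out_w, 2) grid."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)
    return coords @ theta.T                                  # (H, W, 2)

def bilinear_sample(fmap, grid):
    """Bilinear lookup of a 2D feature map at normalized grid points.
    Each output value blends the four surrounding feature-map cells."""
    h, w = fmap.shape[:2]
    x = (grid[..., 0] + 1) * (w - 1) / 2          # back to pixel coords
    y = (grid[..., 1] + 1) * (h - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = x - x0, y - y0
    return (fmap[y0, x0] * (1 - dx) * (1 - dy)
            + fmap[y0, x0 + 1] * dx * (1 - dy)
            + fmap[y0 + 1, x0] * (1 - dx) * dy
            + fmap[y0 + 1, x0 + 1] * dx * dy)

# The identity affine reproduces the feature map; zoom/rotation/skew
# come from changing the entries of theta.
theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
fmap = np.arange(16.0).reshape(4, 4)
crop = bilinear_sample(fmap, affine_grid(theta, 4, 4))
```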
Our dataset contains 30K in-the-wild mobile camera photos taken with numerous camera sensors under varied conditions. To obtain the ground truth mesh vertex coordinates in 2D, we used manual annotation with special emphasis on the consistency of salient contours. The z coordinate was approximated using a synthetic model.
To evaluate our unified approach, we compare it against the cascaded model which consists of independently trained region-specific models for the base mesh, eyes and lips that are run in succession.
Table 1 demonstrates that the attention mesh runs faster than the cascade of separate face and region models on a typical modern mobile device. Performance was measured with the TFLite GPU inference engine [6]. An additional speed-up comes from reducing costly CPU-GPU synchronizations, since the entire attention mesh inference runs in a single pass on the GPU.
| Model | Inference Time (ms) |
| --- | --- |
| Eye & iris | 4.70 |
| Cascade (sum of separate models) | 22.4 |
A quantitative comparison of both models is presented in Table 2. As the representative metric, we use the mean distance between the predicted and ground truth locations of a specific subset of the points, normalized by the 3D interocular distance (or, for the lip and eye regions, by the distance between the corresponding corners) for scale invariance. The attention mesh model outperforms the cascade of models on the eye regions and demonstrates comparable performance on the lips region.
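A sketch of this metric, assuming point-to-point Euclidean distances in 3D and normalizing points (e.g. the eye or lip corners) supplied by the caller; the toy landmarks below are made-up data for illustration:

```python
import numpy as np

def normalized_mean_error(pred, gt, ref_a, ref_b):
    """Mean point-to-point distance between predicted and ground-truth
    landmarks, normalized by the distance between two reference points
    (e.g. the 3D interocular distance) for scale invariance."""
    pred = np.asarray(pred, float)
    gt = np.asarray(gt, float)
    errors = np.linalg.norm(pred - gt, axis=-1)    # per-landmark distance
    scale = np.linalg.norm(np.asarray(ref_a, float)
                           - np.asarray(ref_b, float))
    return errors.mean() / scale

# Toy example: every predicted point is offset by (0.1, 0.1, 0.1),
# and the normalizing reference distance is 2.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pred = gt + 0.1
err = normalized_mean_error(pred, gt, ref_a=[0, 0, 0], ref_b=[2, 0, 0])
print(round(err, 4))  # → 0.0866  (= sqrt(3)*0.1 / 2)
```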
The performance of our model enables several real-time AR applications like virtual try-on of makeup and puppeteering.
Accurate registration of the face mesh is critical for applications like AR makeup, where even small alignment errors can push the rendered effect into the "uncanny valley" [8]. We built a lipstick rendering solution (Figure 4) on top of our attention mesh model using the contours provided by the lip submodel. In an A/B test with 10 images and 80 participants, 46% of AR samples were classified as real and 38% of real samples as AR.
Our model can also be used for virtual puppeteering and facial triggers. We built a small fully connected network that predicts 10 blend shape coefficients for the mouth and 8 blend shape coefficients for each eye, taking the output of the attention mesh submodels as input. To handle differences between human faces, we apply Laplacian mesh editing to morph a canonical mesh into the predicted mesh [3]. This lets us use the same blend shape coefficients for different faces without additional fine-tuning. We demonstrate some results in Figure 5.
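Applying such coefficients amounts to a weighted sum of per-vertex offsets over a neutral mesh. In the sketch below, the neutral mesh and the blend shape basis are random placeholder data, and `apply_blend_shapes` is a hypothetical helper, not the paper's renderer.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vertices, n_shapes = 478, 10                 # e.g. 10 mouth blend shapes

neutral = rng.standard_normal((n_vertices, 3))           # canonical mesh
deltas = rng.standard_normal((n_shapes, n_vertices, 3))  # per-shape offsets

def apply_blend_shapes(neutral, deltas, weights):
    """mesh = neutral + sum_i weights[i] * deltas[i]"""
    return neutral + np.tensordot(weights, deltas, axes=1)

# Zero coefficients leave the neutral face unchanged; nonzero
# coefficients deform it toward the corresponding expressions.
rest = apply_blend_shapes(neutral, deltas, np.zeros(n_shapes))
posed = apply_blend_shapes(neutral, deltas, np.ones(n_shapes))
```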
We present a unified model that enables accurate face mesh prediction in real-time. By using a differentiable attention mechanism, we are able to devote computational resources to salient face regions without incurring the performance penalty of running independent region-specific models. Our model and demos will soon be available in MediaPipe (https://github.com/google/mediapipe).
1. Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3D faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages 187–194, 1999.
2. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
3. Jianwei Hu, Ligang Liu, and Guozhao Wang. Dual Laplacian morphing for triangular meshes. Computer Animation and Virtual Worlds, 18(4–5):271–277, 2007.
4. Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015.
5. Yury Kartynnik, Artsiom Ablavatski, Ivan Grishchenko, and Matthias Grundmann. Real-time facial surface geometry from monocular video on mobile GPUs. arXiv preprint arXiv:1907.06724, 2019.
6. Juhyun Lee, Nikolay Chirkov, Ekaterina Ignasheva, Yury Pisarchyk, Mogan Shieh, Fabio Riccardi, Raman Sarokin, Andrei Kulik, and Matthias Grundmann. On-device neural net inference with mobile GPUs. arXiv preprint arXiv:1907.01989, 2019.
7. J. Lv, X. Shao, J. Xing, C. Cheng, and X. Zhou. A deep regression architecture with two-stage re-initialization for high performance facial landmark detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3691–3700, 2017.
8. Jun'ichiro Seyama and Ruth S. Nagayama. The uncanny valley: Effect of realism on the impression of artificial human faces. Presence: Teleoperators and Virtual Environments, 16(4):337–351, 2007.