End-to-End Face Parsing via Interlinked Convolutional Neural Networks

02/12/2020 · Zi Yin et al. · Tsinghua University

Face parsing is an important computer vision task that requires accurate pixel-wise segmentation of facial parts (such as eyes, nose, and mouth), providing a basis for further face analysis, modification, and other applications. In this paper, we introduce a simple, end-to-end face parsing framework: STN-aided iCNN (STN-iCNN), which extends the interlinked Convolutional Neural Network (iCNN) by adding a Spatial Transformer Network (STN) between its two isolated stages. The STN provides a trainable connection in the original two-stage iCNN pipeline, making end-to-end joint training possible. Moreover, as a by-product, the STN also yields more precise cropped parts than the original cropper. Thanks to these two advantages, our approach significantly improves the accuracy of the original model.


1 Related Works

Face Parsing. Most existing deep learning methods for face parsing can be divided into region-based methods and global methods.

Global methods directly perform semantic segmentation on the entire image. Early approaches include the epitome model [warrell2009labelfaces] and the exemplar-based method [smith2013exemplar]. The success of deep convolutional neural network (CNN) models has brought drastic advances in computer vision [NIPS2012_4824], and many CNN-based face parsing methods have been proposed. Jackson [jackson2016cnn] used extra landmarks as guidance, exploiting boundary cues to locate facial parts. Liu [liu2015multi] used a hybrid framework combining a CRF and a CNN to jointly model pixel-wise likelihoods and label dependencies. Zhou [zhou2017face] proposed an architecture that combines a fully convolutional network [long2015fully], super-pixel information, and a CRF model. Wei [wei2017learning] proposed a CNN framework that adaptively adjusts the receptive fields of the middle layers and obtains better receptive fields for face parsing. These models can usually be trained end-to-end, but their performance can still be improved, as the optimization cannot focus on each individual part separately.

Region-based approaches independently predict pixel-level labels for each part by training separate models for each facial component. Luo [luo2012hierarchical] proposed a hierarchical structure that treats each detected facial part separately. Liu [liu2017face] achieved state-of-the-art accuracy while maintaining a very fast running speed by combining a shallow CNN with a spatially variant RNN. In the work of Zhou [zhou2015interlinked], region positioning and face parsing were accomplished with the same network structure, without additional landmark detection or extra annotations. The processing was divided into two isolated stages trained independently: the first stage produces a rough mask of the whole image, from which the coordinates of the facial parts are calculated; the second stage performs fine labeling for each facial part individually. Finally, the outputs of the second stage are remapped back according to the coordinates, yielding a complete face segmentation.

Lin [lin2019face], on the other hand, used a hybrid of global and region-based approaches: they handled the variable shape and size of hair with tanh-warping and then performed segmentation globally using an FCN. Moreover, similarly to Mask R-CNN [he2017mask], they used RoI-Align for region-based segmentation of the facial parts.

General Semantic Segmentation. Face parsing is essentially semantic segmentation specialized to faces. In recent years, general semantic and instance segmentation have achieved remarkable results, and many deep learning methods have been proposed for these problems. Fully convolutional networks (FCN) replace the last few fully connected layers with convolutional layers to enable efficient end-to-end learning and prediction [long2015fully]. Building on the FCN, many improvements such as SegNet [badrinarayanan2017segnet], U-Net [ronneberger2015u], CRFasRNN [zhu2016adversarial], and DeepLab [chen2017deeplab] have been proposed. Compared with general semantic segmentation, face parsing involves only a few specific classes (such as the nose, eyes, and mouth), whose positions and sizes are strongly correlated. Applying such models directly to face parsing often fails to exploit these contextual relations [lin2019face].

2 Method

2.1 Overall Pipeline

The work of Zhou [zhou2015interlinked] was used as our baseline method. As shown in Fig. 1a, the baseline method is divided into two steps. The first step detects and crops the facial parts; the second step labels the cropped parts separately. Because the cropping operation used in this process is not differentiable, the two stages cannot be jointly trained, which limits the performance of the system. Our proposed method solves this problem by adding a Spatial Transformer Network (STN) between the two steps of the baseline method. The STN replaces the original cropper with a differentiable spatial transformer, allowing the model to be trained end-to-end.

As shown in Fig. 1b, each input image is first resized and passed to the iCNN model, which performs coarse segmentation. The predicted rough mask is then sent to the STN, whose localization network predicts the transformation parameter matrix $\theta$. With $\theta$ as a parameter, the grid transformer crops the corresponding parts from the original image, and the cropped parts are sent to the fine segmentation model. Finally, the inverse grid transformer remaps all partial predictions into a complete prediction.

Given an image $I \in \mathbb{R}^{C \times H \times W}$, it is first squarely resized into $I_r \in \mathbb{R}^{C \times N \times N}$, where $C$ is the number of channels of the original image and $N$ is the resized side length. $I_r$ is then processed by an iCNN network $F_1$, which performs coarse labeling to produce a rough prediction $\hat{Y}_r$:

$\hat{Y}_r = F_1(I_r)$ (1)

where $\hat{Y}_r \in \mathbb{R}^{L \times N \times N}$ and $L$ is the number of label channels.

The rough prediction $\hat{Y}_r$ is then processed by the localization network $F_{loc}$ of the STN to obtain the transformation parameter matrix $\theta$:

$\theta = F_{loc}(\hat{Y}_r)$ (2)

where $\theta \in \mathbb{R}^{K \times 2 \times 3}$ and $K$ is the number of individual components. Given $\theta$, the grid transformer $T_\theta$ crops the individual parts $P_k$ from the original image $I$:

$\{P_k\}_{k=1}^{K} = T_\theta(I)$ (3)

where $P_k \in \mathbb{R}^{C \times h \times w}$, and $h$ and $w$ denote the height and width of the cropped patches. Each patch $P_k$ is sent to the same iCNN $F_2$ to predict the pixel-wise labels $\hat{Y}_k$:

$\hat{Y}_k = F_2(P_k)$ (4)

where $\hat{Y}_k \in \mathbb{R}^{L_k \times h \times w}$ and $L_k$ is the number of label channels predicted for part $k$.

The previous steps are sufficient for training, since $\{\hat{Y}_k\}$ is used for the loss computation; the ground truth of $\hat{Y}_k$ is cropped from the input labels by $T_\theta$.

The next steps are only needed to assemble the partial predictions and compute the score used in testing. This is done by applying the inverse grid transformer $T_\theta^{-1}$ to remap all partial predictions back to their original positions:

$\hat{Y} = T_\theta^{-1}(\{\hat{Y}_k\}_{k=1}^{K})$ (5)

where $\hat{Y} \in \mathbb{R}^{L \times H \times W}$.

Finally, Softmax and channel-wise argmax are applied to obtain the final prediction $\tilde{Y}$:

$\tilde{Y} = \operatorname{argmax}(\operatorname{Softmax}(\hat{Y}))$ (6)

where the argmax is taken over the label channels.

Having described the pipeline, we now detail the architectures of its two major modules, iCNN and STN, in the following subsections.
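To make the data flow of Eqs. (1)-(6) concrete, the following PyTorch sketch strings the stages together along the training path (the inverse remapping used only at test time is omitted). It is a minimal illustration, not the authors' released code: the sub-network names (coarse_net, loc_net, fine_net), the default sizes n and patch, and the (B, K, 2, 3) shape of the predicted parameters are assumptions.

```python
import torch.nn.functional as F

def stn_icnn_forward(image, coarse_net, loc_net, fine_net, n=128, patch=81):
    """Hypothetical forward pass mirroring Eqs. (1)-(4); n and patch are
    illustrative sizes, and the three sub-networks are placeholders."""
    # Eq. (1): coarse labeling on the squarely resized image I_r.
    image_r = F.interpolate(image, size=(n, n), mode="bilinear",
                            align_corners=False)
    rough = coarse_net(image_r)                       # (B, L, n, n)

    # Eq. (2): one 2x3 affine matrix per facial component.
    theta = loc_net(rough)                            # (B, K, 2, 3), assumed shape
    B, K = theta.shape[:2]

    part_preds = []
    for k in range(K):
        # Eq. (3): crop part k from the *original* image via a sampling grid.
        grid = F.affine_grid(theta[:, k],
                             (B, image.size(1), patch, patch),
                             align_corners=False)
        part = F.grid_sample(image, grid, align_corners=False)
        # Eq. (4): fine labeling of the cropped patch; these predictions
        # (and ground-truth patches cropped the same way) drive the loss.
        part_preds.append(fine_net(part))

    return rough, theta, part_preds
```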

2.2 Interlinked CNN

Figure 2: The structure of iCNN. L: the number of label channels. Blue arrows: downsampling (max pooling). Orange arrows: upsampling (nearest neighbor). All convolutional layers use the same hyperparameter settings.

iCNN is a CNN structure proposed by Zhou [zhou2015interlinked] for semantic segmentation. It is composed of four groups of fully convolutional networks with an interlinked structure, which passes information from coarse to fine. The structure of the iCNN model is illustrated in Fig. 2. There are four CNN streams, each operating on a feature map of a different size. Each stream uses only convolutional layers, with no downsampling, so the spatial size is maintained throughout the network. In between the convolutional layers there are interlinked layers. Each interlinked layer connects two streams at the same depth, one of which has twice the spatial size of the other (vertical neighbors in the figure): the smaller feature map is upsampled and concatenated with the larger one, and, similarly, the larger feature map is downsampled and concatenated with the smaller one.

In this work, we also use iCNN for both the coarse and fine labeling stages. Without changing the original iCNN structure, we add Batch Normalization and ReLU activations to the network, and we use larger input images than the original model.
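The interlink operation between two vertically adjacent streams can be sketched as below, assuming nearest-neighbor upsampling and max-pooling downsampling as indicated by the arrows in Fig. 2; this is a simplified illustration rather than the exact layer used in iCNN.

```python
import torch
import torch.nn.functional as F

def interlink(fine_map, coarse_map):
    """Exchange information between two feature maps whose spatial sizes
    differ by a factor of two, as in the interlinked layers of iCNN."""
    # Upsample the coarser map and append it to the finer stream.
    up = F.interpolate(coarse_map, scale_factor=2, mode="nearest")
    fine_out = torch.cat([fine_map, up], dim=1)

    # Downsample the finer map and append it to the coarser stream.
    down = F.max_pool2d(fine_map, kernel_size=2)
    coarse_out = torch.cat([coarse_map, down], dim=1)
    return fine_out, coarse_out
```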

2.3 Spatial Transformer Network

The key to end-to-end training in our method is to connect the isolated gradient flows of the two training stages. To achieve this, a differentiable cropper needs to be implemented. Inspired by Tang [tang2019improving], we use a modified version of the STN [jaderberg2015spatial] to perform positioning and region-based feature learning. As described by Jaderberg [jaderberg2015spatial], the Spatial Transformer Network (STN) is composed of a Localization Net, a Grid Generator, and a Sampler. In this paper, for simplicity, we refer to the combination of the Grid Generator and Sampler as the Grid Transformer.

Figure 3: The structure of the Localization Network in the STN module. This 9-layer network is a simplified version of VGG16. Each convolutional layer (white) consists of a convolution, Batch Normalization, and a ReLU non-linearity. After every two convolutional layers, an average pooling layer (red) is applied. Finally, a fully connected layer with ReLU activation (blue) is applied. All convolutional layers share the same hyperparameter settings, as do all pooling layers.
Localization Network

The 9-layer localization network we use is simplified from VGG16 [simonyan2014very]. It could be replaced with another convolutional neural network structure to obtain better performance. As Fig. 3 shows, we first use 8 convolutional layers to perform feature extraction and then map the features to the transformation matrix $\theta$ through a fully connected layer.
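A rough PyTorch sketch of such a localization network is given below. The channel widths, the feature-map size after the four poolings, and the per-part 2x3 parameterization of the output are assumptions for illustration; the real network may differ.

```python
import torch.nn as nn

def conv_block(cin, cout):
    # Convolution + BatchNorm + ReLU, as in the white blocks of Fig. 3.
    return nn.Sequential(nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                         nn.BatchNorm2d(cout),
                         nn.ReLU(inplace=True))

class LocalizationNet(nn.Module):
    """Simplified VGG-style localization network (a sketch; channel widths,
    input resolution, and the per-part parameterization are assumptions)."""
    def __init__(self, in_channels, num_parts, feat_size=8):
        super().__init__()
        widths = [64, 64, 128, 128, 256, 256, 512, 512]   # placeholder widths
        layers, cin = [], in_channels
        for i, cout in enumerate(widths):
            layers.append(conv_block(cin, cout))
            if i % 2 == 1:                 # average pooling after every two convs
                layers.append(nn.AvgPool2d(2))
            cin = cout
        self.features = nn.Sequential(*layers)
        # One fully connected layer mapping the features to the transform parameters.
        self.fc = nn.Linear(widths[-1] * feat_size * feat_size, num_parts * 2 * 3)
        self.num_parts = num_parts

    def forward(self, rough_mask):
        x = self.features(rough_mask)
        theta = self.fc(x.flatten(1))
        return theta.view(-1, self.num_parts, 2, 3)
```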

Grid Transformer

The grid transformer samples the relevant part of an image onto a regular grid $G = \{g_i\}$ of pixels $g_i = (x_i^t, y_i^t)$, forming an output feature map $V \in \mathbb{R}^{C \times h \times w}$, where $h$ and $w$ are the height and width of the grid and $C$ is the number of channels. We use a 2D affine transformation $A_\theta$:

$\begin{pmatrix} x_i^s \\ y_i^s \end{pmatrix} = A_\theta \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix} = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix}$ (7)

where $(x_i^s, y_i^s)$ and $(x_i^t, y_i^t)$ are the source and target coordinates of the $i$-th pixel. In order for the STN to perform crop operations, we constrain $A_\theta$ as follows:

$A_\theta = \begin{bmatrix} s & 0 & t_x \\ 0 & s & t_y \end{bmatrix}$ (8)

which allows cropping, translation, and isotropic scaling by varying $s$, $t_x$, and $t_y$. These parameters are predicted from the rough mask $\hat{Y}_r$ by the localization network $F_{loc}$.
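The constrained matrix of Eq. (8) maps directly onto PyTorch's affine_grid/grid_sample pair. The helper below is a minimal sketch of how a single crop could be performed with these constrained parameters; it is not claimed to be the authors' implementation.

```python
import torch.nn.functional as F

def crop_with_theta(image, s, tx, ty, out_size):
    """Crop one region per (s, tx, ty) triple using the constrained 2x3 affine
    matrix [[s, 0, tx], [0, s, ty]], i.e. isotropic scaling plus translation."""
    B = image.size(0)
    theta = image.new_zeros(B, 2, 3)
    theta[:, 0, 0] = s
    theta[:, 1, 1] = s
    theta[:, 0, 2] = tx
    theta[:, 1, 2] = ty
    grid = F.affine_grid(theta, (B, image.size(1), out_size, out_size),
                         align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)
```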

2.4 Loss Function

The average binary cross entropy loss is used as the criterion for both coarse and fine segmentation:

$\operatorname{Loss}(\hat{Y}, Y) = \frac{1}{K} \sum_{k=1}^{K} \operatorname{BCE}(\hat{Y}^{(k)}, Y^{(k)})$ (9)

where $K$ is the number of parts, $\hat{Y}$ is the prediction, $Y$ is the target ground truth, and $\operatorname{BCE}$ denotes the pixel-averaged binary cross entropy. The loss function $\mathcal{L}$ of the entire system is defined as follows:

$\mathcal{L} = \frac{1}{K} \sum_{k=1}^{K} \operatorname{BCE}(\hat{Y}_k, Y_k^{*})$ (10)

where $\hat{Y}_k$ is the prediction for part $k$ and $Y_k^{*}$ is the binary ground truth cropped from the label $Y$. $\mathcal{L}$ is used to optimize the whole system.
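A minimal sketch of the system loss in Eq. (10), assuming the fine network outputs logits and that the part predictions and their cropped ground-truth masks are given as matching lists of tensors.

```python
import torch
import torch.nn.functional as F

def parts_loss(part_preds, part_labels):
    """Average binary cross entropy over the K cropped parts (Eq. 10)."""
    losses = [F.binary_cross_entropy_with_logits(pred, target.float())
              for pred, target in zip(part_preds, part_labels)]
    return torch.stack(losses).mean()
```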

2.5 Implementation Details

2.5.1 Preprocessing

Batch Processing

In order to batch-process images of different sizes, each input image is brought to a common size in two ways: by bilinear interpolation and by padding. The interpolated image is sent to the rough labeling network, while the padded image is sent to the STN for cropping, because padding preserves the original image information.
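A sketch of the two parallel resize paths, assuming a CHW tensor input; the argument names and the right/bottom padding placement are illustrative choices, not a description of the authors' exact preprocessing.

```python
import torch.nn.functional as F

def preprocess(image, resize_to, pad_to):
    """Return (interpolated, padded) versions of a CHW image tensor.

    resize_to: side length fed to the coarse labeling network.
    pad_to:    batch-wide square canvas for the STN crop path (assumed >= image size).
    """
    # Path 1: bilinear resize for the coarse labeling network.
    resized = F.interpolate(image.unsqueeze(0), size=(resize_to, resize_to),
                            mode="bilinear", align_corners=False).squeeze(0)

    # Path 2: zero-pad to a common square canvas so the original pixels
    # remain untouched for cropping.
    _, h, w = image.shape
    padded = F.pad(image, (0, pad_to - w, 0, pad_to - h))  # pad right and bottom
    return resized, padded
```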

Data Augmentation

We perform data augmentation during data preprocessing, using a total of 4 random operations:

  1. Random rotation, with the rotation angle sampled from −15° to 15°.

  2. Random shift, with the horizontal and vertical offsets sampled from ranges proportional to the image width $w$ and height $h$, respectively.

  3. Random scaling, with the scale factor sampled from a fixed range.

  4. Gaussian random noise; this operation is not applied to the labels.

Each image is thus expanded into five images: the original plus four augmented copies, where the $i$-th copy has $i$ of the four operations applied at random ($i = 1, \dots, 4$). For instance, an image may be augmented into four different images by applying the operation sets (1), (2, 4), (1, 3, 4), and (1, 2, 3, 4).
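The sampling scheme can be sketched as follows, assuming the four operations are available as callables that map an image to an augmented image; only the selection logic is shown.

```python
import random

def augment_copies(image, operations):
    """Produce the original image plus four augmented copies; copy i receives
    i of the four operations, chosen at random, e.g. (1), (2, 4), (1, 3, 4), ..."""
    copies = [image]
    for i in range(1, len(operations) + 1):
        chosen = random.sample(operations, k=i)
        augmented = image
        for op in chosen:
            augmented = op(augmented)
        copies.append(augmented)
    return copies
```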

2.5.2 Training

Figure 4: Pre-training Pipeline.

We divide the training process of the entire system into pre-training and end-to-end training.

Pre-training

For better results, we pre-train the system. Two modules need to be pre-trained: the iCNN $F_1$ for coarse segmentation and the localization network $F_{loc}$ for part localization. The pre-training is performed in two steps: first $F_1$ is pre-trained, and then the parameters of $F_1$ are used to pre-train $F_{loc}$. As shown in Fig. 4, the input of $F_1$ is the resized image $I_r$ and the output is the rough prediction $\hat{Y}_r$. Its optimization goal is the cross-entropy loss between $\hat{Y}_r$ and the resized label $Y_r$:

$\mathcal{L}_{F_1} = \operatorname{Loss}(\hat{Y}_r, Y_r)$ (11)

where $Y_r$ is the resized label.

The input of $F_{loc}$ is $\hat{Y}_r$ and its output is the transformation matrix $\theta$; its optimization goal is the Smooth L1 loss between $\theta$ and the ground truth $\theta^{*}$. The Smooth L1 loss is formulated as:

$\operatorname{SmoothL1}(\theta, \theta^{*}) = \frac{1}{n} \sum_{i} z_i$ (12)

where $z_i$ is given by:

$z_i = \begin{cases} 0.5\,(\theta_i - \theta_i^{*})^2, & \text{if } |\theta_i - \theta_i^{*}| < 1 \\ |\theta_i - \theta_i^{*}| - 0.5, & \text{otherwise} \end{cases}$ (13)

It uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise. $\theta^{*}$ is the ground truth of $\theta$, which can be generated from the original label $Y$. The generation of $\theta^{*}$ is as follows.

Given the binary label $Y$, the central coordinates $(c_x^k, c_y^k)$ of each part $k$ can be calculated. With the cropping window size fixed to $w \times w$ and the (padded, square) input image of side length $N$, $\theta^{*}$ can be calculated by Eq. (14):

$\theta_k^{*} = \begin{bmatrix} w/N & 0 & 2c_x^k/N - 1 \\ 0 & w/N & 2c_y^k/N - 1 \end{bmatrix}$ (14)
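A sketch of how $\theta^{*}$ could be computed for one part under the assumptions of Eq. (14): a square padded image of side length image_size, a fixed crop window crop_size, the normalized coordinate convention of affine_grid, and the part centroid taken as the mean of the foreground pixel coordinates.

```python
import torch

def theta_ground_truth(part_mask, crop_size, image_size):
    """Build the 2x3 target matrix for one part from its binary mask.

    part_mask: (H, W) binary tensor with H == W == image_size (padded square image).
    """
    ys, xs = torch.nonzero(part_mask, as_tuple=True)
    cy, cx = ys.float().mean(), xs.float().mean()      # part centroid in pixels
    s = crop_size / image_size                         # isotropic scale
    tx = 2.0 * cx / image_size - 1.0                   # centre in [-1, 1] coordinates
    ty = 2.0 * cy / image_size - 1.0
    return torch.tensor([[s, 0.0, tx],
                         [0.0, s, ty]])
```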
End-to-end training

With the pre-trained parameters loaded, we perform end-to-end training of the whole framework:

$\{\hat{Y}_k\}_{k=1}^{K} = \mathcal{F}(I)$ (15)

where $\mathcal{F}$ represents the whole proposed framework. The optimization goal of the system is the cross-entropy loss between the part predictions $\hat{Y}_k$ and the partial labels $Y_k^{*}$ cropped from $Y$. At this stage, the learning rate of the pre-trained networks is lower than that of the other networks.

Optimization

We update the network parameters using the Stochastic Gradient Descent algorithm.
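A minimal sketch of such an optimizer setup, assigning a smaller learning rate to the pre-trained modules; the specific learning rates and momentum are placeholders, not the values used in the paper.

```python
import torch

def build_optimizer(coarse_net, loc_net, fine_net, base_lr=0.01, pretrained_lr=0.001):
    """SGD with a lower learning rate for the pre-trained modules (illustrative values)."""
    return torch.optim.SGD([
        {"params": coarse_net.parameters(), "lr": pretrained_lr},  # pre-trained F_1
        {"params": loc_net.parameters(),    "lr": pretrained_lr},  # pre-trained F_loc
        {"params": fine_net.parameters(),   "lr": base_lr},
    ], momentum=0.9)
```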

3 Experiments

3.1 Dataset and Evaluation Metric

Dataset

To obtain results comparable with Zhou [zhou2015interlinked], we performed all our experiments on the HELEN dataset [smith2013exemplar]. The HELEN dataset contains 2,330 images, each annotated with binary masks for 11 categories: background, face skin, left/right eyes, left/right eyebrows, nose, upper lip, lower lip, inner mouth, and hair. Following [zhou2015interlinked, liu2015multi, yamashita2015cost, wei2017learning, lin2019face], we split the dataset into training, validation, and test sets with 2,000, 230, and 100 samples, respectively.

Evaluation metric

This paper uses the F1 score as the evaluation metric. The F1 score is the harmonic mean of precision and recall; it reaches its best value at 1 (perfect precision and recall) and its worst at 0. The metric is formulated as follows:

$\mathrm{Precision} = \frac{TP}{TP + FP}$ (16)

$\mathrm{Recall} = \frac{TP}{TP + FN}$ (17)

$F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ (18)

where TP denotes True Positive predictions, FP False Positive predictions, and FN False Negative predictions.
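For reference, Eqs. (16)-(18) amount to the following computation on a pair of binary masks; the epsilon term is added only to avoid division by zero in this sketch.

```python
import torch

def f1_score(pred_mask, true_mask, eps=1e-8):
    """F1 = 2 * precision * recall / (precision + recall) for binary masks."""
    tp = (pred_mask & true_mask).sum().float()
    fp = (pred_mask & ~true_mask).sum().float()
    fn = (~pred_mask & true_mask).sum().float()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return (2 * precision * recall / (precision + recall + eps)).item()
```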

3.2 Comparison with the Baseline Model

Methods eyes brows nose I-mouth U-lip L-lip mouth skin overall
iCNN [zhou2015interlinked] 0.778 0.920 0.777 0.808 0.889 - 0.845
iCNN (Ours) 0.863 0.790 0.936 0.812 0.772 0.830 0.908 - 0.865
STN-iCNN* 0.891 0.845 0.956 0.853 0.792 0.834 0.920 - 0.893
STN-iCNN -
* denotes the model trained without end-to-end optimization.
Table 1: F1 scores of different models.

We used the results in Zhou [zhou2015interlinked] as the baseline, and compared them with our reimplemented iCNN results and STN-iCNN results. The comparison results are shown in Table 1, where STN-iCNN* represents the results of STN-iCNN before end-to-end training.

As shown in Table 1, the results improve significantly even before end-to-end training. This is because the localization network in the STN is a deep CNN, so it can learn the contextual relationships of the semantic parts from the rough mask. Even when the rough mask is incomplete, an accurate transformation matrix can still be predicted. Therefore, the STN crops more accurately than the original cropper, which improves the overall performance.

As shown in Fig. 5, we performed a comparison experiment on the two cropping methods. We selected some images and randomly covered parts of their facial components (such as the left eyebrow, right eye, or mouth) with background information. The images were then sent to the rough labeling model to obtain incomplete rough segmentation results (second row of Fig. 5). Based on these rough results, we used the baseline cropper and the STN to crop the unresized images and compared their outputs. The results, shown in the last two rows of Fig. 5, demonstrate that the STN works normally even when the rough mask is partially missing.

Figure 5: Comparison of two cropping methods, the Baseline method and STN method.

3.3 Sensitivity of the Hyperparameters

The size after resizing

The resized input size $N$ of the coarse model is a hyperparameter. Taking into account the small size of the eye and eyebrow regions, we used a larger input size than the baseline. This change had a limited effect on the baseline method but improved our approach significantly; see Fig. 6.

Figure 6: Comparison of the effect of the resized input size $N$ on the baseline model and our model.
Figure 7: Comparison between the two cropping methods for even-sized and odd-sized patches. Blue: the baseline cropper; green: the STN.

The size of cropped patches

The size of the cropped patch should be odd rather than even. This ensures integer grid coordinates during grid sampling, so that the grid transformer is exactly equivalent to the baseline cropper for the crop operation. As shown in Fig. 7, when an integer grid cannot be found, the STN performs bilinear interpolation while the baseline cropper makes a one-pixel offset, which leads to unequal results. We chose an odd patch size and compared the cropped results of the baseline cropper with those of the STN, as shown in Fig. 8.

Figure 8: Parts cropped by the two different cropping methods. The first row shows the input image; the second row shows parts cropped by the baseline cropper; the third row shows parts cropped by the STN. The last row shows the pixel-wise difference between the two, which is all zero.

3.4 Comparison with State-of-the-art Methods

Methods eyes brows nose I-mouth U-lip L-lip mouth skin overall
Smith [smith2013exemplar] 0.785 0.722 0.922 0.713 0.651 0.700 0.857 0.882 0.804
Zhou [zhou2015interlinked] 0.778 0.920 0.777 0.808 0.889 - 0.845
Liu [liu2015multi] 0.768 0.713 0.909 0.808 0.623 0.694 0.841 0.910 0.847
Liu [liu2017face] 0.868 0.770 0.930 0.792 0.743 0.817 0.891 0.921 0.886
Wei [wei2017learning] 0.847 0.786 0.937 - - - 0.915 0.915 0.902
iCNN (Ours) -
STN-iCNN -
Table 2: Comparison with State-of-the-art Methods

After selecting the appropriate hyperparameters, we completed end-to-end training of the proposed STN-iCNN and compared its test results with the state of the art. As can be seen from Table 2, the performance of the original iCNN model is greatly improved by the proposed STN-iCNN. It is worth mentioning that our model does not handle hair, and since we cannot determine the effect of hair on the overall score, we did not compare with the results of Lin [lin2019face].

4 Conclusion

We introduced the STN-iCNN, an end-to-end framework for face parsing, which is a non-trivial extension of the two-stage face parsing pipeline presented in [zhou2015interlinked]. By adding an STN to the original pipeline, we provide a trainable connection between the two isolated stages of the original method and successfully achieve end-to-end training. End-to-end training helps the two labeling stages optimize towards a common goal and improve each other, so the trained model achieves better results. Moreover, the addition of the STN greatly improves the accuracy of facial component positioning and cropping, which is also important for overall accuracy. Experiments show that our method greatly improves the face parsing accuracy of the original model.

STN-iCNN can be regarded as a region-based, end-to-end semantic segmentation method that does not require extra position annotations. Beyond face parsing, it might be extended to general semantic segmentation, which is a valuable direction for future work.

Acknowledgements  This work was supported by the National Natural Science Foundation of China under Grant Nos. U19B2034, 51975057, 601836014. The first author would like to thank Haoyu Liang and Aminul Huq for their useful suggestions.

References