1 Related Works
Most existing deep learning methods for face parsing can be divided into region-based methods and global methods.
Global methods directly perform semantic segmentation on the entire image. Early approaches include the epitome model [warrell2009labelfaces] and exemplar-based methods [smith2013exemplar]. The success of deep convolutional neural network (CNN) models has brought drastic advances in computer vision [NIPS2012_4824], and many CNN-based face parsing methods have been proposed. Jackson [jackson2016cnn] used extra landmarks as guidance, exploiting boundary cues to locate facial parts. Liu [liu2015multi] used a hybrid framework containing a CRF and a CNN to jointly model pixel-wise likelihoods and label dependencies. Zhou [zhou2017face] proposed an architecture that combines a fully convolutional network [long2015fully], super-pixel information, and a CRF model. Wei [wei2017learning] proposed a CNN framework that adaptively adjusts the receptive fields of the middle layers and obtains better receptive fields for face parsing tasks. These models can usually be trained end-to-end. However, their performance can still be improved, since the optimization cannot focus on each individual part separately.
Region-based approaches independently predict pixel-level labels for each part by training separate models for each facial component. Luo [luo2012hierarchical] proposed a hierarchical structure that treats each detected facial part separately. Liu [liu2017face] achieved state-of-the-art accuracy while maintaining a very fast running speed by combining a shallow CNN with a spatially variant RNN. In the work of Zhou [zhou2015interlinked], region positioning and face parsing were accomplished with the same network structure, without the need for additional landmark detection or extra annotations. They divided the processing into two isolated stages trained independently. The first stage produces a rough mask of the whole image, which is used to calculate the coordinates of the facial parts. The second stage performs fine labeling for each facial part individually. Finally, the outputs of the second stage are remapped back according to the coordinates, yielding a complete face segmentation.
On the other hand, Lin [lin2019face] used a hybrid of the global and region-based approaches: they addressed the variable shape and size of hair by using tanh-warping and then performed segmentation globally with an FCN. Moreover, similarly to Mask R-CNN [he2017mask], they used RoI-Align for region-based segmentation of the facial parts.
General Semantic Segmentation. Face parsing is essentially semantic segmentation specialized to faces. In recent years, general semantic and instance segmentation have achieved remarkable results, and many deep learning methods have been proposed for these problems. Fully convolutional networks (FCN) replace the last few fully connected layers with convolutional layers to enable efficient end-to-end learning and prediction [long2015fully]. Based on the FCN, many improvements, such as SegNet [badrinarayanan2017segnet], U-Net [ronneberger2015u], CRFasRNN [zhu2016adversarial], and DeepLab [chen2017deeplab], have been proposed. Compared with general semantic segmentation tasks, face parsing involves only a few specific semantic classes (such as the nose, eyes, and mouth), and these classes have strongly related positions and sizes. Applying such models directly to face parsing often fails to exploit these contextual relations [lin2019face].
2.1 Overall Pipeline
The work of Zhou [zhou2015interlinked] is used as our baseline method. As shown in Fig. 1a, the baseline method is divided into two steps. The first step detects and crops the face parts. The second step labels the cropped parts separately. Because the cropping method used in this process is not differentiable, the two stages cannot be trained jointly, which limits the performance of the system. Our proposed method solves this problem by adding a spatial transformer network (STN) between the two steps of the baseline method. The STN replaces the original cropper with a differentiable spatial transformer, allowing the model to be trained end-to-end.
As shown in Fig. 1b, each input image is first resized and passed to the iCNN model, which performs coarse segmentation. The predicted rough mask is then sent to the STN, whose localization network predicts the transformation parameter matrix. With this matrix as a parameter, the grid transformer crops the corresponding parts from the original image, and the cropped-out parts are sent to the fine segmentation model. Finally, the inverse grid transformer remaps all the partial predictions into a final whole prediction.
Given an input image, it is first resized into a square and then processed by an iCNN network, which performs coarse labeling to obtain a rough prediction:
The rough prediction is then processed by the localization network, as part of the STN, to obtain the transformation parameter matrix:
where the matrix contains one row of parameters per individual component. Given these parameters, the grid transformer crops the individual parts from the original image:
where the fixed sizes denote the height and width of the cropped patches. Each patch is sent to the same iCNN to predict the pixel-wise labels:
The previous steps are sufficient for training, since the partial predictions are used for loss computation; their ground truth is cropped from the input labels by the same grid transformer.
The next steps are necessary to assemble the partial predictions and compute the score used in testing. This is done by applying the inverse grid transformer to remap all the partial predictions back to their original positions:
Finally, softmax and a channel-wise argmax are applied to obtain the final prediction:
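This final step can be sketched in pure Python for a single pixel (a minimal illustration, assuming a list of per-channel class scores; the actual model applies it to every pixel of the assembled prediction map):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of channel scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def argmax_label(scores):
    """Channel-wise argmax: index of the most probable class."""
    probs = softmax(scores)
    return max(range(len(probs)), key=lambda c: probs[c])

# One pixel with scores for 3 hypothetical classes (background, skin, nose):
print(argmax_label([0.1, 2.5, -1.0]))  # class 1 has the highest score
```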
Having described the overall pipeline, we now detail its two major modules, the iCNN and the STN, in the following subsections.
2.2 Interlinked CNN
The iCNN is a CNN structure proposed by Zhou [zhou2015interlinked] for semantic segmentation. It is composed of four groups of fully convolutional networks with interlinking structures, which pass information from coarse to fine. The structure of the iCNN model is illustrated in Fig. 2. There are four CNNs, each using a different filter size. Each CNN uses only convolutional layers, with no downsampling, to maintain the image size throughout the network. In between the convolutional layers, there are interlinking layers. Each interlinking layer is applied to two layers at the same depth, with one layer having twice the spatial size of the other (vertical neighbors in the figure). The smaller feature map is upsampled and concatenated with the larger feature map; similarly, the larger feature map is downsampled and concatenated with the smaller feature map.
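The interlinking between two vertically neighboring feature maps can be sketched at the shape level as follows (a minimal pure-Python illustration using nearest-neighbor resampling on 2D maps; the actual model uses learned convolutions and true channel-wise concatenation):

```python
def downsample2x(fm):
    """Nearest-neighbor 2x downsample of a 2D feature map (list of rows)."""
    return [row[::2] for row in fm[::2]]

def upsample2x(fm):
    """Nearest-neighbor 2x upsample of a 2D feature map."""
    out = []
    for row in fm:
        wide = [v for v in row for _ in (0, 1)]  # repeat each column
        out.append(wide)
        out.append(list(wide))                   # repeat each row
    return out

def interlink(small, large):
    """Exchange information between two neighboring feature maps:
    the small map is upsampled and paired with the large one, and the
    large map is downsampled and paired with the small one (pairing
    stands in for channel concatenation)."""
    to_large = (large, upsample2x(small))
    to_small = (small, downsample2x(large))
    return to_large, to_small
```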
2.3 Spatial Transformer Network
The key to end-to-end training in our method is to connect the isolated gradient flows between the two training stages. To achieve this, a differentiable cropper needs to be implemented. Inspired by Tang [tang2019improving], we use a modified version of the STN [jaderberg2015spatial] to perform positioning and region-based feature learning. As described by Jaderberg [jaderberg2015spatial], the Spatial Transformer Network (STN) is composed of a Localization Net, a Grid Generator, and a Sampler. In this paper, for simplicity, we refer to the combination of the Grid Generator and the Sampler as the Grid Transformer.
The 9-layer localization network we use is simplified from VGG16 [simonyan2014very]. This network can be replaced with another convolutional neural network structure to obtain better performance. As Fig. 3 shows, we first use 8 convolutional layers to perform feature extraction and then map the features to a transformation matrix through a fully connected layer.
The grid transformer samples the relevant parts of an image onto a regular grid of pixels, forming an output feature map with a fixed grid height and width and the same number of channels as the input. We use a 2D affine transformation:
where the source coordinates of each pixel are computed from its target coordinates. In order for the STN to perform crop operations, we constrain the transformation as follows:
which allows cropping, translation, and isotropic scaling by varying the scale and translation parameters. These parameters are predicted from the rough mask by the localization network.
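The constrained transformation can be sketched as a small pure-Python function (assuming the standard STN convention of normalized coordinates in [-1, 1]; the parameter values below are illustrative, not the paper's):

```python
def crop_affine(s, tx, ty, xt, yt):
    """Map normalized target coordinates (xt, yt) in [-1, 1] to source
    coordinates under the constrained affine matrix
        [[s, 0, tx],
         [0, s, ty]],
    which permits only isotropic scaling (s) and translation (tx, ty)."""
    xs = s * xt + tx
    ys = s * yt + ty
    return xs, ys

# A grid spanning [-1, 1] in the target maps to a window of half-width s
# centered at (tx, ty) in normalized source coordinates:
print(crop_affine(0.25, 0.5, -0.5, 1.0, 1.0))  # corner -> (0.75, -0.25)
print(crop_affine(0.25, 0.5, -0.5, 0.0, 0.0))  # center -> (0.5, -0.5)
```

Because the output coordinates are differentiable in s, tx, and ty, gradients can flow from the fine segmentation loss back into the localization network.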
2.4 Loss Function
The average binary cross-entropy loss is used as the criterion for both coarse and fine segmentation:
where the terms denote the number of parts, the prediction, and the target ground truth, respectively. The loss function of the entire system is defined as follows:
where the loss compares the parts prediction with the binary ground truth cropped from the label. This loss is used to optimize the whole system.
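The criterion can be sketched in pure Python (a minimal illustration operating on flattened per-pixel probabilities; the actual implementation works on tensors):

```python
import math

def binary_cross_entropy(pred, target, eps=1e-12):
    """Binary cross entropy for one part, averaged over pixels."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(pred)

def parts_loss(preds, targets):
    """Average the per-part BCE losses over all cropped parts."""
    return sum(binary_cross_entropy(p, t)
               for p, t in zip(preds, targets)) / len(preds)
```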
2.5 Implementation Details
In order to batch-process images of different sizes, the input image is resized to a common size using bilinear interpolation and padding, respectively. The interpolated image is sent to the rough labeling network, while the padded image is sent to the STN for cropping, because padding retains the original image information.
We perform data augmentation during data preprocessing, using a total of 4 random operations:
(a) Random rotation, with the angle sampled from the range -15 to 15 degrees.
(b) Random shift, where the horizontal and vertical shifts are randomly sampled within ranges proportional to the width and height of the image, respectively.
(c) Random scaling, with the scale factor randomly sampled from a fixed range.
(d) Gaussian random noise; this operation is not applied to the labels.
Each image is augmented into 5 images, where the k-th copy (k = 0, ..., 4) has k of the 4 operations applied at random; the copy with k = 0 is the original image. For instance, an image may be augmented into 4 additional images by applying the operation sets (a), (b, d), (a, c, d), and (a, b, c, d).
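The sampling of operation subsets can be sketched as follows (a minimal illustration assuming the k-th augmented copy receives k operations, as in the example above; the operation names are placeholders for the four transforms):

```python
import random

OPS = ["rotate", "shift", "scale", "noise"]  # the four augmentation ops

def augment_plan(n_copies=4, seed=0):
    """Return, for each augmented copy k (1..n_copies), a random set of
    k of the four operations, mirroring the (a), (b, d), ... example.
    The original image (k = 0) is kept unchanged."""
    rng = random.Random(seed)
    return [rng.sample(OPS, k) for k in range(1, n_copies + 1)]

for ops in augment_plan():
    print(ops)
```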
We divide the training process of the entire system into pre-training and end-to-end training.
For better results, we pre-train the system. Two modules need to be pre-trained: the iCNN for coarse segmentation and the localization network for parts localization. We perform pre-training in two steps: first the iCNN is pre-trained, and then its parameters are used to pre-train the localization network. As shown in Fig. 4, the input of the iCNN is the resized image and the output is the rough prediction. Its optimization goal is the cross-entropy loss between the rough prediction and the resized label:
where the target is the resized label.
The input of the localization network is the rough prediction and the output is the transformation matrix; its optimization goal is the Smooth L1 loss between the predicted matrix and its ground truth. The formulation of Smooth L1 is:
where each element-wise term is given by:
It uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise. The ground truth of the transformation matrix can be generated from the original label, as detailed below.
Given the binary label, the central coordinates of the parts can be calculated. With the cropping window size fixed, the ground-truth transformation matrix can then be calculated by equation (14).
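The Smooth L1 term and the ground-truth parameter generation can be sketched together (a minimal illustration assuming STN-style normalized coordinates in [-1, 1] and a square input; the numeric sizes in the example are illustrative, not the paper's values):

```python
def smooth_l1(x):
    """Smooth L1 loss on a single element-wise error x:
    squared near zero, linear otherwise."""
    ax = abs(x)
    if ax < 1.0:
        return 0.5 * x * x
    return ax - 0.5

def theta_from_center(cx, cy, crop, width, height):
    """Ground-truth affine parameters (scale, tx, ty) for cropping a
    (crop x crop) window centered at pixel (cx, cy) from an image of
    size (width, height), in normalized [-1, 1] coordinates."""
    s = crop / width              # isotropic scale (square input assumed)
    tx = 2.0 * cx / width - 1.0   # normalized horizontal center
    ty = 2.0 * cy / height - 1.0  # normalized vertical center
    return s, tx, ty

# A part centered in a 512x512 image with an 81-pixel (odd) window:
print(theta_from_center(256, 256, 81, 512, 512))
```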
With pre-trained parameters loaded, we perform end-to-end training on the whole framework:
where the function represents the whole proposed framework. The optimization goal of the system is the cross-entropy loss between the parts prediction and the partial labels cropped from the ground truth. At this stage, the learning rate of the pre-trained networks is lower than that of the other networks.
We update the network parameters using the Stochastic Gradient Descent algorithm.
3.1 Dataset and Evaluation Metric
To obtain results comparable with Zhou [zhou2015interlinked], we performed all our experiments on the HELEN dataset [smith2013exemplar]. Each image in the HELEN dataset is annotated with binary masks for 11 categories: background, face skin, left eye, right eye, left eyebrow, right eyebrow, nose, upper lip, lower lip, inner mouth, and hair. Like [zhou2015interlinked, liu2015multi, yamashita2015cost, wei2017learning, lin2019face], we split the dataset into training, validation, and test sets.
This paper uses the F1 score as the evaluation metric. The F1 score is defined as F1 = 2TP / (2TP + FP + FN), where TP denotes True Positive predictions, FP False Positive predictions, and FN False Negative predictions.
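The metric can be computed directly from the prediction counts (the harmonic mean of precision and recall; the counts in the example are illustrative):

```python
def f1_score(tp, fp, fn):
    """F1 score from prediction counts: harmonic mean of precision
    and recall, equivalently 2*TP / (2*TP + FP + FN)."""
    denom = 2 * tp + fp + fn
    if denom == 0:
        return 0.0
    return 2.0 * tp / denom

print(f1_score(80, 10, 10))  # precision = recall = 8/9 -> F1 = 8/9
```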
3.2 Comparison with the Baseline Model
* denotes training without end-to-end optimization.
We used the results of Zhou [zhou2015interlinked] as the baseline and compared them with our reimplemented iCNN and STN-iCNN results. The comparison is shown in Table 1, where STN-iCNN* denotes the results of the STN-iCNN before end-to-end training.
As shown in Table 1, the model's results improve significantly even before end-to-end training. This is because the localization network in the STN is a deep CNN, so it can learn the contextual relationships of the semantic parts from the rough mask. Even when the rough mask is incomplete, an accurate transformation matrix can still be predicted. Therefore, the STN is able to crop more accurately than the original cropper, which improves the overall performance.
As shown in Fig. 5, we performed a comparison experiment on the two cropping methods. We selected some images and randomly covered parts of their facial components (such as the left eyebrow, right eye, or mouth) with background information. We then sent the images to the rough labeling model to obtain incomplete rough segmentation results (see row 2 in Fig. 5). Based on these rough results, we used the baseline method and the STN method to crop the unresized images and compared their cropping results, shown in the last two rows of Fig. 5. The results show that the STN method works normally even when the rough mask is partially missing.
3.3 Sensitivity of the hyperparameters
The size after resizing
Taking into account the smaller size of the eye and eyebrow features, we changed the baseline input size for model K in our model. This change had a limited effect on the baseline method but improved our approach significantly (see Fig. 6).
The size of cropped patches
The size of the cropped patches should be odd rather than even. This ensures integer grid coordinates during grid sampling, so that the grid transformer is equivalent to the baseline cropper in the crop operation. As shown in Fig. 7, when an integer grid cannot be found, the STN performs bilinear interpolation while the baseline cropper makes a one-pixel offset, leading to unequal results. We compare the cropped results of the baseline cropper with those of the STN in Fig. 8.
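The odd-size argument can be illustrated with the 1-D sample coordinates of a unit-spacing grid (a minimal sketch; an integer-centered window of odd length lands every sample on an integer pixel, while an even length does not):

```python
def grid_coords(center, n):
    """Source coordinates sampled by an n-pixel 1-D grid centered at
    `center` with unit spacing: center + (i - (n - 1) / 2)."""
    return [center + (i - (n - 1) / 2.0) for i in range(n)]

# Odd size: every sample lands on an integer pixel -> exact crop.
print(grid_coords(10, 5))  # [8.0, 9.0, 10.0, 11.0, 12.0]
# Even size: samples fall between pixels -> bilinear interpolation
# in the STN, but a one-pixel offset in the baseline cropper.
print(grid_coords(10, 4))  # [8.5, 9.5, 10.5, 11.5]
```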
3.4 Comparison with State-of-the-art Methods
After selecting appropriate hyperparameters, we completed end-to-end training of the STN-iCNN proposed in this paper and compared its test results with the state of the art. As can be seen from Table 2, the performance of the original iCNN model is greatly improved by the proposed STN-iCNN. It is worth mentioning that our model does not handle hair, and since we cannot determine the effect of hair on the overall score, we did not compare with the results of Lin [lin2019face].
We introduced the STN-iCNN, an end-to-end framework for face parsing that non-trivially extends the two-stage face parsing pipeline presented in [zhou2015interlinked]. By adding an STN to the original pipeline, we provide a trainable connection between the two isolated stages of the original method and successfully achieve end-to-end training. End-to-end training helps the two labeling stages optimize toward a common goal and improve each other, so the trained model achieves better results. Moreover, the addition of the STN greatly improves the accuracy of facial component positioning and cropping, which is likewise important for overall accuracy. Experiments show that our method can greatly improve the accuracy of the original model on face parsing.
The STN-iCNN can be regarded as a region-based end-to-end semantic segmentation method that does not require extra position annotations. Beyond face parsing, it might be extended to general semantic segmentation, which is a valuable direction for future work.
Acknowledgements This work was supported by the National Natural Science Foundation of China under Grant Nos. U19B2034, 51975057, 601836014. The first author would like to thank Haoyu Liang and Aminul Huq for their useful suggestions.