Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning

10/04/2017 ∙ by Dongyu Zhang, et al. ∙ Queen Mary University of London ∙ IEEE ∙ NetEase, Inc

Sketch portrait generation benefits a wide range of applications such as digital entertainment and law enforcement. Although plenty of efforts have been dedicated to this task, several issues still remain unsolved for generating vivid and detail-preserving personal sketch portraits. For example, quite a few artifacts may exist in synthesizing hairpins and glasses, and textural details may be lost in the regions of hair or mustache. Moreover, the generalization ability of current systems is somewhat limited since they usually require elaborately collecting a dictionary of examples or carefully tuning features/components. In this paper, we present a novel representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into different components according to their representational contents (i.e., structural and textural parts) by using a pre-trained Convolutional Neural Network (CNN). Then, we utilize a Branched Fully Convolutional Neural Network (BFCN) for learning structural and textural representations, respectively. In addition, we design a Sorted Matching Mean Square Error (SM-MSE) metric to measure texture patterns in the loss function. In the stage of sketch rendering, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks suggest that our approach outperforms example-based synthesis algorithms in terms of both perceptual and objective metrics. In addition, the proposed method also has better generalization ability across datasets without additional training.


I Introduction

Sketch portrait generation has widespread utility in many applications [1, 2, 3]. For example, in law enforcement, when it is impossible to obtain a photo of a criminal, a sketch portrait drawn from an eyewitness description can help the police quickly identify a suspect via automatic sketch-based retrieval in a mug-shot database. In digital entertainment, people like to render their photos in sketch style and use them as avatars on social media.

Fig. 1: Illustration of results from existing methods and the proposed approach.

Despite the widespread applications of sketch portraits, it remains challenging to generate vivid and detail-preserving sketches because of the great difference between photos and sketches. To the best of our knowledge, most existing approaches generate sketch portraits by synthesizing from training examples. Given a photo patch, these methods find similar patches in the training set and use their corresponding sketch patches to synthesize the sketch of the input photo. Although impressive results have been achieved, several issues remain. As shown in Fig. 1, these example-based methods do not synthesize non-facial elements, such as hairpins and glasses, satisfactorily [1, 3]. Because of the great variations in the appearance and geometry of these decorations, artifacts are easily introduced into the synthesis results. Besides, some methods [2, 3] average the candidate sketches to generate smoothed results; they may produce acceptable sketches for the face, but often fail to preserve textural details, such as those in the hair region. Finally, the performance of these example-based methods is acceptable only when the training and test samples originate from the same dataset, which rarely happens in practice.

Aiming at alleviating the aforementioned problems, we propose to learn sketch representations directly from the raw pixels of input photos, and develop a decompositional representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. Given an input photo, our method first roughly decomposes it into different regions according to their representational contents, such as face, hair and background. Then we learn a structural representation and a textural representation from the different parts, respectively. The structural representation learning mainly focuses on the facial part, while the textural representation learning mainly targets preserving the fine-grained details of hair regions. Finally, the two representations are fused to generate the final sketch portrait via a probabilistic method.

Fig. 2:

Illustration of the pipeline of sketch portrait generation via the proposed framework. Our approach feeds an input photo into the branched fully convolutional network to produce a structural sketch and a textural sketch. Guided by the parsing maps, the two sketches are fused into the final result via a probabilistic fusion method.

Specifically, in the training stage, we first adopt a pre-trained parsing network (P-Net) to automatically output a probability parsing map, which assigns a three-dimensional vector to each pixel of the input photo to indicate its probability of belonging to the face, hair, and background. With the probability parsing map we can easily obtain the face and hair regions. We then utilize a branched fully convolutional network (BFCN), which includes a structural branch and a textural branch, to learn the structural and textural representations, respectively. We select patches of the face part when training the structural branch and adopt the mean square error (MSE) as its objective function.

For the textural branch, we feed it with patches selected from hair regions. For its loss function, we do not use the MSE adopted in training the structural branch. The reason is that, unlike structural regions, textural regions usually possess periodic and oscillatory natures [4, 5, 6], and a point-to-point matching criterion such as MSE is not effective enough to measure the similarity of two similar textural regions. Thus, directly applying MSE to textural branch learning cannot preserve the fine-grained textural details well. To solve this problem, we propose a sorted matching mean square error (SM-MSE) for training the textural branch of BFCN. SM-MSE can be regarded as applying an ascending sort operator before calculating MSE. Compared with MSE, it can effectively evaluate the similarity of two textural patterns. The details of SM-MSE are described in Section III.

In the testing stage, given an input photo, we first use BFCN to compute its structural and textural representations. Then, the two representations are fused to generate the final sketch portrait, guided by the probability parsing maps. The pipeline of generating sketch portraits via BFCN is illustrated in Fig. 2.

The key contribution of this work is a task-driven deep learning method that achieves a new state-of-the-art performance for personal sketch portrait generation. Our framework is capable of learning the photo-sketch mapping in an end-to-end way, unlike traditional approaches that usually require elaborately collecting a dictionary of examples or carefully tuning features/components. Moreover, the proposed SM-MSE metric is very effective at measuring texture patterns during representation learning, improving the expressiveness of sketch portraits by capturing textural details.

The remainder of this paper is organized as follows. Section II reviews related works about sketch synthesis and convolutional neural networks. Section III describes the proposed decompositional representation learning framework for sketch portrait generation in detail. Extensive experimental results are provided in Section IV. Finally, Section V concludes this paper.

II Related Work

In this section, we first review the example-based sketch synthesis methods proposed in previous work. Then, we discuss different strategies for producing dense predictions via neural networks.

II-A Sketch Portrait Generation via Synthesis-by-Exemplar

Most works on sketch portrait generation focus on two kinds of sketches, namely profile sketches [7] and shading sketches [8]. Compared with the former, shading sketches not only use lines to reflect the overall profiles, but also capture the textural parts via shading. Thus, shading sketches are more challenging to model. We mainly study the automatic generation of shading sketches in this paper.

In most previous works, sketch portrait generation is modeled as a synthesis problem under the assumption that similar photo images have similar sketch images. Tang and Wang [8] proposed a sketch portrait generation method based on eigen transformation (ET). For each test photo, this method searches for similar photo images in a prepared training set, and then uses the corresponding sketch images to synthesize the sketch. The photo-to-sketch mapping is approximated as a linear transform in the ET-based method. However, this assumption may be too strong, especially when hair regions are included. Liu et al. [9] proposed a nonlinear method using locally linear embedding (LLE), which partitions the image into several overlapping patches and synthesizes each of these patches separately. Recent works also partition the images into patches for synthesis. To fulfill the smoothness requirement between neighboring patches, Wang and Tang proposed a multiscale Markov Random Fields (MRF) model [1], but it is too computationally intensive for real-time applications. To reduce synthesis artifacts, Song et al. [2] improved the LLE-based method [9] by treating synthesis as an image denoising process; however, high-frequency information is suppressed in their results. To enhance the generalization ability, Zhang et al. [3] designed a method called sparse representation-based greedy search (SRGS), which searches candidates globally under a time constraint. However, their results are inferior in preserving clear structures.

Several methods add a refinement step to recover vital details of the input photo, improving visual quality and face recognition performance. Zhang et al. [10] applied a support vector regression (SVR) based model to synthesize the high-frequency information. Similarly, Gao et al. [11] proposed a method called SNS-SRE with two steps, i.e., sparse neighbor selection (SNS) to get an initial estimate and sparse representation based enhancement (SRE) for further improvement. Nevertheless, these post-processing steps may bring in side effects, e.g., the results of SNS-SRE drift out of sketch style and look more like natural gray-level images.

II-B Dense Predictions via Convolutional Neural Networks

The convolutional neural network (CNN) has been widely used in computer vision. Its typical structure contains a series of convolutional layers, pooling layers and fully connected layers. Recently, CNNs have achieved great success in large-scale object localization [12, 13], detection [14], recognition [15, 16, 17, 18] and classification [19, 20].

Researchers have also adopted CNNs to produce dense predictions. An intuitive strategy is to attach the output maps to the topmost layer to directly learn global predictions. For example, Wang et al. [21] adopted this strategy for generic object extraction, and Luo et al. [22] applied a similar configuration to pedestrian parsing. Nevertheless, this strategy often produces coarse outputs, since the number of network parameters grows dramatically when the output maps are enlarged. To produce finer outputs, Eigen et al. [12] applied another network that refines coarse predictions via information from local patches in the depth prediction task. A similar idea was also proposed by Wang et al. [23], which separately learns global and local processes and uses a fusion network to fuse them into the final estimate of the surface normal. Surprisingly, global information can be omitted in some situations; e.g., Dong et al. [24, 25] applied a CNN with only three convolutional layers to image super-resolution. Though this network has a small receptive field and is trained on local patch samples, it works well because the samples in this specific task are strictly aligned.

Fig. 3: The architecture of the Branched Fully Convolutional Neural Network. A photo and a global prior are taken as the input. They are fed into three shared convolutional layers, and then pass through two branches with three additional convolutional layers each. The two output layers are connected to specific objective functions for the prediction of structures and textures, respectively.

III Sketch Generation via Decompositional Representation Learning

In this paper, we propose a representation learning framework for an end-to-end photo-sketch mapping via structure and texture decomposition. Given an image, it can be decomposed into structural components and textural components [26]. The geometric and smoothly-varying component, referred to as the structural component or cartoon, is composed of object hues and boundaries, while the texture is an oscillatory component capturing details and noise. Thus, in the proposed framework, we separately learn the structural and textural representations of the photo portrait.

In the training stage, by using a probability parsing map, a photo is automatically decomposed into different semantic parts, i.e., face, hair, and background. Then, we utilize a branched fully convolutional network (BFCN) to learn the structural and textural representations, respectively: patches from the face region are fed to BFCN to train the structural branch, while patches from the hair region are fed to BFCN to train its textural branch. In the test stage, given a test photo, BFCN automatically generates a structure-preserved sketch and a texture-preserved sketch, which are further fused to produce the final sketch portrait via a probabilistic method.

In the following, we will first introduce the probability parsing map, and then describe the architecture and the specific training strategy of BFCN. The probabilistic fusion method is presented at the end of this section.

III-A Probability Parsing Map

Inspired by previous works [27, 28], we design a fully convolutional network pre-trained on the Helen dataset to automatically parse a face photo into semantic regions of face, hair and background. This network, called the parsing net (P-Net), consists of eight convolutional layers with ReLUs as activation functions. The first three convolutional layers are followed by pooling layers and local response normalization layers [19]. An average probability map of the face, hair, and background is also adopted as a nonparametric prior to provide a global regularization. In the inference stage, we feed this network with the full-size photo. P-Net then generates three probability maps, corresponding to the probability distributions of face, hair and background for each pixel of the photo.

We adopt a softmax classifier on the top of P-Net to learn the probability parsing maps. For an input image $I$, we use $Y$ to denote its ground truth probability parsing map. For each pixel $y_i \in Y$, its receptive field in $I$ is denoted as $x_i$. Let $\theta_p$ denote the parameters of P-Net; the topmost output of P-Net for $x_i$ can then be denoted as $\phi(x_i; \theta_p)$. Thus the prediction of the softmax classifier can be formulated as

$$P(y_i = k \mid x_i; \theta_p, W) = \frac{\exp\big(w_k^{\top}\phi(x_i; \theta_p)\big)}{\sum_{l=1}^{3}\exp\big(w_l^{\top}\phi(x_i; \theta_p)\big)}, \qquad (1)$$

where $k \in \{1, 2, 3\}$ indicates the class label of $y_i$, i.e., face, hair and background, $W$ denotes the weights of the softmax classifier, and $w_k$ denotes the weight for the $k$-th class. Thus, for a single image $I$ and its corresponding probability parsing map $Y$, we can formulate the objective of P-Net as

$$J_p(\theta_p, W) = -\sum_{i}\sum_{k=1}^{3}\mathbf{1}(y_i = k)\,\log P(y_i = k \mid x_i; \theta_p, W), \qquad (2)$$

where $\mathbf{1}(\cdot)$ is the indicator function.
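Eq. (2) is the standard pixel-wise softmax cross-entropy; the snippet below illustrates it in PyTorch rather than the Caffe setup used for P-Net, with purely illustrative tensor shapes and names.

```python
import torch
import torch.nn as nn

# Pixel-wise softmax cross-entropy over the three classes {face, hair, background}.
criterion = nn.CrossEntropyLoss()

logits = torch.randn(1, 3, 200, 200, requires_grad=True)   # P-Net topmost output (N, C, H, W)
labels = torch.randint(0, 3, (1, 200, 200))                # ground-truth parsing map (N, H, W)

loss = criterion(logits, labels)   # averages -log P(y_i = k | x_i) over all pixels
loss.backward()
```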

III-B Branched Fully Convolutional Network

We utilize a branched fully convolutional neural network, i.e., BFCN, to learn the structural and textural representations of the photo portrait, respectively. The architecture of BFCN is shown in Fig. 3. BFCN consists of six convolutional layers with rectified linear units (ReLUs [29]) as activation functions. We share the features of the first three layers in BFCN for computational efficiency, and adopt two sibling output layers to produce the structural and textural predictions. As the receptive field of BFCN is small, it may fail to produce satisfactory results from such limited local information. Thus we add a nonparametric prior to provide a global regularization, as introduced in previous work [28]. More precisely, we average all the aligned ground truth sketches to get an average sketch portrait and attach it to the color channels as an additional input channel. Though we only feed BFCN with patches in the training stage, the network can be fed with full-size images at test time due to the translation invariance of the convolution operator.
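For concreteness, a minimal PyTorch sketch of such a branched network is given below: three shared convolutional layers followed by two sibling branches of three layers each, taking the photo concatenated with the average-sketch prior as input. The channel widths, kernel sizes, and the 4-channel input are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def conv_relu(cin, cout, k):
    # No padding, mirroring the paper's choice to avoid border effects.
    return nn.Sequential(nn.Conv2d(cin, cout, k), nn.ReLU(inplace=True))

class BFCN(nn.Module):
    """Three shared conv layers, then a structural and a textural branch."""

    def __init__(self, in_channels=4):  # e.g. RGB photo + average-sketch prior
        super().__init__()
        self.shared = nn.Sequential(conv_relu(in_channels, 64, 5),
                                    conv_relu(64, 64, 3),
                                    conv_relu(64, 64, 3))
        def branch():
            return nn.Sequential(conv_relu(64, 64, 3),
                                 conv_relu(64, 32, 3),
                                 nn.Conv2d(32, 1, 3))  # one-channel sketch output
        self.structural = branch()
        self.textural = branch()

    def forward(self, x):
        h = self.shared(x)
        return self.structural(h), self.textural(h)
```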

There are two sibling branches in BFCN, i.e., the structural branch and the textural branch. In the training stage, patches from the face part are fed to the structural branch to learn the structural representations, while patches from the hair region are fed into the textural branch for textural representation learning. We adopt different objective functions to train the two branches. Let $J$ denote the total objective function of BFCN. Then $J$ can be formulated as

$$J = J_s + \lambda J_t, \qquad (3)$$

where $J_s$ denotes the structural objective function, $J_t$ denotes the textural objective function, and $\lambda$ is a scaling factor to balance the two objective terms. In the following, we describe the definitions of $J_s$ and $J_t$ and the corresponding training strategies.

III-B1 Structural branch training

Patches from the face regions are fed to BFCN for structural representation learning, and we apply MSE as the objective function of the structural branch. Let $(p, s)$ denote a structural training patch pair, and let $\theta_c$ and $\theta_s$ denote the parameters in the shared layers and the structural branch, respectively. The structural objective function can be formulated as

$$J_s(\theta_c, \theta_s) = \frac{1}{|\mathcal{P}_s|} \sum_{(p, s) \in \mathcal{P}_s} \ell_{mse}\big(F_s(p; \theta_c, \theta_s), s\big), \qquad (4)$$

where $F_s(p; \theta_c, \theta_s)$ denotes the structural prediction of $p$, and $|\mathcal{P}_s|$ denotes the size of the training photo patch set $\mathcal{P}_s$. The $\ell_{mse}$ term in Eq. (4) can be formulated as

$$\ell_{mse}(\hat{s}, s) = \frac{1}{n} \sum_{j=1}^{n} \big(\hat{s}_j - s_j\big)^2, \qquad (5)$$

where $s_j$ denotes the $j$-th ground truth pixel of a structural sketch patch $s$, and $\hat{s}_j$ denotes the corresponding prediction.

In the training set, each photo and its corresponding sketch are cropped into small patches of the same size to form the training photo-sketch patch pairs. However, as the photo and its corresponding sketch are only roughly aligned by facial landmarks, there are many structurally unaligned patch pairs [1]. Those unaligned patch pairs would greatly degrade the visual quality of the final results. Thus, it is necessary to filter them out before structural representation learning.

We assume that a photo patch and a sketch patch are aligned if they have high structural similarity. To measure their structural similarity, we first utilize the Sobel operator to extract the edge maps of the two patches, and then adopt the Structural Similarity (SSIM) index [30] to evaluate the similarity between the two edge maps. We then filter out the patch pairs whose SSIM indexes are lower than a threshold.
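A possible implementation of this filtering step with scikit-image is sketched below; the function name and the threshold value are illustrative, as the paper's exact threshold is not reproduced here, and the patches are assumed to be single-channel floats in [0, 1].

```python
import numpy as np
from skimage.filters import sobel
from skimage.metrics import structural_similarity as ssim

def is_aligned(photo_patch, sketch_patch, threshold=0.3):
    # Compare Sobel edge maps of the photo patch and the sketch patch;
    # keep the pair only when the SSIM of the edge maps exceeds the threshold.
    edge_photo = sobel(photo_patch)
    edge_sketch = sobel(sketch_patch)
    score = ssim(edge_photo, edge_sketch, data_range=1.0)
    return score >= threshold
```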

III-B2 Textural branch training

Patches from hair regions are fed to BFCN for textural representation learning. Portrait textures usually contain fine-scale details of a periodic and oscillatory nature. For example, the patches in Fig. 4(a) and 4(b) have visible point-by-point differences, but they share the same texture pattern. In this situation, a point-to-point objective function, e.g., mean square error (MSE), can hardly evaluate the similarity of such textural patterns. Although texture similarity metrics have been studied extensively [31, 32, 33, 34] and many metrics have been proposed, they are difficult to integrate into a neural network. For example, the formulation of STSIM [33] is quite complex, and it is hard to compute its derivatives for the back-propagation algorithm.

Fig. 4: Illustration of sorted matching. After applying the sort operator, the two chessboard texture patterns in (a) and (b) become identical in (c); (d) comparison of MSE and SM-MSE for textural pattern measurement.

To deal with this situation, we design a Sorted Matching Mean Square Error (SM-MSE) metric for textural representation learning. SM-MSE can be viewed as adding an extra ascending sort operator before comparing two textural patches with MSE. We give an intuitive example comparing MSE and SM-MSE in Fig. 4(d). We crop two nearby patches from the hair region; generally, these two patches share a similar textural pattern. We apply MSE and SM-MSE to evaluate the similarity of these patches, respectively. As we can see, the result of SM-MSE is much smaller than that of directly applying MSE. Thus, by using SM-MSE, the similarity of two textural patches can be easily measured. Besides, it is very straightforward to integrate SM-MSE into BFCN: we only need to record the index of each pixel before applying the sort operator, and the network can then find the paths for back-propagating the derivatives, analogous to implementing back-propagation for the max pooling operator.

To train the textural branch of BFCN, we adopt a combination of SM-MSE and MSE. Let $(p, t)$ denote a training patch pair for textural representation learning, and let $\theta_c$ and $\theta_t$ denote the parameters in the shared layers and the textural branch, respectively. Then the textural objective function can be formulated as

$$J_t(\theta_c, \theta_t) = \frac{1}{|\mathcal{P}_t|} \sum_{(p, t) \in \mathcal{P}_t} \Big[\ell_{sm}\big(F_t(p; \theta_c, \theta_t), t\big) + \gamma\,\ell_{mse}\big(F_t(p; \theta_c, \theta_t), t\big)\Big], \qquad (6)$$

where $F_t(p; \theta_c, \theta_t)$ denotes the textural prediction of $p$, and $\gamma$ is used to balance the $\ell_{sm}$ and $\ell_{mse}$ terms. The $\ell_{mse}$ term can be regarded as a regularizer. The $\ell_{mse}$ and $\ell_{sm}$ in Eq. (6) can be formulated as

$$\ell_{mse}(\hat{t}, t) = \frac{1}{n} \sum_{j=1}^{n} \big(\hat{t}_j - t_j\big)^2, \qquad (7)$$
$$\ell_{sm}(\hat{t}, t) = \frac{1}{n} \sum_{j=1}^{n} \big(\mathrm{sort}(\hat{t})_j - \mathrm{sort}(t)_j\big)^2, \qquad (8)$$

where $t_j$ denotes the $j$-th ground truth pixel of a textural sketch patch $t$ and $\hat{t}_j$ denotes its prediction, and $\mathrm{sort}(t)_j$ and $\mathrm{sort}(\hat{t})_j$ denote the $j$-th elements of $t$ and $\hat{t}$ after applying the ascending sort operator.
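To make the sorted-matching idea concrete, the following is a minimal sketch of the losses in Eqs. (6)-(8) in PyTorch (the paper's implementation uses Caffe); the gamma value is a placeholder, and predictions and targets are assumed to arrive as (batch, 1, H, W) tensors.

```python
import torch

def mse(pred, target):
    # Plain mean square error over a batch of patches (Eq. 7).
    return ((pred - target) ** 2).mean()

def sm_mse(pred, target):
    # Sorted Matching MSE (Eq. 8): sort both patches in ascending order
    # before comparing. torch.sort keeps the permutation indices, so the
    # gradient flows back to the original pixel positions, analogous to
    # the way max pooling routes gradients.
    pred_sorted, _ = torch.sort(pred.flatten(start_dim=1), dim=1)
    target_sorted, _ = torch.sort(target.flatten(start_dim=1), dim=1)
    return ((pred_sorted - target_sorted) ** 2).mean()

def textural_loss(pred, target, gamma=1.0):
    # Combined textural objective (Eq. 6); gamma = 1.0 is a placeholder,
    # not the paper's setting.
    return sm_mse(pred, target) + gamma * mse(pred, target)
```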

III-C Probabilistic Fusion

By using the parsing maps, we propose a probabilistic fusion scheme to fuse the structural and textural sketches into the final sketch portrait in the inference stage. The fusion process is guided by the probability parsing map $P$ of the test photo. Let $P^f$, $P^h$ and $P^b$ denote the probabilities of the pixels in $P$ belonging to face, hair and background, respectively. We can obtain a binary map $M$ indicating whether each pixel belongs to the hair or not, which can be formulated as

$$M_i = \mathbf{1}\big(P^h_i \geq P^f_i \ \text{and} \ P^h_i \geq P^b_i\big), \qquad (9)$$

where $\mathbf{1}(\cdot)$ denotes the indicator function. We then use $M$ to fuse the structural sketch $S_s$ and the textural sketch $S_t$ as

$$S = M \odot S_t + (1 - M) \odot S_s, \qquad (10)$$

where $S$ denotes the final sketch portrait.



Fig. 5: Comparison of different fusion strategies: (a) results of direct fusion, and (b) results of soft fusion.

However, the above fusion process does not consider the border effect between the face and hair, and may thus introduce artifacts into the fused results, as shown in Fig. 5(a), where a sudden change is visible along the border between face and hair. To overcome this problem, we propose a soft fusion strategy. Instead of using binary labels, soft fusion uses the probability parsing maps to compute a weighted average of the structure-preserved sketch and the texture-preserved sketch:

$$S = P^h \odot S_t + (1 - P^h) \odot S_s, \qquad (11)$$

where $\odot$ refers to the element-wise product. With soft fusion, the border between face and hair is greatly smoothed. Samples of soft fusion are shown in Fig. 5(b); compared with Fig. 5(a), the border artifacts are well removed.
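A minimal NumPy sketch of both fusion variants is shown below; the parsing-map arrays are assumed to be resized to the sketch resolution, and the function name is illustrative.

```python
import numpy as np

def fuse_sketches(structural, textural, p_face, p_hair, p_bg, soft=True):
    # Soft fusion (Eq. 11): weight the two sketches by the hair probability.
    # Hard fusion (Eqs. 9-10): build a binary hair mask first, here by
    # picking hair when it is the most probable class.
    if soft:
        w = p_hair
    else:
        w = ((p_hair >= p_face) & (p_hair >= p_bg)).astype(structural.dtype)
    return w * textural + (1.0 - w) * structural
```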

III-D Implementation details

We adopt the Caffe [35] toolbox to implement both BFCN and P-Net. For BFCN, the training samples are first cropped to exclude the influence of the black regions around the borders. Then, we crop each photo and its corresponding sketch into overlapping patches to avoid memory overflow while keeping a high computational efficiency. In the training stage, the filter weights of the two networks are initialized by drawing random numbers from a Gaussian distribution with zero mean and standard deviation 0.01, and the biases are initialized to zero. The balancing hyper-parameters $\lambda$ and $\gamma$ of the objective functions in Eq. (3) and Eq. (6) are set to fixed values. With a fixed learning rate, BFCN needs about 150 epochs to converge, while P-Net requires about 100 epochs.
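The weight initialization described above can be expressed as follows; the sketch assumes the PyTorch BFCN module sketched earlier rather than the original Caffe prototxt.

```python
import torch.nn as nn

def init_weights(module):
    # Zero-mean Gaussian weights with standard deviation 0.01 and zero biases.
    if isinstance(module, nn.Conv2d):
        nn.init.normal_(module.weight, mean=0.0, std=0.01)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Example usage with the earlier sketch: model = BFCN(); model.apply(init_weights)
```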

In the inference stage, we feed the full-size photos as input. In order to avoid border effects, we do not use any padding in BFCN; thus, the generated results are slightly smaller than the input. Compared to most previous methods, our approach is very efficient (over 10 fps when processing aligned photos on a powerful GPU).

IV Experimental Results

In this section, we first introduce the datasets and the implementation settings. Then, we conduct extensive experiments to demonstrate the performance of our approach and compare it with existing methods.

IV-A Dataset Setup

For the sake of comparison with existing methods, we use the CUHK Face Sketch (CUFS) dataset [1] for experimental study. The CUFS dataset contains 606 samples in total, including 188 samples from the Chinese University of Hong Kong (CUHK) student dataset, 123 samples from the AR dataset [36], and 295 samples from the XM2VTS dataset [37]. For each sample, there is a sketch drawn by an artist based on a photo taken in a frontal pose under normal lighting conditions. Some samples from the CUFS dataset are shown in Fig. 6. We take 88 samples from the CUHK student dataset as the training set, while the remaining 518 samples are used as the testing set, including 123 samples from the AR dataset, 295 samples from the XM2VTS dataset and the remaining 100 samples of the CUHK student dataset.



Fig. 6: Samples from the CUFS dataset. The samples are taken from the CUHK student dataset (the first row), the AR dataset (the second row), and the XM2VTS dataset (the last row).

We adopt the Helen dataset [38] and its additional annotations [39] to train the P-Net. We manually choose 280 samples in a roughly frontal pose assuming that the photos have been aligned by the landmarks.

IV-B Photo-to-sketch Generation

In this subsection, we evaluate the proposed framework on the CUFS dataset. We also compare our method with five recently proposed example-based synthesis methods: the multiple representations-based method (MR) [40], Markov random field (MRF) [1], Markov weight field (MWF) [41], spatial sketch denoising (SSD) [2], and sparse representation-based greedy search (SRGS) [3].

Fig. 7: Comparison of sketches generated by different methods. (a) Input Photo, (b) MR [40], (c) MRF [1], (d) MWF [41], (e) SRGS [3], (f) SSD [2], (g) Our method.

The comparison results are shown in Fig. 7. The first column corresponds to the input photos from CUHK, AR and XM2VTS, and the remaining columns correspond to the sketches generated by MR [40], MRF [1], MWF [41], SRGS [3], SSD [2] and our method, respectively. We can see that the visual quality of the competing methods is not satisfactory. First, these methods cannot handle decorations well, such as the hairpin in the first example and the glasses in the third and sixth examples; moreover, only our result exactly keeps the pigmented naevus in the input photo of the second row. Second, the competing methods cannot preserve fine-grained textural detail well, especially when there are many textured regions in the sketch, e.g., the mustache and the hair regions. Compared with other methods, our approach not only captures the significant characteristics of the input photo portrait, but also preserves the fine-scale texture details, which makes the sketch portraits more vivid.

Another strength of the proposed method is its generalization ability. In Fig. 7, the results of the first two rows are more or less acceptable, while the remaining results of the other methods, i.e., images from the third row to the last row, are much worse in visual quality. This is because the first two test photos are selected from the CUHK student dataset, which shares the same distribution as the training samples, while the remaining examples are taken from the AR and XM2VTS datasets, whose distributions differ from that of the CUHK student dataset. Nevertheless, our method performs well on all input photos, showing its excellent generalization performance.

Besides, the proposed decompositional representation learning based model produces clearer structures and handles the non-facial elements better. For example, in Fig. 7, the results produced by our method have clearer and sharper outlines of the face, and preserve the subtle structures of the eyebrows, eyes, nose, lips and ears. Take the ears as an example: the results generated by our method are satisfying, with fairly accurate shape and subtle details preserved, while those produced by other methods are nearly unrecognizable. Meanwhile, only SRGS [3] and our method can reproduce the non-facial elements, such as the hairpin. However, SRGS loses much fine-grained textural detail, such as the hair regions of the samples in Fig. 7(e). In contrast, our method handles fine-scale textural detail well, which makes our results much more vivid than those of SRGS.

Fig. 8: Comparison on subjective voting. More people prefer the results generated by our approach.

Referring to [2, 11], we adopt subjective voting for sketch image quality assessment. We present the candidate photos and the corresponding sketches produced by our method and other methods, including MR [40], MRF [1], MWF [41], SSD [2] and SRGS [3], and shuffle them. We invited 20 volunteers to select the results they prefer. The result is shown in Fig. 8, in which the blue bars refer to the percentage of votes for the other methods, while the orange bars indicate the vote rate of our method. The statistics show that many more people prefer our method. Specifically, for the CUHK dataset, our approach obtains over half of all the votes. For the other datasets, our superiority becomes more obvious, reaching 91% and 78% on the AR and XM2VTS datasets, respectively.

Fig. 9: Comparison of the Rank-1 (a) and Rank-10 (b) Cumulative Match Scores on the sketch-based face recognition task. Best viewed in color.

IV-C Sketch-based Face Recognition

The performance on sketch-based face recognition [8] can also be used to evaluate the quality of sketch portraits. In this subsection, we show that the sketches generated by our approach not only achieve high visual quality, but also significantly reduce the modality difference between photos and sketches, which means our model performs well on the sketch-based face recognition task.

The procedure of sketch-based face recognition can be summarized in two steps: (a) convert the photos in the testing set into corresponding sketches; (b) define a feature or transformation to measure the distance between the query sketch and the generated sketches.

We adopt PCA for face feature extraction and cosine similarity for distance measurement. Following the same protocol as [8], we compare our approach with previous methods on the cumulative match score (CMS). The CMS measures the percentage of queries for which the correct answer is in the top $k$ matches, where $k$ is called the rank. We merge the 518 samples from the CUHK, AR and XM2VTS datasets together to form a challenging sketch-based recognition test set. In Fig. 9(a), we plot the Rank-1 recognition rates of the compared methods. Our method achieves an accuracy of 78.7% for the first match when using 100-dimensional PCA-reduced features, which is much better than the second-best method (SRGS [3], 53.2%). When the feature dimension increases to 250, the Rank-1 CMS of our method further increases to 80.1%. As shown in Fig. 9(b), our method reaches an accuracy of 93.2% within ten guesses, while the best result of the other methods is around 85%.
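A possible realization of this PCA-plus-cosine-similarity protocol and the cumulative match score is sketched below with scikit-learn; the function name, the array layout (one flattened sketch per row), and fitting PCA on the gallery are illustrative assumptions rather than the paper's exact protocol.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

def cumulative_match_score(gallery, queries, labels_g, labels_q,
                           n_components=100, rank=1):
    # Project flattened sketches with PCA, rank gallery entries by cosine
    # similarity, and count how often the correct identity appears in the
    # top-`rank` matches. labels_g / labels_q are numpy arrays of identities.
    pca = PCA(n_components=n_components).fit(gallery)
    g, q = pca.transform(gallery), pca.transform(queries)
    sims = cosine_similarity(q, g)                      # (num_queries, num_gallery)
    top = np.argsort(-sims, axis=1)[:, :rank]           # best-ranked gallery indices
    hits = [labels_q[i] in labels_g[top[i]] for i in range(len(queries))]
    return float(np.mean(hits))
```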

IV-D Robustness to Lighting and Pose Variations

Lighting and pose variations are also challenging in the sketch generation problem [42]. Some previous methods only work under well-constrained conditions and often fail when lighting and pose vary. For example, Fig. 10(b) shows samples of sketches synthesized by the MRF method [1] under lighting and pose variations. The results in the first and second rows are obtained under dark frontal lighting and dark side lighting, while the results in the third and fourth rows are synthesized under pose variations. The results show that MRF often loses details under lighting and pose variations; for example, in the sketch in the fourth row of Fig. 10(b), the profile and ear are missing, and the sketch in the second row is heavily distorted. Zhang et al. [42] further improved MRF (referred to as MRF+ in this paper) to handle lighting and pose variations. However, MRF+ involves many additional operations, which makes it rather complicated and inefficient. The results of MRF+ are shown in Fig. 10(c). We can see that the visual quality of MRF+ is improved; however, the results still lack some details, e.g., part of the ear marked in the fourth row of Fig. 10(c).

Our proposed method learns the sketch from the raw pixels of the photo portrait, and it is rather robust to pose variations, as shown in the third and fourth rows of Fig. 10(d) and (e). Besides, we can adopt a simple strategy to handle lighting variations: we first convert the input photos to the HSV color space, and then randomly multiply the V channel by a factor within a predefined range during training. The resulting sketches are shown in the first and second rows of Fig. 10(e). Compared with the corresponding sketches of Fig. 10(d), the visual quality is marginally improved.
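The lighting augmentation described above takes only a few lines with OpenCV; the scaling range below is a placeholder, since the paper's exact range is not reproduced in this text, and the input is assumed to be an 8-bit BGR image.

```python
import cv2
import numpy as np

def random_lighting(photo_bgr, low=0.5, high=1.2):
    # Scale the V (brightness) channel in HSV space by a random factor.
    hsv = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 2] = np.clip(hsv[..., 2] * np.random.uniform(low, high), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```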

Fig. 10: Comparison of the robustness to lighting and pose variations of different methods. (a) Photos; (b) MRF; (c) MRF+; (d) Ours; (e) Ours+.

IV-E Portrait-to-sketch Generation in the Wild

In this section, we conduct experiments to explore the generalization ability of our model in an unconstrained environment. We select some generated sketch portraits and show them in Fig. 11 together with the corresponding intermediate results. The results indicate that the representation learned by our model is general and robust enough to handle complex backgrounds (e.g., the left arm of the woman in the first row, and the batten behind the man in the third row).

Fig. 11: Results generated by our framework in an unconstrained environment. (a) Input portraits; (b) aligned portraits; (c) parsing maps; (d) structural sketches; (e) textural sketches; (f) fused sketches.

IV-F Analysis and Discussion

We also analyze the effectiveness of the decompositional representation learning and the parsing maps in the proposed method. Besides, we discuss some considerations in designing the probabilistic fusion and the architecture of BFCN.

IV-F1 The effectiveness of decompositional representation learning

We conduct experiments to verify the effectiveness of decompositional representation learning in handling structures and textures. Specifically, we disable the filter for structurally unaligned patch pairs in the data preparation stage, and remove the SM-MSE term in Eq. (6) when training BFCN. Under this setting, the two branches of BFCN are trained equally with the same loss function. We then retrain the model under this condition. The results are depicted in the second column of Fig. 12. For comparison, we also depict the results of the normal setting in the third column. Obviously, the sketches in the third column are more attractive. The textures are much clearer, since the SM-MSE metric can correctly evaluate similar textures to learn a better representation. Meanwhile, the structures are sharper, since the alignment filter only retains aligned patch pairs, which helps to capture the main structures and suppress noise.

Fig. 12: Comparison of models trained without/with decompositional representation learning (DRL). (a) Input photos; (b) Results without DRL; (c) Results with DRL.

Fig. 13: Comparison of models trained without/with the nonparametric prior. (a) Input photos; (b) Results without global prior; (c) Results with global prior.

IV-F2 The effectiveness of nonparametric prior in training BFCN

As mentioned in Section III, in the training of BFCN, we add the average of the ground truth sketches as a nonparametric prior to provide a global regularization to our model. Here, we evaluate the role of this nonparametric prior by comparing the sketches generated by models trained with and without it. The comparison results are presented in Fig. 13. We can see that after embedding the nonparametric prior into our model, some mistakes caused by purely local predictions are corrected and the sketches are more lifelike.

IV-F3 Shared vs. unshared parameters of shallow layers

The low-level features learned by SRCNN [24] are likely to be edges, which can be shared by most computer vision tasks. Inspired by previous works [24, 43], we share the parameters of the first three convolutional layers (called shallow layers) of BFCN and find that this strategy is both effective and efficient. For comparison, we retrain a model without sharing the parameters, i.e., we adopt two isolated networks to learn the structures and textures. Experimental results show that sharing the shallow layers is much more efficient. As shown in Table I, if we do not share the weights, the testing procedure is slowed down by over 110%, since most of the computational cost comes from the shallow convolutional layers. Besides, we also compare the computational cost of the proposed BFCN with other methods, i.e., MRF [1], SSD [2], SRGS [3], MR [40] and MWF [41], to evaluate its efficiency. For a fair comparison, all of these methods are run on a PC with an Intel Core i7 3.4GHz CPU without GPU acceleration. The comparison results listed in Table II show that our method is much more efficient than the other methods.

            Unshared   Shared
Time (ms)     63.0      29.8
TABLE I: Inference time for a single image with unshared vs. shared parameters of the shallow layers (on an NVIDIA Titan Black GPU).
          MRF [1]   SSD [2]   SRGS [3]   MR [40]   MWF [41]   Ours
Time (s)    155        4         4         600        40       1.2
TABLE II: Comparison of the inference time for a single face image across different methods.

V Conclusion and future work

In this paper, we propose a novel decompositional representation learning framework that learns an end-to-end photo-to-sketch mapping from the raw pixels of the input photo. We utilize a BFCN to map the photo into structural and textural components, generating a structure-preserved sketch and a texture-preserved sketch, respectively. The two sketches are then fused into the final sketch portrait via a probabilistic method. Experimental results on several challenging benchmarks show that the proposed method outperforms existing example-based synthesis algorithms in terms of both perceptual and objective metrics. Besides, the proposed approach also has favorable generalization ability across different datasets without additional training.

Currently, when training BFCN, a face image and its corresponding sketch are roughly aligned by the eyes, and patches of the face image and the corresponding sketch patches are fed into BFCN to train the photo-sketch generation model. In other words, the performance of BFCN partially relies on the face alignment algorithm. If the face images exhibit large pose variations or drastic lighting changes, the current face alignment method may perform poorly, and the sketches generated by BFCN may be unsatisfactory. In the future, we will design a more robust face alignment algorithm to replace the current strategy and make BFCN more robust to pose and lighting variations.

References

  • [1] X. Wang and X. Tang, “Face photo-sketch synthesis and recognition,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 31, no. 11, pp. 1955–1967, 2009.
  • [2] Y. Song, L. Bao, Q. Yang, and M.-H. Yang, “Real-time exemplar-based face sketch synthesis,” in Proceedings of European Conference on Computer Vision, 2014, pp. 800–813.
  • [3] S. Zhang, X. Gao, N. Wang, J. Li, and M. Zhang, “Face sketch synthesis via sparse representation-based greedy search,” Image Processing, IEEE Transactions on, vol. 24, no. 8, pp. 2466–2477, 2015.
  • [4] M. Bertalmio, L. Vese, G. Sapiro, and S. Osher, “Simultaneous structure and texture image inpainting,” Image Processing, IEEE Transactions on, vol. 12, no. 8, pp. 882–889, 2003.
  • [5] J.-F. Aujol, G. Gilboa, T. Chan, and S. Osher, “Structure-texture image decomposition modeling, algorithms, and parameter selection,” International Journal of Computer Vision, vol. 67, no. 1, pp. 111–136, 2006.
  • [6] S. Zhang, X. Gao, N. Wang, and J. Li, “Robust face sketch style synthesis,” Image Processing, IEEE Transactions on, vol. 25, no. 1, pp. 220–232, 2016.
  • [7] Z. Xu, H. Chen, S.-C. Zhu, and J. Luo, “A hierarchical compositional model for face representation and sketching,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 30, no. 6, pp. 955–969, 2008.
  • [8] X. Tang and X. Wang, “Face sketch recognition,” Circuits and Systems for Video Technology, IEEE Transactions on, vol. 14, no. 1, pp. 50–57, 2004.
  • [9] Q. Liu, X. Tang, H. Jin, H. Lu, and S. Ma, “A nonlinear approach for face sketch synthesis and recognition,” in Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, vol. 1.   IEEE, 2005, pp. 1005–1010.
  • [10] J. Zhang, N. Wang, X. Gao, D. Tao, and X. Li, “Face sketch-photo synthesis based on support vector regression,” in IEEE International Conference on Image Processing, Sept 2011, pp. 1125–1128.
  • [11] X. Gao, N. Wang, D. Tao, and X. Li, “Face sketch–photo synthesis and retrieval using sparse representation,” Circuits and Systems for Video Technology, IEEE Transactions on, vol. 22, no. 8, pp. 1213–1226, 2012.
  • [12] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “Overfeat: Integrated recognition, localization and detection using convolutional networks,” in International Conference on Learning Representations (ICLR 2014).   CBLS, April 2014.
  • [13] T. Chen, L. Lin, L. Liu, X. Luo, and X. Li, “Disc: Deep image saliency computing via progressive representation learning,” IEEE transactions on neural networks and learning systems, vol. 27, no. 6, pp. 1135–1149, 2016.
  • [14] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov, “Scalable object detection using deep neural networks,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
  • [15] L. Lin, K. Wang, W. Zuo, M. Wang, J. Luo, and L. Zhang, “A deep structured model with radius–margin bound for 3d human activity recognition,” International Journal of Computer Vision, pp. 1–18, 2015.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” in Computer Vision–ECCV 2014.   Springer, 2014, pp. 346–361.
  • [17] L. Lin, G. Wang, W. Zuo, X. Feng, and L. Zhang, “Cross-domain visual matching via generalized similarity measure and feature learning,” IEEE transactions on Pattern Analysis and Machine Intelligence, 2016.
  • [18] R. Zhang, L. Lin, R. Zhang, W. Zuo, and L. Zhang, “Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 4766–4779, 2015.
  • [19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
  • [20] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Computer Vision–ECCV 2014.   Springer, 2014, pp. 818–833.
  • [21] L. Lin, X. Wang, W. Yang, and J.-H. Lai, “Discriminatively trained and-or graph models for object shape detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 5, pp. 959–972, 2015.
  • [22] P. Luo, X. Wang, and X. Tang, “Pedestrian parsing via deep decompositional network,” in Computer Vision (ICCV), IEEE International Conference on, 2013, pp. 2648–2655.
  • [23] X. Wang, D. Fouhey, and A. Gupta, “Designing deep networks for surface normal estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 539–547.
  • [24] C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in Computer Vision–ECCV 2014.   Springer, 2014, pp. 184–199.
  • [25] ——, “Image super-resolution using deep convolutional networks,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 38, no. 2, pp. 295–307, 2016.
  • [26] V. Le Guen, “Cartoon+ texture image decomposition by the tv-l1 model,” Image Processing On Line, vol. 4, pp. 204–219, 2014.
  • [27] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
  • [28] S. Liu, J. Yang, C. Huang, and M.-H. Yang, “Multi-objective convolutional learning for face labeling,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • [29] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 807–814.
  • [30] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” Image Processing, IEEE Transactions on, vol. 13, no. 4, pp. 600–612, 2004.
  • [31] W. Zuo, L. Zhang, C. Song, D. Zhang, and H. Gao, “Gradient histogram estimation and preservation for texture enhanced image denoising,” Image Processing, IEEE Transactions on, vol. 23, no. 6, pp. 2459–2472, June 2014.
  • [32] J. Chen, T. N. Pappas, A. Mojsilovic, and B. Rogowitz, “Adaptive perceptual color-texture image segmentation,” Image Processing, IEEE Transactions on, vol. 14, no. 10, pp. 1524–1536, 2005.
  • [33] J. Zujovic, T. N. Pappas, and D. L. Neuhoff, “Structural texture similarity metrics for image analysis and retrieval,” Image Processing, IEEE Transactions on, vol. 22, no. 7, pp. 2545–2558, 2013.
  • [34] F. Wang, W. Zuo, L. Zhang, D. Meng, and D. Zhang, “A kernel classification framework for metric learning,” IEEE transactions on neural networks and learning systems, vol. 26, no. 9, pp. 1950–1962, 2015.
  • [35] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” in Proceedings of the ACM International Conference on Multimedia.   ACM, 2014, pp. 675–678.
  • [36] A. M. Martinez, “The ar face database,” CVC Technical Report, vol. 24, 1998.
  • [37] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, “Xm2vtsdb: The extended m2vts database,” in Second international conference on audio and video-based biometric person authentication, vol. 964.   Citeseer, 1999, pp. 965–966.
  • [38] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang, “Interactive facial feature localization,” in Computer Vision–ECCV 2012.   Springer, 2012, pp. 679–692.
  • [39] B. M. Smith, L. Zhang, J. Brandt, Z. Lin, and J. Yang, “Exemplar-based face parsing,” in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on.   IEEE, 2013, pp. 3484–3491.
  • [40] C. Peng, X. Gao, N. Wang, D. Tao, X. Li, and J. Li, “Multiple representations-based face sketch-photo synthesis,” Neural networks and learning systems, IEEE Transactions on, vol. 99, no. 1, pp. 1–15, 2015.
  • [41] H. Zhou, Z. Kuang, and K.-Y. Wong, “Markov weight fields for face sketch synthesis,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on.   IEEE, 2012, pp. 1091–1097.
  • [42] W. Zhang, X. Wang, and X. Tang, “Lighting and pose robust face sketch synthesis,” in Computer Vision–ECCV 2010.   Springer, 2010, pp. 420–433.
  • [43] R. Girshick, “Fast r-cnn,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440–1448.