With the fast growth of online fashion sales, fashion-related applications, such as clothing recognition and retrieval [29, 22] and automatic product suggestion, have shown huge potential in e-commerce. Among them, human parsing, namely decomposing a human image into semantic fashion/body regions, serves as the basis of many high-level applications and has drawn much research attention in recent years.
However, there are still several problems with existing algorithms. First, some previous works take reliable human pose estimation as a prerequisite, so a poor pose estimate degrades the human parsing performance. Second, some parsing methods, such as parselets and co-parsing, which build on bottom-up hypothesis generation, rely on the critical assumption that the objects or semantic regions are very likely to be tightly covered by at least one of the generated hypotheses. This assumption does not always hold: when the semantic regions exhibit large appearance diversity, it is very difficult to obtain a single hypothesis covering the whole region, because the object hypotheses produced by over-segmentation tend to capture appearance consistency rather than semantic meaning. Third, existing methods do not sufficiently capture the complex contextual information among the key elements of human parsing: semantic labels, label masks and their spatial layouts. We argue that human parsing can greatly benefit from the structural information among these elements. As shown in Figure 1(a), the presence of the skirt (i.e., its visibility) lowers the probability of the dress/pants, and meanwhile encourages the visibilities and constrains the locations of the left/right legs. The mask of a specific label can also provide informative guidance for predicting the masks and locations of other labels, especially for neighboring regions: the mask of the upper-clothes is a single segment due to the presence of the skirt in (c), while it is composed of two separate regions due to the dress in (b). Without capturing such structure information, methods based on low-level pixel or region hypotheses cannot accurately predict the masks of different labels.
Different from these previous works, we propose a novel end-to-end framework for human parsing and formulate it as an Active Template Regression (ATR) problem. Instead of assigning a label to each pixel or hypothesis, we directly predict and locate the mask of each label. The parsing result for a test image is represented by a set of semantic regions (as in Figure 2), which are morphed from the normalized masks using the corresponding active shape parameters, including the position, scale and visibility. For label mask generation, we first collect all the binary masks of the training images and then learn a batch of mask bases to construct a template dictionary for each label. Intuitively, the template dictionaries span the subspaces of the label masks and encode the shape priors of each label mask. Any mask with a specific shape can be generated by adjusting the corresponding template coefficients, inspired by the classic Active Appearance Model (AAM) and Active Shape Model (ASM). In this way, our representation is able to capture the natural variability within a set of mask templates for each label. The normalized mask of each label is thus expressed as a linear combination over the mask template dictionary and parameterized by the template coefficients. As for the active shape parameters, we predict the position and scale of each semantic region as well as a visibility flag indicating whether the specific label appears in the image. In this paper, we denote the template coefficients and the active shape parameters of each label as two types of structure outputs. Our active template regression framework aims to effectively regress these structure outputs.
Inspired by the outstanding performance of the deep Convolutional Neural Network (CNN) on traditional classification and detection tasks, we utilize it to build the end-to-end relation between the input human image and the structure outputs for human parsing, including the mask template coefficients and the active shape parameters. To predict the template coefficients, we aim to find the best linear combination of the learned mask templates; larger coefficients indicate higher similarities between the label masks and the corresponding templates. The active shape parameters can be predicted similarly to a CNN-based detection task. We thus use two separate networks, namely the active template network and the active shape network, to predict the structure outputs. First, the template coefficients of all labels are regressed together by the designed active template network, which is capable of capturing the contextual correlations among all label masks. Second, the active shape network is designed to predict the position, scale and visibility of each label. To make the active shape network sensitive to position variance, we eliminate the max-pooling layers of the traditional CNN architecture, which are designed to be invariant to scale and translation changes. For a new photo, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the parsing result.
To effectively train our networks, we conduct the experiments on a large dataset combining three public parsing datasets and our newly collected human parsing dataset. Comprehensive evaluations and comparisons demonstrate the significant superiority of the ATR framework over other state-of-the-art methods for human parsing. Furthermore, we visualize our learned label masks, which demonstrates that our model can generate label masks with strong semantic meanings. Our contributions can be summarized as follows.
Our ATR framework provides an end-to-end approach for human parsing, which directly predicts the label masks and morphs them into the parsing result with active shape parameters. There is no need to explicitly design feature representations, model topologies or contextual interactions among labels.
Our active template network can efficiently predict the most appropriate template coefficients for each label mask, which is represented as a linear combination over the template dictionary.
Our active shape network eliminates max-pooling for accurate position prediction and shows superiority over the generic classification network in accurately regressing the active shape parameters.
2 Related Work
Although human parsing has been studied for years, it has not been fully solved. Previous methods generally follow two types of pipelines: the hand-designed pipeline and the deep learning pipeline.
2.1 Hand-designed Pipeline
The traditional pipeline often requires many hand-designed processing steps to perform human parsing, each of which needs to be carefully designed and tuned. These steps use low-level over-segmentation and pose estimation as the building blocks of human parsing. The classic composite And-Or graph template [3, 18] has been utilized to model and parse clothing configurations. Yamaguchi et al. performed human pose estimation and attribute labeling sequentially, and later improved clothes parsing with a retrieval-based approach. Dong et al. proposed to use a group of parselets under a structure learning framework. However, such approaches based on hand-crafted relations often fail to fully capture the complex correlations between human appearance and structure. Although great progress has been achieved in human parsing, the representative models involved usually require much prior knowledge about the specific task, and these previous methods rely heavily on over-segmentation and pose estimation.
2.2 Deep Learning Pipeline
Recently, rather than using hand-crafted features and model representations, capturing contextual relations and extracting features with deep learning structures, especially the deep Convolutional Neural Network (CNN), has shown great potential in various vision tasks, such as image classification, object detection and pose estimation. To the best of our knowledge, the Convolutional Neural Network has not previously been applied to human parsing. However, there exist some works on scene parsing and object segmentation with CNN architectures. Farabet et al. trained a multi-scale convolutional network from raw pixels to extract dense features for assigning a label to each pixel; however, multiple complex post-processing steps were required for accurate prediction. The recurrent convolutional neural network was proposed to speed up scene parsing and achieved state-of-the-art performance. Girshick et al. also proposed to classify candidate regions by CNN for semantic segmentation. All of these approaches use CNNs as local or semi-local classifiers, either over super-pixels or region hypotheses. In contrast, our approach builds an end-to-end relation between the input image and the structure outputs, which is a more efficient application of the CNN.
The above-mentioned hand-crafted and deep models share a similar pipeline: each image is decomposed into small units (pixels, super-pixels or region hypotheses), local features (hand-crafted features or rich features learned by deep networks) are extracted, and then additional classifiers (shallow models like SVM, or deep models) are trained. In contrast, our approach builds an end-to-end relation between the input image and the structure outputs, which is simpler and more efficient. Taking an image as the input, our deep model directly predicts the label masks and the corresponding shape parameters of each semantic region. All the components (e.g., hypothesis generation, feature extraction and classification) used in the traditional pipelines are integrated into one unified framework, which distinguishes us from all previous parsing approaches. The closest approaches to ours are those that use CNN-based regression for predicting landmark locations and object bounding boxes, respectively. Their approaches are intuitively similar to our active shape network, except that our model eliminates the max-pooling layer to retain position sensitivity. Moreover, the other important component of our model, the active template network, is designed to predict the mask template coefficients to actively generate arbitrary masks of the semantic labels.
3 Active Template Regression
We formulate the task of human parsing as an active template regression problem. Our framework targets two kinds of structure outputs: active template coefficients and active shape parameters. First, for different semantic labels (e.g., hair, hat, dress), we encode the normalized mask of each label as a linear combination over the mask template dictionary. Each label mask is parameterized by the corresponding template coefficients, which are treated as the first type of structure outputs. Second, the position of each label mask is parameterized by its top-left coordinates as well as its width and height, and the visibility flag of each label indicates whether the label (e.g., hat, belt) appears in the image. These active shape parameters form the second type of structure outputs. Finally, the parsing result of the input image is generated by morphing the masks of all labels with the corresponding active shape parameters. In this paper, we train these two types of structure outputs with two separate neural networks, the active template network and the active shape network, which predict the template coefficients and the active shape parameters, respectively. The reason for training two separate networks is that the two learning problems can be treated as different tasks: the first essentially selects the most appropriate templates for reconstructing label masks with the template dictionaries, similar to a classification problem, while the second aims at regressing precise locations, similar to a detection problem.
As shown in Figure 3, given an input image, we first detect the human body using a state-of-the-art detector, i.e., the region-based convolutional neural network method. Considering that the detected bounding box of the human body may not contain all of the body parts, we enlarge the detected bounding box by a fixed factor, and the pixels outside the enlarged bounding box are regarded as background. The normalized mask of each label is reconstructed using the predicted template coefficients and the template dictionaries. We then morph these masks into absolute image coordinates indicated by the shape parameters. The confidence maps of each label and the background can be obtained from the morphed masks. Finally, we use super-pixel smoothing to generate and refine the final parsing result.
3.1 Active Template Network
The masks of individual semantic regions of the same label often show diverse shapes but also share common patterns which distinguish one label from the others. We can thus represent each label mask as a linear combination over the corresponding template dictionary, parameterized by template coefficients that best fit the image. Intuitively, the template dictionaries span the subspaces of the label masks and incorporate the shape priors of all labels. By selecting appropriate template coefficients, we can obtain diverse semantic regions for each label. Moreover, the output size of the network is significantly reduced by predicting template coefficients rather than all pixels of the whole mask.
The active template network is designed to predict the template coefficients. We first generate the mask template dictionary for each label by dictionary learning. More precisely, given the set of training samples, we collect the ground-truth binary masks of all labels; for the l-th label, each binary mask m_l^i is obtained from the minimum bounding rectangle of the label mask in the i-th sample, with the pixels assigned the specific label set to 1 and all others set to 0. To learn the template dictionary for each label, we re-scale all these cropped binary masks to a fixed width and height. We denote the dictionary of the l-th label as D_l, whose columns are the learned templates, and the template coefficients of each training sample as α_l^i. To jointly learn the template dictionary D_l and the template coefficients, we optimize the following cost function for the l-th label:

    min_{D_l, α_l^i}  Σ_i ( ||m_l^i − D_l α_l^i||_2^2 + λ ||α_l^i||_2^2 ),    (1)
where λ is the regularization parameter. It is well known that an ℓ1 penalty yields a sparse solution for the coefficients. However, our active template network is difficult to converge with such sparse targets because the zero values dominate. We thus use the ℓ2-norm to regularize the template coefficients; our experiments demonstrate the superiority of the ℓ2-norm over the ℓ1-norm. Moreover, we constrain D_l and α_l^i to be non-negative, which helps our network generate more reasonable, semantically meaningful mask templates than the traditional Principal Component Analysis (PCA), whose bases and coefficients may take both negative and positive values. Specifically, the resulting Non-negative Matrix Factorization (NMF) learns part-based decompositions that cover diverse visual patterns of each label, and the additive combinations of active templates benefit both reconstruction and network optimization. This NMF problem can be effectively solved by online dictionary learning based on stochastic approximations.
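As a concrete illustration, the dictionary-learning step can be sketched with simple multiplicative updates for the ℓ2-regularized NMF objective. This is only a toy sketch: the paper solves the problem with online dictionary learning, and the template count, mask size and λ below are placeholder values.

```python
import numpy as np

def learn_templates(masks, n_templates=5, lam=0.1, n_iter=500, seed=0):
    """Learn a non-negative template dictionary D and coefficients A for one
    label by minimizing ||X - D A||_F^2 + lam * ||A||_F^2 with D, A >= 0,
    via multiplicative updates. `masks` is (n_pixels, n_samples): each
    column is a flattened binary mask."""
    rng = np.random.default_rng(seed)
    X = masks.astype(float)
    p, n = X.shape
    D = rng.random((p, n_templates)) + 1e-3
    A = rng.random((n_templates, n)) + 1e-3
    for _ in range(n_iter):
        # Standard multiplicative updates; non-negativity is preserved.
        A *= (D.T @ X) / (D.T @ D @ A + lam * A + 1e-12)
        D *= (X @ A.T) / (D @ A @ A.T + 1e-12)
    return D, A

# Toy example: 9-pixel masks of a vertical and a horizontal bar.
bar_v = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0.])
bar_h = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0.])
X = np.stack([bar_v, bar_h, bar_v, bar_h], axis=1)
D, A = learn_templates(X, n_templates=2)
recon = D @ A  # reconstruction error should be small on this toy data
```

Each column of D is one learned mask template; a label mask is then approximated by the additive, non-negative combination D @ α.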
We normalize the coefficient values into a Gaussian distribution for each label. Let μ_l and σ_l denote the mean and standard deviation of the coefficients of the l-th label. The normalized template coefficients are defined as

    α̃_l^i = (α_l^i − μ_l) / σ_l.    (2)

We train our active template network to predict the normalized coefficients
based on the Convolutional Neural Network (CNN). The convolutional network consists of several layers, each a linear transformation followed by a non-linear one. The first layer takes an input image as the input. The last layer outputs the target values of the regression, in our case 850 dimensions covering all labels. Our network is based on the architecture used by Zeiler et al.
for image classification, since it has shown better performance on the ImageNet benchmark than the one used by Krizhevsky et al. Each layer consists of: (1) convolution of the previous layer's output (or, for the first layer, the input image) with a set of filters; (2) passing the responses through a rectified linear function; (3) (optionally) max-pooling over a local neighborhood; (4) (optionally) a local contrast function that normalizes the responses across feature maps. The top few layers of the network are fully-connected and the final layer is an ℓ2-norm regressor. We refer the reader to Zeiler et al. and Krizhevsky et al. for more details. Figure 4 shows the model used in our active template network. The difference from that architecture is the loss function we use: instead of a classification loss, we predict the normalized coefficients by minimizing the ℓ2 distance between the prediction and the ground truth. Denoting the predicted coefficients as α̂ and the ground-truth normalized coefficients as α̃, the loss is defined as

    J = ||α̂ − α̃||_2^2.
The network parameters (filters in the convolutional layers, weight matrices in the fully-connected layers, and biases) are trained by back-propagation. For simplicity, we omit the image subscript in the following. Given an input image, our active template network predicts the normalized template coefficients of all labels, and we obtain the absolute coefficients α_l by the inverse of Eq. (2). The normalized mask of each label is then reconstructed as the linear combination M_l = D_l α_l over the specific template dictionary.
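A minimal sketch of this reconstruction step, assuming the per-label mean μ and standard deviation σ from the coefficient normalization are stored. The 0.5 binarization threshold is our placeholder for illustration; the soft mask itself is what gets morphed downstream.

```python
import numpy as np

def reconstruct_mask(alpha_norm, D, mu, sigma, thresh=0.5):
    """Invert the coefficient normalization (alpha = alpha_norm * sigma + mu),
    then reconstruct the normalized label mask as m = D @ alpha."""
    alpha = alpha_norm * sigma + mu
    m = D @ alpha
    return m, (m >= thresh).astype(np.uint8)

# Toy dictionary: two 4-pixel templates (columns of D).
D = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
m, mask = reconstruct_mask(np.array([1., -1.]), D,
                           mu=np.array([0.5, 0.5]), sigma=np.array([0.5, 0.5]))
# Here alpha = [1.0, 0.0], so only the first template is active.
```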
3.2 Active Shape Network
After obtaining the normalized mask of each label, we need to morph it into a more precise mask at an accurate position in the image. In this paper, we denote the positions, scales and visibilities of the label masks as the active shape parameters, predicted by our active shape network. For each label, the structure outputs include the top-left coordinates, the width, the height and the visibility flag, which is set to 1 if the label appears in the image and 0 otherwise.
Figure 5 shows the architecture of our active shape network. The first convolutional layer filters the input image with a set of strided kernels. The second convolutional layer takes the rectified output of the first convolutional layer as its input and filters it again. The 3rd, 4th and 5th convolutional layers are connected to one another, and the 3rd and 4th layers are also strided. The last two layers are fully-connected, and the output layer predicts the shape parameters (x, y, w, h, v) of all labels. Furthermore, since the positions and scales are in absolute coordinates, it is beneficial to normalize them with respect to their means and standard deviations, similar to Eq. (2); the visibility flags keep their original values of 1 or 0. We minimize the ℓ2 distance between the prediction and the ground truth. Denoting the predicted parameters as b̂ and the ground-truth parameters as b̃, the corresponding loss is defined as

    J = ||b̂ − b̃||_2^2.
Previous architectures for classification tasks include max-pooling layers to make the network invariant to scale/translation changes and to reduce the size of the feature maps. However, our network for regressing shape parameters must be sensitive to position variance. We therefore eliminate the pooling layers while keeping the same overall network depth. The new architecture retains much more information in the first few layers (e.g., larger feature maps in the 1st and 2nd layers, compared with the model in Figure 4). Instead, we reduce the size of the feature maps gradually by using strided convolutions in the 2nd, 3rd and 4th layers. Given that our dataset is much smaller than the ImageNet dataset, we also decrease the number of filters in each convolutional layer and the size of the fully-connected layers to prevent over-fitting.
The contextual interactions among all semantic label masks (e.g., label exclusiveness and spatial layouts) are intrinsically captured by the hidden layers. Given a test image, the active shape network predicts the shape parameters of all label masks, and the absolute image coordinates are obtained by the inverse normalization.
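The target encoding and its inverse can be sketched as follows. This is a simplified sketch: the mean and standard deviation are per-label statistics computed from the training set, which we treat here as given arrays.

```python
import numpy as np

def encode_targets(boxes, vis, mean, std):
    """Normalize absolute (x, y, w, h) per label to zero mean / unit variance
    and append the raw 0/1 visibility flag, giving the regression targets."""
    return np.concatenate([(boxes - mean) / std, vis[:, None]], axis=1)

def decode_targets(t, mean, std):
    """Inverse normalization: recover absolute boxes and visibility flags."""
    return t[:, :4] * std + mean, t[:, 4]

boxes = np.array([[10., 20., 30., 40.]])
mean, std = np.array([10., 20., 30., 40.]), np.array([5., 5., 5., 5.])
t = encode_targets(boxes, np.array([1.]), mean, std)
boxes_back, vis_back = decode_targets(t, mean, std)
```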
Bounding Box Refinement. In addition, considering the prediction error of the shape parameters, we utilize bounding box refinement to further reduce mislocalizations. Specifically, we train linear regression models to predict refined positions for all labels, following the method proposed for object detection. To train the bounding box regressor of each label, all the training images are cropped around the predicted positions and then enlarged by a fixed factor to include more surrounding context. The input for training is a set of pairs of the predicted positions from our network and the ground-truth bounding boxes for each label. Note that only predicted label masks with a sufficient overlap ratio with the ground-truth box are considered. The features of each training image are extracted from the outputs of a fully-connected layer of the ImageNet model. Finally, we use the same strategy to learn the position transformation; please refer to the original method for more details.
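A sketch of such a per-label bounding-box regressor, using R-CNN-style transformation targets and a ridge-regularized least-squares fit. The feature dimensionality, the regularization weight, and the top-left (x, y, w, h) parameterization are our assumptions for illustration.

```python
import numpy as np

def fit_box_regressor(feats, pred_boxes, gt_boxes, lam=1.0):
    """Linear ridge regression from features to R-CNN-style targets
    (dx, dy, dlog_w, dlog_h); boxes are (x, y, w, h) rows."""
    px, py, pw, ph = pred_boxes.T
    gx, gy, gw, gh = gt_boxes.T
    T = np.stack([(gx - px) / pw, (gy - py) / ph,
                  np.log(gw / pw), np.log(gh / ph)], axis=1)
    X = np.hstack([feats, np.ones((len(feats), 1))])  # append bias term
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
    return W

def apply_box_regressor(W, feats, pred_boxes):
    """Apply the learned transformation to refine the predicted boxes."""
    X = np.hstack([feats, np.ones((len(feats), 1))])
    d = X @ W
    px, py, pw, ph = pred_boxes.T
    return np.stack([px + d[:, 0] * pw, py + d[:, 1] * ph,
                     pw * np.exp(d[:, 2]), ph * np.exp(d[:, 3])], axis=1)

pred = np.array([[10., 20., 30., 40.], [5., 5., 10., 10.]])
feats = np.array([[1., 0.], [0., 1.]])
W = fit_box_regressor(feats, pred, pred)   # gt == pred -> zero transformation
out = apply_box_regressor(W, feats, pred)  # boxes are left unchanged
```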
3.3 Structure Output Combination and Super-pixel Smoothing
After feeding the image into the above two networks, we obtain the normalized mask and the shape parameters of each label. The confidence map of each label is obtained by morphing the mask into absolute image coordinates with the shape parameters. Recall that the visibility flag denotes whether the label appears in the image; only masks whose predicted visibility exceeds a threshold are considered. Note that this threshold is only used to prune unlikely label masks; the final label masks are mainly decided by the predicted template coefficients and active shape parameters.
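The morphing step can be sketched as follows; the nearest-neighbor resize and the 0.5 visibility threshold are placeholder choices, not values taken from the text.

```python
import numpy as np

def morph_mask(norm_mask, x, y, w, h, vis, H, W, vis_thresh=0.5):
    """Morph one normalized label mask into absolute image coordinates:
    drop labels whose visibility falls below `vis_thresh`, resize the mask
    to (h, w) by nearest-neighbor indexing, and paste it at (x, y)."""
    conf = np.zeros((H, W))
    if vis < vis_thresh:
        return conf  # label judged absent: empty confidence map
    mh, mw = norm_mask.shape
    rows = np.arange(h) * mh // h  # nearest-neighbor source rows
    cols = np.arange(w) * mw // w
    resized = norm_mask[np.ix_(rows, cols)]
    y1, x1 = min(H, y + h), min(W, x + w)  # clip to the image bounds
    conf[y:y1, x:x1] = resized[:y1 - y, :x1 - x]
    return conf

conf = morph_mask(np.ones((4, 4)), x=2, y=2, w=3, h=3, vis=1.0, H=8, W=8)
```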
Our networks only predict the confidence maps of the foreground labels. For the background label, we predict a per-pixel probability by adopting an interactive image segmentation method, automatically obtaining reliable foreground and background seeds from the confidence maps of all labels. Specifically, we first compute the foreground confidence map by taking, at each pixel, the maximum confidence over all labels; only pixels whose confidence exceeds a threshold are regarded as foreground. An erosion operation on the foreground mask then produces the foreground seeds, displayed as the blue pixels of the seed images in Figure 6. The background seeds are obtained by dilating the inverse of the foreground mask within local neighborhoods, displayed as the pink pixels of the seed images in Figure 6. Based on these seeds, we predict the background confidence map by learning a color model.
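One plausible reading of the seed-generation step, sketched with plain NumPy morphology. The structuring-element size and the confidence threshold are placeholders, and we take the background seeds to be the pixels outside the dilated foreground mask.

```python
import numpy as np

def _dilate(mask, r):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element,
    implemented as a logical OR over shifted copies of the mask."""
    H, W = mask.shape
    p = np.pad(mask, r)
    out = np.zeros_like(mask)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + H, dx:dx + W]
    return out

def fg_bg_seeds(conf_maps, thresh=0.5, r=1):
    """conf_maps: (L, H, W) foreground-label confidences. Foreground mask =
    pixels whose max confidence exceeds `thresh`; FG seeds = erosion of the
    mask (complement of the dilated complement); BG seeds = pixels outside
    the dilated mask."""
    fg = conf_maps.max(axis=0) > thresh
    fg_seeds = ~_dilate(~fg, r)
    bg_seeds = ~_dilate(fg, r)
    return fg_seeds, bg_seeds

conf = np.zeros((1, 7, 7))
conf[0, 2:5, 2:5] = 1.0  # a 3x3 foreground block
fg_seeds, bg_seeds = fg_bg_seeds(conf)
```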
Super-pixel Smoothing. To combine the confidence maps of all semantic labels and the background, we apply super-pixel smoothing to refine the parsing result toward a more precise pixel-level segmentation. In particular, our approach first computes an over-segmentation of the input image using a fast segmentation algorithm. Including the background label, we have L+1 possible labels for each pixel; the confidence map set {C_1, ..., C_L, C_{L+1}} contains the obtained background confidence map C_{L+1}. Let s(p) denote the super-pixel containing pixel p. The predicted label of p is obtained by maximizing the average confidence over s(p):

    ℓ(p) = argmax_l (1 / |s(p)|) Σ_{q ∈ s(p)} C_l(q),

where q ranges over the pixels in the super-pixel s(p) and C_l(q) is the confidence of pixel q in the map of label l. Since we only maximize the average confidences over labels, our super-pixel smoothing method is very simple and fast.
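The smoothing rule described above amounts to averaging each confidence map within every super-pixel and taking the per-super-pixel argmax; a minimal sketch:

```python
import numpy as np

def superpixel_smooth(conf_maps, superpixels):
    """conf_maps: (L, H, W) confidences including background;
    superpixels: (H, W) integer super-pixel ids. Every pixel in a
    super-pixel receives the label with the highest mean confidence
    over that super-pixel."""
    L = conf_maps.shape[0]
    n_sp = superpixels.max() + 1
    ids = superpixels.ravel()
    counts = np.bincount(ids, minlength=n_sp)
    sums = np.zeros((L, n_sp))
    for l in range(L):
        # Per-super-pixel sum of confidences for label l.
        sums[l] = np.bincount(ids, weights=conf_maps[l].ravel(),
                              minlength=n_sp)
    means = sums / counts          # average confidence per super-pixel
    sp_label = means.argmax(axis=0)
    return sp_label[superpixels]   # broadcast back to pixel grid

conf = np.zeros((2, 2, 2))
conf[0, 0, :] = 0.9  # label 0 dominates the top row
conf[1, 1, :] = 0.8  # label 1 dominates the bottom row
sp = np.array([[0, 0], [1, 1]])
labels = superpixel_smooth(conf, sp)
```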
4 Experiments

4.1 Experimental Settings
Datasets: A large number of training samples are required for most deep models. However, the existing publicly available datasets for human parsing are relatively small: the largest existing human parsing dataset, to the best of our knowledge, contains only a few thousand images, which is insufficient for training a robust deep network model. We therefore combine data from three small benchmark datasets: (1) the Fashionista dataset containing 685 images, (2) the Colorful Fashion Parsing Data (CFPD) dataset containing 2,682 images, and (3) the Daily Photos dataset. All images in these three datasets contain standing people in frontal/near-frontal view with good visibility of all body parts. Following the label set defined by Dong et al., we merge the labels of the Fashionista and CFPD datasets into 18 categories: face, sunglass, hat, scarf, hair, upper-clothes, left-arm, right-arm, belt, pants, left-leg, right-leg, skirt, left-shoe, right-shoe, bag, dress and background. To enlarge the diversity of our dataset, we crawl additional challenging images to construct the Human Parsing in the Wild (HPW) dataset and annotate pixel-level labels in the same way. As shown in Figure 7, the newly annotated data are mostly realistic images containing challenging poses (e.g., sitting) and occlusion, a good supplement to the three existing datasets. The final combined dataset is split into training, testing and validation sets. The occurrences of each label in our collected dataset are reported in Table II. For fair comparison with published algorithms, we use the same evaluation criteria, which include accuracy, average precision, average recall and average F-1 score over pixels.
| Method | Accuracy | F.g. accuracy | Avg. precision | Avg. recall | Avg. F-1 score |
| --- | --- | --- | --- | --- | --- |
| Yamaguchi et al. (456) | 82.54 | 46.70 | 31.67 | 43.74 | 35.78 |
| PaperDoll (456) | 86.74 | 50.34 | 43.38 | 41.21 | 37.54 |
| Yamaguchi et al. (6000) | 84.38 | 55.59 | 37.54 | 51.05 | 41.80 |
| PaperDoll (6000) | 88.96 | 62.18 | 52.75 | 49.43 | 44.76 |
| Yamaguchi et al. (6000 test 229) | 87.87 | 58.85 | 51.04 | 48.05 | 42.87 |
| PaperDoll (6000 test 229) | 89.98 | 65.66 | 54.87 | 51.16 | 46.80 |
| ATR (test 229) | 92.33 | 76.54 | 73.93 | 66.49 | 69.30 |
| Yamaguchi et al. (6000 test 229) | 62.58 | 27.31 | 18.50 | 54.26 | 60.26 | 1.48 | 42.96 | 47.93 | 44.83 | 66.37 | 45.17 | 52.22 | 44.01 | 2.44 | 35.49 | 0.19 | 68.98 |
| PaperDoll (6000 test 229) | 64.45 | 31.22 | 16.78 | 65.42 | 62.32 | 2.12 | 48.20 | 56.16 | 46.79 | 73.51 | 48.62 | 58.35 | 45.40 | 3.93 | 47.17 | 0.28 | 74.36 |
| ATR (test 229) | 69.35 | 66.91 | 30.50 | 85.38 | 78.48 | 77.14 | 64.37 | 74.56 | 57.76 | 82.96 | 63.25 | 76.07 | 55.87 | 63.26 | 83.35 | 38.14 | 82.77 |
Data Augmentation: To reduce over-fitting, we manually enlarge the training data using translations and horizontal reflections. Specifically, we first detect the bounding box of the human body and then incrementally cover more context outside the box with a fixed stride in eight directions (i.e., top/down, left/right, top-left/top-right, down-left/down-right). In addition, we enlarge the scale of the detected bounding box with three factors. Horizontal reflections are applied to all the cropped images. We then resize all these images to the network input size using nearest-neighbor interpolation. This increases the size of our training set by a large factor. Although the resulting training examples are highly inter-dependent, the data augmentation significantly increases the diversity of features, especially for predicting the active shape parameters.
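The augmentation scheme can be sketched as follows. The stride and scale factors are placeholders (the exact values are not reproduced here), and the final resize to the network input is omitted.

```python
import numpy as np

def augment_crops(image, box, stride=10, scales=(1.1, 1.2, 1.3)):
    """Generate augmented crops around a detected body box (x, y, w, h):
    shift the box by `stride` pixels in the eight directions, enlarge it by
    each scale factor about its center, and add horizontal reflections."""
    H, W = image.shape[:2]
    x, y, w, h = box
    boxes = [(x, y, w, h)]
    for dx in (-1, 0, 1):          # eight shifted copies
        for dy in (-1, 0, 1):
            if dx or dy:
                boxes.append((x + dx * stride, y + dy * stride, w, h))
    for s in scales:               # three enlarged copies
        cx, cy = x + w / 2, y + h / 2
        boxes.append((cx - s * w / 2, cy - s * h / 2, s * w, s * h))
    crops = []
    for bx, by, bw, bh in boxes:
        x0, y0 = max(0, int(bx)), max(0, int(by))
        x1, y1 = min(W, int(bx + bw)), min(H, int(by + bh))
        crop = image[y0:y1, x0:x1]
        crops += [crop, crop[:, ::-1]]  # original + horizontal reflection
    return crops

img = np.arange(100 * 100).reshape(100, 100)
crops = augment_crops(img, (30, 30, 40, 40))
# (1 original + 8 shifts + 3 scales) boxes, each with its flip -> 24 crops
```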
Implementation Details: Our two networks predict the masks and shape parameters of all labels. To learn the template dictionary for each label, we normalize the binary masks to a regularized size and set the number of templates per label to 50; the penalty λ for the NMF is set to a small constant. When a training image does not contain certain labels, we set the corresponding template coefficients and shape parameters to zeros. We implement the two networks in the Caffe framework and train them using stochastic gradient descent with mini-batches, momentum and weight decay. We use an equal learning rate for all layers and adjust it manually: the learning rate is divided by a constant factor whenever the validation error stops decreasing at the current rate. We train the networks for a fixed number of epochs, which takes a few days on one NVIDIA GTX TITAN 6GB GPU. Our algorithm processes one image within about a second on the same GPU. This compares favorably with current state-of-the-art approaches, which take from several seconds up to minutes per image.
4.2 Results and Comparisons
We compare our ATR framework with the two state-of-the-art works. We use their publicly available code, carefully tune the parameters, and train their models with the same training images as our method for fair comparison. Note that Dong et al. is not compared in this work because our experiments show that PaperDoll achieves better accuracy on the 229 test images of the Fashionista dataset than the accuracy reported for Dong et al. with the same label set. We implement two versions of our method: (1) "ATR (noSPR)", where the parsing results are obtained by maximizing over all confidence maps without Super-Pixel Refinement (SPR); and (2) "ATR", where we refine the parsing results with super-pixel smoothing. The results are listed in Table I.
The method of Yamaguchi et al. and PaperDoll, trained with the 456 training images of the public Fashionista dataset, achieve 35.78% and 37.54% average F1-score, respectively, when evaluated on our 1,000 test images. When trained with more data (6,000 images), the performance of the two baselines increases by 6.02% and 7.22%, respectively. However, our "ATR" significantly outperforms both baselines in average F1-score. Our method also gives a huge boost in foreground accuracy over both baselines, and obtains much higher precision (71.69% vs 37.54% for Yamaguchi et al. and 52.75% for PaperDoll) as well as higher recall (60.25% vs 51.05% and 49.43%, respectively). The pixel-level accuracy is also clearly increased. This verifies the effectiveness of our algorithm, even though it requires neither explicit definition of contextual relations nor complicated prior knowledge. "ATR (noSPR)" also achieves superior performance over the baselines, demonstrating that our networks can directly predict reasonable label masks without the low-level segmentation methods commonly used by all previous approaches. The improvement from "ATR (noSPR)" to "ATR" shows that super-pixel smoothing enables the parsing result to preserve more accurate boundary information. For fair comparison, we also report parsing results on the 229 test images of the Fashionista dataset: "ATR (test 229)" significantly outperforms "Yamaguchi et al. (6000 test 229)" and "PaperDoll (6000 test 229)" by 26.43% and 22.50% average F1-score, respectively. This indicates that our collected dataset contains much more realistic images, with challenging poses and occlusions, than the small Fashionista dataset.
We also present the per-label F1-scores in Table II. Generally, both versions of our method show much higher performance than the baselines. In predicting small labels such as hat, belt, bag and scarf, our method achieves a large gain, e.g., 57.07% vs 11.43% and 2.95% for scarf, and 53.66% vs 24.53% and 30.52% for bag. This demonstrates that our two networks capture the internal relations between labels and robustly predict label masks across various clothing styles and poses. A qualitative comparison of parsing results is visualized in Figure 9. Our methods predict much more reasonable and meaningful label masks than the PaperDoll method despite large appearance and position variations. We successfully predict small labels (e.g., sunglasses, hat) where PaperDoll often fails and confuses them with neighboring regions; for the left image of the third row in Figure 9, we detect the sunglasses and hat while PaperDoll misses them entirely. The parsing results of our methods are cleaner and the label masks bear strong semantic meanings, while the results of PaperDoll are heavily influenced by low-level information such as image clarity and color similarity. This demonstrates that our framework performs better at the high-level human parsing problem than models based on low-level features. Finally, comparing the results of "ATR (noSPR)" and "ATR", we find that "ATR" provides refined parsing results along region boundaries; for example, for the left image in the first row of Figure 9, "ATR" with super-pixel smoothing effectively fills the gaps between the shoes and the pants.
| | ATR (unified) | ATR (PCA) | ATR (NMF) | ATR (zeilernet) | ATR (lessfc) | ATR (lessfcfilters) | ATR (nopool) | ATR (noSPR) | ATR |
|---|---|---|---|---|---|---|---|---|---|
| Active Template output num | No | 850 | 850 | 850 | 850 | 850 | 850 | 850 | 850 |
| Active Shape output num | NA | 85 | 85 | 85 | 85 | 85 | 85 | 85 | 85 |
4.3 Ablation Studies of Our Networks
We further evaluate the effectiveness of the two components of ATR: the active template network and the active shape network.
Active Template Network: To justify using template coefficients rather than binary label masks, we test the reconstruction error of the dictionary learning, referred to as "Upperbound": the label masks are reconstructed using the ground-truth template coefficients and the learned dictionaries, with all active shape parameters fixed. Table I shows that "Upperbound" achieves in accuracy and in average precision. This demonstrates that representing the binary masks with the corresponding coefficients introduces very few reconstruction errors.
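The "Upperbound" reconstruction described above amounts to a linear combination of the learned nonnegative templates followed by binarization. A minimal sketch, where the 0.5 threshold and the array shapes are illustrative assumptions rather than the paper's exact settings:

```python
import numpy as np

def reconstruct_mask(dictionary, coeffs, threshold=0.5):
    """Reconstruct a binary label mask from its template dictionary
    and (predicted or ground-truth) template coefficients.
    dictionary: (n_pixels, n_templates) nonnegative template bases.
    coeffs:     (n_templates,) nonnegative template coefficients."""
    mask = dictionary @ coeffs            # additive combination of templates
    return (mask >= threshold).astype(np.uint8)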
We also evaluate other mask reconstruction approaches, "ATR (PCA)" and "ATR (NMF)". The results are listed in Table I and Table II, and Table III gives the details of the experimental settings. First, we use the Principal Component Analysis (PCA) method  for dictionary learning instead of NMF, named "ATR (PCA)". The same number of bases as in NMF (i.e., 50 for each label) is selected to construct the template dictionary. "ATR (PCA)" decreases the accuracy by and the average F1-score by , compared with "ATR". PCA can be viewed as an eigenvector-based multivariate analysis that projects the data onto a few principal components, and its reconstruction coefficients and basis vectors can be either negative or positive. In contrast, NMF learns part-based decompositions in which only additive combinations of templates are allowed, which is beneficial for our reconstruction. We also visualize the learned templates of each label in Figure 10. Most of the learned templates are well shaped and bear strong semantic meanings. In addition, the templates are diverse enough to capture the large variance of label masks. These results verify that nonnegative basis vectors provide more expressiveness in the reconstruction. Second, to evaluate the effect of different norms on the template coefficient prediction in Eq. (1), we use the ℓ1-norm for "ATR (NMF)" to yield sparser template coefficients. Even though the ℓ1-norm has shown promising results in image reconstruction  and is commonly used in a wide range of computer vision problems, its performance is inferior to "ATR", which uses the ℓ2-norm, i.e., 88.49% vs. 91.11% in accuracy. The possible reason is that our network can hardly predict optimal values for sparse coefficient vectors that contain too many zeros.
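The NMF dictionary learning discussed above can be sketched with the classic multiplicative updates of Lee and Seung, which keep both the template bases and the coefficients nonnegative throughout; this is an illustrative implementation, not the exact solver or sparsity penalty used in our experiments:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Minimal multiplicative-update NMF: factorize a nonnegative matrix
    V (pixels x masks) into W (pixels x k template bases) and
    H (k x masks coefficients), all entries kept nonnegative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update template bases
    return W, H
```

Because the updates are purely multiplicative, negative entries can never appear, which is exactly the part-based, additive-only property the text contrasts with PCA.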
Figure 8 visualizes the label masks predicted for semantic labels by our active template network. A brighter pixel in each mask indicates a larger probability of belonging to the specific label. Our network performs well in predicting the various shapes of the label masks. In particular, the predicted masks of "hat" and "hair" are highly consistent with the ground-truth masks, and the fine-grained shapes of each label can also be visually distinguished (e.g., long hair vs. short hair). For example, the third row in Figure 8 shows several scarves of different shapes. Even though the first scarf consists of two disconnected regions while the second is a single region, our network can actively predict their respective shapes.
Active Shape Network: In Table I and Table II, we also explore other model architectures for regressing the active shape parameters by gradually adjusting the layer sizes. We evaluate four architectures: 1) "ATR (zeilernet)", which follows the model architecture in ; 2) "ATR (lessfc)", where the sizes of the two fully-connected layers are reduced from the original 4096 to 2048 and 1024, respectively; 3) "ATR (lessfcfilters)", where the number of filter maps is halved and the fully-connected layers are resized as in "ATR (lessfc)"; 4) "ATR (nopool)", where the max-pooling layers are eliminated and the feature map size is instead gradually reduced by striding the convolution layers, i.e., our proposed active shape network. The performance of these settings is evaluated without the bounding-box refinement. In "ATR (lessfc)" and "ATR (lessfcfilters)", the model is trained from scratch with the architecture in . Please refer to Table III for more details of the experimental settings. "ATR (zeilernet)", which uses a model infrastructure that performs well in image classification , gives inferior performance to our network "ATR (nopool)" (88.59% vs. 91.01% in accuracy and 53.62% vs. 62.78% in average F1-score). The main reason may be that a model designed for classification is not optimal for predicting our shape parameters, which are sensitive to position variance. Besides, our dataset is much smaller than the ImageNet dataset, so large layer sizes may cause our model to over-fit. We therefore decrease the sizes of the fully-connected layers, since they contain the majority of the model parameters. The resulting accuracy and average F1-score of "ATR (lessfc)" increase significantly, by 1.57% and 6.88% respectively, compared to "ATR (zeilernet)". "ATR (lessfcfilters)", which halves the number of filter maps, yields slight performance improvements while largely decreasing the number of training parameters.
This suggests that a small number of filter maps is sufficient for training our model. Based on "ATR (lessfcfilters)", our final network "ATR (nopool)" eliminates the max-pooling operations so that more spatial information is retained in the first few layers. "ATR (nopool)" gives a large performance gain over "ATR (lessfcfilters)" (91.01% vs. 90.21% in accuracy and 62.78% vs. 60.77% in average F1-score). This verifies the effectiveness of eliminating max-pooling layers for position-sensitive problems. Moreover, we test the effectiveness of the bounding-box regression for obtaining better shape parameters by comparing the results of "ATR (nopool)" and "ATR": the bounding-box refinement improves the average F1-score of "ATR (nopool)" by using the fine-tuned active shape parameters of the semantic labels.
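The position insensitivity introduced by max-pooling, which motivates "ATR (nopool)", can be seen in a toy 1-D example: two activation patterns that differ only by a one-pixel shift become identical after pooling, so a regressor on the pooled features cannot recover that shift.

```python
def max_pool1d(x, size=2, stride=2):
    """Plain 1-D max pooling over non-overlapping windows, used here only
    to illustrate how pooling discards small positional shifts."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, stride)]

# Two activation patterns that differ only by a one-pixel shift...
a = [0.0, 1.0, 0.0, 0.0]
b = [1.0, 0.0, 0.0, 0.0]
# ...map to the same pooled output, losing the location of the peak.
```

Strided convolutions also reduce the feature-map size, but their outputs remain a function of exact input positions, which is what the shape-parameter regression needs.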
Discussion: We also evaluate training one unified network to regress the template coefficients and active shape parameters together. The "ATR (unified)" version follows the network infrastructure in  and predicts all the structured outputs jointly; more details are given in Table III. Its results in Table II are much worse than those of all other versions, especially "ATR" (84.95% vs. 91.11% in accuracy and 38.62% vs. 64.38% in average F1-score). The reason for the inferiority of the unified network may be that learning template coefficients and learning active shape parameters are effectively two different tasks that require different network architectures, as we design. The first task, with max-pooling, essentially selects the most appropriate templates for reconstructing label masks from the template dictionaries, while the second, without max-pooling, aims at predicting precise locations. Notably, our framework with two separate networks shows a significant performance improvement over previous work  (an increase of 19.62% in average F1-score). A network that regresses active template coefficients and shape parameters together might further improve performance by incorporating the complicated contextual interactions between label masks and their spatial layouts, but our experiment shows that directly combining the two kinds of structured outputs does not work well for human parsing. In future work, we will explore how to design a more effective architecture to combine these two networks.
In this work, we formulate the human parsing task as an Active Template Regression problem. Two separate convolutional neural networks, the active template network and the active shape network, are designed to build an end-to-end relation between the input image and the structured outputs. The first network uses max-pooling and predicts the mask template coefficients, while the second omits max-pooling, for position sensitivity, and predicts the active shape parameters. Extensive experimental results clearly demonstrate the effectiveness of the proposed ATR framework. In the future, we plan to further explore how to adequately utilize low-level information (e.g., edges and super-pixels). In addition, we will integrate the fine-grained attributes of each semantic label into our framework. Finally, we will build a website with a user interface so that users can upload their own photos and obtain the parsing result within one second. Our framework can also be easily extended to generic image parsing (e.g., scene parsing or human pose estimation) by utilizing area-specific active templates.
This work is supported by the National Natural Science Foundation of China (No. 61328205), the Microsoft Research Asia collaboration projects, the Guangdong Natural Science Foundation (No. S2013050014548), the Program of Guangzhou Zhujiang Star of Science and Technology (No. 2013J2200067), and the Special Project on the Integration of Industry, Education and Research of Guangdong (No. 2012B091000101).
-  João Carreira, Rui Caseiro, Jorge Batista, and Cristian Sminchisescu. Semantic segmentation with second-order pooling. In European Conference on Computer Vision, pages 430–443. 2012.
-  Joao Carreira and Cristian Sminchisescu. Cpmc: Automatic object segmentation using constrained parametric min-cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(7):1312–1328, 2012.
-  Hong Chen, Zijian Xu, Ziqiang Liu, and Song Chun Zhu. Composite templates for cloth modeling and sketching. In Computer Vision and Pattern Recognition, pages 943–950, 2006.
-  Huizhong Chen, Andrew Gallagher, and Bernd Girod. Describing clothing by semantic attributes. In European Conference on Computer Vision, pages 609–623. 2012.
-  Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEE Transactions on pattern analysis and machine intelligence, 23(6):681–685, 2001.
-  Timothy F. Cootes, Christopher J. Taylor, David H. Cooper, and Jim Graham. Active shape models-their training and application. Computer Vision and Image Understanding, 61(1):38–59, 1995.
-  Matthias Dantone, Juergen Gall, Christian Leistner, and Luc Van Gool. Human pose estimation using body parts dependent joint regressors. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 3041–3048, 2013.
-  Jian Dong, Qiang Chen, Wei Xia, ZhongYang Huang, and Shuicheng Yan. A deformable mixture parsing model with parselets. In International Conference on Computer Vision, 2013.
-  Clément Farabet, Camille Couprie, Laurent Najman, and Yann LeCun. Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1915–1929, 2013.
-  Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Efficient graph-based image segmentation. International Journal of Computer Vision, 59(2):167–181, 2004.
-  Brian Fulkerson, Andrea Vedaldi, and Stefano Soatto. Class segmentation and object localization with superpixel neighborhoods. In International Conference on Computer Vision, 2009.
-  Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition, 2014.
-  Varun Gulshan, Carsten Rother, Antonio Criminisi, Andrew Blake, and Andrew Zisserman. Geodesic star convexity for interactive image segmentation. In Computer Vision and Pattern Recognition, pages 3129–3136, 2010.
-  Yangqing Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013.
-  Ian Jolliffe. Principal component analysis. Encyclopedia of Statistics in Behavioral Science, 2002.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
-  Daniel D. Lee and H. Sebastian Seung. Learning the parts of objects by nonnegative matrix factorization. Nature, 401:788–791, 1999.
-  Liang Lin, Xiaolong Wang, Wei Yang, and J Lai. Discriminatively trained and-or graph models for object shape detection. IEEE Transactions on pattern analysis and machine intelligence, 2014.
-  Si Liu, Jiashi Feng, Csaba Domokos, Hui Xu, Junshi Huang, Zhenzhen Hu, and Shuicheng Yan. Fashion parsing with weak color-category labels. IEEE Transactions on Multimedia, 16(1):253–265, 2014.
-  Si Liu, Jiashi Feng, Zheng Song, Tianzhu Zhang, Hanqing Lu, Changsheng Xu, and Shuicheng Yan. Hi, magic closet, tell me what to wear! In Proceedings of the 20th ACM international conference on Multimedia, pages 619–628. ACM, 2012.
-  Si Liu, Zheng Song, Guangcan Liu, Changsheng Xu, Hanqing Lu, and Shuicheng Yan. Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set. In Computer Vision and Pattern Recognition, pages 3330–3337, 2012.
-  Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online learning for matrix factorization and sparse coding. The Journal of Machine Learning Research, 11:19–60, 2010.
-  Yigang Peng, Arvind Ganesh, John Wright, Wenli Xu, and Yi Ma. Rasl: Robust alignment by sparse and low-rank decomposition for linearly correlated images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2233–2246, 2012.
-  Pedro H. O. Pinheiro and Ronan Collobert. Recurrent convolutional neural networks for scene labeling. In International conference on Machine Learning, pages 82–90, 2014.
-  Jamie Shotton, Matthew Johnson, and Roberto Cipolla. Semantic texton forests for image categorization and segmentation. In Computer vision and pattern recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
-  Christian Szegedy, Alexander Toshev, and Dumitru Erhan. Deep neural networks for object detection. In Advances in Neural Information Processing Systems, pages 2553–2561, 2013.
-  Alexander Toshev and Christian Szegedy. Deeppose: Human pose estimation via deep neural networks. Computer Vision and Pattern Recognition, 2014.
-  K. Yamaguchi, M.H. Kiapour, and T.L. Berg. Paper doll parsing: Retrieving similar styles to parse clothing items. In International Conference on Computer Vision, 2013.
-  K. Yamaguchi, M.H. Kiapour, L.E. Ortiz, and T.L. Berg. Parsing clothing in fashion photographs. In Computer Vision and Pattern Recognition, pages 3570–3577, 2012.
-  Wei Yang, Ping Luo, and Liang Lin. Clothing co-parsing by joint image segmentation and labeling. In Computer Vision and Pattern Recognition, 2014.
-  Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. arXiv preprint arXiv:1311.2901, 2013.