Recently, human parsing [Liu et al.2015] has been receiving increasing attention owing to its wide applications, such as person re-identification [Zhao, Ouyang, and Wang2014], people search [Li et al.2017], and fashion synthesis [Zhu et al.].
Existing human parsing algorithms can be divided into the following two categories. The first category is constrained human parsing. More specifically, clean images of well-posed persons are collected from fashion sharing websites, e.g., Chictopia.com, for training and testing. Representative datasets include the Fashionista dataset [Yamaguchi et al.2012] with 685 images, the Colorful-Fashion dataset [Liu et al.2014], and the ATR dataset [Liang et al.2015a]. Each image in these datasets contains only one person, with relatively simple poses (mostly standing), against relatively clean backgrounds. Human parsers trained in such a strictly constrained scenario often fail when applied to images captured in more complicated real-life environments. The second category is unconstrained human parsing. Representative datasets include the PASCAL human part dataset [Chen et al.2014b] and the LIP dataset [Gong et al.2017]. The images in these datasets present humans with varying clothing appearances, strong articulation, partial (self-) occlusions, truncation at image borders, diverse viewpoints and background clutter. Although they are closer to real environments than the constrained datasets, when a human parser trained on these unconstrained datasets is applied to a particular application scenario, such as a shop or an airport, its performance is still worse than that of a parser trained on that particular scenario, even when the latter uses far fewer training samples, due to the domain shift in visual features.
In this paper, we explore a new cross-domain human parsing problem: taking an unconstrained benchmark dataset with rich pixel-wise labeling as the source domain, how can we obtain a satisfactory parser for a totally different target domain without any additional manual labeling? As shown in Figure 1, the source domain (upper panel of Figure 1) is a set of labeled data, while the target domain training set (lower panel of Figure 1) is a set of images without any annotations. We believe investigation of this challenging problem will push human parsing models toward more practical applications.
From Figure 1, we observe the following differences and commonalities across two domains, e.g., the source domain and the first target domain, canteen. On the one hand, they differ greatly in illumination, viewpoint, image scale, resolution, degree of motion blur, etc. For example, the lighting in the canteen scenario is much darker than in the source domain. On the other hand, the persons to parse in both domains share intrinsic commonalities; in particular, the high-order relationships among labels (reflecting human body structure) are similar. For example, in both domains, the arms are below the head but above the legs. Therefore, the cross-domain human parsing problem can be solved by minimizing the differences between the features and maximizing the commonality of the structured labels.
A typical semantic segmentation network [Long, Shelhamer, and Darrell2015, Chen et al.2014a] is composed of a feature extractor and a pixel-wise labeler. In this work, we propose to introduce a new, learnable feature compensation network that transforms the features from different domains into a common space where the cross-domain difference can be effectively alleviated. In this way, the pixel-wise labeler can be readily applied to perform parsing on the compensated features. The feature compensation network is trained under the joint supervision of a feature adversarial network and a structured label adversarial network. More specifically, the feature adversarial network serves as a supervisor and guides the feature compensation learning, like the discriminator of a Generative Adversarial Network (GAN) [Goodfellow et al.2014, Radford, Metz, and Chintala2015]. It is trained to differentiate target and compensated source feature representations. Similarly, the structured label adversarial network differentiates the ground-truth structured labels and the predicted target domain labels. Supervised by these two levels of information, the cross-domain shift issue can be effectively addressed. We evaluate our approach using LIP [Gong et al.2017] as the source domain and four other datasets as target domains. Extensive experiments demonstrate the effectiveness of our method on all of the domain adaptation tasks.
The contributions of this paper can be summarized as follows. Firstly, we are the first to explore the cross-domain human parsing problem. Since no manual labeling in the target domain is needed, it is highly practical. Secondly, we propose a cross-domain human parsing framework with novel feature adaptation and structured label adaptation networks. It is the first cross-domain work to consider both feature invariance and label structure regularization. Thirdly, we will release the source code of our implementation to the academic community to facilitate future studies.
Human parsing and cross-domain feature transformation have been studied for decades. However, they have generally developed independently, and few works consider solving cross-domain human parsing by addressing these two directions jointly. In this section, we briefly review recent techniques on human parsing as well as feature adaptation.
Human parsing: Yamaguchi et al. [Yamaguchi, Kiapour, and Berg2013] tackle the clothing parsing problem using a retrieval-based approach. Simo-Serra et al. [Simo-Serra et al.2014] propose a Conditional Random Field (CRF) model that is able to leverage many different image features. Luo et al. [Luo, Wang, and Tang2013] propose a Deep Decompositional Network for parsing pedestrian images into semantic regions. Liang et al. [Liang et al.2015b] propose a Contextualized Convolutional Neural Network to tackle the problem and achieve very impressive results. Xia et al. [Xia et al.2015] propose the "Auto-Zoom Net" for human parsing. Wei et al. [Wei et al.2016, Wei et al.2017] propose several weakly supervised parsing methods to reduce the human labeling burden. Existing human parsing methods work well on the benchmark datasets. However, when applied to new application scenarios, their performance is unsatisfactory. The cross-domain human parsing problem is therefore significant for making the technology practical.
Feature Adaptation: There have been extensive prior works on domain transfer learning [Gretton et al.2009]. Recent works have focused on transferring deep neural network representations from a labeled source dataset to a target domain where labeled data is sparse. In the case of unlabeled target domains (the focus of this paper), the main strategy has been to guide feature learning by minimizing the differences between the source and target feature distributions [Ganin and Lempitsky2015, Liu and Tuzel2016, Long et al.2015]. Different from existing feature adaptation methods, we explicitly model the cross-domain differences via a feature compensation network.
Structured Label Adaptation:
Few works consider label structure adaptation during domain adaptation. Some pioneering pose estimation works take the geometric constraints of human joint connectivity into consideration. For example, Chen et al. [Chen et al.2017] propose Adversarial PoseNet, a structure-aware convolutional network that implicitly takes such priors into account during training of the deep network. Chou et al. [Chou, Chien, and Chen2017] employ GANs as the pose estimator, which enables learning plausible human body configurations. Our proposed cross-domain human parsing method differs from existing domain adaptation methods in that we consider both feature and structured label adaptation simultaneously.
Proposed Cross-domain Adaptation Model
Suppose the source domain images and labels are denoted as X_s and Y_s respectively, and the target domain images are represented as X_t. Typical human parsing models are composed of a feature extractor E and a pixel-wise labeler C. However, a parsing model trained on the source domain does not perform well in the target domain in the presence of significant domain shift.
Our proposed cross-domain adaptation model to address this issue is shown in Figure 2. It includes a novel feature compensation component supervised by two components, namely the adversarial feature adaptation and adversarial structured label adaptation components. The feature adaptation component aims to minimize the feature differences between the domains, while the structured label adaptation component maximizes the label map commonalities across the domains. The whole model therefore introduces three novel components (shown in purple rectangles) on top of conventional human parsing models: a feature compensation network F, a feature adversarial network D_f and a structured label adversarial network D_l, to address the cross-domain human parsing problem. Next, we introduce the two adversarial learning components and explain how they help feature compensation mitigate the domain difference.
Adversarial Feature Adaptation
The feature compensation network F and the feature adversarial network D_f collaboratively contribute to the feature adaptation. F maps the feature representation of the source domain toward the target domain under the supervision of D_f. Alternately updating them gradually narrows the cross-domain feature differences.
The feature compensation network F, as shown in Figure 2, takes as input the features E_1(x_s) extracted from the source domain, where E_1 is the early part of the feature extractor E. Its output is the feature difference (shift) between the source and target domains. E is composed of the convolutional layers of the VGG-16 net [Simonyan and Zisserman2014], from conv1 to pool5; the first several layers (from conv1 to pool1) form E_1. The feature compensation network is a ResNet-like [He et al.2016] network with a 7×7 convolution filter followed by 6 residual blocks of identical layout, each consisting of two 3×3 convolution filters, which reduce the feature map size. The output of the feature compensation network is added pixel-wise to that of the feature extractor to produce the compensated source domain feature.
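As an illustration, the compensation network described above might be sketched in PyTorch as follows. This is a simplified, shape-preserving sketch: the channel width is configurable (the exact widths are not stated here), and unlike the blocks described above it does not downsample the feature maps.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two 3x3 convolutions with an identity skip connection, ResNet-style.
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class FeatureCompensation(nn.Module):
    # A 7x7 entry convolution followed by 6 identical residual blocks.
    def __init__(self, channels=64):
        super().__init__()
        layers = [nn.Conv2d(channels, channels, kernel_size=7, padding=3)]
        layers += [ResidualBlock(channels) for _ in range(6)]
        self.net = nn.Sequential(*layers)

    def forward(self, feat):
        # The output is the compensation (shift); the caller adds it
        # pixel-wise to the feature extractor's output.
        return self.net(feat)
```

In this sketch the predicted shift has the same shape as its input, so the pixel-wise addition is well-defined by construction.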
The feature adversarial network D_f is introduced to guide the cross-domain feature adaptation. Different from traditional adversarial learning models (e.g., the vanilla GAN [Goodfellow et al.2014]) that perform judgment over raw images, our proposed feature adversarial network is defined upon the high-level feature space (pool5), which incorporates the essential feature information for human parsing. This accelerates both training and inference. The architecture of D_f is composed of the same fc6-fc7 layers as the Atrous Spatial Pyramid Pooling (ASPP) scheme in DeepLab [Chen et al.2016]. Then a convolution layer with 3×3 filters is appended to create a 1-channel probability map, which is used to calculate the pixel-wise least-squares feature adversarial loss, as in the local LSGANs [Shrivastava et al.2016].
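A minimal stand-in for such a feature discriminator might look as follows. This is a sketch, not the described network: the ASPP fc6-fc7 layers are replaced here with two plain convolutions (one dilated), so the layer widths and dilation rate are assumptions; only the final 3×3 convolution to a 1-channel score map follows the description above.

```python
import torch
import torch.nn as nn

class FeatureDiscriminator(nn.Module):
    # Simplified stand-in for the feature adversarial network: two
    # convolutions (the first dilated) in place of the ASPP fc6-fc7
    # layers, then a 3x3 convolution producing a 1-channel map that is
    # scored with a pixel-wise least-squares loss.
    def __init__(self, in_ch, hidden=256):  # hidden width is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, feat):
        # One real/fake score per spatial location of the feature map.
        return self.net(feat)
```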
The optimization of F and D_f is iterative. More specifically, the objective for updating D_f is

L_{D_f} = E_{x_t}[(D_f(E(x_t)) - 1)^2] + E_{x_s}[(D_f(E(x_s) + F(E_1(x_s))))^2],

where D_f regresses the features of the target domain to 1 while regressing the features of the compensated source domain to 0. It distinguishes the target features from the compensated source domain features, while the feature compensation network tries to make them indistinguishable.
The learning target of the feature compensation network F is to mitigate the difference between source and target features. It is trained by optimizing the following objective function:

L_F = E_{x_s}[(D_f(E(x_s) + F(E_1(x_s))) - 1)^2].
The target of F is to transform the source domain features into ones similar to the target domain by trying to confuse D_f. More concretely, F tries to generate features that persuade D_f to predict that they come from the target domain (binary prediction of 1). It implicitly maps the source domain features toward the target domain by encoding lighting conditions, environmental factors, etc. By iteratively boosting the abilities of F and D_f through alternating training, the gap between the two domains is gradually narrowed.
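The two least-squares objectives described above, with target 1 for the target domain and target 0 for the compensated source domain, can be written down directly. In this sketch the arguments stand for the discriminator's pixel-wise score maps, already computed:

```python
import numpy as np

def d_feat_loss(d_target, d_compensated):
    # Discriminator side: regress scores on target-domain features to 1
    # and scores on compensated source-domain features to 0.
    return 0.5 * np.mean((d_target - 1.0) ** 2) + 0.5 * np.mean(d_compensated ** 2)

def f_comp_loss(d_compensated):
    # Compensation side: fool the discriminator into predicting 1
    # ("target domain") on compensated source features.
    return 0.5 * np.mean((d_compensated - 1.0) ** 2)
```

A perfectly fooled discriminator (all scores 1 on compensated features) drives the compensation loss to zero, while a perfect discriminator drives its own loss to zero; alternating updates play these two objectives against each other.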
Adversarial Structured Label Adaptation
Feature compensation alone cannot fully utilize the valuable information about human body structure and leads to suboptimal parsing performance. Therefore, we also propose a structured label adversarial network D_l that learns to capture the commonalities of parsing labels across domains. Such information is learnable from the source domain data for the following reasons. Firstly, the labels have very strong spatial priors. For example, in daily-life scenarios, the head almost always lies at the top, while the shoes appear at the bottom in most cases. Moreover, the relative positions between labels are consistent across domains. For example, the arms lie on both sides of the body, while the head is at the top of the body. Finally, the part shapes of certain labels are basically similar in both domains. For example, faces are usually round or oval while legs are often long stripes. The pixel-wise labeler C and the structured label adversarial network D_l collaboratively adapt the structured label prediction.
The pixel-wise labeler C is composed of the final classification layers of DeepLab [Chen et al.2016], a fully convolutional variant of the VGG-16 net [Simonyan and Zisserman2014] that uses atrous (dilated) convolutions to increase the field-of-view. Depending on the properties of the input, two losses are defined upon the network.
L_src: The pixel-wise cross-entropy loss defined on the source domain images X_s and labels Y_s.
L_comp: The pixel-wise cross-entropy loss defined on the compensated source domain features and the labels Y_s.
The structured label adversarial network D_l is used to distill the high-order relationships of the labels from the source domain ground-truth pixel-wise labels and transfer them to guide parsing of the target domain images. The architecture of D_l is as follows. LeakyReLU activations and batch normalization are used for all layers except the output. All layers use stride-2 convolutions except the last layer, which uses a single stride-1 convolution to produce the confidence map. All convolution filters in the network are 5×5.
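The description above can be sketched in PyTorch as follows; the number of layers and the channel widths are assumptions, since only the filter size, strides, and activation/normalization choices are specified.

```python
import torch
import torch.nn as nn

class LabelDiscriminator(nn.Module):
    # Stride-2 5x5 convolutions with BatchNorm + LeakyReLU, ending in a
    # single stride-1 5x5 convolution that produces a one-channel
    # confidence map. The widths tuple is an assumption.
    def __init__(self, in_ch, widths=(64, 128, 256)):
        super().__init__()
        layers, prev = [], in_ch
        for w in widths:
            layers += [
                nn.Conv2d(prev, w, kernel_size=5, stride=2, padding=2),
                nn.BatchNorm2d(w),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            prev = w
        # Output layer: stride 1, no normalization or activation.
        layers.append(nn.Conv2d(prev, 1, kernel_size=5, stride=1, padding=2))
        self.net = nn.Sequential(*layers)

    def forward(self, label_map):
        # label_map: a ground-truth or predicted per-class score map.
        return self.net(label_map)
```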
The optimization is conducted jointly through a minimax scheme that alternates between optimizing the parsing network and the adversarial network. D_l takes either the ground-truth label map or the predicted parsing result, and outputs the probability that the input is the ground truth (with training target 1) rather than the segmentation network's prediction (with training target 0). The learning target is

L_{D_l} = E_{y_s}[(D_l(y_s) - 1)^2] + E_{x}[(D_l(C(E(x))))^2].
D_l can help refine the feature extractor E and the pixel-wise labeler C via:

L_{adv} = E_{x}[(D_l(C(E(x))) - 1)^2].
Both E and C collaboratively try to confuse D_l into producing the output 1, i.e., into judging that the parsing results are drawn from the ground-truth labels.
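In the same least-squares form as the feature adaptation, the label-side objectives can be sketched as below; the balancing weight `lam` is an assumption (the constant used in the experiments is not reproduced here), and the arguments again stand for already-computed discriminator score maps.

```python
import numpy as np

def d_label_loss(d_gt, d_pred):
    # Discriminator side: score ground-truth label maps as 1 and
    # predicted label maps as 0.
    return 0.5 * np.mean((d_gt - 1.0) ** 2) + 0.5 * np.mean(d_pred ** 2)

def parser_loss(ce_loss, d_pred, lam=0.01):
    # Parser side: pixel-wise cross-entropy plus the adversarial term
    # that pushes the discriminator's score on predictions toward 1.
    # lam is an assumed balancing weight.
    return ce_loss + lam * 0.5 * np.mean((d_pred - 1.0) ** 2)
```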
Model Learning and Inference
Training details of the integrated cross-domain human parsing framework are summarized in Algorithm 1. Generally speaking, all the model parameters are alternately updated. Note that before every update of one component, the remaining networks are updated multiple times. Experiments show that this asymmetric update schedule facilitates model convergence.
During inference, the parsing label of a test sample x is obtained by C(E(x)). Note that the feature compensation network and the two adversarial networks are not involved in the inference stage. Therefore, the complexity of our algorithm is the same as that of conventional human parsing methods.
In terms of the architecture of the adversarial networks, we originally tried DCGANs [Radford, Metz, and Chintala2015]. However, we found them difficult to optimize (convergence issues) and their performance unsatisfactory. Therefore, we borrow the architecture of Least Squares Generative Adversarial Networks (LSGANs) [Mao et al.2016] to build our adversarial learning networks, which adopt a least-squares loss function for the discriminator and are more stable during learning. For the feature adversarial network, the adversarial loss is defined pixel-wise on the 2-D feature maps; the local LSGANs structure [Shrivastava et al.2016] can enhance the capacity of the network. The situation is similar for the structured label adversarial network.
| Methods | Avg. acc | Fg. acc | Avg. pre | Avg. rec | Avg. F1 |
| --- | --- | --- | --- | --- | --- |
| Feat. + Lab. Adapt | 87.98 | 73.86 | 50.84 | 54.49 | 51.73 |
| Methods | Avg. acc | Fg. acc | Avg. pre | Avg. rec | Avg. F1 |
| --- | --- | --- | --- | --- | --- |
| Feat. + Lab. Adapt | 87.88 | 65.87 | 64.08 | 65.97 | 64.36 |
We conduct extensive experiments to evaluate the performance of our model in four cross-domain human parsing scenarios.
Source Domain: We use the LIP dataset [Gong et al.2017] as the source domain; it contains more than 50,000 images with careful pixel-wise annotations of semantic human parts. These images are collected from real-world scenarios, and the persons present challenging poses and views, heavy occlusions, various appearances and low resolutions. The original labels are merged or discarded to be consistent with the label sets of the target domains.
Target Domain: The following four target domains are investigated in this paper. Some example images from these target domains are shown in Figure 3.
Indoor dataset [Liu et al.2016] contains images labeled with semantic human part annotations as well as unlabeled images. The images are captured in a canteen by surveillance cameras and have dim lighting.
Daily Video dataset is a newly collected dataset, containing images labeled with semantic human part annotations as well as unlabeled images. These images are collected from a variety of scenes including shops, roads, etc.
PridA and PridB datasets are selected from camera view A and camera view B of the Person Re-ID 2011 dataset [Roth et al.2014], which consists of images extracted from multiple person trajectories recorded by two different static surveillance cameras.
Baseline & Evaluation: The proposed method is compared with the following baseline methods.
Target Only: Since all of our target domains have pixel-level annotations, we train and test the parsing model directly on the target domains. We take these results, derived with access to full supervision, as the performance upper bound for cross-domain parsing models. In the following experiments, the basic model is the same as our feature extraction and label prediction networks.
Source Only: We apply the model trained on the source domain directly to the target domain, without any fine-tuning on the target domain datasets. It provides a valid performance lower bound for the cross-domain methods.
DANN: A few works investigate cross-domain learning problems following the adversarial learning strategy. Here, we take the most competitive one, proposed in [Ganin et al.2016], which resolves cross-domain classification problems. DANN uses an adversarial network to make the features extracted from the source and target domains indistinguishable. The feature extraction network is shared for images from both domains. We adapt this method to the human parsing problem.
| Methods | Avg. acc | Fg. acc | Avg. pre | Avg. rec | Avg. F1 |
| --- | --- | --- | --- | --- | --- |
| Feat. + Lab. Adapt | 87.24 | 80.81 | 74.76 | 82.32 | 77.92 |
| Methods | Avg. acc | Fg. acc | Avg. pre | Avg. rec | Avg. F1 |
| --- | --- | --- | --- | --- | --- |
| Feat. + Lab. Adapt | 86.26 | 82.39 | 75.20 | 81.62 | 77.89 |
| Feat. + Lab. Adapt | 94.49 | 56.73 | 67.86 | 78.81 | 42.79 | 42.64 | 78.97 | 36.25 | 22.86 | 47.00 | 25.32 | 27.00 |
| Feat. + Lab. Adapt | 95.38 | 70.88 | 57.11 | 73.04 | 57.05 | 53.92 | 73.34 | 64.80 | 64.73 | 64.80 | 48.34 | 48.97 |
For ablation studies, we consider three variants of our method to evaluate the contribution of each sub-network. Feat. Adapt: our method with the Feature Adversarial network alone. Lab. Adapt: our method with the Structured Label Adversarial network alone. Feat. + Lab. Adapt: our method with both the Feature Adversarial network and the Structured Label Adversarial network.
We adopt five popular evaluation metrics, i.e., accuracy, foreground accuracy, average precision, average recall, and average F-1 score over pixels [Yamaguchi, Kiapour, and Berg2013]. All these scores are obtained on the testing sets of the target domains. The annotations of the target domains are only used in the "Target Only" method.
Implementation details: The feature extractor E and the pixel-wise labeler C use the DeepLab model, pre-trained on PASCAL VOC. The other networks are initialized with a normal distribution.
During training of the feature adversarial adaptation component, the "Adam" optimizer is used with a learning rate of 1e-5. When training the structured label adaptation component, we also use the "Adam" optimizer, with a learning rate of 1e-8. The remaining networks are optimized via the "SGD" optimizer with a momentum of 0.9, a learning rate of 1e-8 and a weight decay of 0.0005. The whole framework is implemented in PyTorch. The experiments are run on a single NVIDIA GeForce GTX TITAN X GPU with 12GB memory.
| Feat. + Lab. Adapt | 92.83 | 72.01 | 76.72 | 70.14 |
| Feat. + Lab. Adapt | 91.59 | 72.71 | 78.27 | 68.99 |
From these results, we observe that the "Feat. + Lab. Adapt" method consistently outperforms the "Source Only" and "DANN" methods in "Avg. F-1", which verifies the effectiveness of the proposed cross-domain method. Note that the "Avg. F-1" score of "Feat. + Lab. Adapt" is even higher than that of "Target Only" on the Daily Video dataset. We believe the reason is that the number of images in the training set of this dataset is quite limited, and our proposed model is effective at transferring useful knowledge to address this sample-insufficiency issue. Besides, "Feat. Adapt" performs better than "Lab. Adapt" on the Indoor, PridA, and PridB datasets. This stems from the fact that the features output by the "pool5" layer contain rich characteristics, so the adversarial network on these features has more influence on the overall performance.
Some qualitative comparisons on the four target domains are shown in Figure 3.
For the Indoor dataset, back-view persons appear more frequently and the illumination is poor. Therefore, the predictions of left and right arms/shoes are often incorrect, and hair may be mis-predicted as background as well. For the persons in the 1st and 3rd rows of the Indoor dataset, the left and right arms are confused by "Source Only". DANN performs slightly better, but our model is able to predict the left and right arms correctly. The hair of the second person is missed by both the "Source Only" and "DANN" methods, due to the dim lighting of the image. The dress of the 4th person looks smaller because the camera is mounted much higher than the person, so the "Source Only" and "DANN" methods wrongly predict it as upper clothes.
For the Daily Video dataset, cameras are placed at general positions but the poses of the persons are more challenging. People usually appear in frontal view, but they are often moving fast, e.g., the 2nd person, or under nonuniform illumination, e.g., the 3rd and 4th persons. In these cases, the proposed model performs better, benefiting from the structure adversarial network. Our method also performs better in predicting the classes of clothes, e.g., for the 1st person.
The resolutions of the PridA and PridB datasets are very low. As shown in Figure 3, our model and its variants also win in predicting the details of the persons.
In this paper, we explored a new cross-domain human parsing problem: making use of a benchmark dataset with extensive labeling, how can we build a human parser for a new scenario without additional labels? To this end, an adversarial feature and structured label adaptation method was developed to learn to minimize the cross-domain feature differences and maximize the label commonalities across the two domains. In the future, we plan to explore unsupervised domain adaptation when the target domain consists of unlabeled videos, which provide rich temporal context that can facilitate cross-domain adaptation. Moreover, we would like to try other types of GANs, such as WGAN [Arjovsky, Chintala, and Bottou2017], in our network.
This work was supported by the Natural Science Foundation of China (Grant U1536203, Grant 61572493), the Open Project Program of the Jiangsu Key Laboratory of Big Data Analysis Technology, and the Fundamental Theory and Cutting Edge Technology Research Program of the Institute of Information Engineering, CAS (Grant No. Y7Z0241102 and Grant No. Y6Z0021102).
- [Arjovsky, Chintala, and Bottou2017] Arjovsky, M.; Chintala, S.; and Bottou, L. 2017. Wasserstein gan. arXiv:1701.07875.
- [Chen et al.2014a] Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2014a. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv:1412.7062.
- [Chen et al.2014b] Chen, X.; Mottaghi, R.; Liu, X.; Fidler, S.; Urtasun, R.; and Yuille, A. 2014b. Detect what you can: Detecting and representing objects using holistic models and body parts. In CVPR.
- [Chen et al.2016] Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2016. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv:1606.00915.
- [Chen et al.2017] Chen, Y.; Shen, C.; Wei, X.-S.; Liu, L.; and Yang, J. 2017. Adversarial posenet: A structure-aware convolutional network for human pose estimation. arXiv:1705.00389.
- [Chou, Chien, and Chen2017] Chou, C.-J.; Chien, J.-T.; and Chen, H.-T. 2017. Self adversarial training for human pose estimation. arXiv:1707.02439.
- [Ganin and Lempitsky2015] Ganin, Y., and Lempitsky, V. 2015. Unsupervised domain adaptation by backpropagation. In ICML.
- [Ganin et al.2016] Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; and Lempitsky, V. 2016. Domain-adversarial training of neural networks. JMLR.
- [Gong et al.2017] Gong, K.; Liang, X.; Shen, X.; and Lin, L. 2017. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. arXiv:1703.05446.
- [Goodfellow et al.2014] Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In NIPS.
- [Gretton et al.2009] Gretton, A.; Smola, A. J.; Huang, J.; Schmittfull, M.; Borgwardt, K. M.; and Schölkopf, B. 2009. Covariate shift by kernel mean matching.
- [He et al.2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR.
- [Li et al.2017] Li, S.; Xiao, T.; Li, H.; Zhou, B.; Yue, D.; and Wang, X. 2017. Person search with natural language description. arXiv:1702.05729.
- [Liang et al.2015a] Liang, X.; Liu, S.; Shen, X.; Yang, J.; Liu, L.; Dong, J.; Lin, L.; and Yan, S. 2015a. Deep human parsing with active template regression. TPAMI.
- [Liang et al.2015b] Liang, X.; Xu, C.; Shen, X.; Yang, J.; Liu, S.; Tang, J.; Lin, L.; and Yan, S. 2015b. Human parsing with contextualized convolutional neural network. ICCV.
- [Liu and Tuzel2016] Liu, M.-Y., and Tuzel, O. 2016. Coupled generative adversarial networks. In NIPS.
- [Liu et al.2014] Liu, S.; Feng, J.; Domokos, C.; Xu, H.; Huang, J.; Hu, Z.; and Yan, S. 2014. Fashion parsing with weak color-category labels. TMM.
- [Liu et al.2015] Liu, S.; Liang, X.; Liu, L.; Shen, X.; Yang, J.; Xu, C.; Lin, L.; Cao, X.; and Yan, S. 2015. Matching-cnn meets knn: Quasi-parametric human parsing. In CVPR.
- [Liu et al.2016] Liu, S.; Wang, C.; Qian, R.; Yu, H.; and Bao, R. 2016. Surveillance video parsing with single frame supervision. arXiv:1611.09587.
- [Long et al.2015] Long, M.; Cao, Y.; Wang, J.; and Jordan, M. 2015. Learning transferable features with deep adaptation networks. In ICML.
- [Long, Shelhamer, and Darrell2015] Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. CVPR.
- [Luo, Wang, and Tang2013] Luo, P.; Wang, X.; and Tang, X. 2013. Pedestrian parsing via deep decompositional network. In ICCV.
- [Mao et al.2016] Mao, X.; Li, Q.; Xie, H.; Lau, R. Y.; Wang, Z.; and Smolley, S. P. 2016. Least squares generative adversarial networks. arXiv:1611.04076.
- [Radford, Metz, and Chintala2015] Radford, A.; Metz, L.; and Chintala, S. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434.
- [Roth et al.2014] Roth, P. M.; Hirzer, M.; Koestinger, M.; Beleznai, C.; and Bischof, H. 2014. Mahalanobis distance learning for person re-identification. In Person Re-Identification.
- [Shrivastava et al.2016] Shrivastava, A.; Pfister, T.; Tuzel, O.; Susskind, J.; Wang, W.; and Webb, R. 2016. Learning from simulated and unsupervised images through adversarial training. arXiv:1612.07828.
- [Simo-Serra et al.2014] Simo-Serra, E.; Fidler, S.; Moreno-Noguer, F.; and Urtasun, R. 2014. A high performance crf model for clothes parsing. In ACCV.
- [Simonyan and Zisserman2014] Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
- [Wei et al.2016] Wei, Y.; Xia, W.; Lin, M.; Huang, J.; Ni, B.; Dong, J.; Zhao, Y.; and Yan, S. 2016. Hcp: A flexible cnn framework for multi-label image classification. TPAMI.
- [Wei et al.2017] Wei, Y.; Liang, X.; Chen, Y.; Shen, X.; Cheng, M.-M.; Feng, J.; Zhao, Y.; and Yan, S. 2017. Stc: A simple to complex framework for weakly-supervised semantic segmentation. TPAMI.
- [Xia et al.2015] Xia, F.; Wang, P.; Chen, L.-C.; and Yuille, A. L. 2015. Zoom better to see clearer: Human part segmentation with auto zoom net. arXiv:1511.06881.
- [Xia et al.2017] Xia, F.; Wang, P.; Chen, X.; and Yuille, A. 2017. Joint multi-person pose estimation and semantic part segmentation. CVPR.
- [Yamaguchi et al.2012] Yamaguchi, K.; Kiapour, M. H.; Ortiz, L. E.; and Berg, T. L. 2012. Parsing clothing in fashion photographs. In CVPR.
- [Yamaguchi, Kiapour, and Berg2013] Yamaguchi, K.; Kiapour, M. H.; and Berg, T. 2013. Paper doll parsing: Retrieving similar styles to parse clothing items. In ICCV.
- [Zhao, Ouyang, and Wang2014] Zhao, R.; Ouyang, W.; and Wang, X. 2014. Learning mid-level filters for person re-identification. In CVPR.
- [Zhu et al.] Zhu, S.; Fidler, S.; Urtasun, R.; Lin, D.; and Loy, C. C. 2017. Be your own prada: Fashion synthesis with structural coherence. In ICCV.