Multi-person pose estimation aims at locating body keypoints (eyes, ears, nose, shoulders, elbows, wrists, hips, knees, ankles, etc.) for all persons in an image. It is fundamental to a variety of computer vision applications, such as human action recognition and human re-identification.
With the help of deep convolutional neural networks, remarkable progress has been made in multi-person pose estimation [8, 24, 4, 28, 16, 27, 29, 18, 17, 9, 1, 5]. Nevertheless, many challenging cases remain, such as occluded keypoints, invisible keypoints, changes of viewpoint, and crowded backgrounds. Both rich context information and spatial information are essential for locating keypoints accurately. For example, we can capture the global context of the image by enlarging the receptive field and fusing different context information. The context information represents the global position of the human body and indicates the contextual relationship between keypoints, and thus holds potential for accurately estimating occluded and invisible keypoints, e.g. the left knee of the man in Figure 1. Adding spatial information provides the detailed cues that are useful for refining the positions of keypoints.
With these observations, we aim at leveraging both context information and spatial information to improve multi-person pose estimation. Toward this end, we present a novel Context-and-Spatial Aware Network (CSANet) to extract effective context and spatial information, as shown in Figure 2. On top of a backbone network, our architecture has three parts: a Context Aware Path (CAP), a Spatial Aware Path (SAP), and a Heavy Head Path (HHP). In the Context Aware Path, we design a Structure Supervision module to learn part-aware context information and adopt the Atrous Spatial Pyramid Pooling (ASPP) module to capture context information of different receptive fields. Thus, sufficient context information is extracted to infer occluded and invisible keypoints. The Spatial Aware Path preserves the spatial size and encodes rich spatial information for accurate localization, which also shortens the information path from low-level to high-level features. On top of these two paths, the Heavy Head Path (HHP) adaptively fuses the context information with the spatial information and employs a small fully convolutional network to recalibrate the fused features.
Based on our Context-and-Spatial Aware Network (CSANet), we address multi-person pose estimation in a top-down pipeline. First, we apply a human detection network to obtain human bounding boxes. Then, CSANet is adopted to locate body keypoints within each human bounding box. Next, ablation studies are conducted to demonstrate the effectiveness of the CAP, SAP, and HHP paths. Finally, we evaluate our proposed network on the COCO keypoint benchmark, and the experimental results show that our proposed CSANet outperforms existing state-of-the-art methods.
In summary, there are five contributions in our paper:
We design a Context Aware Path to learn part-aware context information and context information of different receptive fields for inferring challenging keypoints.
We design a Spatial Aware Path to preserve spatial detail information for refining the position of keypoints.
We design a Heavy Head Path to adaptively fuse the context information and the spatial information.
Based on the Context Aware Path, Spatial Aware Path and Heavy Head Path, we propose a Context-and-Spatial Aware Network which can make full use of the context information and spatial information.
We evaluate our method on the COCO keypoint benchmark, and achieve state-of-the-art performance in multi-person pose estimation.
2 Related Work
Recently, many approaches based on Convolutional Neural Networks (CNNs) have achieved high performance on different benchmarks of multi-person pose estimation [8, 24, 4, 28, 16, 27, 29, 18, 17, 9, 1]. Several principles proposed for designing networks in scene parsing are also effective for our work, in which we pay particular attention to context information extraction and spatial information preservation [3, 31, 2, 26, 30].
Multi-person Pose Estimation Recently, significant progress has been made in multi-person pose estimation with the development of CNNs. In , a real-time Convolutional Pose Machine (CPM) is proposed to locate body keypoints and assemble them into individuals using learned part affinity fields (PAFs). Based on the ResNet backbone, the Simple Baseline Network (SBN)  employs a deconvolution head network to predict human keypoints. Spatial detail information, which is useful for refining keypoint localization, is inevitably lost along the information propagation in CPM and SBN. To avoid this problem, Newell et al.  integrate associative embedding with a stacked hourglass network to produce joint score heatmaps and embedded tags for grouping joints into individual people. The Cascaded Pyramid Network (CPN)  adopts a GlobalNet to learn a good feature representation and a RefineNet to further recalibrate the feature representation for accurate keypoint localization. The Hourglass network and the Cascaded Pyramid Network preserve spatial features at each resolution by adding skip layers and capture sufficient context information for accurately inferring both simple and challenging keypoints.
As mentioned in [4, 27], context information represents the global position of the human body and indicates the contextual relationship between keypoints. Spatial information provides detailed cues that are useful for refining the positions of keypoints. Thus, a well-designed network should take both context information and spatial information into account. Several principles (e.g. preserving spatial information and capturing diverse context information) proposed for designing networks in scene parsing can also be effective for the multi-person pose estimation task.
Context Information Generally, as the network goes deeper, the high-level features hold the potential to capture context information with a large receptive field. Alternatively, Atrous Spatial Pyramid Pooling (ASPP)  and the Pyramid Pooling Module (PPM)  are widely used to extract abundant context information in scene parsing. The ASPP module employs atrous convolutions with different dilation rates and a global pooling branch to capture diverse context information. The PPM module fuses features under different pyramid pooling scales to obtain a global contextual prior.
Spatial Information Consecutive down-sampling or pooling operations in a convolutional neural network may lose the spatial information that is crucial for predicting detailed output in scene parsing and pose estimation tasks. Some existing methods [3, 31, 2, 26] use dilated convolutions to preserve the spatial size of the feature maps. Other methods employ the feature pyramid network , U-shape structures , or the Hourglass network  to shorten the information path between low-level and high-level features. With such skip-connected network structures, a certain amount of spatial information can be recovered.
In our paper, we aim at leveraging both context information and spatial information to improve multi-person pose estimation. Compared with existing methods, we design a Structure Supervision module to capture part-aware context information and adopt the ASPP module to capture context information of different receptive fields in the Context Aware Path. We preserve abundant spatial information by adding skip layers from low-level to high-level features in the Spatial Aware Path. Experimentally, we find that adding globally pooled low-level features further helps to accurately locate the keypoints. Moreover, we propose a simple yet effective Heavy Head Path to fuse the context aware features and spatial aware features.
In this section, we propose a novel Context-and-Spatial Aware Network (CSANet) to make full use of the context information and spatial information. An overview of the proposed CSANet is illustrated in Figure 2. We first briefly review the structure of Simple Baseline Network. Then, we introduce Context-Aware-Path, Spatial-Aware-Path, and Heavy Head Path in detail. Finally, we describe the complete network architecture of Context-and-Spatial Aware Network, as well as training and inference details.
3.1 Revisiting Simple Baseline Network
ResNet is the most commonly used backbone network for image classification, scene parsing, and human pose estimation. Simple Baseline Network (SBN) adds a DeconvHead (consisting of three deconvolution layers) after the last convolution stage of ResNet, in which each deconvolution layer has 256 filters and a stride of 2. After the DeconvHead, a convolution layer is added to predict the heatmaps for all keypoints.
3.2 Context Aware Path
Motivation In the task of multi-person pose estimation, most modern methods tackle it as a dense regression problem. Lacking abundant context information, the regression struggles with predicting invisible keypoints, occluded keypoints, and other complex situations. To this end, we design a Context Aware Path to extract abundant context information, which represents the global position of the human body and the contextual relationships among body keypoints.
The Context Aware Path contains two modules: Structure Supervision module and Atrous Spatial Pyramid Pooling module, as shown in Figure 3. The Structure Supervision (SS) module is to encode part-aware context information, and the Atrous Spatial Pyramid Pooling (ASPP) module is to capture diverse context information of different receptive fields.
Structure Supervision Body structural priors can provide valuable cues to infer the locations of the hidden body parts from the visible ones. Motivated by this, we perform multi-part supervision at each part prediction branch to obtain part-aware features. Compared with the Simple Baseline Network, we replace the DeconvHead module with the Structure Supervision module. In this paper, we divide human body into three parts for COCO keypoint dataset: face part (ears, eyes, and nose), upper limb part (shoulders, elbows, and wrists) and lower-limb part (hips, knees, and ankles). Then, we combine the face aware features, upper limb aware features, and lower limb aware features with the hybrid context features for capturing diverse part-aware context information.
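The three-part grouping above follows directly from the official COCO keypoint order. A minimal Python sketch (the group names and index ranges are our own labels for illustration):

```python
# COCO keypoint order (0-indexed). The Structure Supervision module
# groups them into three parts: face, upper limb, and lower limb.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",       # face
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist",                                    # upper limb
    "left_hip", "right_hip", "left_knee", "right_knee",
    "left_ankle", "right_ankle",                                    # lower limb
]

PART_GROUPS = {
    "face": list(range(0, 5)),          # nose, eyes, ears
    "upper_limb": list(range(5, 11)),   # shoulders, elbows, wrists
    "lower_limb": list(range(11, 17)),  # hips, knees, ankles
}
```

Each branch is supervised only on the ground-truth score maps of its own index set, while the hybrid branch sees all 17 keypoints.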
Atrous Spatial Pyramid Pooling Atrous convolution is a powerful operation for adjusting the field-of-view in order to capture multi-scale information. The Atrous Spatial Pyramid Pooling module has been widely used in the scene parsing task; it adopts atrous convolutions with different dilation rates to extract diverse context information. In the CAP path, we simply add this module after the Structure Supervision module to capture context information of different receptive fields. In our experiments, we set the dilation rates to 1, 6, 12, and 18.
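These rates span a wide range of receptive fields: a dilated convolution with rate r inserts r − 1 zeros between kernel taps, so a k × k kernel covers an effective extent of k + (k − 1)(r − 1). A quick sketch (assuming 3 × 3 kernels, the usual choice for ASPP branches):

```python
def effective_kernel(k, rate):
    """Effective spatial extent of a k-tap dilated convolution:
    (rate - 1) zeros are inserted between adjacent kernel taps."""
    return k + (k - 1) * (rate - 1)

# Effective extents of 3x3 kernels at the dilation rates used in the CAP path
sizes = {r: effective_kernel(3, r) for r in (1, 6, 12, 18)}
# rate 1 -> 3, rate 6 -> 13, rate 12 -> 25, rate 18 -> 37
```

The branches thus see everything from local 3 × 3 neighborhoods up to 37 × 37 windows, and a global pooling branch adds image-level context on top.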
As shown in Figure 3, three branches are employed to extract part-aware context features, which are respectively supervised by the face, upper limb, and lower limb ground-truth score maps. The ASPP module is then adopted to further recalibrate the fusion of the part-aware features and the hybrid context features.
3.3 Spatial Aware Path
Motivation In the task of multi-person pose estimation, spatial information provides detailed cues that are useful for refining the positions of keypoints. Some existing methods [1, 8, 18] estimate keypoints from heatmaps whose resolution is 1/8 of the input image. However, higher resolution information should be added to provide more spatial details. To this end, we extract spatial aware features from the lower stages of the backbone network to preserve abundant spatial information at 1/4 of the input resolution.
In our proposed network, we use ResNet  as the backbone model. According to the feature map sizes, ResNet can be divided into five stages, denoted conv1 to conv5. ResNet encodes more detailed spatial information in the lower stages but extracts stronger context information in the higher stages. Based on this observation, we design our Spatial Aware Path to capture finer spatial information, as shown in Figure 4. First, we use a convolution path (a convolution layer followed by a second convolution layer) to recalibrate the last feature maps of the conv2 stage, obtaining the spatial features denoted as Conv2 Features. Conv3 Features are captured by an identical convolution path on the last feature maps of the conv3 stage and resized to the resolution of Conv2 Features. Next, we use a series of operations (global pooling, two convolution layers, and resizing to the resolution of Conv2 Features) to generate Conv2GP Features. Finally, we concatenate the Conv2, Conv3, and Conv2GP Features and reduce the concatenation to 256-dimensional feature maps with a convolution layer. Experimentally, we find that adding the Conv2GP Features further helps to accurately locate the keypoints.
3.4 Heavy Head Path
Heavy head, namely a stack of convolution layers, is quite effective for bounding box prediction . In our paper, we find it is also useful for the dense regression of keypoints' score maps. This path first concatenates the context aware features extracted by the Context Aware Path and the spatial aware features captured by the Spatial Aware Path, then uses a small fully convolutional network (FCN) to regress the body keypoints' ground-truth score maps. Because the two feature sets are concatenated, their fusion weights can be learned adaptively by the network. The FCN consists of several convolution layers; the number of output channels of the last convolution layer is set to 17, matching the number of keypoints in the COCO keypoint benchmark .
3.5 Network Architecture, Training and Inference
With the Context Aware Path, Spatial Aware Path, and Heavy Head Path, we propose a novel Context-and-Spatial Aware Network (CSANet) for multi-person pose estimation as illustrated in Figure 2.
Network Architecture We use the pre-trained ResNet as our backbone network. First, we operate the Context Aware Path (CAP) on the last feature maps of the final stage (conv5) to capture part-aware context information and diverse context information of different receptive fields. Then, we employ the Spatial Aware Path (SAP) on the last feature maps of the conv2 and conv3 stages to encode spatial information features. In our paper, the CAP path encodes abundant context information, while the SAP path provides rich spatial information; they are complementary to each other for higher performance on keypoint localization. Given the different feature representations of the CAP and SAP paths, we concatenate these features instead of using a simple summing operation to fuse the context and spatial information features. Finally, the Heavy Head Path (HHP) operates on the concatenated features, which encode both rich context information and spatial information, to accurately predict the keypoints' heatmaps.
Network Training In our paper, we use score maps to represent the locations of body keypoints. For each person, the ground-truth locations are labeled as $\{l_k\}_{k=1}^{17}$, where $l_k = (x_k, y_k)$ denotes the coordinate of the $k$-th keypoint of the person ($k \in \{1,\dots,5\}$ denote the keypoints of the face, $k \in \{6,\dots,11\}$ denote the keypoints of the upper limb, and $k \in \{12,\dots,17\}$ denote the keypoints of the lower limb). The ground-truth score map is defined as
$$\mathbf{G}_k(p) = \exp\!\Big(-\frac{\|p - l_k\|_2^2}{2\sigma^2}\Big),$$
in which $p$ denotes a location in the score map, and $\sigma$ is set to 2 for the smaller input resolution and to 3 for the larger one. In our Context Aware Path, we use three auxiliary loss functions to supervise it to learn part-aware context information. The face aware branch, upper limb aware branch, and lower limb aware branch predict the face related keypoints' heatmaps (i.e., $k \in \{1,\dots,5\}$), the upper limb related keypoints' heatmaps (i.e., $k \in \{6,\dots,11\}$), and the lower limb related keypoints' heatmaps (i.e., $k \in \{12,\dots,17\}$), respectively. The Heavy Head Path predicts the holistic body keypoints' heatmaps (i.e., $k \in \{1,\dots,17\}$). The loss of our CSANet is then
$$\mathcal{L} = \frac{1}{N}\sum_{n=1}^{N}\Big(\mathcal{L}^{(n)}_{\mathrm{hhp}} + \lambda_{1}\,\mathcal{L}^{(n)}_{\mathrm{face}} + \lambda_{2}\,\mathcal{L}^{(n)}_{\mathrm{upper}} + \lambda_{3}\,\mathcal{L}^{(n)}_{\mathrm{lower}}\Big),$$
where each branch loss is the L2 distance between the predicted and ground-truth score maps of that branch, $N$ is the number of samples, and $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ are the loss weight parameters.
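The ground-truth score map, a 2D Gaussian centered at each keypoint, can be generated with a few lines of NumPy; the heatmap size below is an arbitrary example:

```python
import numpy as np

def gaussian_score_map(h, w, center, sigma):
    """Ground-truth score map: an unnormalized 2D Gaussian centered
    at the keypoint location l_k = (x_k, y_k)."""
    xs = np.arange(w)[None, :]
    ys = np.arange(h)[:, None]
    x, y = center
    d2 = (xs - x) ** 2 + (ys - y) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

g = gaussian_score_map(64, 48, center=(20, 30), sigma=2.0)
# the response peaks (value 1.0) exactly at the keypoint location
assert g[30, 20] == 1.0
```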
Network Inference During inference, we obtain the predicted body keypoint locations from the score maps produced by the Heavy Head Path by taking the location with the maximum score:
$$\hat{l}_k = \arg\max_{p}\,\hat{\mathbf{G}}_k(p).$$
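This arg-max decoding can be sketched as follows, batched over all K score maps:

```python
import numpy as np

def decode_keypoints(score_maps):
    """Take the arg-max location of each predicted score map
    as the keypoint estimate. Returns one (x, y) pair per map."""
    K, H, W = score_maps.shape
    flat_idx = score_maps.reshape(K, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (H, W))
    return np.stack([xs, ys], axis=1)

maps = np.zeros((2, 8, 8))
maps[0, 3, 4] = 1.0   # keypoint 0 at x=4, y=3
maps[1, 6, 2] = 1.0   # keypoint 1 at x=2, y=6
```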
This section is organized in accordance with the progress of our experiments. Firstly, we describe the experimental setup. Then, we decompose our proposed network to reveal the effect of each component on MS-COCO val2017 dataset. Last but not least, we compare our network with previous state-of-the-art methods on MS-COCO val2017 dataset and MS-COCO test-dev2017 dataset.
4.1 Experimental Setup
Dataset and Evaluation Metric We train and evaluate our Context-and-Spatial Aware Network (CSANet) on the MS-COCO 2017 dataset . Our models are trained only on the MS-COCO train2017 dataset, which includes 57K images and 150K person instances, with no extra data involved. There are 5000 images (MS-COCO val2017) for validation and 20K images (MS-COCO test-dev2017) for testing. Following previous work [1, 4, 28], evaluation is conducted using the Object Keypoint Similarity (OKS) based mAP, where OKS measures the difference between predicted and ground-truth person keypoints.
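For reference, OKS for a single predicted pose follows the COCO definition: a Gaussian similarity in the squared keypoint distance, normalized by the object scale and per-keypoint falloff constants, averaged over labeled keypoints. A minimal NumPy sketch (the `k` constants are the per-keypoint values published with the benchmark and are passed in here):

```python
import numpy as np

def oks(pred, gt, visible, area, k):
    """Object Keypoint Similarity between one predicted pose and one
    ground-truth pose. pred/gt: (K, 2) arrays of (x, y); visible: (K,)
    boolean mask of labeled keypoints; area: object scale s^2;
    k: (K,) per-keypoint falloff constants from the COCO benchmark."""
    d2 = ((pred - gt) ** 2).sum(axis=1)
    e = d2 / (2.0 * area * k ** 2 + np.spacing(1))
    return float(np.exp(-e)[visible].mean())
```

A perfect prediction gives OKS = 1, and the score decays toward 0 as keypoints drift from the ground truth; mAP averages precision over a range of OKS thresholds.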
Cropping Strategy The person ground-truth box (or detection box) is extended to a fixed aspect ratio, e.g. height : width = 4 : 3. Then, we crop the image and resize it to a fixed resolution. In our paper, the network input image uses a fixed default resolution.
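The box adjustment can be sketched as follows (assuming a width : height target of 3 : 4; the box is expanded, never shrunk, so no person pixels are cut away):

```python
def fix_aspect_ratio(x, y, w, h, target=3.0 / 4.0):
    """Expand a detection box (top-left x, y, width w, height h) to a
    fixed width:height ratio around its center, so every crop fed to
    the network has the same shape after resizing."""
    cx, cy = x + w / 2.0, y + h / 2.0
    if w / h > target:   # box too wide: grow the height
        h = w / target
    else:                # box too tall: grow the width
        w = h * target
    return cx - w / 2.0, cy - h / 2.0, w, h
```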
Data Augmentation Strategy We use random flipping, random scaling, and random rotation during training. The flipping probability is 0.5, and the rotation angles and scale factors are sampled from fixed ranges.
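One detail worth noting for the random flip: when an image is mirrored, every left keypoint label must be swapped with its right counterpart. A sketch using the COCO left/right pairs (0-indexed):

```python
import numpy as np

# Left/right keypoint pairs in COCO order (index 0, the nose, has no partner)
FLIP_PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8),
              (9, 10), (11, 12), (13, 14), (15, 16)]

def flip_keypoints(kpts, width):
    """Horizontally flip (K, 2) keypoint coordinates inside an image of
    the given width, swapping each left/right pair so that labels stay
    consistent with the mirrored image."""
    flipped = kpts.copy()
    flipped[:, 0] = width - 1 - flipped[:, 0]
    for a, b in FLIP_PAIRS:
        flipped[[a, b]] = flipped[[b, a]]
    return flipped
```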
Person Detector For the MS-COCO val2017 dataset, we use the human detection boxes provided by  to make a fair comparison; these boxes are generated by a Faster-RCNN detector  with human detection AP 56.4 on MS-COCO val2017. For the MS-COCO test-dev2017 dataset, we adopt the SNIPER detector  with human detection AP 58.1 on MS-COCO test-dev2017.
(Excerpt from Table 1) ResNet-50+CAP+SAP+HHP (our CSANet): 72.5 AP, 89.4 AP50, 79.4 AP75, 69.1 APM, 79.4 APL.
Training Details We train our proposed CSANet using the Adam  algorithm with a mini-batch size of 128 (32 per GPU) for 140 epochs. The initial learning rate is 1e-3 and is divided by 10 at the 90th and 120th epochs. The training of ResNet-50 based models takes about 52 hours on four NVIDIA Titan V100 GPUs. All code is implemented in PyTorch . In this paper, our network is trained with ResNet-50, ResNet-101, and ResNet-152 backbones, which are initialized with publicly released models pre-trained on ImageNet. We also conduct experiments with two different input image resolutions.
Testing Details A top-down pipeline is adopted for multi-person pose estimation. First, we use a person detector to generate human bounding boxes. Then, we apply our CSANet to generate pose prediction heatmaps for each bounding box. Following previous work [28, 4], we average the heatmaps of the original image and of the flipped image to get the final prediction. A quarter offset in the direction from the highest response to the second highest response is applied to obtain the final location.
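The quarter-offset decoding for a single score map can be sketched as below: starting from the arg-max, the estimate is nudged 0.25 pixel toward whichever neighbor has the higher response, independently per axis:

```python
import numpy as np

def refine_peak(score_map):
    """Arg-max location of a single score map, shifted a quarter pixel
    toward the neighboring higher response along each axis."""
    H, W = score_map.shape
    y, x = np.unravel_index(score_map.argmax(), (H, W))
    px, py = float(x), float(y)
    if 0 < x < W - 1:
        px += 0.25 * float(np.sign(score_map[y, x + 1] - score_map[y, x - 1]))
    if 0 < y < H - 1:
        py += 0.25 * float(np.sign(score_map[y + 1, x] - score_map[y - 1, x]))
    return px, py
```

This sub-pixel shift compensates for the quantization introduced by predicting heatmaps at a lower resolution than the input image.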
4.2 Ablation Study
In this subsection, we decompose our proposed CSANet step by step to reveal the effect of each component. In the following experiments, we evaluate all comparisons on the MS-COCO val2017 dataset. Unless otherwise specified, the default backbone is ResNet-50, and all models use the same default input size.
4.2.1 Component Analysis
In Table 1, we show our ablation study, starting from the Simple Baseline Network  (SBN, the previous state of the art) and gradually incorporating all components. Based on the SBN, we replace the DeconvHead with our Context Aware Path (CAP), which improves the AP from 70.6 to 71.1. Furthermore, adding the Spatial Aware Path achieves 71.7 AP. Finally, we adopt the Heavy Head Path (HHP) to fuse the context aware information and spatial aware information to predict the pose heatmaps. After adding the HHP module, the AP is further improved from 71.7 to 72.5.
4.2.2 Ablation Study on Context Aware Path
Different from the SBN, we replace the DeconvHead (consisting of three deconvolution layers) with our Context Aware Path. The CAP path consists of two modules: the Structure Supervision module and the ASPP module.
Ablation for Structure Supervision We use the Structure Supervision module, which performs multi-part supervision, to extract part-aware context information. As shown in Table 2, this module improves the AP from 70.6 to 71.0, a clear improvement. In our paper, the three loss weight parameters are all set to 1. We also conduct an experiment that sets all the loss weight parameters to 0; in this case the AP is 70.8.
Ablation for Atrous Spatial Pyramid Pooling To capture diverse context information of different receptive fields, we apply the ASPP module on the features extracted by Structure Supervision module. As shown in Table 2, this further improves the performance by 0.1.
4.2.3 Ablation Study on Spatial Aware Path
While the Context Aware Path pays attention to context information, the Spatial Aware Path focuses on spatial information, which provides detailed cues for refining the positions of keypoints. By integrating the CAP and SAP paths, the AP is improved from 71.1 to 71.7, as shown in Table 3.
Design Choices of Spatial Aware Path Here, we compare different design strategies for the SAP path, as shown in Table 3. We compare the following implementations: 1) Conv2 Features; 2) Conv2 Features + Conv3 Features; 3) Conv2 Features + Conv2GP Features; 4) Conv2 Features + Conv3 Features + Conv2GP Features. The Conv2 Features, Conv2GP Features, and Conv3 Features are described in detail in Section 3.3.
Then, we reduce the spatial aware features to 256 dimensions and integrate them with the context aware features to predict the person pose. As shown in Table 3, adding spatial detail information with Conv2, Conv3, and Conv2GP Features effectively achieves a gain of 0.6 AP.
4.2.4 Ablation Study on Heavy Head Path
This path first concatenates the context aware features extracted by the Context Aware Path and the spatial aware features captured by the Spatial Aware Path, then uses a small fully convolutional network (FCN) to regress the keypoints' ground-truth score maps. The FCN consists of N convolution layers followed by one final prediction layer. As shown in Table 4, this improves the AP from 71.7 to 72.5 when N is chosen as 3, 5, or 6. In our CSANet, N is set to 3 for less computation.
4.2.5 Ablation Study on Data Pre-processing
Here, we investigate the performance of our CSANet with different input sizes. As the input size increases, more spatial information is fed into our network. This improves the AP from 72.5 (smaller input size) to 74.1 (larger input size), a notably large improvement, as shown in Table 5.
4.2.6 Ablation Study on Backbone Network
As in most computer vision tasks, a deeper backbone model achieves better performance. We conduct experiments with the ResNet-50, ResNet-101, and ResNet-152 backbones at the same input size. Table 6 shows that the AP increases by 0.3 from ResNet-50 to ResNet-101 and by 1.0 from ResNet-50 to ResNet-152.
4.3 Comparison with State-of-the-art Methods
In this subsection, we compare our proposed CSANet with state-of-the-art methods on MS-COCO val2017 dataset and MS-COCO test-dev2017 dataset.
Results on MS-COCO val2017 As shown in Table 7, we compare our network with an 8-stage Hourglass (a classic model), CPN (Cascaded Pyramid Network, the COCO 2017 challenge winner), and SBN (Simple Baseline Network, the previous state-of-the-art network) . All these methods use a top-down pipeline. For generating human bounding boxes, the person detection AP of Hourglass and CPN is 55.3, while that of SBN is 56.4; we use the human bounding boxes provided by SBN to make a fair comparison.
Compared with Hourglass , our CSANet achieves an improvement of 5.6 points in AP. Our network outperforms CPN  by 3.1 AP at the smaller input size and by 2.5 AP at the larger one. Compared with SBN , the AP is improved from 70.6 to 72.5 at the smaller input size and from 72.2 to 74.1 at the larger one. Our method thus improves the previous best results by a large margin of 1.9 AP at both input sizes.
Results on MS-COCO test-dev2017 Table 8 shows the results of modern state-of-the-art methods on the MS-COCO test-dev2017 dataset. For generating human bounding boxes, CPN uses a human detector with person detection AP 62.9 on the COCO minival split, and SBN adopts a human detector with person detection AP 60.9 on the COCO test-dev dataset. We use the SNIPER detector with person detection AP 58.1 on the COCO test-dev dataset.
Compared with CMU-Pose , G-RMI , and Mask-RCNN , our method achieves a significant improvement. Even though CPN  uses a stronger ResNet-Inception backbone, our CSANet's single model (ResNet-152) achieves 74.5 AP and outperforms CPN's single model by 2.4 AP at the same input size. As mentioned before, SBN  uses a more powerful human detector with person detection AP 60.9 on the COCO test-dev dataset, 2.8 higher than ours. Yet, our model still improves multi-person pose estimation by 0.7 AP at the same input size. Figure 5 illustrates some results generated by our method.
Aiming to fully leverage both context information and spatial information to improve multi-person pose estimation, we propose a novel Context-and-Spatial Aware Network (CSANet) in this paper. From the architecture perspective, we design a Context Aware Path to capture part-aware information and diverse context information of different receptive fields, which indicates the contextual relationship between keypoints. Then, we propose a Spatial Aware Path to preserve detail information for refining the positions of keypoints. Next, a Heavy Head Path is proposed to further combine and recalibrate the context aware features and spatial aware features. These modules are trained as a whole to maximally complement each other. We also conduct a series of ablation studies to validate the effectiveness of each module. Finally, our experimental results show that our proposed CSANet significantly improves performance on the COCO keypoint benchmark.
-  Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition, volume 1, page 7, 2017.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834–848, 2018.
-  L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
-  Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun. Cascaded pyramid network for multi-person pose estimation. arXiv preprint arXiv:1711.07319, 2017.
-  X. Chu, W. Yang, W. Ouyang, C. Ma, A. L. Yuille, and X. Wang. Multi-context attention for human pose estimation. arXiv preprint arXiv:1702.07432, 1(2), 2017.
-  K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2980–2988. IEEE, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  E. Insafutdinov, L. Pishchulin, B. Andres, M. Andriluka, and B. Schiele. Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In European Conference on Computer Vision, pages 34–50. Springer, 2016.
-  L. Ke, M.-C. Chang, H. Qi, and S. Lyu. Multi-scale structure-aware network for human pose estimation. arXiv preprint arXiv:1803.09894, 2018.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  T.-Y. Lin, P. Dollár, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie. Feature pyramid networks for object detection. In CVPR, volume 1, page 4, 2017.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
-  J. Liu, B. Ni, Y. Yan, P. Zhou, S. Cheng, and J. Hu. Pose transferrable person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4099–4108, 2018.
-  S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8759–8768, 2018.
-  MS-COCO. Coco keypoint leaderboard. http://cocodataset.org/.
-  A. Newell, Z. Huang, and J. Deng. Associative embedding: End-to-end learning for joint detection and grouping. In Advances in Neural Information Processing Systems, pages 2274–2284, 2017.
-  A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision, pages 483–499. Springer, 2016.
-  G. Papandreou, T. Zhu, N. Kanazawa, A. Toshev, J. Tompson, C. Bregler, and K. Murphy. Towards accurate multi-person pose estimation in the wild. In CVPR, volume 3, page 6, 2017.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
-  B. Singh, M. Najibi, and L. S. Davis. Sniper: Efficient multi-scale training. arXiv preprint arXiv:1805.09300, 2018.
-  A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1653–1660, 2014.
-  C. Wang, Y. Wang, and A. L. Yuille. An approach to pose-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 915–922, 2013.
-  P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell. Understanding convolution for semantic segmentation. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1451–1460. IEEE, 2018.
-  W. Liu, J. Chen, C. Li, C. Qian, X. Chu, and X. Hu. A cascaded inception of inception network with attention modulated feature fusion for human pose estimation. In AAAI Conference on Artificial Intelligence, 2018.
-  B. Xiao, H. Wu, and Y. Wei. Simple baselines for human pose estimation and tracking. arXiv preprint arXiv:1804.06208, 2018.
-  W. Yang, S. Li, W. Ouyang, H. Li, and X. Wang. Learning feature pyramids for human pose estimation. In arXiv preprint arXiv:1708.01101, 2017.
-  C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang. Bisenet: Bilateral segmentation network for real-time semantic segmentation. arXiv preprint arXiv:1808.00897, 2018.
-  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 2881–2890, 2017.