Human pose estimation is very important for many high-level computer vision tasks, including action and activity recognition, human-computer interaction, motion capture, and animation. Estimating human poses from natural images is quite challenging: an effective pose estimation system must be able to handle large pose variations, changes in clothing and lighting conditions, severe body deformations, and heavy body occlusions [32, 31, 21]. With the development of deep neural network designs [16] such as ResNet  and the Inception architecture [28, 29], single-person human pose estimation has recently achieved significant progress.
Recent research emphasis has been put on multi-person human pose estimation, where multiple individuals may exist in a natural scene. Compared to single person human pose estimation, where human candidates are cropped and centered in the image patch, the task of multi-person human pose estimation is more challenging. The best performance on MS COCO 2016 Keypoints challenge  is only around 60% in mean average precision (mAP).
Existing methods can be classified into two kinds of approaches: the top-down approach and the bottom-up approach. The top-down approach [11, 22] relies on a detection module to obtain human candidates and then applies a single-person human pose estimator to detect human keypoints. The bottom-up approach [8, 12, 34, 20], on the other hand, detects human keypoints from all potential human candidates and then assembles these keypoints into human limbs for each individual based on various data-association techniques. The main advantage of the bottom-up approaches is their excellent tradeoff between estimation accuracy and computational cost. It takes the winner  of the MS COCO 2016 keypoint challenge less than 200 ms to run the pose estimator for one frame on a Pascal TITAN X GPU. More importantly, contrary to top-down approaches, their computational cost is invariant to the number of human candidates in the image. We follow the bottom-up approach of Cao et al.  and aim to design smaller, faster, and more accurate neural networks for multi-person keypoint regression. According to , their proposed Part Affinity Fields (PAF) and the corresponding data-association techniques are robust and reliable; more accurate keypoint and PAF regression would potentially increase the overall performance by up to 10%.
In this work, we focus on the network regression part and leave the data association part to future work. We propose a dual-path network specially designed for multi-person human pose estimation, and compare our performance with the openpose  network in terms of model size, forward speed, and estimation accuracy. Our contributions include: (1) We analyze the tasks of keypoint and PAF regression, where PAF estimation depends heavily on keypoint estimation but not vice versa. (2) We then design a dual-path network, with the DenseNet path responsible for PAF regression and the ResNeXt path regressing human keypoints. Our performance is superior to that of the openpose  network even though the proposed network has lower computational complexity and a smaller model size.
2 Related Work
2.1 Single-Person Human Pose Estimation
This task is simpler than multi-person pose estimation because it aims to estimate the pose of a single person, where the image is cropped assuming the person dominates the image content. Traditional methods for single-person human pose estimation are mostly based on pictorial structure models [25, 23, 27, 30, 10, 19]. Since the work of DeepPose by Toshev et al. , research on human pose estimation has shifted from traditional approaches to deep neural networks (DNN) due to their superior performance. Recent methods [21, 33, 15, 7] have achieved quite accurate performance on popular datasets [6, 18]. However, the assumption that the person can always be correctly located is not necessarily satisfied.
2.2 Multi-Person Human Pose Estimation
Multi-person human pose estimation is a more realistic problem. It attempts to estimate the poses of multiple persons in natural scenes. It is quite challenging due to the variance of sizes and scales of the persons. Existing methods can be classified into two kinds of approaches, the top-down approach and the bottom-up approach.
The top-down approach [11, 22] relies on a detection module to obtain human candidates and then applies a single-person human pose estimator to detect human keypoints. Insafutdinov et al.  propose a pipeline which uses Faster R-CNN  as the detection module and a unary DeeperCut as the single-person pose estimator. Their method achieves 51.0 mAP on the MPII dataset . Because the single-person pose estimator is usually sensitive to the detection results, this approach requires the detection module to be very robust. More accurate performance has been achieved by Fang et al. : their framework handles inaccurate human bounding boxes by introducing additional components into the pipeline that refine the detection and pose estimation results.
The bottom-up approach [8, 12, 34, 20], on the other hand, detects human keypoints from all potential human candidates and then assembles these keypoints into human limbs for each individual based on various data-association techniques. Many of these techniques are graph-based [8, 34]. The main advantage of the bottom-up approaches is their excellent tradeoff between estimation accuracy and computational cost. The winner of the COCO 2016 keypoint challenge  proposes to estimate human keypoints as well as Part Affinity Fields (PAFs) simultaneously. PAFs are limb-association vector fields that can be used to assemble the keypoints into multi-person poses with certain graph-based association techniques. According to , the proposed PAF and corresponding data-association techniques are robust and reliable; more accurate keypoint and PAF regression would potentially increase the overall performance by up to 10%. We follow their work and focus on the network regression part, aiming to design smaller, faster, and more accurate neural networks for multi-person human pose estimation.
2.3 Dual Path Networks
According to , their proposed data-association technique is robust and reliable; more accurate keypoint and PAF regression would potentially increase the overall performance up to 10%. Motivated by this, we look into network engineering and explore more robust and efficient learning of features and spatial inter-dependencies.
Dual Path Networks (DPN) were first proposed in  as a hybrid network design that incorporates the core idea of DenseNet  with that of ResNeXt . ResNeXt is a variant of the widely used ResNet  that introduces a homogeneous, multi-branch architecture with a new dimension called cardinality (the size of the set of transformations) as an essential factor in addition to the dimensions of depth and width. Its authors show that increasing cardinality improves classification accuracy and is more effective than going deeper or wider when increasing the capacity of the network. The core idea of DenseNet is to connect each layer to every other layer in a feed-forward fashion. These connections alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
According to the research on DPN, ResNet and its variants enable feature re-usage while DenseNet enables the exploration of new features, both of which are important for learning good representations. By carefully incorporating these two network designs into a dual-path topology, DPN shares common features while maintaining the flexibility to explore new features through its two paths.
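The contrast between the two aggregation schemes can be sketched with toy feature maps. This is a minimal numpy illustration; `conv_like` is a hypothetical stand-in for a learned convolution, not part of any real library:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_like(x, out_channels):
    """Stand-in for a convolutional layer: a random projection over
    channels (a real block would use learned 1x1/3x3 convolutions)."""
    w = rng.standard_normal((out_channels, x.shape[0]))
    return np.tensordot(w, x, axes=([1], [0]))

x = rng.standard_normal((8, 4, 4))  # toy feature maps: (channels, H, W)

# Residual (ResNet/ResNeXt-style) aggregation: element-wise addition,
# so the channel count stays fixed -- this re-uses existing features.
res = x + conv_like(x, 8)
assert res.shape == (8, 4, 4)

# Dense (DenseNet-style) aggregation: each step concatenates all earlier
# outputs, so channels grow by the growth rate -- this explores new features.
growth = 4
feats = [x]
for _ in range(3):
    feats.append(conv_like(np.concatenate(feats, axis=0), growth))
dense = np.concatenate(feats, axis=0)
assert dense.shape == (8 + 3 * growth, 4, 4)
```

The shape bookkeeping is the essential point: addition forces the two operands to share a channel count, while concatenation lets the representation widen block by block.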
Inspired by the DPN network, which was originally designed for the task of image classification, we aim to design a variant of DPN specially tailored for multi-person human pose estimation, because the regression tasks for keypoints and association vectors naturally form two paths. The two tasks depend on each other but are unique in their own ways. In the next section, we introduce our proposed DPN network, describe how the regression of keypoints and association vectors is assigned to each path, and explain the intuition behind this assignment. For a detailed description of the part-association techniques, please refer to the original paper on PAFs .
3 Proposed Method
The proposed network is highly modularized. We intentionally follow the general network structure of openpose. As shown in Figure 1, there are multiple stages of repetitive subnetworks, where each subnetwork outputs estimated heatmaps of keypoints and PAFs and is supervised with loss functions as intermediate supervision. The network first feeds an image into the first 16 layers of the VGG network, which output 128 channels of low-level visual features, denoted by F. These features are then fed into each stage of the following subnetworks. The modules in the figure indicate only the input and output channels and leave out the resolution, because the convolutional layers are all padded such that the resolution of the feature maps does not change.
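The stage-wise refinement described above can be sketched as follows. This is a minimal numpy mock-up; `project`, `vgg_front_end`, and the channel counts `N_JOINTS`/`N_PAF` are illustrative assumptions standing in for the actual layers:

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS, N_PAF = 19, 38  # assumed channel counts for heatmaps and PAFs

def project(x, out_channels):
    """Hypothetical stand-in for a stack of padded convolutions: pure
    channel mixing, so the spatial resolution never changes."""
    w = rng.standard_normal((out_channels, x.shape[0]))
    return np.tensordot(w, x, axes=([1], [0]))

def vgg_front_end(image):
    """Stand-in for the truncated VGG layers producing 128-channel features F."""
    return project(image, 128)

image = rng.standard_normal((3, 16, 16))
F = vgg_front_end(image)

# Every stage consumes F together with the previous stage's estimates and
# emits refined heatmaps S and PAFs L; in training, a loss is attached to
# each stage (intermediate supervision), not only to the last one.
S = np.zeros((N_JOINTS, 16, 16))
L = np.zeros((N_PAF, 16, 16))
for stage in range(3):
    x = np.concatenate([F, S, L], axis=0)
    out = project(x, N_JOINTS + N_PAF)
    S, L = out[:N_JOINTS], out[N_JOINTS:]

assert S.shape == (N_JOINTS, 16, 16) and L.shape == (N_PAF, 16, 16)
```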
Our proposed network differs from the openpose network in the structure of the subnetwork patterns, specifically the DPN blocks. As shown in Figure 2, our proposed DPN block consists of two paths. The regression of human keypoints and that of association vectors depend on each other and share information from the previous block. Through element-wise addition, the Keypoints Path (KP) leverages features before and after the feature fusion/transition within a DPN block. The Association Path (AP), however, accumulates features over blocks to further exploit spatial inter-dependencies: the association vectors are regressed from the accumulated channels of feature maps from all previous DPN blocks. With such a representation, the features from the AP path are less constrained and more flexible than those of the KP path. This encourages the AP path to learn features at a higher level than the KP path, even though the two paths are dependent and share common features.
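A minimal sketch of one such block, assuming a single shared fusion step feeds both paths (the function names and channel sizes are illustrative, not the actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_transform(x, out_channels):
    """Hypothetical fusion step shared by both paths (a real block would
    be a small stack of padded convolutions)."""
    w = rng.standard_normal((out_channels, x.shape[0])) / np.sqrt(x.shape[0])
    return np.tensordot(w, x, axes=([1], [0]))

def dpn_block(kp, ap_list, kp_ch=8, growth=4):
    """One DPN block.

    kp:      (kp_ch, H, W) keypoint-path state, updated by element-wise addition
    ap_list: list of feature maps accumulated by the association path
    Both paths read the same fused features, so they stay inter-dependent.
    """
    fused = shared_transform(np.concatenate([kp] + ap_list, axis=0),
                             kp_ch + growth)
    kp = kp + fused[:kp_ch]              # KP path: residual-style re-use
    ap_list = ap_list + [fused[kp_ch:]]  # AP path: dense-style accumulation
    return kp, ap_list

kp = np.zeros((8, 4, 4))
ap = [np.zeros((4, 4, 4))]
for _ in range(3):
    kp, ap = dpn_block(kp, ap)

assert kp.shape == (8, 4, 4)                     # KP channels stay fixed
assert sum(a.shape[0] for a in ap) == 4 + 3 * 4  # AP channels grow per block
```

The split of `fused` into an additive slice and a concatenated slice is what makes the block "dual-path": the KP state stays at a fixed width while the AP representation widens with every block.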
It is declared in  that most of their false positives come from imprecise localization rather than from background confusion, and that there is more room for improvement in capturing spatial dependencies than in recognizing body-part appearances. Therefore, we set the learning rate for the VGG layers to zero, thus maintaining the same low-level visual features as those used in the openpose model. In this way, we can purely compare the capability of the networks in capturing spatial dependencies.
The network from the first stage produces a set of keypoint heatmaps $S^1 = \rho^1(F)$ and a set of PAFs $L^1 = \phi^1(F)$, where $\rho^1$ and $\phi^1$ represent high-dimensional functions of the KP-path and AP-path networks. To guide the network to iteratively predict keypoint heatmaps and PAFs at each stage, we apply two loss functions in each subnetwork. We use an $L_2$ loss between the estimated predictions and the groundtruth maps and fields. Specifically, the loss functions for the dual paths at stage $t$ are:

$$f_S^t = \sum_j \sum_{\mathbf{p}} \left\| S_j^t(\mathbf{p}) - S_j^*(\mathbf{p}) \right\|_2^2, \qquad f_L^t = \sum_c \sum_{\mathbf{p}} \left\| L_c^t(\mathbf{p}) - L_c^*(\mathbf{p}) \right\|_2^2,$$

where $S_j^*$ is the groundtruth keypoint heatmap, $L_c^*$ is the groundtruth PAF vector field, and $\mathbf{p}$ is an image location.
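The per-stage losses can be sketched in a few lines of numpy (the shapes and the helper name are illustrative assumptions; the full training loss sums these over all stages):

```python
import numpy as np

def stage_losses(S_pred, S_gt, L_pred, L_gt):
    """Per-stage L2 losses: sum of squared differences over all heatmap
    channels j / PAF channels c and all image locations p.

    S_*: (J, H, W) keypoint heatmaps; L_*: (2C, H, W) PAF vector fields.
    """
    f_S = float(np.sum((S_pred - S_gt) ** 2))
    f_L = float(np.sum((L_pred - L_gt) ** 2))
    return f_S, f_L

S_gt = np.zeros((2, 3, 3))
L_gt = np.zeros((4, 3, 3))
f_S, f_L = stage_losses(S_gt + 1.0, S_gt, L_gt, L_gt)
assert f_S == 18.0 and f_L == 0.0  # 2*3*3 unit squared errors; perfect PAFs
```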
4 Experimental Results
Dataset The PoseTrack  dataset consists of a large number of annotated video frames. The associated workshop is organized around a challenge with three competition tracks: single-frame multi-person pose estimation, multi-person pose estimation in videos, and multi-person articulated tracking. In our work, we focus on single-frame multi-person pose estimation.
Experimental Settings In order to make a fair comparison with the openpose network, which is trained on the MS COCO dataset , we use the same training data before testing on the PoseTrack dataset. In our experiments, all experimental settings, including the testing scales and the parameters of the data-association techniques, are identical for the two networks. No special tuning of training or testing for the PoseTrack dataset is performed.
Quantitative Results We report our Average Precision (AP) scores on the PoseTrack test set (performance on the test set is evaluated by the PoseTrack server; challenge results are available at: https://posetrack.net/workshops/iccv2017/posetrack-challenge-results.html). The results on the test set are obtained with 3 stages and 2 scales (1, 0.75).
4.1 Algorithm Performance Analysis
Average Precision We compare the AP scores of the proposed network with openpose on the validation set. All experiments are performed on the local machine with the same resolution and single scale.
Speed and Model Size Comparison We test and compare the networks by the average forward time for a single frame, measured in milliseconds (ms). Both experiments are performed at the same original resolution and a single scale.
[Table: average forward time (ms) for each method]
[Table: AP for each method at 3, 4, 5, and 6 stages]
Even though our model is much smaller than the openpose model, the intermediate storage of network structures, including the accumulated feature maps from the AP path, consumes GPU memory greatly in the current Caffe version. In the future, porting a memory-efficient DenseNet implementation from other deep learning frameworks [5, 4] into Caffe would enable more stages of DPN to fit into GPU memory, and we believe the performance would then be even better.
In this work, we propose a dual-path network specially designed for multi-person human pose estimation, and compare our performance with the openpose network in terms of model size, forward speed, and estimation accuracy. Extensive experiments on the PoseTrack challenge dataset show that our method is both accurate and efficient. Even though the method described in this work regresses PAFs as the association vectors, the dual-path network is generic and not limited to a specific vector representation or association technique.
-  Coco keypoints challenge. http://image-net.org/challenges/ilsvrc+coco2016, 2016.
-  Openpose library. May, 2017.
-  Posetrack: Iccv workshop. https://posetrack.net/workshops/iccv2017/, October, 2017.
-  M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
-  B. Amos and J. Z. Kolter. A pytorch implementation of densenet, 2017.
-  M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2d human pose estimation: New benchmark and state of the art analysis. In CVPR, 2014.
-  V. Belagiannis and A. Zisserman. Recurrent human pose estimation. arXiv preprint arXiv:1605.02914, 2016.
-  Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. arXiv preprint arXiv:1611.08050, 2016.
-  Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng. Dual path networks. arXiv preprint arXiv:1707.01629, 2017.
-  M. Dantone, J. Gall, C. Leistner, and L. Van Gool. Human pose estimation using body parts dependent joint regressors. In CVPR, 2013.
-  Y.-W. T. Hao-Shu Fang, Shuqin Xie and C. Lu. RMPE: Regional multi-person pose estimation. In ICCV, 2017.
-  K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. arXiv preprint arXiv:1703.06870, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
-  E. Insafutdinov, L. Pishchulin, B. Andres, M. Andriluka, and B. Schiele. Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In ECCV, 2016.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
-  Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, pages 675–678. ACM, 2014.
-  S. Johnson and M. Everingham. Clustered pose and nonlinear appearance models for human pose estimation. In BMVC, 2010.
-  L. Karlinsky and S. Ullman. Using linking features in learning non-parametric part models. In ECCV, 2012.
-  A. Newell and J. Deng. Associative embedding: End-to-end learning for joint detection and grouping. arXiv preprint arXiv:1611.05424, 2016.
-  A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.
-  G. Papandreou, T. Zhu, N. Kanazawa, A. Toshev, J. Tompson, C. Bregler, and K. Murphy. Towards accurate multi-person pose estimation in the wild. arXiv preprint arXiv:1701.01779, 2017.
-  L. Pishchulin, M. Andriluka, P. Gehler, and B. Schiele. Poselet conditioned pictorial structures. In CVPR, 2013.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
-  B. Sapp and B. Taskar. Modec: Multimodal decomposable models for human pose estimation. In CVPR, 2013.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  M. Sun and S. Savarese. Articulated part-based model for joint object detection and pose estimation. In ICCV, 2011.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
-  Y. Tian, C. L. Zitnick, and S. G. Narasimhan. Exploring the spatial hierarchy of mixture models for human pose estimation. In ECCV, 2012.
-  J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In NIPS, 2014.
-  A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In CVPR, 2014.
-  S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
-  F. Xia, P. Wang, X. Chen, and A. Yuille. Joint multi-person pose estimation and semantic part segmentation. arXiv preprint arXiv:1708.03383, 2017.
-  S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431, 2016.