The U-Net [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox] architecture has been widely used in location-sensitive tasks such as human pose estimation [Newell et al.(2016)Newell, Yang, and Deng] and semantic segmentation [Long et al.(2015)Long, Shelhamer, and Darrell]. The top-down and bottom-up processing facilitates inference at multiple scales. The shortcut connections between the corresponding top-down and bottom-up blocks help preserve the spatial information.
More recently, the DenseNet [Huang et al.(2017)Huang, Liu, Weinberger, and van der Maaten] has shown superior image classification accuracy and parameter efficiency compared with the ResNet [He et al.(2016)He, Zhang, Ren, and Sun]. The dense connectivity improves feature reuse in the forward pass and gradient propagation in the backward pass. Thus, it could use fewer parameters to achieve comparable or even better accuracy. A natural question arises: how could we use the dense connectivity to improve the performance of the U-Net?
Some works [Jégou et al.(2017)Jégou, Drozdzal, Vazquez, Romero, and Bengio, Li et al.(2017)Li, Chen, Qi, Dou, Fu, and Heng] have tried to combine the dense connectivity and the U-Net. They follow the DenseNet design: each top-down or bottom-up resolution has a dense block containing several densely connected convolutional layers. This straightforward application of dense connectivity is restricted to local blocks within a single U-Net. Another question arises: could we integrate the dense connectivity into several stacked U-Nets?
In this paper, we propose a global connection pattern. Given several stacked U-Nets, we add shortcut connections for each U-Net pair, generating the coupled U-Nets (CU-Net). The key idea is to connect blocks of the same semantic meaning, i.e. having the same resolution in either the top-down or bottom-up context. Please refer to Figure 1 for an illustration. Basically, a pair of U-Nets is connected in both the top-down and bottom-up contexts.
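As a concrete bookkeeping sketch (hypothetical code, not the authors' implementation), the coupling pattern can be enumerated as follows: block b of U-Net j receives a shortcut from block b of every earlier U-Net i < j.

```python
# Enumerate the coupling shortcuts in a CU-Net with `num_unets` stacked
# U-Nets, each having `num_blocks` semantic blocks: block b of U-Net j
# receives a shortcut from block b of every earlier U-Net i < j.
def coupling_edges(num_unets, num_blocks):
    edges = []
    for j in range(1, num_unets):        # receiving U-Net
        for i in range(j):               # every earlier U-Net
            for b in range(num_blocks):  # same semantic block only
                edges.append(((i, b), (j, b)))
    return edges

print(len(coupling_edges(2, 4)))  # one U-Net pair: 4 shortcuts
print(len(coupling_edges(4, 4)))  # C(4,2) pairs * 4 blocks = 24
```

Because the edges are global, every U-Net pair is coupled, not just adjacent ones.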
The proposed coupled U-Nets have three merits. First, the coupling connections are global, extending from the first U-Net to the last one. They encourage feature reuse as well as gradient propagation globally across different U-Nets. In contrast, the straightforward application of dense connectivity only helps the information flow inside a single U-Net. Second, we could easily add a supervision at the end of each U-Net when several U-Nets are coupled together. In other words, the coupled U-Nets naturally take advantage of multiple supervisions, whereas a single dense U-Net generally has only one supervision at the end. Third, the coupled U-Nets preserve the advantages of stacked U-Nets. Generally, several stacked U-Nets achieve higher accuracy than a single large U-Net of equivalent model size, benefiting from the multi-stage top-down and bottom-up inference along the U-Net cascade. The proposed coupled U-Nets inherit this property. Furthermore, the coupling largely improves the information flow compared with traditional stacked U-Nets, which significantly reduces the number of model parameters and yields very compact models. In summary, our key contributions are:
To the best of our knowledge, we are the first to propose the coupled U-Nets (CU-Net), formed by connecting the semantic blocks of U-Net pairs. The information flows more efficiently, and the feature reuse across U-Net pairs makes each U-Net lightweight.
We investigate using intermediate supervisions with the coupled U-Nets. With a moderate number of intermediate supervisions, the coupled U-Nets achieve the highest accuracy; we also observe that full intermediate supervisions are not the optimal choice.
Extensive experiments are conducted on human pose estimation. The CU-Net demonstrates superior localization accuracy while using at least 60% fewer parameters than state-of-the-art methods.
2 Related Work
In this section, we review recent work on designing convolutional network architectures and recent developments in human pose estimation.
Network Architecture. The research on network architectures has been active since AlexNet [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton] appeared. First, by using smaller filters, the VGG [Simonyan and Zisserman(2014)] network became several times deeper than the AlexNet and obtained much better performance. Then the Highway Networks [Srivastava et al.(2015)Srivastava, Greff, and S.] extended the depth to more than 100 layers with shortcut connections. Furthermore, the identity mappings make it possible to train the ResNet [He et al.(2016)He, Zhang, Ren, and Sun] with more than one thousand layers. More recently, the DenseNet [Huang et al.(2017)Huang, Liu, Weinberger, and van der Maaten] outperforms the ResNet, benefiting from its dense connections.
The U-Net [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox] architecture was proposed for biomedical image segmentation. It has been used in semantic segmentation [Long et al.(2015)Long, Shelhamer, and Darrell], face alignment [Peng et al.(2016)Peng, Feris, Wang, and Metaxas], etc. Newell et al. [Newell et al.(2016)Newell, Yang, and Deng] use stacked U-Nets in human pose estimation. They also apply the residual module [He et al.(2016)He, Zhang, Ren, and Sun] in the stacked U-Nets. Recently, some efforts [Jégou et al.(2017)Jégou, Drozdzal, Vazquez, Romero, and Bengio, Li et al.(2017)Li, Chen, Qi, Dou, Fu, and Heng] try to bring the dense connectivity [Huang et al.(2017)Huang, Liu, Weinberger, and van der Maaten] into the U-Net. However, their shortcut connections are only within a single U-Net.
Human Pose Estimation. CNN-based approaches [Wei et al.(2016)Wei, Ramakrishna, Kanade, and Sheikh, Pishchulin et al.(2016)Pishchulin, Insafutdinov, Tang, A., A., Gehler, and Schiele, Lifshitz et al.(2016)Lifshitz, Fetaya, and Ullman, Zhao et al.(2018)Zhao, Peng, Tian, Kapadia, and Metaxas] dominate human pose estimation and prediction. Newell et al. [Newell et al.(2016)Newell, Yang, and Deng] apply the stacked U-Nets and achieve high estimation accuracy. Nearly all recent state-of-the-art methods [Chu et al.(2016)Chu, Yang, Ouyang, Ma, Yuille, and Wang, Yang et al.(2017)Yang, Li, Ouyang, Li, and Wang, Chen et al.(2017)Chen, Shen, Wei, Liu, and Yang, Peng et al.(2018)Peng, Tang, Yang, Feris, and Metaxas] build on it. They use more sophisticated modules, graphical models, or additional adversarial networks [Tian et al.(2018)Tian, Peng, Zhao, Zhang, and Metaxas, Zhu et al.(2018)Zhu, Elhoseiny, Liu, Peng, and Elgammal]. In contrast, we focus on largely reducing the model parameters while still obtaining comparable accuracy.
3 Network Architecture
In this section, we first introduce a naive dense U-Net and recap the stacked U-Nets. After analyzing their strengths and weaknesses, we propose a new architecture, the coupled U-Nets. We also discuss using the coupled U-Nets with intermediate supervisions.
3.1 Naive Dense U-Net
A U-Net [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox] contains the same number of top-down and bottom-up blocks, usually with skip connections between them. An illustration is shown in Figure 1. The main difference between the naive dense U-Net and the traditional U-Net is that the plain convolutional layers become dense blocks. More specifically, the successive convolutional layers at the same spatial resolution are densely connected, forming a dense block.
Besides, the dense connections result in an increasing number of feature channels within each dense block. To control the channel number, a 1×1 convolution is applied after each dense block to compress the features.
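The channel growth inside a dense block reduces to simple arithmetic; the sketch below uses illustrative widths (input channels, layer count and growth rate are assumptions, not the paper's exact values).

```python
# Channel bookkeeping for a DenseNet-style block: each layer sees the
# concatenation of all earlier outputs and emits `growth` new channels,
# so the width grows linearly; a 1x1 convolution then compresses it.
def dense_block_channels(c_in, num_layers, growth):
    channels = c_in
    for _ in range(num_layers):
        channels += growth  # concatenate this layer's new features
    return channels

c = dense_block_channels(64, 4, 32)  # 64 + 4 * 32
print(c)       # 192
print(c // 2)  # 96, e.g. halved by the 1x1 compression
```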
The dense connections could increase the information flow in the U-Net to some extent. However, they exist only within local blocks. Besides, the naive dense U-Net is still one single U-Net. If we have several U-Nets, is there a more suitable design?
3.2 Stacked U-Nets
Recently, some works [Newell et al.(2016)Newell, Yang, and Deng, Wei et al.(2016)Wei, Ramakrishna, Kanade, and Sheikh] stack multiple U-Nets together. Figure 1 gives an illustration of stacked U-Nets. Basically, the features would go sequentially from the first U-Net to the last one. The last U-Net makes the final prediction of the model.
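The sequential flow can be sketched as plain function composition; the "U-Nets" below are toy stand-ins, not real networks.

```python
# Stacked U-Nets: the output of each U-Net is the input of the next,
# and the last U-Net produces the final prediction.
def stack(unets, x):
    for unet in unets:
        x = unet(x)
    return x

# toy "U-Nets" that just record which stage processed the input
toy_unets = [lambda x, i=i: x + [i] for i in range(4)]
print(stack(toy_unets, []))  # [0, 1, 2, 3]
```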
An advantage of stacked U-Nets is the repeated top-down and bottom-up inference. In one U-Net, the input goes through the top-down and bottom-up pipeline once. The U-Net could capture some spatial relationships of the predictions, but one pass may not be enough for accurate predictions. For instance, in human pose estimation, the relations between upper and lower body joints are complex. Adding a U-Net on top of another could help capture higher-order spatial relationships, resulting in higher prediction accuracy.
Besides, the stacked U-Nets make it very easy to add intermediate supervisions. Each U-Net could extend a side path to make its own prediction, and we could use the same ground truth for each prediction. This does not affect the feature flow in the main U-Net cascade. However, it is not straightforward to use intermediate supervisions in a single U-Net. Intermediate supervisions on its top-down blocks encourage predictions ignorant of the global cues in the lower resolutions. Similarly, intermediate supervisions on its bottom-up blocks cannot evaluate the feature effectiveness in the higher resolutions.
Since stacked U-Nets have more advantages than a single U-Net, could we incorporate the dense connectivity into them? The hybrid should keep the merits of both stacked U-Nets and dense connectivity.
3.3 Coupled U-Nets
Although U-Nets stacked together could refine the prediction stage by stage, there is no communication among them except through their inputs and outputs. To make information flow more efficiently across different U-Nets, we propose to couple U-Net pairs. Blocks at the same locations of two U-Nets have shortcut connections. Figure 2 gives an illustration.
The coupled U-Nets still have a main feature flow along the U-Net cascade, and each U-Net block generates a fixed number of new features on top of it. For each block, the inputs contain the features in the main flow plus additional features arriving through the shortcut connections from the same blocks of previous U-Nets. They are concatenated channel-wise, and a 1×1 convolution compresses them. A following convolution produces the new features. At last, the compressed input features and the generated features are concatenated, and another 1×1 convolution compresses them back to the main-flow width, flowing into the next block.
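The per-block channel arithmetic can be sketched as below. The widths are assumptions for illustration (128 main-flow channels and 32 generated features per block, matching the hyper-parameter setting chosen in the experiments), and each earlier U-Net is assumed to contribute one set of generated features through its shortcut.

```python
# Channel widths through one CU-Net block that receives coupling
# shortcuts from `prev` earlier U-Nets (hypothetical sizes).
def block_channels(main, gen, prev):
    concat_in = main + prev * gen  # main flow + one shortcut per earlier U-Net
    compressed = main              # 1x1 convolution compresses the concat
    new = gen                      # following convolution generates new features
    concat_out = compressed + new  # reuse inputs alongside new features
    out = main                     # 1x1 convolution restores the main-flow width
    return concat_in, concat_out, out

print(block_channels(128, 32, 3))  # (224, 160, 128)
```

Note that the output width equals the main-flow width, so the cascade depth does not blow up the channel count.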
Intuitively, the coupled U-Nets are stacked U-Nets plus shortcut connections among the semantic blocks. Therefore, the coupled U-Nets still possess the two advantages of stacked U-Nets: multi-stage top-down and bottom-up inference and effective intermediate supervisions. Moreover, the additional shortcut connections largely boost the information flow across U-Nets.
The proposed coupling helps not only feature reuse but also gradient backpropagation. The intermediate supervisions are known to provide additional gradients, so the two mechanisms have overlapping functions, and it is interesting to investigate how they cooperate. Empirically, coupled U-Nets with a moderate number of intermediate supervisions achieve the highest prediction accuracy, whereas stacked U-Nets usually work best with full intermediate supervisions. The coupling makes some intermediate supervisions unnecessary.
4 Experiments
In the experiments, we apply the CU-Net to human pose estimation. First, we compare different hyper-parameter configurations of the CU-Net and choose one setting that balances accuracy and parameter efficiency. Then we investigate how the CU-Net performs with intermediate supervisions. After that, we compare the CU-Net with the naive dense U-Net. At last, we compare the CU-Net with state-of-the-art human pose estimators in terms of both accuracy and the number of parameters.
Training. The learning rate is decayed once the validation accuracy plateaus.
Datasets. For human pose estimation, we use two benchmark datasets: MPII Human Pose [Andriluka et al.(2014)Andriluka, Pishchulin, Gehler, and Schiele] and Leeds Sports Pose (LSP) [Johnson and Everingham(2010)]. We also use random scaling (0.75-1.25), rotation (±30°) and left-right flipping to augment the data. We measure the human pose estimation accuracy by the Percentage of Correct Keypoints (PCK). More specifically, PCKh@0.5 and PCK@0.2 are used to measure the accuracy on MPII and LSP, respectively.
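The PCK family of metrics can be sketched as follows; this is a toy illustration rather than the official evaluation code. PCKh normalizes by the head size and PCK@0.2 commonly by the torso size; both reduce to counting joints within a fraction of a reference length.

```python
import math

# A keypoint prediction counts as correct when its distance to the
# ground truth is within `alpha` times a reference length.
def pck(preds, gts, ref_len, alpha=0.5):
    correct = sum(math.dist(p, g) <= alpha * ref_len
                  for p, g in zip(preds, gts))
    return correct / len(gts)

preds = [(10, 10), (20, 22), (40, 55)]
gts   = [(10, 11), (20, 20), (40, 40)]
print(pck(preds, gts, ref_len=10, alpha=0.5))  # 2 of 3 joints within 5 px
```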
4.1 Hyper-Parameter Selection
There are two important hyper-parameters in designing the CU-Net. One is the feature channel number in the main feature stream, which remains the same when the feature map resolution changes. The other is the number of features generated in each block of a U-Net. We have tried 6 combinations of the two. Table 1 gives the PCKhs on the MPII validation set. Besides, we choose 4 of the 6 settings and show how their validation PCKhs change during training in Figure 4.
In Table 1, the smallest main-flow and per-block feature numbers are 64 and 16, and we increase them in increments of 64 and 8, respectively. We could observe how the accuracy (PCKh) and the parameter number change along with the two hyper-parameters. First, the accuracy increases as the two numbers grow: the gains are 2.6%, 1.4%, 0.4%, 0.3% and 0.3% from left to right, so the increase slows down. Similar phenomena could be observed in Figure 4. According to the curves in Figure 4, training is also more stable with larger settings.
Besides, the parameter number also grows with the two hyper-parameters: the increments are 0.5M, 0.4M, 0.5M, 0.5M and 0.5M, so the growth remains roughly constant. We would like to select a model with high accuracy and low model complexity. Balancing the two, we choose a main-flow feature number of 128 and a per-block feature number of 32, and fix this setting in the following experiments.
4.2 Investigation of CU-Net with Intermediate Supervisions
Generally, the supervision of a CU-Net is the supervision of its last U-Net. Since a CU-Net contains several U-Nets, we consider adding supervisions for the preceding U-Nets. More specifically, we only add supervisions at the ends of U-Nets. Fortunately, the coupling connections do not prevent us from doing this. Note that if the supervision number is smaller than the U-Net number, we distribute the supervisions as uniformly as possible. For example, if 2 supervisions exist in 4 coupled U-Nets, they are at the ends of the second and fourth U-Nets.
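The uniform placement rule can be sketched as a small helper (hypothetical code, consistent with the example above):

```python
# Place `s` supervisions as uniformly as possible over `n` coupled U-Nets;
# the last U-Net always carries the final supervision.
def supervision_positions(s, n):
    return [round(i * n / s) for i in range(1, s + 1)]

print(supervision_positions(2, 4))  # [2, 4]: second and fourth U-Nets
print(supervision_positions(4, 8))  # [2, 4, 6, 8]
```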
Table 2 gives the PCKh comparison of the CU-Net with different numbers of supervisions. For 2 coupled U-Nets, adding a supervision for the first U-Net makes the validation PCKh drop by 0.2%. The coupling connections already strengthen the gradient propagation; the additional supervision makes the gradients too strong, so the model slightly overfits the training set.
However, the observations are different for more coupled U-Nets. According to Table 2, additional supervisions could improve the PCKh of 4 coupled U-Nets (CU-Net-4), which obtains the highest PCKh with 1 additional supervision. Similar results appear for the CU-Net-8, but 3 additional supervisions are needed for the highest PCKh. The CU-Net-4 and CU-Net-8 are much deeper than the CU-Net-2, and the coupling connections alone cannot fully compensate for the gradient vanishing caused by long-distance propagation. Thus, adding some intermediate supervisions could further improve the accuracy. The CU-Net-8 is twice as deep as the CU-Net-4, thereby requiring more intermediate supervisions.
4.3 Comparison of CU-Net with Naive Dense U-Net
We design the CU-Net after analyzing the drawbacks of the naive dense U-Net. In this experiment, we compare them to validate the design. The overall PCKh comparison of the naive dense U-Net, the CU-Net and the CU-Net with intermediate supervisions is shown in Figure 5. It shows three groups of comparisons with 2, 4 and 8 U-Nets. Note that the dense U-Net is always a single U-Net. For a fair comparison, we add one layer to each dense block of the dense U-Net every time we add one U-Net to the CU-Net.
According to Figure 5, the CU-Net clearly outperforms the dense U-Net by 1.0%, 0.5% and 0.5% from left to right, while the two have the same number of parameters in all three settings. This demonstrates that the multi-stage top-down and bottom-up inference in the CU-Net improves the accuracy. Further, adding intermediate supervisions could improve the accuracy except in the 2 U-Nets setting, since larger networks require more supervisions to aid training. This shows that the CU-Net has the flexibility to use intermediate supervisions. It is worth pointing out that neither the repeated top-down and bottom-up processing nor the intermediate supervisions require extra parameters.
We also show the PCKh curves under the three settings in Figures 4, 7 and 7. The converged PCKh gaps are consistent with those in Figure 5. Besides, the PCKh curves fluctuate more when intermediate supervisions are added. With more supervisions, the model learner makes larger steps on the training set. Due to the distribution shift between the training and validation sets, it is then easier to overshoot the local minima on the validation set.
4.4 Comparison with State-of-the-art Methods
In this experiment, we compare 8 coupled U-Nets (CU-Net-8) with state-of-the-art approaches for human pose estimation. Based on the above experiments, we choose a main-flow feature number of 128 and a per-block feature number of 32, and use intermediate supervisions with the CU-Net. More specifically, we add supervisions for the 2nd, 4th, 6th and 8th U-Nets.
Table 4 shows comparisons of human pose estimation on the MPII and LSP test sets. The CU-Net-8 achieves PCKhs comparable to state-of-the-art methods. In contrast, as shown in Table 3, the CU-Net-8 has only 17%-40% of the parameters of other recent state-of-the-art methods. It is worth highlighting that Newell et al. [Newell et al.(2016)Newell, Yang, and Deng] use 8 stacked U-Nets; the CU-Net-8 obtains comparable PCKhs with only 40% of their parameters.
The CU-Net is simple and effective. Other state-of-the-art methods use stacked U-Nets with either sophisticated modules [Yang et al.(2017)Yang, Li, Ouyang, Li, and Wang], graphical models [Chu et al.(2016)Chu, Yang, Ouyang, Ma, Yuille, and Wang] or extra adversarial networks [Chen et al.(2017)Chen, Shen, Wei, Liu, and Yang].
Table 3 compares the parameter numbers of Yang et al. [Yang et al.(2017)Yang, Li, Ouyang, Li, and Wang], Wei et al. [Wei et al.(2016)Wei, Ramakrishna, Kanade, and Sheikh], Bulat et al. [Bulat and Tzimiropoulos(2016)], Chu et al. [Chu et al.(2016)Chu, Yang, Ouyang, Ma, Yuille, and Wang] and Newell et al. [Newell et al.(2016)Newell, Yang, and Deng] (8 U-Nets) with the CU-Net-8.
PCKh@0.5 comparison on the MPII test set:

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pishchulin et al. ICCV’13 [Pishchulin et al.(2013)Pishchulin, Andriluka, Gehler, and Schiele] | 74.3 | 49.0 | 40.8 | 34.1 | 36.5 | 34.4 | 35.2 | 44.1 |
| Tompson et al. NIPS’14 [Tompson et al.(2014)Tompson, Jain, LeCun, and Bregler] | 95.8 | 90.3 | 80.5 | 74.3 | 77.6 | 69.7 | 62.8 | 79.6 |
| Carreira et al. CVPR’16 [Carreira et al.(2016)Carreira, Agrawal, Fragkiadaki, and Malik] | 95.7 | 91.7 | 81.7 | 72.4 | 82.8 | 73.2 | 66.4 | 81.3 |
| Tompson et al. CVPR’15 [Tompson et al.(2015)Tompson, Goroshin, Jain, LeCun, and B.] | 96.1 | 91.9 | 83.9 | 77.8 | 80.9 | 72.3 | 64.8 | 82.0 |
| Hu et al. CVPR’16 [Hu and Ramanan(2016)] | 95.0 | 91.6 | 83.0 | 76.6 | 81.9 | 74.5 | 69.5 | 82.4 |
| Pishchulin et al. CVPR’16 [Pishchulin et al.(2016)Pishchulin, Insafutdinov, Tang, A., A., Gehler, and Schiele] | 94.1 | 90.2 | 83.4 | 77.3 | 82.6 | 75.7 | 68.6 | 82.4 |
| Lifshitz et al. ECCV’16 [Lifshitz et al.(2016)Lifshitz, Fetaya, and Ullman] | 97.8 | 93.3 | 85.7 | 80.4 | 85.3 | 76.6 | 70.2 | 85.0 |
| Gkioxary et al. ECCV’16 [Gkioxari et al.(2016)Gkioxari, Toshev, and Jaitly] | 96.2 | 93.1 | 86.7 | 82.1 | 85.2 | 81.4 | 74.1 | 86.1 |
| Rafi et al. BMVC’16 [Rafi et al.(2016)Rafi, Leibe, Gall, and Kostrikov] | 97.2 | 93.9 | 86.4 | 81.3 | 86.8 | 80.6 | 73.4 | 86.3 |
| Belagiannis et al. FG’17 [Belagiann. and Zisserman(2017)] | 97.7 | 95.0 | 88.2 | 83.0 | 87.9 | 82.6 | 78.4 | 88.1 |
| Insafutdinov et al. ECCV’16 [Insafutdinov et al.(2016)Insafutdinov, Pishchulin, Andres, Andriluka, and Schiele] | 96.8 | 95.2 | 89.3 | 84.4 | 88.4 | 83.4 | 78.0 | 88.5 |
| Wei et al. CVPR’16 [Wei et al.(2016)Wei, Ramakrishna, Kanade, and Sheikh] | 97.8 | 95.0 | 88.7 | 84.0 | 88.4 | 82.8 | 79.4 | 88.5 |
| Bulat et al. ECCV’16 [Bulat and Tzimiropoulos(2016)] | 97.9 | 95.1 | 89.9 | 85.3 | 89.4 | 85.7 | 81.7 | 89.7 |
| Newell et al. ECCV’16 [Newell et al.(2016)Newell, Yang, and Deng] | 98.2 | 96.3 | 91.2 | 87.1 | 90.1 | 87.4 | 83.6 | 90.9 |
| Chu et al. CVPR’17 [Chu et al.(2016)Chu, Yang, Ouyang, Ma, Yuille, and Wang] | 98.5 | 96.3 | 91.9 | 88.1 | 90.6 | 88.0 | 85.0 | 91.5 |
PCK@0.2 comparison on the LSP test set:

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Belagiannis et al. FG’17 [Belagiann. and Zisserman(2017)] | 95.2 | 89.0 | 81.5 | 77.0 | 83.7 | 87.0 | 82.8 | 85.2 |
| Lifshitz et al. ECCV’16 [Lifshitz et al.(2016)Lifshitz, Fetaya, and Ullman] | 96.8 | 89.0 | 82.7 | 79.1 | 90.9 | 86.0 | 82.5 | 86.7 |
| Pishchulin et al. CVPR’16 [Pishchulin et al.(2016)Pishchulin, Insafutdinov, Tang, A., A., Gehler, and Schiele] | 97.0 | 91.0 | 83.8 | 78.1 | 91.0 | 86.7 | 82.0 | 87.1 |
| Insafutdinov et al. ECCV’16 [Insafutdinov et al.(2016)Insafutdinov, Pishchulin, Andres, Andriluka, and Schiele] | 97.4 | 92.7 | 87.5 | 84.4 | 91.5 | 89.9 | 87.2 | 90.1 |
| Wei et al. CVPR’16 [Wei et al.(2016)Wei, Ramakrishna, Kanade, and Sheikh] | 97.8 | 92.5 | 87.0 | 83.9 | 91.5 | 90.8 | 89.9 | 90.5 |
| Bulat et al. ECCV’16 [Bulat and Tzimiropoulos(2016)] | 97.2 | 92.1 | 88.1 | 85.2 | 92.2 | 91.4 | 88.7 | 90.7 |
| Chu et al. CVPR’17 [Chu et al.(2016)Chu, Yang, Ouyang, Ma, Yuille, and Wang] | 98.1 | 93.7 | 89.3 | 86.9 | 93.4 | 94.0 | 92.5 | 92.6 |
| Newell et al. ECCV’16 [Newell et al.(2016)Newell, Yang, and Deng] | 98.2 | 94.0 | 91.2 | 87.2 | 93.5 | 94.5 | 92.6 | 93.0 |
We have proposed the CU-Net, a new architecture based on the U-Net. We connect the same semantic blocks of several stacked U-Nets. Each U-Net pair is coupled since they are connected at multiple resolutions. Compared with the naive dense U-Net, the CU-Net has the advantages of multi-stage top-down and bottom-up inference and intermediate supervisions. Compared with the stacked U-Nets, it is more parameter-efficient, benefiting from feature reuse across U-Nets. Experiments on the MPII and LSP benchmark datasets show that it achieves state-of-the-art accuracy while using at most 40% of the model parameters of other methods.
This work is partly supported by the Air Force Office of Scientific Research (AFOSR) under the Dynamic Data-Driven Application Systems Program, NSF 1763523, 1747778, 1733843 and 1703883 Awards.
- [Andriluka et al.(2014)Andriluka, Pishchulin, Gehler, and Schiele] Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2d human pose estimation: New benchmark and state of the art analysis. In CVPR, 2014.
- [Belagiann. and Zisserman(2017)] Vasileios Belagiannis and Andrew Zisserman. Recurrent human pose estimation. In FG, 2017.
- [Bulat and Tzimiropoulos(2016)] Adrian Bulat and Georgios Tzimiropoulos. Human pose estimation via convolutional part heatmap regression. In ECCV, 2016.
- [Carreira et al.(2016)Carreira, Agrawal, Fragkiadaki, and Malik] Joao Carreira, Pulkit Agrawal, Katerina Fragkiadaki, and Jitendra Malik. Human pose estimation with iterative error feedback. In CVPR, 2016.
- [Chen et al.(2017)Chen, Shen, Wei, Liu, and Yang] Yu Chen, Chunhua Shen, Xiu-Shen Wei, Lingqiao Liu, and Jian Yang. Adversarial posenet: A structure-aware convolutional network for human pose estimation. In ICCV, 2017.
- [Chu et al.(2016)Chu, Yang, Ouyang, Ma, Yuille, and Wang] Xiao Chu, Wei Yang, Wanli Ouyang, Cheng Ma, A. Yuille, and Xiaogang Wang. Multi-context attention for human pose estimation. In CVPR, 2017.
- [Gkioxari et al.(2016)Gkioxari, Toshev, and Jaitly] Georgia Gkioxari, Alexander Toshev, and Navdeep Jaitly. Chained predictions using convolutional neural networks. In ECCV, 2016.
- [He et al.(2016)He, Zhang, Ren, and Sun] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
- [Hu and Ramanan(2016)] Peiyun Hu and Deva Ramanan. Bottom-up and top-down reasoning with hierarchical rectified gaussians. In CVPR, 2016.
- [Huang et al.(2017)Huang, Liu, Weinberger, and van der Maaten] Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In CVPR, 2017.
- [Insafutdinov et al.(2016)Insafutdinov, Pishchulin, Andres, Andriluka, and Schiele] Eldar Insafutdinov, Leonid Pishchulin, Bjoern Andres, Mykhaylo Andriluka, and Bernt Schiele. Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In ECCV, 2016.
- [Jégou et al.(2017)Jégou, Drozdzal, Vazquez, Romero, and Bengio] Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, and Yoshua Bengio. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In CVPRW, 2017.
- [Johnson and Everingham(2010)] Sam Johnson and Mark Everingham. Clustered pose and nonlinear appearance models for human pose estimation. In BMVC, 2010.
- [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
- [Li et al.(2017)Li, Chen, Qi, Dou, Fu, and Heng] Xiaomeng Li, Hao Chen, Xiaojuan Qi, Qi Dou, Chi-Wing Fu, and Pheng Ann Heng. H-denseunet: Hybrid densely connected unet for liver and liver tumor segmentation from ct volumes. arXiv, 2017.
- [Lifshitz et al.(2016)Lifshitz, Fetaya, and Ullman] Ita Lifshitz, Ethan Fetaya, and Shimon Ullman. Human pose estimation using deep consensus voting. In ECCV, 2016.
- [Long et al.(2015)Long, Shelhamer, and Darrell] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
- [Newell et al.(2016)Newell, Yang, and Deng] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.
- [Peng et al.(2016)Peng, Feris, Wang, and Metaxas] Xi Peng, Rogerio S Feris, Xiaoyu Wang, and Dimitris N Metaxas. A recurrent encoder-decoder network for sequential face alignment. In ECCV, 2016.
- [Peng et al.(2018)Peng, Tang, Yang, Feris, and Metaxas] Xi Peng, Zhiqiang Tang, Fei Yang, Rogerio S Feris, and Dimitris Metaxas. Jointly optimize data augmentation and network training: Adversarial data augmentation in human pose estimation. In CVPR, 2018.
- [Pishchulin et al.(2013)Pishchulin, Andriluka, Gehler, and Schiele] Leonid Pishchulin, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele. Strong appearance and expressive spatial models for human pose estimation. In ICCV, 2013.
- [Pishchulin et al.(2016)Pishchulin, Insafutdinov, Tang, A., A., Gehler, and Schiele] Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter V Gehler, and Bernt Schiele. Deepcut: Joint subset partition and labeling for multi person pose estimation. In CVPR, 2016.
- [Rafi et al.(2016)Rafi, Leibe, Gall, and Kostrikov] Umer Rafi, Bastian Leibe, Juergen Gall, and Ilya Kostrikov. An efficient convolutional network for human pose estimation. In BMVC, 2016.
- [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
- [Simonyan and Zisserman(2014)] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv, 2014.
- [Srivastava et al.(2015)Srivastava, Greff, and S.] Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In NIPS, 2015.
- [Tian et al.(2018)Tian, Peng, Zhao, Zhang, and Metaxas] Yu Tian, Xi Peng, Long Zhao, Shaoting Zhang, and Dimitris N Metaxas. Cr-gan: Learning complete representations for multi-view generation. IJCAI, 2018.
- [Tompson et al.(2015)Tompson, Goroshin, Jain, LeCun, and B.] Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In CVPR, 2015.
- [Tompson et al.(2014)Tompson, Jain, LeCun, and Bregler] Jonathan J Tompson, Arjun Jain, Yann LeCun, and Christoph Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In NIPS, 2014.
- [Wei et al.(2016)Wei, Ramakrishna, Kanade, and Sheikh] Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines. In CVPR, 2016.
- [Yang et al.(2017)Yang, Li, Ouyang, Li, and Wang] Wei Yang, Shuang Li, Wanli Ouyang, Hongsheng Li, and Xiaogang Wang. Learning feature pyramids for human pose estimation. In ICCV, 2017.
- [Zhao et al.(2018)Zhao, Peng, Tian, Kapadia, and Metaxas] Long Zhao, Xi Peng, Yu Tian, Mubbasir Kapadia, and Dimitris Metaxas. Learning to forecast and refine residual motion for image-to-video generation. In ECCV, 2018.
- [Zhu et al.(2018)Zhu, Elhoseiny, Liu, Peng, and Elgammal] Yizhe Zhu, Mohamed Elhoseiny, Bingchen Liu, Xi Peng, and Ahmed Elgammal. A generative adversarial approach for zero-shot learning from noisy texts. In CVPR, 2018.