Lifting 2d Human Pose to 3d : A Weakly Supervised Approach

05/03/2019 · Sandika Biswas et al. · Tata Consultancy Services

Estimating 3d human pose from monocular images is a challenging problem due to the variety and complexity of human poses and the inherent ambiguity of recovering depth from a single view. Recent deep learning based methods show promising results by using supervised learning on 3d pose annotated datasets. However, the lack of large-scale 3d annotated training data captured under in-the-wild settings makes 3d pose estimation difficult for in-the-wild poses. A few approaches have utilized training images from both 3d and 2d pose datasets in a weakly-supervised manner for learning 3d poses in unconstrained settings. In this paper, we propose a method that can effectively predict 3d human pose from 2d pose using a deep neural network trained in a weakly-supervised manner on a combination of ground-truth 3d pose and ground-truth 2d pose. Our method uses re-projection error minimization as a constraint to predict the 3d locations of body joints, which is crucial for training on data where the 3d ground truth is not present. Since minimizing re-projection error alone may not guarantee an accurate 3d pose, we also use additional geometric constraints on the skeleton to regularize the pose in 3d. We demonstrate the superior generalization ability of our method by cross-dataset validation on MPI-INF-3DHP, a challenging 3d benchmark dataset containing in-the-wild 3d poses.


I Introduction

Human pose estimation from images and videos is a fundamental problem in computer vision with a variety of applications such as virtual reality, gaming, surveillance, human-computer interaction, and health-care [1, 2]. Estimating the shape of the human skeleton in 3d from a single image [3, 4, 5, 6, 7, 8, 9] or video [10, 11, 12, 13] is a much more challenging problem than estimating the pose in 2d [14, 15], due to the inherent ambiguity of estimating depth from a single view. Thanks to the availability of large-scale 2d pose annotated datasets [16], state-of-the-art deep supervised methods for 2d pose estimation have successfully generalized to new "in-the-wild" images [14] — images not captured under any specific scene settings or pose restrictions. However, the well-known 3d pose datasets [17, 18] contain 3d motion capture (MoCap) data recorded in a controlled indoor setup. Hence, 3d supervised learning methods [19, 4, 9, 7] do not generalize well to in-the-wild datasets where 3d ground truth is not present.

Fig. 1: Left: Image from the MPI-INF-3DHP test dataset. Right: 3d pose prediction from the state-of-the-art weakly-supervised method of Zhou et al., which does not capture the pose correctly.
Fig. 2: Left: Image from the MPI-INF-3DHP test dataset. Right: Comparison of the 2d ground-truth pose (red), the 2d re-projection of the ground-truth 3d pose (blue), and the 2d re-projection of the predicted 3d pose (using camera parameters) from a state-of-the-art method (black). There is a significant error between the 2d re-projection of the predicted 3d pose and the ground-truth 2d pose, which we minimize to improve the predicted 3d pose.

Almost all recent methods for monocular 3d pose estimation fall under one of three approaches: (i) estimating 3d pose from images directly using full 3d supervision [4, 3, 20]; (ii) estimating 3d pose from ground-truth 2d pose using full 3d supervision [19]; (iii) estimating 3d pose directly from images using weakly-supervised learning [13, 5]. Approach (ii) has been shown to be more effective than approach (i), since the 2d pose input makes the process of lifting 2d pose to 3d invariant to image-related factors such as illumination, background, and occlusion, which adversely affect the overall accuracy of 3d pose estimation. Both approaches (i) and (ii) produce very high accuracy on the popular 3d benchmark datasets captured under controlled settings, but may fail to generalize when the pose or scene differs substantially from the 3d training examples [19]. Weakly-supervised methods, on the other hand, use 2d pose ground truth from 2d pose datasets as weak labels in addition to the 3d pose ground truth from 3d pose datasets. Since 2d datasets contain poses in the wild [16], these methods generalize better than the fully-supervised methods of (i) and (ii). However, current weakly-supervised methods follow a two-step approach, first predicting 2d pose from the image and then regressing joint depth in a single end-to-end network [5, 13]. Training such a network depends critically on the accuracy of the 2d pose detector, and the depth regression adversely affects the 2d pose accuracy in an end-to-end pipeline [5]. If the 2d pose accuracy drops, the 3d pose accuracy degrades with it. Figure 1 shows an example where a weakly-supervised approach [5], which computes 2d pose and 3d pose jointly, produces incorrect joint locations due to an incorrect estimate of the intermediate 2d pose during training.

In this work, we address the following problem: given ground-truth 2d poses in the wild, can 3d poses be recovered with sufficient accuracy even in the absence of 3d ground truth? To the best of our knowledge, this is the first work to address this specific problem, combining the motivations behind approaches (ii) and (iii). We propose a weakly-supervised approach for 3d pose estimation from a given 2d pose, using a simple deep network that consists of a 2d-to-3d pose regression module and a 3d-to-2d pose re-projection module. The advantage of our network is that it can be trained simultaneously on data from both 3d pose datasets and 2d pose datasets (without 3d pose annotations). Our 2d-to-3d pose regression module is similar to the state-of-the-art network [19], with the differences that (a) our learning is weakly supervised instead of fully supervised, and (b) it can be trained on any dataset containing only ground-truth 2d labels. Our 3d-to-2d pose re-projection module is designed to ensure that the predicted 3d pose re-projects correctly onto the input 2d pose, which is not incorporated in the existing fully supervised method of [19], as shown in Figure 2. In the absence of ground-truth 3d pose, our method constrains the predicted 3d pose by minimizing the re-projection error with respect to the input 2d pose. Our 3d-to-2d regression network does not require knowledge of camera parameters and hence can be used on arbitrary images without known camera parameters (e.g., images from the MPII dataset [16]). This simple approach of "lifting" 2d pose to 3d and subsequently re-projecting 3d to 2d enables joint training on in-the-wild 2d pose datasets that do not contain 3d pose ground truth.
Our approach differs from other weakly-supervised methods for 3d pose estimation as we do not address the problem of 2d pose estimation and focus only on the effective learning of 3d poses from ground truth 2d poses, even when datasets do not contain 3d pose labels.
The main contributions of our paper are outlined below:

  • We propose a network for predicting 3d human pose from 2d pose that can be trained in a weakly-supervised manner using 3d pose annotated data as well as data with only 2d pose annotations.

  • In addition to the standard 2d-to-3d pose regression, we introduce a 3d-to-2d re-projection network that minimizes the 3d pose re-projection error in order to predict accurate 3d poses in the absence of 3d ground truth. This re-projection error is also used to refine the predicted 3d pose when 3d ground truth is available. The 3d-to-2d projection network can be trained on data with unknown camera parameters.

  • By training on a mixture of 2d and 3d pose datasets, our method outperforms state-of-the-art 3d pose estimation methods on a benchmark 3d dataset containing challenging poses in the wild.

Fig. 3: Proposed network architecture. Each of the two network modules, i.e., the 2d-to-3d regression module and the 3d-to-2d re-projection module, uses the architecture of Martinez et al.

II Related Work

Monocular 3d human pose estimation: The monocular 3d human pose estimation problem is to learn the 3d structure of a human skeleton from a single image or a video, without using depth information or multiple views. It is a severely ill-posed problem that has been formulated in recent literature [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 3, 20] as a supervised learning problem, given the availability of 3d human motion capture datasets [18, 17]. Most of these works focus on end-to-end 3d pose estimation from single images [5, 6, 7, 8, 9, 12, 13, 3, 20], while some utilize temporal sequences to estimate 3d pose from video [11, 10]. Our work addresses 3d pose estimation from a single image, which is also applicable to videos, though without utilizing any temporal information.

2d pose to 3d pose: Recent works have approached the problem of estimating 3d poses from 2d poses estimated a priori from images [11, 19, 21]. These methods use 2d pose detections from a state-of-the-art 2d pose detector [14], which provides invariance to illumination changes, background clutter, clothing variation, etc. Decoupling the two stages also makes it possible to measure the accuracy of "lifting" ground-truth 2d poses to 3d. The current state-of-the-art work by Martinez et al. [19] uses a simple deep feedforward network that takes a 2d human pose as input and estimates the 3d pose with very high accuracy. Their results suggest the effectiveness of decoupling 3d pose estimation into two separate problems — 2d pose estimation from an image, and 3d pose estimation from a 2d pose. Their 3d pose detector trained on ground-truth 2d poses achieved a remarkable improvement in accuracy (30%), implying that the accuracy of 2d pose estimation remains the bottleneck in end-to-end 3d pose estimation. Inspired by their work, we address the problem of learning 3d poses from known, highly accurate 2d poses. However, the fully supervised method of Martinez et al. [19] can fail to recover the 3d pose accurately from a ground-truth 2d pose if that pose is considerably different from the Human3.6m training examples or contains occluded or cropped human poses [19]. This has led us to address the problem of effectively learning 3d poses from 2d pose data with greater pose diversity, alongside the existing 3d pose datasets.

Weakly-supervised learning of 3d pose: In this approach, 2d pose datasets (without 3d annotations) have been used to train 3d pose estimation networks for simultaneous 2d and 3d pose prediction [22] and for weakly-supervised learning of 3d poses from images [5, 13]. In the absence of 3d ground-truth labels, many different 3d poses project back to the same 2d pose; hence the 3d poses must satisfy geometric validity constraints, such as bone-length ratios [5] and illegal-angle constraints [13, 6]. However, current weakly-supervised approaches work directly on images, so the accuracy of the predicted 3d pose is affected by the accuracy of the 2d pose learned in the intermediate stages of the network [5]. This makes it difficult to identify whether a 3d pose failure on an arbitrary image is due to a noisy 2d pose estimate or an inaccurate "lifting" of the 2d pose to 3d. In this work, we carry out weakly-supervised learning directly on 2d poses instead of images, to investigate the accuracy of learning 3d poses from ground-truth 2d poses in the absence of ground-truth 3d poses.

In-the-wild 3d pose: The widely used 3d pose benchmark datasets Human3.6m [18] and HumanEva [17] are captured with MoCap systems in controlled lab environments and do not contain sufficient pose variability or scene diversity. On the other hand, datasets such as MPII [16] contain large-scale in-the-wild data with crowdsourced ground-truth 2d pose annotations, which has enabled 2d pose estimation methods to generalize well to in-the-wild images. 3d pose estimation methods that use these 2d pose datasets for weak supervision can only be assessed qualitatively on them. Recently, the more challenging 3d pose dataset MPI-INF-3DHP [23] has been introduced for more generalized 3d human pose estimation, as it contains some "in-the-wild" 3d pose training examples. Recent works have used this dataset for supervised learning and cross-dataset evaluation [6, 3, 5, 8, 20, 13]. We use this dataset to demonstrate the generalization of our method.

III Proposed Method

Our goal is to learn the 3d human pose $y$ (a set of $J$ body-joint locations in 3-dimensional space), given the 2d pose $x$ in 2-dimensional image coordinates. The 3d pose is learnt in a weakly-supervised manner from a dataset containing samples with 2d-3d ground-truth pose pairs $(x, y)$ as well as samples with only 2d pose labels $x$. For any given training sample, 3d pose prediction is learnt using supervised learning when the ground truth $y$ is present, and in an unsupervised manner from the input $x$ when $y$ is not present. $x$ is also used as a label for increasing the re-projection accuracy of the predicted 3d pose when the ground truth $y$ is present. The proposed network architecture is illustrated in Figure 3. The network consists of (i) a 2d-to-3d pose regression module (Section III-A) for predicting the 3d pose $\hat{y}$ from the given 2d pose $x$, and (ii) a 3d-to-2d re-projection module (Section III-B) for aligning the 2d re-projection $\hat{x}$ of the predicted 3d pose with the input 2d pose $x$. Both the 2d-to-3d and 3d-to-2d networks adopt an architecture similar to that proposed by Martinez et al. [19].

III-A 2d-to-3d Pose Regression

Our 2d-to-3d pose regression is carried out using the network proposed by Martinez et al. [19]: a deep feedforward neural network that effectively learns to predict the 3d pose from the input 2d pose using a fully supervised 3d loss, defined as

\mathcal{L}_{3d} = \frac{1}{N} \sum_{i=1}^{N} \lVert \hat{y}_i - y_i \rVert_2^2    (1)

where $\hat{y}_i$ is the predicted 3d pose, $y_i$ is the ground-truth 3d pose, and $N$ is the number of training samples. When a training sample contains a ground-truth 3d pose, our network minimizes the supervised 3d loss defined in Equation 1.
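As a concrete sketch, the supervised loss of Equation 1 can be implemented in a few lines; here poses are assumed to be NumPy arrays of shape (N, J, 3), and the function name is ours, not from the authors' code.

```python
import numpy as np

def supervised_3d_loss(pred_3d, gt_3d):
    """Eq. 1: mean over samples of the squared distance between
    predicted and ground-truth 3d joint locations.
    pred_3d, gt_3d: arrays of shape (N, J, 3)."""
    sq_err = np.sum((pred_3d - gt_3d) ** 2, axis=(1, 2))  # per-sample squared norm
    return float(np.mean(sq_err))
```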

III-B 3d Pose Re-projection

The predicted 3d pose is valid only if it projects back correctly onto the input 2d pose, so the re-projection error is minimized to constrain the predicted 3d pose. This re-projection loss is defined as

\mathcal{L}_{proj} = \frac{1}{N} \sum_{i=1}^{N} \lVert \hat{x}_i - x_i \rVert_2^2    (2)

where $\hat{x}_i$ is the 2d re-projection of the predicted 3d pose and $x_i$ is the input 2d pose.

An infinite number of 3d poses can re-project to the same 2d pose, but not all of them are physically plausible human poses. Hence, the solution space is further restricted to ensure plausibility of the predicted 3d poses by introducing structural constraints on the bone lengths.

III-C Geometric Constraints on 3d Human Poses

Bone length symmetry loss

To ensure symmetry between contralateral segments of the human pose, a bone-length symmetry loss is applied to the predicted limb lengths: the bone lengths of the legs, the arms, neck-to-shoulder, and hip-to-pelvis are the same for the left and right sides of the body. This constraint is enforced on the predicted 3d pose using the symmetry loss, defined as

\mathcal{L}_{sym} = \sum_{s \in S} \left( b^{s}_{l} - b^{s}_{r} \right)^2    (3)

where $S$ = {leg, arm, neck-shoulder, hip-pelvis} is the set of paired skeleton segments, and $b^{s}_{l}$ and $b^{s}_{r}$ are the bone lengths of the left and right sides of segment $s$.

The total loss minimized by our full network is defined as

\mathcal{L}_{total} = w_{3d}\,\mathcal{L}_{3d} + w_{proj}\,\mathcal{L}_{proj} + w_{sym}\,\mathcal{L}_{sym}    (4)

where $w_{3d}$, $w_{proj}$, and $w_{sym}$ are scalar weights for the respective loss terms. In the absence of 3d pose ground truth, $w_{3d}$ is set to 0.
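A minimal sketch of the symmetry loss (Eq. 3) and the weighted total loss (Eq. 4) might look as follows; the joint indices and bone-pair list are illustrative assumptions, not the paper's actual skeleton layout.

```python
import numpy as np

# Hypothetical contralateral bone pairs: ((parent_l, child_l), (parent_r, child_r)).
SYMMETRIC_BONES = [((0, 1), (0, 4)),   # pelvis -> left/right hip
                   ((1, 2), (4, 5))]   # left/right upper leg

def bone_length(pose_3d, parent, child):
    return np.linalg.norm(pose_3d[child] - pose_3d[parent])

def symmetry_loss(pose_3d):
    """Eq. 3: squared difference between left and right bone lengths."""
    return sum((bone_length(pose_3d, *left) - bone_length(pose_3d, *right)) ** 2
               for left, right in SYMMETRIC_BONES)

def total_loss(l_3d, l_proj, l_sym, has_3d_gt, w_3d=0.5, w_proj=0.5, w_sym=1.0):
    """Eq. 4: weighted sum of the losses; the supervised term is
    dropped when the sample carries no 3d ground truth."""
    if not has_3d_gt:
        w_3d = 0.0
    return w_3d * l_3d + w_proj * l_proj + w_sym * l_sym
```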

III-D Network Architecture

Figure 3 shows the overall architecture of our proposed network. Both the 2d-to-3d and 3d-to-2d network modules use an architecture similar to that of Martinez et al. [19]. Each module consists of two residual blocks, where each residual block contains two linear layers with batch normalization, dropout, and ReLU activation after each layer. A residual connection is added from the initial to the final layer of each block. In addition, the input and output of each module are connected to fully connected layers that map the input dimension to the dimension of the intermediate layers and back to the output dimension, respectively.
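For illustration, the data flow through one such residual block can be sketched in NumPy (inference only; batch normalization and dropout are omitted for brevity, and all names and weight values are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # width of the intermediate layers, as in Martinez et al. [19]

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, b1, w2, b2):
    """Two linear + ReLU layers with a skip connection from
    the block input to its output."""
    h = relu(x @ w1 + b1)
    h = relu(h @ w2 + b2)
    return x + h  # residual connection

# Random weights just to exercise the shapes.
w1, w2 = (rng.standard_normal((D, D)) * 0.01 for _ in range(2))
b1, b2 = np.zeros(D), np.zeros(D)
out = residual_block(rng.standard_normal((4, D)), w1, b1, w2, b2)
```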

IV Experimental Setup

Method Direct. Discuss Eating Greet Phone Photo Pose Purch.
Zhou et al. [5] 54.8 60.7 58.2 71.4 62.0 53.8 55.6 75.2
Dabral et al. [13] 44.8 50.4 44.7 49.0 52.9 61.4 43.5 45.5
Yang et al. [8] 51.5 58.9 50.4 57.0 62.1 65.4 49.8 52.7
Luo et al. [6] 49.2 57.5 53.9 55.4 62.2 73.9 52.1 60.9
Sun et al. [22] 42.1 44.3 45.0 45.4 51.5 43.2 41.3 59.3
Martinez et al. (GT) [19] 37.7 44.4 40.3 42.1 48.2 54.9 44.4 42.1
Ours (GT) 35.74 42.39 39.06 40.55 44.37 52.54 42.86 38.83
Method Sitting SittingD Smoke Wait WalkD Walk WalkT Avg.
Zhou et al. [5] 111.6 64.1 65.5 66.0 51.4 63.2 55.3 64.9
Dabral et al. [13] 63.1 87.3 51.7 48.5 52.2 37.6 41.9 52.1
Yang et al. [8] 69.2 85.2 57.4 58.4 43.6 60.1 47.7 58.6
Luo et al. [6] 73.8 96.5 60.4 55.6 69.5 46.6 52.4 61.3
Sun et al. [22] 73.3 51.0 53.0 44.0 38.3 48.0 44.8 48.3
Martinez et al. (GT)[19] 54.6 58.0 45.1 46.4 47.6 36.4 40.4 45.5
Ours (GT) 53.08 53.90 42.10 43.36 43.92 33.31 36.54 42.84
TABLE I: MPJE (Mean Per Joint Error, mm) on the Human3.6m dataset under the defined protocol, i.e., no rigid alignment of the predicted 3d pose with the ground-truth 3d pose. GT denotes training on ground-truth 2d pose labels. Except for Martinez et al., all other state-of-the-art methods train on images instead of 2d pose labels. Our model achieves the lowest MPJE for the majority of actions.

IV-A Dataset Description

Human3.6m [24] is the largest publicly available 3d human pose benchmark dataset, with ground-truth annotations captured with four RGB cameras and a motion capture (MoCap) system. The dataset consists of 3.6 million images featuring 11 professional actors (only 7 used in the experimental setup) performing 15 everyday activities such as walking, eating, sitting, discussing, and taking photos. The dataset provides both 2d and 3d joint locations along with camera parameters and body proportions for all actors. Each pose is annotated with 32 joints; however, only the 17 major joints are used in the experimental setups of most state-of-the-art methods [19, 5]. We evaluate our proposed method using the standard protocol [25] for Human3.6m, which uses actors 1, 5, 6, 7, and 8 for training and actors 9 and 11 for testing.

MPII [16] is the benchmark dataset for 2d human pose estimation. Its images were collected from short YouTube videos covering daily human activities with complex poses and varied image appearance. Poses are annotated by humans with sixteen 2d joints. It contains around 25k training images and 2957 validation images. Since the dataset was collected in the wild (not in a controlled lab setup), it contains a large variety of poses; hence, 3d pose estimation methods can use this data for better generalization to in-the-wild human poses.

MPI-INF-3DHP [3] is a recently released 3d human pose dataset of 6 subjects performing 7 actions in indoor settings (backgrounds with a green screen (GS) and without a green screen (NoGS)), captured with a MoCap system and 14 RGB cameras, plus 2 subjects performing actions in outdoor in-the-wild settings. This makes it a more challenging dataset than Human3.6m, whose data is captured only in indoor settings. We use the MPI-INF-3DHP dataset to test the generalization ability of our proposed model to in-the-wild 3d poses. The testing split consists of 2935 valid frames.

IV-B Data Pre-processing

While no augmentation is applied to the 2d and 3d poses of the Human3.6m and MPI-INF-3DHP datasets, MPII is augmented 35 times (rotation and scaling of 2d poses) for training, to make it comparable in size to Human3.6m. Like previous work [19], we apply standard normalization (zero mean, unit standard deviation) to the 2d and 3d poses. We use root-centered 3d poses (skeleton with origin at the pelvis joint) for 2d-to-3d regression, like many other state-of-the-art methods, and we also apply root-centering to the 2d pose labels for the 3d-to-2d re-projection module.
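The pre-processing above amounts to two small operations; a sketch with our own helper names, assuming poses stored as (N, J, D) arrays:

```python
import numpy as np

def root_center(poses, root=0):
    """Subtract the root (pelvis) joint from every joint.
    poses: (N, J, D) with D = 2 or 3."""
    return poses - poses[:, root:root + 1, :]

def standard_normalize(poses, mean=None, std=None):
    """Zero-mean / unit-std normalization; statistics are computed
    on the training set and reused at test time."""
    flat = poses.reshape(len(poses), -1)
    if mean is None:
        mean = flat.mean(axis=0)
        std = flat.std(axis=0) + 1e-8  # guard against constant dimensions
    return (flat - mean) / std, mean, std
```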

IV-C Training Strategy

Our network is trained in three consecutive phases. In the first phase, only the 2d-to-3d regression module is trained with full supervision on 3d pose ground truth. In the second phase, the 3d-to-2d re-projection module is trained with ground-truth 3d pose input to predict the re-projected 2d pose. In the third phase, both pre-trained modules are fine-tuned simultaneously; during this final phase, the 3d-to-2d re-projection module is fine-tuned using the predicted 3d pose instead of the ground-truth 3d pose. To study the generalization ability of our proposed network, three variants of the model are trained.

  • Model I: trained on the Human3.6m dataset.

  • Model II: Model I fine-tuned on the Human3.6m and MPII datasets.

  • Model III: Model I fine-tuned on the MPI-INF-3DHP dataset.

Except for Model II, all models are trained with the 3d supervised loss, the 2d re-projection loss, and the bone-length symmetry loss. For the MPII samples in Model II, only the unsupervised losses, i.e., the 2d re-projection loss and the bone-length symmetry loss, are used.

IV-D Implementation Details

The 2d-to-3d regression and 3d-to-2d re-projection modules of Model I and Model III (MPI-INF-3DHP fine-tuning) are individually pre-trained on ground-truth poses for 50 epochs. After pre-training, the modules are trained simultaneously for another 100 epochs, with the predicted 3d pose as input to the 3d-to-2d re-projection module. For Model II (MPII fine-tuning), both modules are fine-tuned simultaneously for 200 epochs using training samples from both Human3.6m and MPII (1:1 ratio in a batch). For training samples from MPII, the weight $w_{3d}$ is set to 0, since MPII does not contain ground-truth 3d poses for supervision. In all other cases, $w_{3d}$, $w_{proj}$, and $w_{sym}$ are empirically set to 0.5, 0.5, and 1.0, respectively, during end-to-end training of the full network. The learning rate is 1e-4 and the batch size is 64 for the training of all models.
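For reference, the training hyper-parameters above can be collected into a single configuration sketch (the key names are ours):

```python
# Training hyper-parameters as reported in Section IV-D; key names are ours.
CONFIG = {
    "pretrain_epochs": 50,        # each module separately (Models I and III)
    "joint_epochs": 100,          # simultaneous training of both modules
    "mpii_finetune_epochs": 200,  # Model II, Human3.6m:MPII = 1:1 per batch
    "loss_weights": {"w_3d": 0.5, "w_proj": 0.5, "w_sym": 1.0},
    "learning_rate": 1e-4,
    "batch_size": 64,
}
```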

Method Training Data PCK AUC
GS NoGS Outdoor ALL ALL
Zhou et al. [5] H36m 45.6 45.1 14.4 37.7 20.9
Martinez et al.[19] H36m 62.8* 58.5* 62.2* 62.2* 27.7*
Mehta et al.[3] H36m 70.8 62.3 58.5 64.7 31.7
Luo et al. [6] H36m 71.3* 59.4* 65.7* 65.6* 33.2*
Yang et al.[8] H36M+MPII - - - 69.0 32.0
Zhou et al.[5] H36m+MPII 71.1 64.7 72.7 69.2 32.5
Ours (Model I) H36m 66.9* 63.0* 67.4* 65.8* 31.2*
Ours (Model II) H36m+MPII 74.2* 66.9* 71.4* 70.8* 34.5*
TABLE II: Results on the MPI-INF-3DHP test set by scene. Higher PCK (%) and AUC indicate better performance. '-' means the value is not given in the original paper; '*' denotes re-targeting of the predicted 3d pose using ground-truth limb lengths. Our model shows the best performance among state-of-the-art methods when fine-tuned on the MPII dataset.
Method Training Data Walk Exer. Sit Reach Floor Sport Misc. Total
PCK PCK PCK PCK PCK PCK PCK PCK AUC MPJE
Mehta et al. [3] (MPII+LSP)H3.6M+3DHP 86.6 75.3 74.8 73.7 52.2 82.1 77.5 75.7 39.3 117.6
Mehta et al.[20] (MPII+LSP)H3.6M+3DHP 87.7 77.4 74.7 72.9 51.3 83.3 80.1 76.6 40.4 124.7
Dabral et al.[13] H3.6M+3DHP - - - - - - - 76.7 39.1 103.8
Luo et al.[6] (MPII)H3.6M+3DHP 90.5* 80.9* 90.0* 85.6* 70.2* 93.0* 92.9* 83.8* 47.7* 85.0*
Ours H3.6M+3DHP 97.3* 93.0* 92.3* 95.3* 86.4* 94.6* 94.3* 85.4* 55.8* 71.40*
TABLE III: Activity-wise performance on the MPI-INF-3DHP test set using the standard metrics PCK (%), AUC, and MPJE (mm). A parenthesized prefix, e.g. (MPII), means pre-trained on that dataset; some entries also use background augmentation in the training data. '-' means the value is not given in the original paper; '*' denotes re-targeting of the predicted 3d pose using ground-truth limb lengths. Higher PCK and AUC and lower MPJE indicate better performance. We achieve significantly better performance than the state-of-the-art methods on all actions in terms of all metrics.
Fig. 4: Qualitative evaluation of Model I on the Human3.6m dataset. First and fourth columns: input 2d poses. Second and fifth columns: ground-truth 3d poses. Third and sixth columns: 3d pose predictions of the proposed Model I (trained on Human3.6m with re-projection loss). Our model captures all the poses accurately and performs better than recent state-of-the-art methods. Quantitative results are given in Table I.

In the following section, we present an experimental evaluation of our proposed network on different benchmark datasets. We use the Human3.6m and MPII datasets for training, report quantitative performance on Human3.6m, and show qualitative performance on MPII to compare with state-of-the-art methods. To show the generalization ability of our method, we perform cross-dataset validation on MPI-INF-3DHP and also present results after fine-tuning on MPI-INF-3DHP. We present a comparative analysis with state-of-the-art methods that estimate 3d pose directly from 2d pose, as well as methods that predict 3d pose from images (end-to-end approaches).

V Experimental Evaluation

V-A Quantitative Results

Evaluation on the test datasets is done using standard 3d pose estimation metrics: MPJE (Mean Per Joint Error, in mm) for the Human3.6m dataset, along with PCK (Percentage of Correct Keypoints) and AUC (Area Under the Curve) [23, 20, 6, 13, 5] for MPI-INF-3DHP, which are more robust metrics for identifying incorrect joint predictions. Following the evaluation convention of existing works [23], we choose a threshold of 150 mm for computing PCK. For quantitative evaluation on MPI-INF-3DHP, to account for the depth-scale difference between Human3.6m and MPI-INF-3DHP, the predicted 3d pose is re-targeted to the ground-truth "universal" skeleton of MPI-INF-3DHP. This is done by scaling the predicted pose using ground-truth bone lengths while preserving the bone directions, following standard practice [20, 6, 5]. Moreover, we also account for the difference in pelvis-joint definitions between Human3.6m and MPI-INF-3DHP when evaluating our Model II on MPI-INF-3DHP: the predicted pelvis and hip joints are moved towards the neck by a fixed ratio (0.2) before evaluation [5].
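Under these conventions, PCK and the bone-length re-targeting step can be sketched as follows (our own function names; `parents` is assumed to list each joint's parent in an order where parents precede their children):

```python
import numpy as np

def pck(pred, gt, thresh=150.0):
    """Percentage of joints whose 3d error is within `thresh` mm.
    pred, gt: (N, J, 3) arrays in millimetres."""
    dist = np.linalg.norm(pred - gt, axis=-1)
    return 100.0 * np.mean(dist <= thresh)

def retarget(pred, parents, gt_bone_len, root=0):
    """Rescale each predicted bone to its ground-truth length while
    preserving its direction, walking out from the root joint.
    parents[j] is the parent of joint j; the root has parent -1."""
    out = np.zeros_like(pred)
    out[root] = pred[root]
    for j, p in enumerate(parents):
        if p < 0:  # root has no parent bone
            continue
        direction = pred[j] - pred[p]
        direction /= np.linalg.norm(direction) + 1e-8
        out[j] = out[p] + gt_bone_len[j] * direction
    return out
```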

Human3.6m: Table I shows results on Human3.6m, under the protocol defined in [24], using Model I, which is trained on the Human3.6m dataset under full supervision. As the table shows, we achieve higher accuracy than the state-of-the-art methods on most actions, including difficult ones such as Sitting and Greeting, in terms of MPJE (Mean Per Joint Error, mm). On average, we obtain an overall improvement of 6% over our baseline [19], which is also trained on ground-truth 2d poses. This improvement in accuracy can be attributed to the 3d-to-2d re-projection loss minimization and the geometric constraints. Using Model I trained on Human3.6m alone, we also outperform the state-of-the-art method [22], which was trained on input images from both Human3.6m and MPII.

MPI-INF-3DHP: For the MPI-INF-3DHP dataset, quantitative evaluation is done using the standard metrics PCK, AUC, and MPJE, as used in state-of-the-art methods [23, 20, 6, 13, 5].

(a) Cross-Dataset Evaluation: Table II shows evaluation results on MPI-INF-3DHP for our Model I (trained on Human3.6m) and Model II (trained on Human3.6m + MPII) in terms of PCK and AUC for the three different settings (GS, NoGS, and Outdoor) over the 2935 test images. On average, we see an improvement of 2.3% in PCK (with a threshold of 150 mm) and 6.2% in AUC over the best-performing state-of-the-art method. This establishes the improved cross-dataset generalization of our method compared to the state of the art.

(b) Results after Fine-tuning: We also present a performance analysis of our Model III (Model I fine-tuned on the MPI-INF-3DHP dataset) in Table III, which compares the activity-wise performance of Model III with recent state-of-the-art methods. We achieve significant improvement over the state of the art on all actions in terms of all metrics. On average, we exceed the best accuracy achieved by methods fully supervised on MPI-INF-3DHP by 2% in PCK, 17% in AUC, and 16% in MPJE.

V-B Qualitative Results

Qualitative results on Human3.6m, MPII, and MPI-INF-3DHP are shown in Figures 4, 7, and 6, respectively. Our model fine-tuned on the MPII dataset (Model II) shows significant improvement over the baseline model on poses where joints are not visible. Hence, joint training of our proposed network on 2d poses with occluded joints (partly annotated 2d poses) along with 3d ground truth enhances its ability to predict occluded poses correctly.

Evaluation on our own dataset: To further evaluate the generalization capability of our proposed model, we tested it on our own dataset: video data collected with a mobile camera in our lab environment. A Stacked Hourglass network [14] was used to estimate the 2d poses, which are given as input to our model. Figure 5 shows a sample image, the corresponding 3d pose predicted by the baseline network, and the 3d pose predicted by our proposed model. Our model gives a better prediction (in terms of pose structure and the angles between joints) of the 3d pose than the baseline network.

Method PCK AUC
2d-to-3d (supervised loss) [19] 62.2 27.7
Ours: 2d-to-3d + 3d-to-2d (re-projection loss) 64.2 29.7
Ours: 2d-to-3d + 3d-to-2d (re-projection loss + bone symmetry loss) 65.8 31.2
TABLE IV: Ablation study for different losses on the MPI-INF-3DHP dataset.
Method Re-projection error Δ
w/o batch normalization 36.2 30.7
w/o dropout 6.49 0.99
w/o dropout + w/o batch normalization 34.79 29.29
TABLE V: Ablation study on different network parameters for our 3d-to-2d re-projection module, in terms of re-projection error on Human3.6m. Δ is the difference in re-projection error between the current training setup (as described in Section IV-D) and each setup above.
Fig. 5: Performance evaluation of the proposed model on data captured in our lab. Left: input image with 2d pose (prediction of a Stacked Hourglass network [14]). Middle: 3d pose predicted by the baseline network [19]. Right: 3d pose predicted by our model (Model II). The baseline model fails to capture the proper angular positions of the legs, and the overall pose appears bent forward.
(a) (b) (c) (d)
Fig. 6: Qualitative evaluation on the MPI-INF-3DHP dataset. First row: input images with ground-truth 2d poses. Second row: ground-truth 3d poses. Third row: predictions of the baseline network. Fourth row: predictions of the proposed Model I (trained on the Human3.6m dataset). Fifth row: predictions of the proposed Model II (Model I fine-tuned on the MPII dataset). Sixth row: predictions of Model III (Model I fine-tuned on the MPI-INF-3DHP dataset). The baseline model fails to capture the proper 3d pose in many cases; e.g., in (d), the lady's hands are predicted in a more downward position than in the ground-truth 3d pose. All variants of our proposed model recover this pose using the re-projection loss along with the baseline supervised loss. Quantitative results are given in Tables II and III.

V-C Ablation Study

Tables IV and V show ablative analyses of the different losses and network design parameters used during training. Table IV shows that adding the 2d re-projection loss to the supervised 3d loss of the baseline network increases PCK by 3.2% and AUC by 7.2% on the MPI-INF-3DHP dataset during cross-dataset validation. Using the bone-length symmetry loss together with the re-projection and supervised losses improves the network further, with gains of 6% in PCK and 13% in AUC for the same test setup.

(a) (b) (c) (d) (e)
Fig. 7: Qualitative evaluation of the models on the MPII dataset. First row: Input images with ground-truth 2d poses. Second row: 3d pose predictions of the baseline architecture. Third row: 3d pose predictions of the proposed Model I (trained on the Human3.6m dataset). Fourth row: 3d pose predictions of the proposed Model II (Model I fine-tuned on the MPII dataset). A major drawback of the baseline network is that it cannot handle poses with occluded or invisible joints. In Figure (c), the left foot of the person is not visible, so the 2d annotation for this joint is absent. The baseline model cannot predict a 3d position for this joint, while our model fine-tuned on the MPII dataset predicts its position correctly despite the missing annotation. Similarly, in Figure (d), our model correctly predicts the un-annotated joints.

For the 2d-to-3d and 3d-to-2d modules we use an architecture similar to the baseline network. To assess how close our 3d-to-2d module is to optimal, we performed an ablation study over different design choices. Table V reports the error between the input ground-truth 2d pose and the 2d pose re-projected by the 3d-to-2d module for various design choices, measured as the Euclidean distance between corresponding joints in 2d coordinate space. The re-projection error is considerably higher when the network is trained without dropout or batch normalization between intermediate layers. Hence, the 3d-to-2d module is also trained with batch normalization and dropout, like the 2d-to-3d module. Δ denotes the difference in re-projection error between the current training setup and each of the alternative setups listed in Table V.
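The re-projection error reported in Table V can be computed as the mean per-joint Euclidean distance in 2d. A small sketch, where the `(num_joints, 2)` array shapes are our assumption:

```python
import numpy as np

def reprojection_error(gt_2d, reproj_2d):
    """Mean Euclidean distance between corresponding joints, where
    gt_2d and reproj_2d are (num_joints, 2) arrays of 2d coordinates."""
    return float(np.mean(np.linalg.norm(gt_2d - reproj_2d, axis=-1)))
```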

VI Conclusion

In this paper, we propose a deep neural network for estimating 3d human pose from 2d pose that combines in-the-wild 2d pose datasets with 3d pose datasets captured in controlled environments in a weakly-supervised framework. Our 3d-to-2d re-projection network is key to generalization, as it enables learning 3d poses from in-the-wild images that have 2d pose annotations but no 3d ground truth. Our method outperforms current state-of-the-art methods on the benchmark 3d dataset Human3.6m, captured in controlled environments, as well as on the challenging 3d dataset MPI-INF-3DHP, which contains in-the-wild human poses. Beyond the benchmark datasets, we also demonstrate the generalization ability of our method on our own dataset. As future work, we aim to improve the accuracy of 3d pose estimation in the absence of 3d ground truth by introducing further geometric constraints based on human anatomy.

References

  • [1] Sanjana Sinha, Brojeshwar Bhowmick, Kingshuk Chakravarty, Aniruddha Sinha, and Abhijit Das, “Accurate upper body rehabilitation system using kinect,” in Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2016, pp. 4605–4609.
  • [2] Kingshuk Chakravarty, Suraj Suman, Brojeshwar Bhowmick, Aniruddha Sinha, and Abhijit Das, “Quantification of balance in single limb stance using kinect,” in International Conference on Acoustics, Speech and Signal Processing, 2016, pp. 854–858.
  • [3] Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt, “Monocular 3d human pose estimation in the wild using improved cnn supervision,” in 3D Vision (3DV), 2017 International Conference on. IEEE, 2017, pp. 506–516.
  • [4] Georgios Pavlakos, Xiaowei Zhou, Konstantinos G Derpanis, and Kostas Daniilidis, “Coarse-to-fine volumetric prediction for single-image 3d human pose,” in Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017, pp. 1263–1272.
  • [5] Xingyi Zhou, Qixing Huang, Xiao Sun, Xiangyang Xue, and Yichen Wei, “Towards 3d human pose estimation in the wild: a weakly-supervised approach,” in IEEE International Conference on Computer Vision, 2017.
  • [6] Chenxu Luo, Xiao Chu, and Alan Yuille, “Orinet: A fully convolutional network for 3d human pose estimation,” arXiv preprint arXiv:1811.04989, 2018.
  • [7] Denis Tome, Chris Russell, and Lourdes Agapito, “Lifting from the deep: Convolutional 3d pose estimation from a single image,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2500–2509.
  • [8] Wei Yang, Wanli Ouyang, Xiaolong Wang, Jimmy Ren, Hongsheng Li, and Xiaogang Wang, “3d human pose estimation in the wild by adversarial learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, vol. 1.
  • [9] Kyoungoh Lee, Inwoong Lee, and Sanghoon Lee, “Propagating lstm: 3d pose estimation based on joint interdependency,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 119–135.
  • [10] Mude Lin, Liang Lin, Xiaodan Liang, Keze Wang, and Hui Cheng, “Recurrent 3d pose sequence machines,” in Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017, pp. 5543–5552.
  • [11] Mir Rayat Imtiaz Hossain and James J Little, “Exploiting temporal information for 3d human pose estimation,” in European Conference on Computer Vision. Springer, Cham, 2018, pp. 69–86.
  • [12] Xiaowei Zhou, Menglong Zhu, Spyridon Leonardos, Konstantinos G Derpanis, and Kostas Daniilidis, “Sparseness meets deepness: 3d human pose estimation from monocular video,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 4966–4975.
  • [13] Rishabh Dabral, Anurag Mundhada, Uday Kusupati, Safeer Afaque, Abhishek Sharma, and Arjun Jain, “Learning 3d human pose from structure and motion,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 668–683.
  • [14] Alejandro Newell, Kaiyu Yang, and Jia Deng, “Stacked hourglass networks for human pose estimation,” in European Conference on Computer Vision. Springer, 2016, pp. 483–499.
  • [15] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, “Realtime multi-person 2d pose estimation using part affinity fields,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7291–7299.
  • [16] Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele, “2d human pose estimation: New benchmark and state of the art analysis,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
  • [17] Leonid Sigal, Alexandru O Balan, and Michael J Black, “Humaneva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion,” International journal of computer vision, vol. 87, no. 1-2, pp. 4, 2010.
  • [18] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu, “Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 7, pp. 1325–1339, jul 2014.
  • [19] Julieta Martinez, Rayat Hossain, Javier Romero, and James J Little, “A simple yet effective baseline for 3d human pose estimation,” in International Conference on Computer Vision, 2017, vol. 1, p. 5.
  • [20] Dushyant Mehta, Srinath Sridhar, Oleksandr Sotnychenko, Helge Rhodin, Mohammad Shafiei, Hans-Peter Seidel, Weipeng Xu, Dan Casas, and Christian Theobalt, “Vnect: Real-time 3d human pose estimation with a single rgb camera,” ACM Transactions on Graphics (TOG), vol. 36, no. 4, pp. 44, 2017.
  • [21] Hao-Shu Fang, Yuanlu Xu, Wenguan Wang, Xiaobai Liu, and Song-Chun Zhu, “Learning pose grammar to encode human body configuration for 3d pose estimation,” in Proc. of the AAAI Conference on Artificial Intelligence, 2018.
  • [22] Xiao Sun, Jiaxiang Shang, Shuang Liang, and Yichen Wei, “Compositional human pose regression,” in The IEEE International Conference on Computer Vision (ICCV), 2017, vol. 2, p. 7.
  • [23] Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt, “Monocular 3d human pose estimation in the wild using improved cnn supervision,” in 3D Vision (3DV), 2017 Fifth International Conference on. IEEE, 2017.
  • [24] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu, “Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 7, pp. 1325–1339, 2014.
  • [25] Sijin Li and Antoni B Chan, “3d human pose estimation from monocular images with deep convolutional neural network,” in Asian Conference on Computer Vision. Springer, 2014, pp. 332–347.