
Quantification of Occlusion Handling Capability of a 3D Human Pose Estimation Framework

by   Mehwish Ghafoor, et al.

3D human pose estimation using monocular images is an important yet challenging task. Existing 3D pose detection methods exhibit excellent performance under normal conditions; however, their performance may degrade under occlusion. Recently, some occlusion-aware methods have been proposed; however, the occlusion handling capability of these networks has not yet been thoroughly investigated. In the current work, we propose an occlusion-guided 3D human pose estimation framework and quantify its occlusion handling capability using different protocols. The proposed method estimates more accurate 3D human poses using 2D skeletons with missing joints as input. Missing joints are handled by introducing occlusion guidance that provides extra information about the absence or presence of a joint. Temporal information is also exploited to better estimate the missing joints. A large number of experiments are performed to quantify the occlusion handling capability of the proposed method on three publicly available datasets in various settings, including random missing joints, fixed body parts missing, and complete frames missing, using the mean per joint position error criterion. In addition, the quality of the predicted 3D poses is evaluated using action classification performance as a criterion. 3D poses estimated by the proposed method achieve significantly improved action recognition performance in the presence of missing joints. Our experiments demonstrate the effectiveness of the proposed framework for handling missing joints as well as for quantifying the occlusion handling capability of deep neural networks.




I Introduction

3D human pose estimation is an important problem with a wide range of applications including video surveillance [9], action recognition [1], avatar creation, motion pasting [29], anomaly detection [18], tracking [2, 16], and healthcare [27]. Automatic 3D pose estimation is challenging because of the non-rigid human structure, which consists of many joints and relatively rigid limbs. Therefore, 3D pose estimation is often framed as estimating the positions of various joints in 3D space. In case of partial or complete occlusion caused by other objects or humans in the scene, some or all joints may be missing or have degraded confidence, which exacerbates the difficulty of the 3D pose estimation problem and degrades the performance of downstream applications that use the detected 3D pose as input.

Due to the numerous applications of 3D human pose estimation, many researchers have focused on this area in recent years. Many 3D pose estimation methods leverage the strength of 2D detectors by using detected 2D joints as input [12, 4, 24, 33, 17]. Most of these methods face difficulty in recovering an accurate 3D pose if the subject is partially or completely occluded, because they do not explicitly handle occlusion [24, 33, 17]. Recently, some occlusion-aware pose detection methods have been proposed; however, the occlusion handling capability of these methods has not been thoroughly quantified [8, 7, 11, 25]. Some methods show improved performance on publicly available datasets when occlusion-aware training is used [8, 7], while others report very few experiments [25, 21, 23]. In contrast to the existing approaches, we propose a method that explicitly handles missing joints, and we quantify the occlusion handling capability of the proposed method. We compare it with existing state-of-the-art methods in terms of Mean Per Joint Position Error (MPJPE) as well as by using action recognition accuracy as a criterion.

Fig. 1: 3D human pose estimation by the proposed method in Human 3.6M dataset using (a) only 1 out of 17, (b) 3 out of 17, (c) 5 out of 17 random 2D input joints. In each case the top row shows ground-truth and the bottom row shows the estimated 3D poses.

The proposed occlusion-guided 3D human pose estimation framework is based on a temporal dilated CNN which can estimate accurate 3D joint positions even in the presence of severe occlusion. Explicit occlusion guidance is obtained by using an indicator variable that gives the network additional information regarding the presence or absence of a joint. The occlusion handling capability of the proposed method is quantified by randomly missing up to $J-1$ joints, where $J$ is the total number of joints in the pose (keeping only one random joint in each frame). Experiments are also performed by missing all joints in 1, 3, and 5 consecutive frames of a sequence. In addition, experiments are performed by missing fixed body parts, such as the lower body or the left arm. Performance comparisons with existing state-of-the-art methods reveal significant improvement in missing joint estimation in terms of MPJPE. Moreover, we also quantify the occlusion handling capability of the proposed method and of existing state-of-the-art methods by using 3D pose based action recognition as the target application. To this end, we use a baseline method as well as a recent Graph Convolutional Network (GCN) based method [31]. We observe significant action recognition performance improvement when 3D poses are estimated using the proposed occlusion-guided framework. Numerous experiments are performed for occlusion handling quantification on three datasets, Human3.6M [14], NTU RGB+D [26], and SYSU [13], and compared with five existing state-of-the-art methods. In both types of quantification experiments, the proposed framework exhibits significant performance improvement in terms of MPJPE as well as action recognition accuracy.

Fig. 2: Proposed occlusion-guided 3D pose estimation framework, based on a temporal dilated CNN with residual connections.

The main contributions of the current work are as follows:

  1. An occlusion guided framework is proposed based on temporal dilated CNN for the estimation of 3D human body pose in the presence of severe occlusions.

  2. Occlusion handling capability of deep neural networks is quantified by randomly missing 2D input joints, missing fixed body parts, and missing all joints in a few frames of a sequence. The quality of the estimated poses is quantified using the mean per joint position error as well as action recognition accuracy.

  3. Comprehensive evaluations on three publicly available datasets, Human3.6M, NTU RGB+D, and SYSU, show significant improvement in both 3D joint estimation and action recognition accuracy.

The rest of this paper is organized as follows: Section II presents the related work, the proposed approach and experimental results are presented in Sections III and IV respectively, followed by conclusions and future directions in Section V.

II Related Work

3D human pose estimation has received much attention from the research community and several new directions have emerged [12, 4, 24, 17, 15, 28, 5, 22]. A significant number of detectors exploit well-developed 2D methods for 3D pose estimation. Martinez et al. [19] introduced a fully connected neural network with two residual modules to estimate 3D pose using 2D joint positions as input. Fang et al. [10, 30] improved 3D pose estimation using a pose grammar network. Cheng et al. [6] proposed a directed joint and bone based Graph Convolutional Network (GCN) for 3D pose estimation in the multi-person scenario. Most of these methods take a static pose as input and therefore do not use temporal information. Some later methods exploit temporal information for 3D pose estimation. Hossain et al. [12] used an LSTM sequence-to-sequence architecture to ensure temporal smoothness; however, its limitation is the fixed-size input and output. Later, Pavllo et al. [24] introduced temporal dilated convolution to estimate 3D pose from a sequence of 2D poses while maintaining efficiency. An attention mechanism has also been incorporated into temporal convolution to obtain improved results [17]. Most of these methods do not provide any occlusion handling mechanism.

Fig. 3: Proposed occlusion guidance mechanism provides precise information about missing 2D joints.

Some researchers have recently addressed the occlusion handling challenge in 3D pose estimation. Moreno-Noguer [21] proposed a static 3D pose estimation method representing 2D and 3D poses as Euclidean distance matrices, which are robust to occlusion. Park et al. [23] proposed a static relational hierarchical dropout method (Rel-Hier-Drop) to handle occlusion and noisy 2D poses. Cheng et al. [8] used a cylinder man model for data augmentation to train the network in an occlusion-aware fashion using a sequence of RGB video frames. Cheng et al. [7] used training data augmentation with partial and complete occlusion; for reliable 3D pose estimation, a spatial and temporal kinematic chain space discriminator is also used. Both methods [8, 7] obtained improved results on publicly available datasets with little occlusion; however, no occlusion handling results are reported that quantify the capability of their networks to estimate missing joints. Zhang et al. [32] estimated 3D pose and reconstructed occluded human body shape using a 3D mesh model. Qammaz et al. [25] proposed normalized signed rotation matrices, which are translation and scale invariant as well as occlusion tolerant, to classify the orientation of a 2D pose; the 3D pose is then estimated using an ensemble of neural networks. Gu et al. [11] used a soft-gated CNN for 3D pose estimation which acts as attention to handle occlusion from noisy 2D joints.

In contrast to the existing state-of-the-art occlusion handling approaches, we propose an occlusion-guided method based on a temporal dilated CNN for the estimation of 3D poses. In a large number of experiments, we quantify the capability of our network for missing joint estimation by randomly occluding up to 94% of joints in Human3.6M, up to 90% of joints in SYSU, and up to 92% of joints in the NTU RGB+D dataset. We observe significant MPJPE improvement in the presence of a large number of missing joints, as well as improvement in action recognition performance. In the current work, we use action recognition as a quality measure of the estimated 3D poses. As the number of missing 2D joints increases, the quality of the estimated 3D poses degrades, resulting in a loss of action recognition performance. Our proposed occlusion-guided 3D pose estimation framework shows graceful degradation with an increasing number of missing joints.

Fig. 4: Action recognition framework: (a) Predicted 3D poses are reshaped as a 3D tensor and input to the baseline CNN model. (b) Predicted 3D poses are input to a GCN with spatial and temporal positions of joints.

III Proposed Occlusion Guided 3D Pose Estimation Framework

The proposed framework as shown in Fig. 2 consists of a temporal dilated CNN with an occlusion guidance mechanism to handle the missing joints.

III-A Temporal Dilated CNN

The proposed network is a fully convolutional architecture with residual connections which takes $T$ consecutive 2D poses as input and estimates the 3D pose corresponding to the central location of the input sequence. The architecture consists of multiple CNN blocks. Regular convolution is replaced with dilated convolution to allow larger receptive fields at lower parameter cost and higher efficiency without compromising resolution. The input layer takes a temporal sequence of size $(J \cdot D, T)$ and outputs $C$ channels, where $J$ is the number of joints in one skeleton, $D$ is the number of dimensions per joint including the occlusion guidance matrix, and $T$ is the number of consecutive skeletons in the temporal sequence. The network estimates a $J \cdot 3$ dimensional output, since the estimated pose is 3D. Each block consists of two CNN layers: the first layer is a 1D CNN with exponentially increasing dilation factor, Batch Normalization (BN), Mish activation [20], and dropout, followed by a second layer consisting of a 1D CNN with standard convolution (dilation 1), BN, Mish, and dropout. The proposed framework consists of four such blocks with residual connections (see Table I). In the output layer, the channels shrink to $J \cdot 3$ to obtain the required output dimension, using temporal information from past and future frames. The following objective function is minimized during training:

$$\mathcal{L} = \frac{1}{B} \sum_{b=1}^{B} \lVert Y_b - \hat{Y}_b \rVert_2,$$

where $B$ indicates the batch size, and $Y_b$ and $\hat{Y}_b$ are the ground-truth and the predicted 3D poses, respectively.
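As a sketch, this objective (the average per-joint Euclidean distance over a batch) can be written in NumPy as follows; the shapes and function name are illustrative, not the paper's code:

```python
import numpy as np

def mpjpe_loss(pred, gt):
    """Mean per-joint position error: average Euclidean distance
    between predicted and ground-truth joints over a batch.

    pred, gt: arrays of shape (B, J, 3) in millimetres."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy check: a constant 10 mm offset on every joint gives 10 mm error.
gt = np.zeros((2, 17, 3))
pred = gt + np.array([10.0, 0.0, 0.0])
print(mpjpe_loss(pred, gt))  # 10.0
```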

III-B Occlusion Guidance Mechanism

Most existing 2D pose detectors also generate a confidence score for each joint in addition to the geometric joint position. In case of occlusion or failure of joint estimation, the confidence score degrades, indicating quality loss of the estimated joint. In order to avoid feeding incorrect data into the 3D pose estimation, joints with a low confidence score may be treated as missing. To effectively handle these missing joints, an occlusion guidance matrix is introduced which has value 1 for available (high-confidence) joints and value 0 for missing (low-confidence) joints. The input to the framework consists of the geometric 2D joint positions and two indicator variables corresponding to the two coordinates, as shown in Fig. 3. The occlusion guidance provides missing data information to the network. The indicator variables and the corresponding coordinates are coupled together, effectively doubling the input dimensionality. The proposed T3D CNN estimates the missing values as well as the 3D pose simultaneously.
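A minimal sketch of this guided-input construction, assuming a 0/1 mask derived from a confidence threshold (the threshold value, array layout, and function name are illustrative, not taken from the paper):

```python
import numpy as np

def build_guided_input(joints_2d, conf, thresh=0.5):
    """Couple each (x, y) coordinate with a binary indicator:
    1 for available / high-confidence joints, 0 for missing ones.
    Missing coordinates are zeroed so the network cannot read stale values.

    joints_2d: (T, J, 2) temporal sequence of 2D poses
    conf:      (T, J)    per-joint confidence scores
    returns:   (T, J, 4) -- (x, ind, y, ind), doubling input dimensionality."""
    ind = (conf >= thresh).astype(joints_2d.dtype)   # occlusion guidance matrix
    xy = joints_2d * ind[..., None]                  # zero out missing joints
    return np.stack([xy[..., 0], ind, xy[..., 1], ind], axis=-1)

pose = np.ones((243, 17, 2))
conf = np.ones((243, 17))
conf[:, 3] = 0.1                      # pretend joint 3 is occluded
x = build_guided_input(pose, conf)
print(x.shape)   # (243, 17, 4)
print(x[0, 3])   # [0. 0. 0. 0.] -> missing joint fully masked
```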

Layer 1-2 3-4 5-6 7-8 Total
Parameters 4.2M 4.2M 4.2M 4.2M 16.8M
TABLE I: Number of parameters in millions (M) in the proposed T3D CNN.

III-C 3D Pose based Action Recognition

In order to compare the quality of the estimated 3D poses, two action recognition methods are used: a simple baseline CNN and an SGN [31] based method. In the baseline CNN method, as shown in Fig. 4a, each predicted 3D pose $P$ is normalized using the mean of all joints in the frame, $\mu$, and the magnitude of the mean-subtracted frame:

$$\hat{P} = \frac{P - \mu}{\lVert P - \mu \rVert}.$$

Normalized poses are reshaped as a 3D tensor where the $(x, y, z)$ coordinates are represented as three channels scaled to a fixed range. This 3D tensor is re-sized to the ResNet input resolution and passed through a pre-trained ResNet50 to convert it into a feature vector, which is followed by a trainable fully connected layer of the same size as the number of action classes. A SoftMax layer then converts the outputs to class probabilities.
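Assuming the normalization divides the mean-subtracted frame by its overall magnitude (a reconstruction from the surrounding description, not the paper's code), a per-frame sketch looks like:

```python
import numpy as np

def normalize_pose(frame):
    """Centre a frame on the mean of its joints and scale by the
    magnitude of the mean-subtracted frame.

    frame: (J, 3) predicted 3D pose."""
    mu = frame.mean(axis=0)             # per-frame joint mean
    centred = frame - mu
    mag = np.linalg.norm(centred)       # magnitude of mean-subtracted frame
    return centred / (mag + 1e-8)       # epsilon guards a degenerate frame

pose = np.arange(51, dtype=float).reshape(17, 3) * 10 + 500
n = normalize_pose(pose)
print(np.allclose(n.mean(axis=0), 0, atol=1e-6))   # True: zero mean
print(bool(np.isclose(np.linalg.norm(n), 1.0)))    # True: unit magnitude
```

The result is translation- and scale-invariant, which is what allows poses of different subjects and camera distances to share one classifier.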

Dset TV TF MnF MxF Cams Acts Subj
H3.6M 836 2.1M 992 6343 4 15 7
SYSU 480 0.1M 58 638 1 12 40
NTU 56880 4.9M 32 300 3 60 40
TABLE II: Dataset details: TV = Total Videos, TF = Total Frames, MnF and MxF = Minimum and Maximum Frames in any video, Cams = Cameras, Acts = Actions, Subj = Subjects.
Method None Rand 2 Rand 4 Rand 6 Rand 8 Rand 10 Rand 12 Rand 14 Rand 16
Protocol 1
Rel-Hier-Drop [23] 59.7 65.9 - - - - - - -
Sem-GCN [33] 42.1 490.6 714.6 833.4 887.5 912.7 904.1 883.5 856.5
VideoPose3D [24] 37.5 199.7 251.5 283.4 328.3 353.4 375.2 395.4 418.3
Attention-3D [17] 34.7 743.6 987.4 1122.2 1198.2 1234.4 1245.0 1097.5 1217.6
Baseline 37.5 44.9 45.4 48.0 51.3 58.9 59.2 66.3 72.9
T3D CNN 37.5 36.3 39.0 43.2 48.2 53.0 55.3 58.3 68.6
Protocol 2
FConv [21] 74.0 106.8 - - - - - - -
Rel-Hier-Drop [23] 45.6 51.0 - - - - - - -
Sem-GCN [33] 33.5 255.1 306.8 327.5 337.8 343.6 347.1 349.4 350.9
VideoPose3D [24] 27.6 118.8 154.8 177.2 207.2 221.8 232.6 238.6 235.8
Attention-3D [17] 26.1 339.2 338.4 334.1 331.0 328.8 327.2 325.3 325.4
Baseline 27.6 34.7 35.0 37.0 39.2 44.4 44.7 49.8 55.4
T3D CNN 27.2 28.4 30.4 33.1 36.6 39.9 42.3 44.6 52.7
TABLE III: Mean Per Joint Position Error (MPJPE) comparison for 3D pose estimation with random missing joints on the Human3.6M dataset. Baseline consists of the temporal dilated network without occlusion guidance.

The second pose based action recognition method is motivated by Zhang et al. [31], as shown in Fig. 4b. It consists of an end-to-end network which utilizes joint-level information, consisting of joint type encoded as an affinity matrix of the human body structure, and frame-level information, consisting of relations across multiple frames while maintaining frame order. Further details of both methods are given in the supplementary material.

Methods L Arm R Leg Head Lower Body
Protocol 1
Rel-Hier-Drop [23] 74.5 70.4 - -
Sem-GCN [33] 391.2 278.0 430.2 216.4
VideoPose3D [24] 310.5 259.8 221.7 207.4
Attention-3D [17] 929.6 263.9 834.9 979.7
T3D CNN 53.4 46.9 38.4 57.0
Protocol 2
FConv [21] 109.4 100.2 - -
Rel-Hier-Drop [23] 63.0 55.2 - -
Sem-GCN [33] 313.2 239.7 257.2 186.7
VideoPose3D [24] 217.2 193.7 205.1 163.0
Attention-3D [17] 358.8 197.9 289.5 262.2
T3D CNN 49.0 36.4 29.9 47.2

TABLE IV: MPJPE comparison for 3D pose estimation with missing fixed body parts on the Human3.6M dataset.
Method F1 F3 F5
Protocol 1
VideoPose3D [24] 46.9 49.3 50.9
Attention-3D [17] 353.8 490.0 852.4
T3D CNN 38.5 38.4 39.6
Protocol 2
VideoPose3D [24] 31.4 32.2 33.1
Attention-3D [17] 266.1 362.8 285.8
T3D CNN 28.8 28.8 29.3
TABLE V: MPJPE comparison for 3D pose estimation under complete occlusion: F1, F3, and F5 denote one, three, and five consecutive occluded frames in the Human3.6M dataset.

IV Experiments and Results

A large number of experiments are performed to evaluate the proposed T3D CNN algorithm on three publicly available datasets: Human3.6M [14], NTU RGB+D [26], and the SYSU 3D Human-Object Interaction dataset [13] (see Table II). We compare our results with five existing state-of-the-art methods, including the static methods Rel-Hier-Drop [23], Sem-GCN [33], and FConv [21], as well as methods exploiting temporal information, VideoPose3D [24] and Attention-3D [17]. Sem-GCN [33] uses a graph CNN and exploits semantic information to better estimate 3D poses. VideoPose3D [24] exploits temporal dilated convolution, and Attention-3D [17] exploits an attention mechanism to better handle occlusion cases. A baseline version of the proposed algorithm is also compared to evaluate the significance of the occlusion guidance mechanism. All results are reported for a temporal sequence length of 243 frames, unless stated otherwise.

We use MPJPE to quantify the occlusion handling capability of different methods. It measures the average Euclidean distance, in millimeters (mm), between corresponding joints in the ground-truth and estimated 3D poses after aligning the root joints of both poses. We evaluate our framework using two protocols. In Protocol 1, MPJPE is computed without any further alignment, while in Protocol 2, scaling, rotation, and translation are applied to the predicted 3D pose to align it with the ground truth before calculating MPJPE. In addition to MPJPE, we also employ action recognition accuracy as a measure of the quality of the estimated 3D poses in the presence of occlusion. As the estimation quality degrades, the action recognition performance also degrades.
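The two protocols can be sketched as follows. This is a minimal NumPy illustration, not the paper's evaluation code; the Protocol 2 alignment uses the standard similarity-transform (Procrustes) solution:

```python
import numpy as np

def mpjpe_p1(pred, gt, root=0):
    """Protocol 1: align root joints, then mean per-joint error.
    pred, gt: (J, 3) poses."""
    return np.linalg.norm((pred - pred[root]) - (gt - gt[root]), axis=-1).mean()

def mpjpe_p2(pred, gt):
    """Protocol 2: rigid-align pred to gt with scale, rotation, and
    translation (Procrustes) before computing the error."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g
    U, s, Vt = np.linalg.svd(P.T @ G)
    R = (U @ Vt).T                       # optimal rotation (Kabsch)
    if np.linalg.det(R) < 0:             # avoid reflections
        Vt[-1] *= -1; s[-1] *= -1; R = (U @ Vt).T
    scale = s.sum() / (P ** 2).sum()     # optimal isotropic scale
    aligned = scale * P @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=-1).mean()

# A rotated, scaled, shifted copy of gt has zero Protocol 2 error.
rng = np.random.default_rng(1)
gt = rng.standard_normal((17, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
pred = 2.0 * gt @ Rz.T + np.array([5.0, -1.0, 2.0])
print(round(mpjpe_p2(pred, gt), 6))  # 0.0
```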

IV-A Experiments on Human 3.6M Dataset

The Human3.6M dataset contains 3.6 million human poses for 15 different actions performed by 7 subjects and captured by 4 cameras. For evaluation, the standard split is used: subjects {1, 5, 6, 7, 8} are used for training and {9, 11} for testing. The occlusion handling capability of different methods is quantified by formulating partial and complete occlusion schemes. For partial occlusion, we formulate two cases: random and fixed joint occlusion. In the first case, the number of randomly missing joints is varied as {2, 4, 6, 8, 10, 12, 14, 16}, while in the second case the left arm (3 joints), right leg (3 joints), head (1 joint), or lower body (4 joints) is missing. In the complete occlusion case, all joints are missing in consecutive frames.
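The three occlusion schemes can be simulated as 0/1 guidance masks over the skeleton. The sketch below is illustrative; in particular, the joint indices assigned to each body part are assumptions, not the paper's exact definitions:

```python
import numpy as np

# Illustrative joint groups for a 17-joint skeleton (indices are assumptions).
PARTS = {"left_arm": [11, 12, 13], "right_leg": [1, 2, 3],
         "head": [10], "lower_body": [1, 2, 4, 5]}

def random_occlusion(J, n_missing, rng):
    """Random scheme: drop n_missing of J joints; returns a 0/1 mask."""
    mask = np.ones(J, dtype=int)
    mask[rng.choice(J, size=n_missing, replace=False)] = 0
    return mask

def fixed_part_occlusion(J, part):
    """Fixed scheme: drop all joints of one named body part."""
    mask = np.ones(J, dtype=int)
    mask[PARTS[part]] = 0
    return mask

def frame_occlusion(T, J, start, n_frames):
    """Complete scheme: all joints missing in consecutive frames."""
    mask = np.ones((T, J), dtype=int)
    mask[start:start + n_frames] = 0
    return mask

rng = np.random.default_rng(0)
print(random_occlusion(17, 16, rng).sum())       # 1 -> one joint kept
print(fixed_part_occlusion(17, "head").sum())    # 16
print(frame_occlusion(243, 17, 100, 3).sum())    # (243 - 3) * 17 = 4080
```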

Table III shows the comparison of the proposed T3D CNN and Baseline with existing state-of-the-art methods [24, 33, 17, 23] under random missing joints. Rel-Hier-Drop [23] provides results only for 2 random missing joints; for the remaining algorithms, we performed experiments using their pre-trained networks. As the number of missing joints increases, the error of all compared methods also increases; however, T3D CNN consistently remains the best performer for the estimation of 3D pose with missing 2D joints.

Moreover, Gu et al. [11] reported 73.2mm MPJPE for 50% random occlusion using 2D detections, which is significantly larger than the 43.7mm obtained by T3D CNN. Zhang et al. [32] reported results with {30%, 70%} person occlusion, which may be considered equivalent to random missing joints. For both levels of occlusion, they obtained {56.4, 68} MPJPE, which is again higher than the {33.1, 56.4} obtained by T3D CNN.

Method None Rand 1 Rand 4 Rand 7 Rand 13 Rand 15 Rand 18 Rand 21 Rand 23
Protocol 1
VideoPose3D [24] 115.6 198.4 315.7 372.9 435.0 446.5 459.6 467.9 471.6
Sem-GCN [33] 174.8 781.9 979.2 967.1 1080.1 1135.1 1218.4 1288.4 1329.8
Attention-3D [17] 109.5 268.8 479.0 596.7 711.9 716.9 695.3 631.3 1704.3
Baseline 115.6 124.8 140.7 141.9 148.2 153.3 159.2 164.7 164.9
T3D CNN 115.6 124.5 140.4 140.9 146.2 148.2 147.8 152.5 154.6
Protocol 2
VideoPose3D [24] 71.6 127.5 184.6 184.6 239.2 246.3 255.5 263.5 268.3
Sem-GCN [33] 100.1 187.1 251.2 281.7 313.3 319.7 327.3 333.1 336.2
Attention-3D [17] 66.6 169.9 234.7 264.9 300.3 307.9 317.5 325.0 282.1
Baseline 71.2 76.1 85.4 86.8 90.5 92.9 95.0 97.6 96.9
T3D CNN 71.2 75.9 85.1 86.3 89.1 90.0 90.2 92.2 93.0
TABLE VI: MPJPE comparison for 3D pose estimation with random missing joints on the NTU RGB+D dataset.
Pose Est. VideoPose3D T3D CNN
Action Recog. CNN SGN CNN SGN
Missing Joints Accuracy %
0 61.33 76.27 61.33 76.27
1 42.91 28.04 56.98 68.17
4 25.02 6.31 52.98 62.33
7 17.86 4.02 51.51 60.00
13 10.68 3.14 49.04 55.98
15 9.07 2.99 48.37 55.12
18 8.32 2.77 48.19 54.20
21 7.13 2.64 46.62 52.19
23 6.48 2.56 45.86 50.96
TABLE VII: Action recognition accuracy (%) with varying occlusion on NTU RGB+D dataset.

Table IV shows the comparisons for fixed partial occlusion. Estimation improves when the length of the temporal sequence is increased from 81 to 243 frames. This is because handling fixed missing joints is relatively easier than handling random missing joints, and an increase in sequence length increases the information about the action being performed. For the missing head case, the error is minimum because, given the positions of the shoulders, the head lies within a small region. We also experimented with the complete occlusion case, where 1, 3, and 5 consecutive frames are completely missing, and compared the proposed method with [24, 17] under Protocols 1 and 2 in Table V. In these experiments, the proposed framework was able to estimate good quality 3D poses despite severe occlusion, outperforming the existing methods by a significant margin.

IV-B Experiments on NTU RGB+D Dataset

NTU RGB+D is one of the largest skeleton datasets, consisting of 56,880 videos performed by 40 subjects across 60 action classes and three different views. Following existing state-of-the-art methods, we use cross-subject evaluation, in which the subjects are equally divided into train and test splits. This dataset provides 25 joint positions in both 2D and 3D. The 2D joints are input to the proposed T3D CNN while the 3D joints are used as ground truth. We evaluate the proposed framework by randomly missing {1, 4, 7, 13, 15, 18, 21, 23} joints from each 2D pose.

Table VI shows the comparison of the proposed approach with existing methods [24, 33, 17]. Attention-3D [17] shows good performance when no joint is occluded, but its performance degrades in case of missing joints. Sem-GCN [33] performance also drops under occlusion because it does not utilize temporal information. Our proposed T3D CNN shows consistently good performance even in heavily occluded scenarios, such as 23 out of 25 joints missing.

Table VII shows the action classification accuracy over the predicted 3D poses using the CNN and SGN methods. The 3D poses predicted by the proposed T3D CNN show improved classification accuracy under occlusion, whereas performance for the 3D poses predicted by [24] decreases much faster as the number of missing joints increases. This shows the significance of our occlusion-guided 3D pose estimation method.

IV-C Experiments on SYSU Dataset

The SYSU dataset consists of 480 video sequences across 12 different actions performed by 40 different subjects. It contains skeletons with 20 joint positions in both 2D and 3D. In our setting, half of the subjects are used for training and the others for testing. We evaluate the proposed framework with {2, 4, 8, 12, 14, 16, 18} random missing joints. Table VIII shows the MPJPE comparison of the proposed T3D CNN with existing state-of-the-art methods [24, 17]. As the number of random missing joints increases, the error increases for all methods; however, T3D CNN consistently shows much better performance than the compared methods. Experiments are also performed with OpenPose (OP) [3] 2D detections, using the confidence score as occlusion guidance. We observe a significant performance increase when occlusion guidance is used in T3D-OP compared to Baseline-OP. Table IX shows the action recognition accuracy using the baseline CNN and SGN over the 3D poses predicted by VideoPose3D and the proposed T3D CNN. In general, the action recognition score degrades as the number of missing joints increases; however, T3D CNN consistently shows significantly improved scores due to effective occlusion handling.

Fig. 5: Qualitative comparisons on the Human3.6M dataset: (Left) 82% random joints missing, (Middle) lower body occluded, and (Right) failure cases: 97.2% random joints missing with high error (non-occluded joints are shown by red markers).
Method None Rand 2 Rand 4 Rand 8 Rand 12 Rand 14 Rand 16 Rand 18
Protocol 1
VideoPose3D [24] 98.1 181.2 246.3 441.8 655.3 780.8 894.1 974.2
Attention-3D [17] 111.8 361.9 590.7 574.9 883.6 1155.1 1427.9 1704.3
Baseline-OP 112.6 116.9 125.7 128.8 139.1 150.8 156.9 166.1
T3D-OP 112.6 114.4 116.3 119.3 120.8 125.3 132.8 142.6
Baseline 94.8 113.7 121.7 132.1 134.8 145.8 148.6 149.1
T3D CNN 94.8 110.0 114.9 120.9 123.1 125.9 125.4 131.8
Protocol 2
VideoPose3D [24] 52.0 102.4 141.9 201.1 240.8 254.4 266.2 272.4
Attention-3D [17] 49.9 121.8 183.3 233.4 255.7 264.7 273.7 282.1
Baseline-OP 66.9 70.0 71.2 74.2 76.5 83.6 87.9 91.6
T3D-OP 66.9 68.4 69.9 71.0 71.8 75.7 80.6 87.0
Baseline 50.3 58.7 66.5 67.7 70.9 73.3 76.3 81.0
T3D CNN 50.3 57.6 60.0 63.2 66.2 67.4 68.4 68.5
TABLE VIII: MPJPE comparison for 3D pose estimation on the SYSU dataset with random missing joints. Baseline consists of the temporal dilated network without the occlusion guidance module. Baseline-OP uses OpenPose [3] 2D detections, while T3D-OP additionally uses the OpenPose confidence score for occlusion guidance.
Pose Est. VideoPose3D T3D CNN
Action Recog. CNN SGN CNN SGN
Missing Joints Accuracy %
0 62.32 98.24 62.32 98.24
2 20.96 21.87 59.49 93.30
4 19.83 17.86 59.49 91.51
8 12.75 9.82 55.81 87.05
12 13.03 11.16 54.67 87.03
14 11.05 9.37 53.26 86.16
16 11.33 8.48 51.56 82.14
18 11.61 8.92 48.44 79.46
TABLE IX: Action recognition accuracy (%) with varying occlusion on SYSU dataset.

IV-D Qualitative Comparisons

Some visual results for missing joint estimation on Human3.6M are presented in Fig. 5 for fixed and random partial occlusion scenarios. Known joints are highlighted in red. The proposed T3D CNN is able to estimate 3D poses quite similar to the ground truth, demonstrating its effectiveness, while the compared methods, including VideoPose3D, Sem-GCN, and Attention-3D, show degraded performance for random partial occlusion and are unable to reconstruct the lower body when it is occluded. Our proposed T3D CNN performs well due to the occlusion guidance that helps identify missing joints. Most of the visual results in Fig. 5 demonstrate quite reasonable estimation of the missing joints in addition to estimating the third dimension for all joints. We observe that our algorithm degrades gracefully for complex actions. For example, in the 'Sitting Down' and 'Photo' actions with 97.2% occlusion, we obtain MPJPE of 117.8mm and 101.2mm, respectively, under Protocol 1. Two frames with high error are shown in the last columns of Fig. 5. Some visual results on NTU RGB+D and SYSU are shown in the supplementary material.

Fig. 6: 3D pose estimation error (MPJPE) variation with varying temporal sequence size on SYSU dataset.

IV-E Ablation Study

To evaluate the significance of each component, different ablation studies are performed. The Baseline results show the performance of the proposed method using the temporal sequence without the occlusion guidance mechanism. On Human3.6M (Table III), the Baseline shows up to 8% performance degradation compared to T3D CNN. Similarly, 12.2% degradation is observed on the NTU RGB+D dataset (Table VI), and a comparable degradation on SYSU (Table VIII). This shows the significant contribution of the proposed occlusion guidance mechanism to the overall performance.

The occlusion guidance mechanism using the confidence score obtained from OpenPose [3] is evaluated as 'T3D-OP' in Table VIII on the SYSU dataset. For comparison, Baseline experiments using the detected 2D poses have also been performed as 'Baseline-OP'. T3D-OP obtains {25.5%, 24.1%, 23.5%} improvement over Baseline-OP for {14, 16, 18} missing joints under Protocol 1, which demonstrates the effectiveness of using the confidence score in the occlusion guidance module. Due to errors in the 2D poses detected by OpenPose relative to the ground truth, a direct comparison with T3D CNN may not be fair.

Performance variation is observed by varying the temporal sequence size over {3, 21, 81, 147, 243} frames; the experiment is repeated for {4, 8, 14, 18} random missing joints on the SYSU dataset (Fig. 6). The error difference between different sequence lengths increases as the number of missing joints increases; however, we consistently observe minimum error for the longest sequence of 243 frames.

V Conclusion

Estimation of 2D human pose from RGB images is quite mature, and many pose detectors with excellent performance have been proposed. In the current work, a temporal dilated network based framework is proposed to estimate 3D human pose from 2D joint positions in the presence of missing joints. The input to the proposed framework is a temporal 2D pose sequence with occlusion guidance for the missing joints, and the output is the 3D pose corresponding to the central pose of the input sequence. The proposed method efficiently maps data from the 2D pose space to the 3D pose space by utilizing temporal information and occlusion guidance. In a large number of experiments, the proposed method outperformed existing state-of-the-art methods under random missing joints, fixed partial occlusion, and completely missing frames. As a future direction, one may integrate an attention mechanism with the proposed occlusion guidance to better estimate 3D poses. Occlusion handling in the wild, occlusion in complex poses, and unsupervised occlusion handling also remain open problems.


  • [1] F. Angelini, Z. Fu, Y. Long, L. Shao, and S. M. Naqvi (2019) 2D pose-based real-time human action recognition with occlusion-handling. IEEE TMM 22 (6), pp. 1433–1446. Cited by: §I.
  • [2] Q. Bao, W. Liu, Y. Cheng, B. Zhou, and T. Mei (2020) Pose-guided tracking-by-detection: robust multi-person pose tracking. IEEE TMM 23, pp. 161–175. Cited by: §I.
  • [3] Z. Cao, G. Hidalgo, T. Simon, S. Wei, and Y. Sheikh (2019) OpenPose: realtime multi-person 2D pose estimation using part affinity fields. IEEE TPAMI 43 (1), pp. 172–186. Cited by: §IV-C, §IV-E, TABLE VIII.
  • [4] C. Chen, A. Tyagi, A. Agrawal, D. Drover, S. Stojanov, and J. M. Rehg (2019) Unsupervised 3D pose estimation with geometric self-supervision. In IEEE CVPR, Cited by: §I, §II.
  • [5] T. Chen, C. Fang, X. Shen, Y. Zhu, Z. Chen, and J. Luo (2020) Anatomy-aware 3D human pose estimation in videos. arXiv:2002.10322. Cited by: §II.
  • [6] Y. Cheng, B. Wang, B. Yang, and R. Tan (2021) Graph and temporal convolutional networks for 3D multi-person pose estimation in monocular videos. In AAAI, Cited by: §II.
  • [7] Y. Cheng, B. Yang, B. Wang, and R. T. Tan (2020) 3D human pose estimation using spatio-temporal networks with explicit occlusion training. In AAAI, Cited by: §I, §II.
  • [8] Y. Cheng, B. Yang, B. Wang, W. Yan, and R. T. Tan (2019) Occlusion-aware networks for 3D human pose estimation in video. In IEEE ICCV, Cited by: §I, §II.
  • [9] S. Das, S. Sharma, R. Dai, F. Bremond, and M. Thonnat (2020) VPN: learning video-pose embedding for activities of daily living. In ECCV, Cited by: §I.
  • [10] H. Fang, Y. Xu, W. Wang, X. Liu, and S. Zhu (2018) Learning pose grammar to encode human body configuration for 3D pose estimation. In AAAI, Cited by: §II.
  • [11] R. Gu, G. Wang, and J. Hwang (2021) Exploring severe occlusion: multi-person 3D pose estimation with gated convolution. In ICPR, Cited by: §I, §II, §IV-A.
  • [12] M. Hossain and J. J. Little (2018) Exploiting temporal information for 3D human pose estimation. In ECCV, Cited by: §I, §II.
  • [13] J. Hu, W. Zheng, J. Lai, and J. Zhang (2015) Jointly learning heterogeneous features for RGB-D activity recognition. In IEEE CVPR, Cited by: §I, §IV.
  • [14] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu (2014) Human3.6M: large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE TPAMI 36 (7), pp. 1325–1339. Cited by: §I, §IV.
  • [15] K. Lee, I. Lee, and S. Lee (2018) Propagating lstm: 3D pose estimation based on joint interdependency. In ECCV, Cited by: §II.
  • [16] M. Li, Z. Zhou, and X. Liu (2019) Multi-person pose estimation using bounding box constraint and lstm. IEEE TMM 21 (10), pp. 2653–2663. Cited by: §I.
  • [17] R. Liu, J. Shen, H. Wang, C. Chen, S. Cheung, and V. Asari (2020) Attention mechanism exploits temporal contexts: real-time 3D human pose reconstruction. In IEEE CVPR, Cited by: §I, §II, TABLE III, TABLE IV, TABLE V, §IV-A, §IV-A, §IV-B, §IV-C, TABLE VI, TABLE VIII, §IV.
  • [18] A. Markovitz, G. Sharir, I. Friedman, L. Zelnik-Manor, and S. Avidan (2020) Graph embedded pose clustering for anomaly detection. In IEEE CVPR, Cited by: §I.
  • [19] J. Martinez, R. Hossain, J. Romero, and J. J. Little (2017) A simple yet effective baseline for 3D human pose estimation. In IEEE ICCV, Cited by: §II.
  • [20] D. Misra (2019) Mish: a self regularized non-monotonic neural activation function. arXiv:1908.08681. Cited by: §III-A.
  • [21] F. Moreno-Noguer (2017) 3D human pose estimation from a single image via distance matrix regression. In IEEE CVPR, Cited by: §I, §II, TABLE III, TABLE IV, §IV.
  • [22] W. Nie, W. Jia, W. Li, A. Liu, and S. Zhao (2020) 3D pose estimation based on reinforce learning for 2D image-based 3D model retrieval. IEEE TMM 23, pp. 1021–1034. Cited by: §II.
  • [23] S. Park and N. Kwak (2018) 3D human pose estimation with relational networks. In BMVC, Cited by: §I, §II, TABLE III, TABLE IV, §IV-A, §IV.
  • [24] D. Pavllo, C. Feichtenhofer, D. Grangier, and M. Auli (2019) 3D human pose estimation in video with temporal convolutions and semi-supervised training. In IEEE CVPR, Cited by: §I, §II, TABLE III, TABLE IV, TABLE V, §IV-A, §IV-A, §IV-B, §IV-B, §IV-C, TABLE VI, TABLE VIII, §IV.
  • [25] A. Qammaz and A. Argyros (2021) Occlusion-tolerant and personalized 3D human pose estimation in RGB images. In ICPR, Cited by: §I, §II.
  • [26] A. Shahroudy, J. Liu, T. Ng, and G. Wang (2016) NTU RGB+D: a large scale dataset for 3D human activity analysis. In IEEE CVPR, Cited by: §I, §IV.
  • [27] V. Srivastav, A. Gangi, and N. Padoy (2020) Self-supervision on unlabelled OR data for multi-person 2D/3D human pose estimation. In MICCAI, Cited by: §I.
  • [28] B. Wandt and B. Rosenhahn (2019) RepNet: weakly supervised training of an adversarial reprojection network for 3D human pose estimation. In IEEE CVPR, Cited by: §II.
  • [29] C. Weng, B. Curless, and I. Kemelmacher-Shlizerman (2019) Photo wake-up: 3D character animation from a single photo. In IEEE CVPR, Cited by: §I.
  • [30] Y. Xu, W. Wang, T. Liu, X. Liu, J. Xie, and S. Zhu (2021) Monocular 3D pose estimation via pose grammar and data augmentation. IEEE TPAMI. Cited by: §II.
  • [31] P. Zhang, C. Lan, W. Zeng, J. Xing, J. Xue, and N. Zheng (2020) Semantics-guided neural networks for efficient skeleton-based human action recognition. In IEEE CVPR, Cited by: §I, §III-C, §III-C.
  • [32] T. Zhang, B. Huang, and Y. Wang (2020) Object-occluded human shape and pose estimation from a single color image. In IEEE CVPR, Cited by: §II, §IV-A.
  • [33] L. Zhao, X. Peng, Y. Tian, M. Kapadia, and D. N. Metaxas (2019) Semantic graph convolutional networks for 3D human pose regression. In IEEE CVPR, Cited by: §I, TABLE III, TABLE IV, §IV-A, §IV-B, TABLE VI, §IV.