Dynamic Kernel Distillation for Efficient Pose Estimation in Videos

08/24/2019 ∙ by Xuecheng Nie, et al. ∙ National University of Singapore ∙ ByteDance Inc.

Existing video-based human pose estimation methods extensively apply large networks onto every frame in the video to localize body joints, which incurs high computational cost and hardly meets the low-latency requirement of realistic applications. To address this issue, we propose a novel Dynamic Kernel Distillation (DKD) model that enables small networks to estimate human poses in videos, thus significantly lifting the efficiency. In particular, DKD introduces a light-weight distillator to online distill pose kernels by leveraging temporal cues from the previous frame in a one-shot feed-forward manner. DKD then simplifies body joint localization into a matching procedure between the pose kernels and the current frame, which can be efficiently computed via simple convolution. In this way, DKD quickly transfers pose knowledge from one frame to provide compact guidance for body joint localization in the following frame, which enables the use of small networks in video-based pose estimation. To facilitate the training process, DKD exploits a temporally adversarial training strategy that introduces a temporal discriminator to help generate temporally coherent pose kernels and pose estimation results over a long range. Experiments on the Penn Action and Sub-JHMDB benchmarks demonstrate the superior efficiency of DKD, specifically a 10x flops reduction and a 2x speedup over the previous best model, together with state-of-the-art accuracy.


1 Introduction

Figure 1: Comparison between (a) our DKD model and (b) the traditional model for video-based human pose estimation. DKD online distills coherent pose knowledge and simplifies body joint localization into a matching procedure, facilitating small networks to efficiently estimate human pose in videos while achieving outperforming accuracy. See text for details.

Human pose estimation in videos aims to generate framewise joint localization of the human body. It is important for many applications including surveillance [8], computer animation [18], and AR/VR [19]. Compared to its still-image based counterpart, this task is more challenging due to its low-latency requirement and various distracting factors, e.g., motion blur, pose variation and viewpoint change.

Prior CNN based methods for this task [10, 21, 25, 20] usually use a large network to extract representative features for every frame and localize body joints on top of them via pixel-wise classification. Some recent works also incorporate temporal cues from optical flow [9] or RNN units [30] to improve performance, as shown in Fig. 1 (b). Despite their notable accuracy, these methods suffer from expensive computation due to the large model size and hardly meet the low-latency requirement of realistic applications. The efficiency of video-based pose estimation still needs to be substantially improved.

In this paper, we propose to enhance efficiency of human pose estimation in videos by fully leveraging temporal cues to enable small networks to localize body joints accurately. Such an idea is motivated by observing the computational bottleneck for prior models. Considering the temporal consistency across adjacent frames, it is not necessary to pass every frame through a large network for feature extraction. Instead, the model only needs to learn how to effectively transfer knowledge of pose localization in previous frames to the subsequent frames. Such transfer can help alleviate the requirements of large models and reduce the overall computational cost.

To implement the above idea, we design a novel Dynamic Kernel Distillation (DKD) model. As shown in Fig. 1 (a), DKD online distills pose knowledge from the previous frame into pose kernels through a light-weight distillator. Then, DKD simplifies body joint localization into a matching procedure between the pose kernels and the current frame through simple convolution. In this way, DKD fast re-uses pose knowledge from one frame and provides compact guidance for a small network to learn discriminative features for accurate human pose estimation.

In particular, DKD introduces a light-weight CNN based pose kernel distillator. It takes features and pose estimations of the previous frame as input and infers pose kernels suitable for the current frame. These pose kernels carry knowledge of body joint configuration patterns from the previous frame to the current frame, and guide a small network to learn compact features matchable to the pose kernels for efficient pose estimation. Accordingly, body joint localization is cast as a matching procedure via applying pose kernels on feature maps output from small networks with simple convolution to search for regions with similar patterns. Since it gets rid of the need for using large networks, DKD performs significantly faster than prior models. In addition, this 2D convolution based matching scheme is significantly cheaper than additional optical flow [9], the decoding phase of an RNN unit [30] or the expensive 3D convolutions [16]. Moreover, the distillator framewisely updates the pose kernels according to current joint representations and configurations. This dynamic feature makes DKD more flexible and robust in analyzing various scenarios in videos.

To further leverage temporal cues to facilitate the distillator to infer suitable pose kernels, DKD introduces a temporally adversarial training method that adopts a discriminator to help estimate consistent poses in consecutive frames. The temporally adversarial discriminator learns to distinguish the groundtruth change of joint confidence maps over neighboring frames from the predicted change, and thus supervises DKD to generate temporally coherent poses. In contrast to previous adversarial training methods [6, 5] that learn structure priors in the spatial dimension for recognition over still images, our method constrains the pose variations in the temporal dimension of videos, enforcing plausible changes of estimated poses in videos. In addition, this discriminator can be removed during the inference phase, thus introducing no additional computation.

The whole framework of the proposed DKD model is end-to-end learnable. Comprehensive experiments on two widely used benchmarks, Penn Action [33] and Sub-JHMDB [15], demonstrate the efficiency and effectiveness of our DKD model for human pose estimation in videos. Our main contributions are three-fold: 1) We propose a novel model that facilitates small networks in video-based pose estimation with lifted efficiency, by using a light-weight distillator to online distill the pose knowledge and simplifying body joint localization into a matching procedure with simple convolution. 2) We introduce the first temporally adversarial training strategy for encouraging the coherence of estimated poses along the temporal dimension of videos. 3) Our model achieves superior efficiency, i.e., a 10x flops reduction and a 2x speedup over the previous best model, together with state-of-the-art accuracy.

2 Related work

For human pose estimation in videos, existing CNN based methods [12, 11, 25, 20, 10] usually focus on leveraging temporal cues to extract complementary information for refining the preliminary results output by a large network for every frame. In [14], Iqbal et al. incorporate deep learned representations into an action-conditioned pictorial structure model to refine the pose estimation results of each frame. In [12] and [10], 3D convolutions are exploited on video clips to implicitly capture the temporal context between frames. In [25], Song et al. propose a Thin-Slicing network that uses dense optical flow to warp and align heatmaps of neighboring frames and then performs spatial-temporal inference via message passing through a graph constructed from joint candidates and their relationships among the aligned heatmaps. [11] and [20] sequentially estimate human poses in videos following the Encoder-RNN-Decoder framework. Given a frame, this kind of framework first uses an encoder network to learn high-level image representations, then RNN units to explicitly propagate temporal information between neighboring frames and produce hidden states, and finally a decoder network that takes the hidden states as input and outputs the pose estimation results of the current frame. For ensuring good performance, however, these methods always require large networks to learn intermediate representations or preliminary poses, so their efficiency is rather limited.

Different from existing methods, our DKD model distills coherent pose knowledge from temporal cues and simplifies body joint localization as a matching problem, thus allowing small networks to accurately and efficiently estimate human poses in videos, which is explained in more detail below.

Figure 2: The architecture of the proposed Dynamic Kernel Distillation model. (a) The overall framework of the DKD model for inferring human poses in videos; $\otimes$ denotes the convolution operation and $\oplus$ the concatenation. (b) The network backbone utilized in the pose initializer and frame encoder. (c) The network architecture of the pose kernel distillator.

3 Proposed approach

3.1 Formulation

We first mathematically formulate the proposed Dynamic Kernel Distillation (DKD) model for human pose estimation in videos. For a video $\mathcal{I} = \{I_t\}_{t=1}^{T}$ including $T$ frames, we use $I_t \in \mathbb{R}^{h \times w \times 3}$ to denote its $t$-th frame, where $h$ and $w$ are the height and width of $I_t$, respectively. DKD aims to estimate a set of confidence maps $\{C_t\}_{t=1}^{T}$ for all frames in $\mathcal{I}$. Each $C_t$ is of spatial size $h' \times w'$ with $J$ channels, where $J$ is the number of body joints, and each of its elements encodes the confidence of a joint at the corresponding position. Accordingly, DKD performs online human pose estimation frame-by-frame in a sequential manner, by leveraging temporal cues between neighboring frames. In particular, its core is composed of a pose kernel distillator with a temporally adversarial training strategy.

Pose kernel distillation

Given a frame $I_{t-1}$, DKD introduces a pose kernel distillator $f_{\mathrm{PKD}}$ to transfer the pose knowledge provided by $I_{t-1}$ to guide pose estimation in the next frame $I_t$. In particular, it leverages temporal cues, represented by the combination of feature maps $F_{t-1}$ and confidence maps $C_{t-1}$, to online distill pose kernels $K_t$ via a simple feed-forward computation

$K_t = f_{\mathrm{PKD}}(F_{t-1} \oplus C_{t-1})$,  (1)

where $\oplus$ denotes channel-wise concatenation, $K_t \in \mathbb{R}^{J \times c \times k \times k}$ with $c$ the channel number of the feature maps, and $k$ is the kernel size. The distilled pose kernels encode knowledge of body joint patterns and provide compact guidance for pose estimation in the posterior frame, which is learnable with light-weight networks. Accordingly, DKD exploits a small frame encoder $f_E$ to learn high-level image representations $F_t$ of frame $I_t$ to match these distilled pose kernels, alleviating the demand for large networks that troubles prior works [25, 20]. Then, DKD applies the distilled pose kernels on the feature maps $F_t$ in a sliding-window manner to search for the region with patterns similar to each body joint, namely,

$C_t^j = K_t^j \otimes F_t$,  (2)

where $\otimes$ denotes the convolution operation, and $C_t^j$, $K_t^j$ are the confidence map and pose kernels of the $j$-th joint, respectively. With the above formulation, DKD casts human pose estimation as a matching problem and locates the position with the maximum response on $C_t^j$ in the $t$-th frame as the $j$-th body joint.
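To make the matching step concrete, the following is a minimal PyTorch sketch of Eqn. (2), assuming unfactorized per-joint kernels of shape (J, c, k, k); the function names and tensor shapes are illustrative, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def match_pose_kernels(feat_t, kernels_t):
    """Body joint localization as kernel matching (cf. Eqn. (2)).

    feat_t:    (c, H, W)    features F_t of the current frame from the frame encoder
    kernels_t: (J, c, k, k) pose kernels K_t distilled from the previous frame
    returns:   (J, H, W)    confidence maps C_t, one per body joint
    """
    k = kernels_t.shape[-1]
    # Each joint's kernel slides over the feature maps; high responses mark
    # regions whose patterns resemble that joint in the previous frame.
    conf = F.conv2d(feat_t.unsqueeze(0), kernels_t, padding=k // 2)
    return conf.squeeze(0)

def locate_joints(conf_maps):
    """Take the argmax of each confidence map as the joint position (y, x)."""
    num_joints, _, width = conf_maps.shape
    flat_idx = conf_maps.view(num_joints, -1).argmax(dim=1)
    return torch.stack((flat_idx // width, flat_idx % width), dim=1)
```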

In this way, the pose kernel distillator equips DKD with the capability of transferring pose knowledge among neighboring frames and enables small network to estimate human pose in videos. Its distilled pose kernels can be applied to fast localize body joints with simple convolution, further improving the efficiency. In addition, it can directly leverage temporal cues of one frame to assist body joint localization in the following frame, without requiring auxiliary optical flow models [25] or decoders appended to RNN units [20]. It can also fast distill pose kernels in a one-shot manner, avoiding complex iterating utilized by previous online kernel learning models [4, 27]. Moreover, it framewisely updates pose kernels and improves the robustness of our model to joint appearance and configuration variations.

It is worth noting that, for the first frame, due to the lack of preceding temporal cues, we utilize another pose model $f_{\mathrm{init}}$, usually larger than $f_E$, to initialize its confidence maps, i.e., $C_1 = f_{\mathrm{init}}(I_1)$. In particular, $f_{\mathrm{init}}$ together with $f_E$ and $f_{\mathrm{PKD}}$ instantiates the pose generator. Given pose annotations $\{C_t^*\}_{t=1}^{T}$, to learn the pose generator, we define the loss as

$\mathcal{L}_G = \sum_{t=1}^{T} \ell_2(C_t, C_t^*)$,  (3)

where $\ell_2$ denotes the Mean Square Error loss.

Temporally adversarial training

To further leverage temporal cues, DKD adopts an adversarial training strategy to learn proper supervision in the temporal dimension for improving the pose kernel distillator. Adversarial training was only exploited for images in the spatial dimension in prior works [6, 5]. In contrast, our proposed temporally adversarial training strategy aims to provide constraints on pose changes in the temporal dimension, helping estimate coherent human poses in consecutive frames of videos. Inspired by [6], DKD introduces a discriminator $D$ to distinguish the changes of groundtruth confidence maps between neighboring frames from the predicted ones. The discriminator takes as input two neighboring confidence maps (either from groundtruth or prediction) concatenated with the corresponding images, and reconstructs the change of the confidence maps. For real (groundtruth) samples $C_{t-1}^*$ and $C_t^*$, the discriminator targets at approaching their change $\Delta C_t^* = C_t^* - C_{t-1}^*$, while for fake (predicted) samples $C_{t-1}$ and $C_t$, it keeps the reconstructed change away from $\Delta C_t^*$. Therefore, the discriminator can better differentiate the groundtruth change from erroneous predictions. In this way, the discriminator criticizes pixel-wise variations of confidence maps and judges whether joint positions move rationally, encouraging the pose kernel distillator to distill suitable pose kernels and ensuring consistency of estimated poses between neighboring frames. To train the discriminator $D$, we define its loss function as

$\mathcal{L}_D = \sum_{t=2}^{T} \big[\ell_2(D_t^{\mathrm{real}}, \Delta C_t^*) - k_i\,\ell_2(D_t^{\mathrm{fake}}, \Delta C_t^*)\big]$,  (4)

where $D_t^{\mathrm{real}}$ denotes the output of the discriminator for real samples and $D_t^{\mathrm{fake}}$ the one for fake samples. $k_i$ is a variable for dynamically balancing the relative learning speed between the pose generator and the temporally adversarial discriminator.
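A minimal sketch of how this discriminator loss could be computed in PyTorch is given below; it assumes the discriminator reconstructs the confidence-map change from the two maps concatenated with the (resized) frame, and that the balancing variable is passed in as a scalar. Tensor layouts are assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, frame, conf_gt_prev, conf_gt_cur,
                       conf_pred_prev, conf_pred_cur, k_i):
    """Temporally adversarial loss for one frame pair, a sketch of Eqn. (4).

    frame: the current frame resized to the confidence-map resolution (assumed).
    The discriminator D reconstructs the change of confidence maps; the
    reconstruction should approach the groundtruth change for real pairs and
    stay away from it for fake (predicted) pairs, balanced by k_i.
    """
    delta_gt = conf_gt_cur - conf_gt_prev                       # groundtruth change
    real_in = torch.cat((frame, conf_gt_prev, conf_gt_cur), dim=1)
    # Detach predictions so this loss only updates the discriminator.
    fake_in = torch.cat((frame, conf_pred_prev.detach(), conf_pred_cur.detach()), dim=1)

    loss_real = F.mse_loss(D(real_in), delta_gt)
    loss_fake = F.mse_loss(D(fake_in), delta_gt)
    return loss_real - k_i * loss_fake, loss_real, loss_fake    # first term is minimized by D
```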

The temporally adversarial training conventionally follows a two-player minimax game. Therefore, the final objective function of the DKD model is written as

$\min_{G}\max_{D}\; \mathcal{L} = \mathcal{L}_G - \lambda\,\mathcal{L}_D$,  (5)

where $\lambda$ is a constant weighting the generator loss against the discriminator loss, set as 0.1. The training process for optimizing the above objective function will be illustrated in Section 3.3.

3.2 Network architecture

Pose initializer

For the first frame $I_1$, DKD utilizes a pose initializer $f_{\mathrm{init}}$ to directly estimate its confidence maps $C_1$. Here, $f_{\mathrm{init}}$ exploits the network of [29], which achieves outstanding performance with a simple architecture. The network follows a U-shape design: it first encodes down-sized feature maps from the input image and then gradually recovers high-resolution feature maps by appending several deconvolution layers, as shown in Fig. 2 (b). In particular, we use ResNet [13] as the backbone and append two deconvolution layers, resulting in a total network stride of 8. The other settings follow [29].
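For concreteness, a sketch of such a network in PyTorch is shown below, assuming a torchvision ResNet backbone and 256-channel deconvolution layers; the exact widths and weight initialization are assumptions in the spirit of [29], not the authors' released code. The same module, with the final classification layer skipped, serves as the frame encoder described next.

```python
import torch.nn as nn
import torchvision

class PoseNet(nn.Module):
    """ResNet backbone + two deconvolution layers (total stride 8), a sketch
    following the simple baseline of [29]."""

    def __init__(self, depth=50, num_joints=13, feat_dim=256):
        super().__init__()
        resnet = getattr(torchvision.models, f"resnet{depth}")()
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # stride 32
        in_dim = 2048 if depth >= 50 else 512
        deconv = []
        for _ in range(2):  # two deconv layers: stride 32 -> 16 -> 8
            deconv += [nn.ConvTranspose2d(in_dim, feat_dim, 4, stride=2, padding=1),
                       nn.BatchNorm2d(feat_dim), nn.ReLU(inplace=True)]
            in_dim = feat_dim
        self.deconv = nn.Sequential(*deconv)
        # Final classification layer; removed when the network is used as the
        # frame encoder so that it outputs feature maps instead of heatmaps.
        self.head = nn.Conv2d(feat_dim, num_joints, kernel_size=1)

    def forward(self, x, return_features=False):
        feat = self.deconv(self.backbone(x))
        return feat if return_features else self.head(feat)
```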

Frame encoder

DKD utilizes an encoder $f_E$ to extract high-level features $F_t$ of frame $I_t$ that match the pose kernels from the pose kernel distillator. Here, we design $f_E$ with the same network architecture as the pose initializer $f_{\mathrm{init}}$, with only the last classification layer removed. Note that the backbone of $f_E$ is much smaller than that of $f_{\mathrm{init}}$.

Pose kernel distillator

The pose kernel distillator $f_{\mathrm{PKD}}$ in DKD takes as input the temporal information, represented by the concatenation of feature maps $F_{t-1}$ and confidence maps $C_{t-1}$, and distills the pose kernels in a one-shot feed-forward manner. We implement $f_{\mathrm{PKD}}$ with a CNN consisting of three convolution layers, each followed by BatchNorm and ReLU layers, and two pooling layers. Its architecture is shown in Fig. 2 (c). This light-weight CNN guarantees the efficiency of $f_{\mathrm{PKD}}$. However, it is inefficient and infeasible for $f_{\mathrm{PKD}}$ to directly learn all kernels $K_t$ at full size, which brings high computational complexity and also the risk of overfitting. To avoid these issues, inspired by [3], DKD exploits $f_{\mathrm{PKD}}$ to learn kernel bases $\widetilde{K}_t$ instead of the full-size $K_t$ via the following factorization:

$K_t^j = u^j \otimes \widetilde{K}_t \circledast v^j$,  (6)

where $\otimes$ is the convolution operation, $\circledast$ the channel-wise convolution, and $u^j$, $v^j$ are coefficients over the kernel bases $\widetilde{K}_t$. In this way, the size of the actual outputs from the pose kernel distillator is smaller than the original $K_t$ by an order of magnitude, thus enhancing the efficiency of the DKD model.

To generate the confidence maps $C_t$ of $I_t$, the calculation between $K_t$ and $F_t$ is implemented with convolution layers. In particular, we first apply a convolution parameterized by $u$ on $F_t$. Then we apply $\widetilde{K}_t$ in a dynamic convolution layer [22], which is the same as a traditional convolution layer except that the pre-learned static kernels are replaced with the dynamically distilled ones. Finally, we adopt another convolution with $v$ to produce $C_t$. To scale the estimation results with the pose kernels, we add a BatchNorm layer at the end to facilitate the training.
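Below is a sketch of how the distillator and the factorized dynamic convolution could be implemented in PyTorch. It assumes the kernel bases are obtained by pooling the distillator output to the kernel size and that the coefficients u and v are 1x1 convolutions; the layer sizes and the pooling step are assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseKernelDistillator(nn.Module):
    """One-shot pose kernel distillator, a sketch of Fig. 2(c) and Eqn. (6)."""

    def __init__(self, feat_dim=256, num_joints=13, num_bases=256, k=7):
        super().__init__()
        self.k, self.num_bases = k, num_bases

        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

        # Three conv layers (BN + ReLU) interleaved with two pooling layers.
        self.net = nn.Sequential(block(feat_dim + num_joints, 256), nn.MaxPool2d(2),
                                 block(256, 256), nn.MaxPool2d(2),
                                 block(256, num_bases))
        # Coefficients u, v of Eqn. (6): map features onto the kernel bases and
        # the base responses onto the J joints.
        self.u = nn.Conv2d(feat_dim, num_bases, kernel_size=1)
        self.v = nn.Conv2d(num_bases, num_joints, kernel_size=1)
        self.bn = nn.BatchNorm2d(num_joints)

    def distill(self, feat_prev, conf_prev):
        """Distill kernel bases from the previous frame (cf. Eqn. (1))."""
        x = self.net(torch.cat((feat_prev, conf_prev), dim=1))
        # Pool to the kernel size so each base becomes a k x k filter (assumed).
        return F.adaptive_avg_pool2d(x, self.k)                # (B, num_bases, k, k)

    def match(self, bases, feat_cur):
        """Apply the distilled kernels to the current frame (cf. Eqn. (2))."""
        b = feat_cur.size(0)
        x = self.u(feat_cur)                                   # (B, num_bases, H, W)
        # Dynamic, per-sample depthwise convolution with the distilled bases.
        x = F.conv2d(x.reshape(1, -1, *x.shape[-2:]),
                     bases.reshape(-1, 1, self.k, self.k),
                     padding=self.k // 2, groups=b * self.num_bases)
        x = x.reshape(b, self.num_bases, *x.shape[-2:])
        return self.bn(self.v(x))                              # (B, J, H, W) confidence maps
```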

Temporally adversarial discriminator

DKD utilizes the temporally adversarial discriminator $D$ to enhance the learning of the pose kernel distillator, using confidence map variations as auxiliary temporal supervision. We design $D$ with the same network backbone as the frame encoder to balance the learning capability between the pose generator and the discriminator.

3.3 Training and inference

In this subsection, we explain the training and inference processes of the DKD model for human pose estimation in videos. Specifically, DKD exploits a temporally adversarial training strategy. The discriminator is optimized by maximizing the objective defined in Eqn. (5), so as to distinguish the changes of groundtruth confidence maps between neighboring frames from the estimated ones. On the other hand, the generator produces a set of confidence maps for consecutive frames in a video and meanwhile fools the discriminator by making the changes of estimated poses approach those of the groundtruth ones. To synchronize the learning speed between generator and discriminator, we follow [2, 6] to update $k_i$ in Eqn. (4) at each iteration $i$:

$k_{i+1} = k_i + \lambda_k \big(\mathcal{L}^{(i)}_{\mathrm{real}} - \mathcal{L}^{(i)}_{\mathrm{fake}}\big)$,  (7)

where $\lambda_k$ is a hyper-parameter controlling the update rate and set as 0.1, and $\mathcal{L}^{(i)}_{\mathrm{real}}$, $\mathcal{L}^{(i)}_{\mathrm{fake}}$ denote the real and fake reconstruction terms of Eqn. (4) at iteration $i$. $k_i$ is initialized as 0 and bounded in $[0, 1]$. As defined in Eqn. (7), when the generator successfully fools the discriminator, $k_i$ is increased to make the optimizer emphasize improving the discriminator, and vice versa. The overall training process is illustrated in Algorithm 1.
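As a small sketch, the update of Eqn. (7) amounts to a clipped running adjustment driven by the two reconstruction terms of Eqn. (4), passed here as plain scalars:

```python
def update_balance(k_i, loss_real, loss_fake, lambda_k=0.1):
    """Update the balancing variable (cf. Eqn. (7)); loss_real and loss_fake
    are the scalar values of the real/fake reconstruction terms of Eqn. (4)."""
    k_next = k_i + lambda_k * (loss_real - loss_fake)
    return min(max(k_next, 0.0), 1.0)  # k is initialized as 0 and kept in [0, 1]
```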

During inference, the discriminator is removed. Given a video, DKD first utilizes the pose initializer $f_{\mathrm{init}}$ to estimate the confidence maps $C_1$ of the first frame. Then, $C_1$ is combined with the feature maps $F_1$ from the encoder as input to the pose kernel distillator for distilling the initial pose kernels $K_2$. For the second and subsequent frames, DKD applies the framewisely updated pose kernels $K_t$ on the feature maps $F_t$ of the posterior frame to estimate the confidence maps $C_t$. Finally, DKD outputs body joint positions for each frame by localizing the maximum responses on the corresponding confidence maps. The overall inference procedure of DKD is given in Fig. 2 (a).
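Putting the pieces together, a sketch of this sequential inference loop is given below, reusing the hypothetical PoseNet, PoseKernelDistillator, and locate_joints helpers sketched earlier; names and shapes remain illustrative.

```python
import torch

@torch.no_grad()
def estimate_video_poses(frames, pose_init, encoder, pkd):
    """Sequential inference, a sketch of Fig. 2(a).

    frames: (T, 3, H, W) tensor of video frames.
    pose_init / encoder: PoseNet-style modules; pkd: PoseKernelDistillator.
    """
    poses = []
    conf = pose_init(frames[0:1])                          # C_1 from the pose initializer
    feat = encoder(frames[0:1], return_features=True)      # F_1 from the frame encoder
    poses.append(locate_joints(conf[0]))
    for t in range(1, frames.size(0)):
        bases = pkd.distill(feat, conf)                    # pose kernels from frame t-1
        feat = encoder(frames[t:t + 1], return_features=True)
        conf = pkd.match(bases, feat)                      # confidence maps C_t (Eqn. (2))
        poses.append(locate_joints(conf[0]))
    return poses
```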

input : video $\{I_t\}_{t=1}^{T}$, groundtruth $\{C_t^*\}_{t=1}^{T}$, iteration number $N$
initialization: $k_1 \leftarrow 0$, $\lambda \leftarrow 0.1$
for iteration $i = 1$ to $N$ do
        $\mathcal{L}_G \leftarrow 0$, $\mathcal{L}_D \leftarrow 0$
        Forward pose initializer: $C_1 \leftarrow f_{\mathrm{init}}(I_1)$
        Update loss: $\mathcal{L}_G \leftarrow \mathcal{L}_G + \ell_2(C_1, C_1^*)$
        for frame $t = 1$ to $T$ do
               if $t$ equals 1 then
                      Encode image representations: $F_1 \leftarrow f_E(I_1)$
               else
                      Forward discriminator on real samples: $D_t^{\mathrm{real}} \leftarrow D(I_t, C_{t-1}^*, C_t^*)$
                      Update loss: $\mathcal{L}_D \leftarrow \mathcal{L}_D + \ell_2(D_t^{\mathrm{real}}, \Delta C_t^*)$
                      Update pose kernels: $K_t \leftarrow f_{\mathrm{PKD}}(F_{t-1} \oplus C_{t-1})$
                      Encode image representations: $F_t \leftarrow f_E(I_t)$
                      Estimate confidence maps $C_t$ with Eqn. (2)
                      Update loss: $\mathcal{L}_G \leftarrow \mathcal{L}_G + \ell_2(C_t, C_t^*)$
                      Forward discriminator on fake samples: $D_t^{\mathrm{fake}} \leftarrow D(I_t, C_{t-1}, C_t)$
                      Update loss: $\mathcal{L}_D \leftarrow \mathcal{L}_D - k_i\,\ell_2(D_t^{\mathrm{fake}}, \Delta C_t^*)$
                      Update loss: $\mathcal{L} \leftarrow \mathcal{L}_G - \lambda\,\mathcal{L}_D$
               end if
        end for
        Update discriminator $D$ with $\mathcal{L}_D$ via backpropagation
        Update $f_{\mathrm{init}}$, $f_E$, and $f_{\mathrm{PKD}}$ with $\mathcal{L}$ via backpropagation
        Update $k_i$ with Eqn. (7)
end for
Algorithm 1 Training process for our DKD model.

4 Experiments

4.1 Experimental setup

Datasets

We evaluate our model on two widely used benchmarks: Penn Action [33] and Sub-JHMDB [15]. The Penn Action dataset is a large-scale unconstrained video dataset. It contains 2,326 video clips, 1,258 for training and 1,068 for testing. Each person in a frame is annotated with 13 body joints, including the coordinates and visibility. Following convention, evaluations on the Penn Action dataset only consider the visible joints. Sub-JHMDB is another dataset for video-based human pose estimation. It provides labels for 15 body joints. Different from the Penn Action dataset, it annotates only visible joints for complete bodies. It contains 316 video clips with 11,200 frames in total. The ratio between the numbers of training and testing videos is roughly 3:1. In addition, it includes three different split schemes. Following previous works [20, 25], we separately conduct evaluations on these three splits and report the averaged results.

Data augmentation

For both the Penn Action and Sub-JHMDB datasets, we perform data augmentation following conventional strategies, including random scaling, random rotation, and random flipping. The same augmentation setting is applied to all frames in a training video clip. In addition, each frame is cropped around the person center in the original image and padded to a fixed resolution as input for training.

Implementation

For fair comparison with previous works [20, 25], we first pre-train the pose initializer and the frame encoder for single-person pose estimation on the MPII dataset [1]. Then, we fine-tune the pre-trained models together with the randomly initialized pose kernel distillator and the temporally adversarial discriminator on the Penn Action and Sub-JHMDB datasets for 40 epochs, respectively. In particular, each training sample contains 5 frames consecutively sampled from a video. We set the channel number of the pose kernels as 256 and the kernel size $k$ as 7. We implement our DKD model with PyTorch [24] and use RMSprop as the optimizer [26]. We set the initial learning rate as 0.0005 and drop it with a multiplier of 0.1 at the 15th and 25th epochs. For evaluation, we perform seven-scale testing with flipping.
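The optimizer and schedule above map directly to standard PyTorch utilities; a minimal sketch, where `model` and `train_one_epoch` are placeholders for the DKD modules and training routine:

```python
import torch

# RMSprop with initial learning rate 0.0005, decayed by 0.1 at epochs 15 and 25.
optimizer = torch.optim.RMSprop(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[15, 25], gamma=0.1)

for epoch in range(40):
    train_one_epoch(model, optimizer)  # hypothetical per-epoch training routine
    scheduler.step()
```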

Evaluation metrics

We evaluate performance with PCK [32]: the localization of a body joint is considered correct if it falls within $\alpha \cdot L$ pixels of the groundtruth. $\alpha$ controls the relative threshold and is conventionally set as 0.2. $L$ is the reference distance, set as $\max(h, w)$ following prior works [20, 25], with $h$ and $w$ being the height and width of the person bounding box. We term this metric PCK normalized by person size. This metric is somewhat loose for precisely evaluating model performance, as the person size is usually relatively large. Therefore, we follow the conventions of still-image based pose estimation [17, 7, 31, 28] and also adopt another metric that takes the torso size as the reference distance. We term it PCK normalized by torso size.
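A small sketch of this metric is shown below; the reference distance per sample can be either the person size or the torso size, depending on the variant being reported.

```python
import numpy as np

def pck(pred, gt, ref, alpha=0.2):
    """PCK: a joint is correct if its prediction falls within alpha * ref
    pixels of the groundtruth.

    pred, gt: (N, J, 2) arrays of predicted / groundtruth joint coordinates
    ref:      (N,) reference distances (person size or torso size)
    returns:  (J,) per-joint PCK
    """
    dist = np.linalg.norm(pred - gt, axis=-1)      # (N, J) Euclidean errors
    correct = dist <= alpha * ref[:, None]
    return correct.mean(axis=0)
```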

Methods Flops(G) Head Sho. Elb. Wri. Hip Knee Ank. PCK
Baseline(ResNet101) 11.02 96.1 90.7 91.4 89.5 86.2 92.2 88.9 90.7
DKD(ResNet50) 8.65 96.6 93.7 92.9 91.2 88.8 94.3 93.7 92.9
DKD(ResNet50)-w/o-TAT 8.65 96.6 92.6 92.9 90.8 87.5 93.4 92.4 92.1
DKD(ResNet50)-w/o-PKD 7.66 96.0 91.8 92.4 90.4 88.3 93.5 89.8 91.6
Baseline(ResNet50) 7.66 96.0 90.5 89.4 87.6 83.8 89.7 86.0 88.8
DKD(ResNet34) 7.68 96.4 91.9 93.0 90.8 88.6 93.5 91.9 92.1
DKD(ResNet34)-w/o-TAT 7.68 96.4 91.2 92.7 89.9 87.3 93.3 90.9 91.4
DKD(ResNet34)-w/o-PKD 6.69 95.9 91.1 91.9 89.3 87.7 92.5 90.3 91.0
Baseline(ResNet34) 6.69 95.8 88.7 88.5 86.7 83.6 89.6 85.3 87.3
DKD(ResNet18) 5.27 95.7 90.0 92.2 89.4 86.8 92.3 89.5 90.6
DKD(ResNet18)-w/o-TAT 5.27 95.5 89.3 91.9 89.1 85.0 91.6 89.0 89.9
DKD(ResNet18)-w/o-PKD 4.28 95.0 89.1 92.4 88.7 85.5 91.4 87.7 89.7
Baseline(ResNet18) 4.28 94.7 86.0 87.7 84.6 81.1 87.4 84.3 86.1
Table 1: Ablation studies on Penn Action dataset with PCK normalized by torso size as evaluation metric.

Figure 3: Comparison of confidence maps estimated from the proposed model DKD(ResNet34) and the baseline one Baseline(ResNet34). (a) are input frames. (b) and (d) are estimated confidence maps from our model for right elbow and right hip, respectively, and (c) and (e) from baseline. Best viewed in color.

4.2 Ablation analysis

We first conduct ablation studies on the Penn Action dataset to analyze the efficacy of each core component of our DKD model: the pose kernel distillator and the temporally adversarial training. We fix the backbone of the pose initializer as ResNet101 and vary the backbone of the frame encoder among ResNet18/34/50, since it dominates the computational cost of pose estimation in our model. We use DKD(ResNetX) to denote our full model, where X represents the depth of the frame encoder backbone. We use DKD(ResNetX)-w/o-TAT to denote the model without the temporally adversarial training and DKD(ResNetX)-w/o-PKD the model without the pose kernel distillator. We use Baseline(ResNetX) to denote the single-image pose estimation model without temporal cues. Results are shown in Tab. 1.

From Tab. 1, we can see that DKD(ResNet34) and DKD(ResNet50) use smaller networks for frame feature learning yet achieve much better performance than Baseline(ResNet101), which is much deeper. We can also see that DKD(ResNet18) achieves performance comparable to Baseline(ResNet101) (90.6 vs 90.7 PCK) with roughly half the flops (5.27G vs 11.02G). These results verify the efficacy of DKD in enabling small networks to estimate human poses in videos, bringing efficiency gains while achieving superior accuracy.

By comparing the DKD(ResNetX)-w/o-TAT models with the Baseline(ResNetX) models, we find that the computation overhead of the pose kernel distillator is small, bringing only a slight flops increase, e.g., from 7.66G to 8.65G with ResNet50 as backbone. We also find that the pose kernel distillator improves frame-level pose estimation performance over the baselines by more than 3% PCK on average. Besides, DKD(ResNetX)-w/o-TAT always outperforms DKD(ResNetX)-w/o-PKD, which implies that the distilled pose kernels carry knowledge of body joint patterns and provide compact guidance for pose estimation between neighboring frames, which is absent in still-image based inference. The above results verify the efficacy of the pose kernel distillator for efficiently transferring pose knowledge to assist pose estimation in videos.

By comparing the time cost of the DKD(ResNetX)-w/o-PKD models with the Baseline(ResNetX) models, we find that temporally adversarial training does not hurt inference speed, since the discriminator is used only in training. In addition, the temporally adversarial training consistently improves the baseline performance for all body joints, in particular for joints that are difficult to localize, e.g., DKD(ResNet34)-w/o-PKD improves the accuracy on ankles from 85.3 to 90.3 PCK. This demonstrates that the proposed temporally adversarial training effectively regularizes temporal changes of pose predictions during model training.

Combining temporally adversarial training with the pose kernel distillator, the full DKD model further boosts the performance over all the ablated models, showing that they are complementary. In particular, the DKD(ResNetX) models achieve more than 4% PCK average performance gain over the corresponding vanilla baselines Baseline(ResNetX).

To better reveal the advantages of our DKD model over single-frame based models, we visualize the confidence maps estimated by DKD(ResNet34) and Baseline(ResNet34) for the right elbow and right hip in Fig. 3. By comparing Fig. 3 (b) and (c), we observe that our DKD model produces responses on the correct person of interest with more accurate localization, whereas the baseline model produces false alarms on the elbow of another person in the frame. We can also see that the proposed model produces consistent confidence maps for the hip in Fig. 3 (d), while the baseline model produces unstable estimations even though the hip stays fixed, as shown in Fig. 3 (e). These results further validate the capability of the proposed model to generate accurate and temporally consistent human pose estimations in videos.

Methods Flops(G) Head Sho. Elb. Wri. Hip Knee Ank. PCK
DKD(ResNet34) 7.68 96.4 91.9 93.0 90.8 88.6 93.5 91.9 92.1
DKD(ResNet34)-w-SAT 7.68 96.4 91.4 92.8 90.1 87.7 93.4 91.2 91.6
DKD(ResNet34)-w-LSTM 10.16 95.7 89.5 92.9 90.2 86.9 93.5 90.1 91.1
Table 2: Comparison of temporally vs. spatially adversarial training, and pose kernel distillator vs. Convolutional LSTM. The accuracy is measured with PCK normalized by torso size.

Next, we analyze how well our pose kernel distillator propagates temporal information by comparing it with the state-of-the-art Convolutional LSTMs [20]. We also compare our temporally adversarial training with its spatial counterpart in [6]. All the compared models adopt ResNet101 as the backbone of the pose initializer and ResNet34 as the frame encoder; except for the compared components, all other settings are the same. Results are shown in Tab. 2. We use DKD(ResNet34)-w-LSTM to denote the model that utilizes a Convolutional LSTM for temporal cue propagation instead of our pose kernel distillator. We observe that DKD(ResNet34)-w-LSTM degrades the accuracy of DKD(ResNet34) for all body joints, especially for the wrist and ankle, and increases the flops from 7.68G to 10.16G. These results demonstrate the superiority of the pose kernel distillator over traditional RNN units, in both efficiency and efficacy, for transferring pose knowledge between neighboring frames.

We use DKD(ResNet34)-w-SAT to denote the model in which our temporally adversarial training is replaced with the spatial one in [6]. Specifically, [6] introduces a discriminator to distinguish single-frame groundtruth confidence maps from estimated ones, imposing structural spatial constraints on poses. We can see that DKD(ResNet34) consistently outperforms DKD(ResNet34)-w-SAT. In addition, comparing DKD(ResNet34)-w-SAT with DKD(ResNet34)-w/o-TAT in Tab. 1, spatially adversarial training brings only limited improvement. These results further verify the efficacy of applying adversarial training in the temporal dimension.

4.3 Comparisons with state-of-the-arts

Tab. 3 shows the comparisons of our DKD model with state-of-the-arts on the Penn Action dataset. In particular, the method proposed in [20] follows the Encoder-RNN-Decoder framework with Convolutional LSTMs, while [25] exploits optical flow models to align confidence maps of neighboring frames. We report the performance of our model with both person and torso size as the reference distance under the PCK evaluation metric. For comparison with the current best model [20], we report its performance with PCK normalized by torso size, flops, and running time (we reproduce the results of [20] with PCK normalized by torso size by running the code released by the authors at https://github.com/lawy623/LSTM_Pose_Machines; the running time is measured on a GTX 1080Ti GPU for both [20] and our model). For our DKD model, we fix the backbone of the pose initializer as ResNet101 and vary the backbone of the frame encoder among ResNet18/34/50. Since both state-of-the-arts [20] and [25] use the same network as Convolutional Pose Machines (CPM) [28], we also experiment with a frame encoder that is a simplified version of CPM, obtained by replacing its kernels larger than 3x3 with 3x3 kernels, denoted as DKD(SmallCPM), to further verify the efficacy of DKD in facilitating small networks for video-based pose estimation.

Methods Flops(G) Time(ms) Head Sho. Elb. Wri. Hip Knee Ank. PCK
Normalized by Person Size
Park et al. [23] - - 62.8 52.0 32.3 23.3 53.3 50.2 43.0 45.3
Nie et al. [21] - - 64.2 55.4 33.8 24.4 56.4 54.1 48.0 48.0
Iqal et al. [14] - - 89.1 86.4 73.9 73.0 85.3 79.9 80.3 81.1
Gkioxari et al. [11] - - 95.6 93.8 90.4 90.7 91.8 90.8 91.5 91.8
Song et al. [25] - - 98.0 97.3 95.1 94.7 97.1 97.1 96.9 96.5
Luo et al. [20] 70.98 25 98.9 98.6 96.6 96.6 98.2 98.2 97.5 97.7
DKD(SmallCPM) 9.96 12 98.4 97.3 96.1 95.5 97.0 97.3 96.6 96.8
DKD(ResNet50) 8.65 11 98.8 98.7 96.8 97.0 98.2 98.1 97.2 97.8
Normalized by Torso Size
Luo et al. [20] 70.98 25 96.0 93.6 92.4 91.1 88.3 94.2 93.5 92.6
DKD(SmallCPM) 9.96 12 96.0 93.5 92.0 90.6 87.8 94.0 93.1 92.4
DKD(ResNet50) 8.65 11 96.6 93.7 92.9 91.2 88.8 94.3 93.7 92.9
Table 3: Comparison with state-of-the-arts on Penn Action dataset.

Figure: Extensive analysis comparing our method with the state-of-the-art [20] on (a) PCK over different thresholds with $\alpha$ ranging from 0 to 0.2; (b) speed vs. accuracy.

Figure 4: Qualitative results on (a) Penn Action dataset and (b) Sub-JHMDB dataset. Best viewed in color and 2x zoom.

From Tab. 3, we observe that our best model DKD(ResNet50) reduces the computation flops by roughly an order of magnitude compared with [20] (8.65G vs 70.98G) and runs 2x faster (11ms vs 25ms per image), verifying the superior efficiency of our model. In addition, under PCK normalized by person size, DKD(ResNet50) achieves accuracy comparable to the state-of-the-art [20]. When using PCK normalized by torso size, DKD(ResNet50) achieves superior accuracy over [20] (92.9 vs 92.6 PCK), with better performance for all body joints. We also compare our model with [20] by evaluating performance with PCK normalized by torso size while varying the threshold $\alpha$ from 0 to 0.2 with a step size of 0.01; results are shown in Fig. 4.3 (a). We can see that DKD consistently outperforms [20] under the stricter metrics obtained by decreasing $\alpha$. These results demonstrate the superior speed and accuracy of our model for human pose estimation in videos.

By comparing DKD(SmallCPM) with [20], we find that our DKD model maintains high accuracy despite the significant simplification of the network (9.96G vs 70.98G flops). This result verifies the effectiveness of our DKD model in alleviating the demand for large networks in video-based human pose estimation.

To evaluate the effects of different frame encoder backbones on the efficiency and efficacy of DKD, we plot speed vs. accuracy for different models in Fig. 4.3 (b). We observe that reducing the depth of the frame encoder backbone from ResNet50 to ResNet18 slightly degrades the accuracy, but speeds up inference from 11ms to 6.5ms per image. In addition, DKD(ResNet18) achieves performance comparable to [20] while running nearly 4x faster. These results further validate the efficacy of our DKD model in facilitating small networks for video-based pose estimation.

Methods Head Sho. Elb. Wri. Hip Knee Ank. PCK
Normalized by Person Size
Park et al. [23] 79.0 60.3 28.7 16.0 74.8 59.2 49.3 52.5
Nie et al. [21] 80.3 63.5 32.5 21.6 76.3 62.7 53.1 55.7
Iqal et al. [14] 90.3 76.9 59.3 55.0 85.9 76.4 73.0 73.8
Song et al. [25] 97.1 95.7 87.5 81.6 98.0 92.7 89.8 92.1
Luo et al. [20] 98.2 96.5 89.6 86.0 98.7 95.6 90.9 93.6
DKD(ResNet50) 98.3 96.6 90.4 87.1 99.1 96.0 92.9 94.0
Normalized by Torso Size
Luo et al. [20] 92.7 75.6 66.8 64.8 78.0 73.1 73.3 73.6
DKD(ResNet50) 94.4 78.9 69.8 67.6 81.8 79.0 78.8 77.4
Table 4: Comparison with state-of-the-arts on Sub-JHMDB dataset.

Tab. 4 shows the comparisons of our DKD model with state-of-the-arts on the Sub-JHMDB dataset. We can see that our DKD model achieves a new state-of-the-art of 94.0 PCK and performs best for all body joints. When using the stricter metric, PCK normalized by torso size, the superiority of our model over [20] is more significant, achieving a 3.8% improvement on average (77.4 vs 73.6 PCK). In addition, our model applies well to small-scale datasets such as Sub-JHMDB, which contains only 316 videos. Such small datasets are challenging since they provide only limited training samples, while in our DKD model the one-shot pose kernel distillator is able to quickly adapt pose kernels, without requiring a large number of training samples for iteratively tuning classifiers as in existing methods.

Qualitative results

Fig. 4 shows qualitative results visualizing the efficacy of the DKD model for human pose estimation in videos on the Penn Action and Sub-JHMDB datasets. We observe that DKD accurately estimates human poses in various challenging scenarios, e.g., cluttered backgrounds (the 1st row of Fig. 4 (a)), scale variations (the 1st row of Fig. 4 (b)), and motion blur (the 2nd rows of Fig. 4 (a) and (b)). In addition, it can leverage temporal cues to handle the occasional disappearance of a body joint caused by occlusion, as shown in the 3rd row of Fig. 4 (a), and encourage pose consistency in the presence of fast and large pose variations, as shown in the 3rd and 4th rows of Fig. 4 (b). Moreover, it is robust to various viewpoint and lighting conditions, as shown in the 5th rows of Fig. 4 (a) and (b). These results further verify the effectiveness of DKD.

5 Conclusion

This paper presents a Dynamic Kernel Distillation (DKD) model for improving the efficiency of human pose estimation in videos. In particular, it adopts a pose kernel distillator to online distill pose kernels from the temporal cues of one frame in a one-shot feed-forward manner. The distilled pose kernels encode knowledge of body joint patterns and provide compact guidance for pose estimation in the posterior frame. With these pose kernels, DKD simplifies body joint localization into a matching procedure with simple convolution. In this way, DKD quickly transfers pose knowledge between neighboring frames and enables small networks to accurately estimate human poses in videos, thus significantly lifting the efficiency. DKD also introduces a temporally adversarial training strategy that constrains the changes of estimated confidence maps between neighboring frames. The whole framework can be trained and run end-to-end. Experiments on two benchmarks demonstrate that our model achieves state-of-the-art efficiency, with only 1/10 of the flops and 2x faster speed than the previous best model, together with superior accuracy for human pose estimation in videos.

Acknowledgement

Jiashi Feng was partially supported by NUS IDS R-263-000-C67-646, ECRA R-263-000-C87-133 and MOE Tier-II R-263-000-D17-112.

References

  • [1] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele (2014) 2D human pose estimation: new benchmark and state of the art analysis. In CVPR, Cited by: §4.1.
  • [2] D. Berthelot, T. Schumm, and L. Metz (2017) BEGAN: boundary equilibrium generative adversarial networks. arXiv:1703.10717. Cited by: §3.3.
  • [3] L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi (2016) Learning feed-forward one-shot learners. In NIPS, Cited by: §3.2.
  • [4] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr (2016) Fully-convolutional siamese networks for object tracking. In ECCV Workshop, Cited by: §3.1.
  • [5] Y. Chen, C. Shen, X. Wei, L. Liu, and J. Yang (2017) Adversarial posenet: a structure-aware convolutional network for human pose estimation. In ICCV, Cited by: §1, §3.1.
  • [6] C. Chou, J. Chien, and H. Chen (2017) Self adversarial training for human pose estimation. In CVPR Workshop, Cited by: §1, §3.1, §3.3, §4.2, §4.2.
  • [7] X. Chu, W. Ouyang, H. Li, and X. Wang (2016) Structured feature learning for pose estimation. In CVPR, Cited by: §4.1.
  • [8] M. Cristani, R. Raghavendra, A. Del Bue, and V. Murino (2013) Human behavior analysis in video surveillance: a social signal processing perspective. Neurocomputing 100, pp. 86–97. Cited by: §1.
  • [9] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox (2015) Flownet: learning optical flow with convolutional networks. In ICCV, Cited by: §1, §1.
  • [10] R. Girdhar, G. Gkioxari, L. Torresani, M. Paluri, and D. Tran (2018) Detect-and-track: efficient pose estimation in videos. In CVPR, Cited by: §1, §2.
  • [11] G. Gkioxari, A. Toshev, and N. Jaitly (2016) Chained predictions using convolutional neural networks. In ECCV, Cited by: §2, Table 3.
  • [12] A. Grinciunaite, A. Gudi, E. Tasli, and M. den Uyl (2016) Human pose estimation in space and time using 3d cnn. In ECCV Workshops, Cited by: §2.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §3.2.
  • [14] U. Iqbal, M. Garbade, and J. Gall (2017) Pose for action-action for pose. In FG, Cited by: §2, Table 3, Table 4.
  • [15] H. Jhuang, J. Gall, S. Zuffi, C. Schmid, and M. J. Black (2013) Towards understanding action recognition. In ICCV, Cited by: §1, §4.1.
  • [16] S. Ji, W. Xu, M. Yang, and K. Yu (2013) 3D convolutional neural networks for human action recognition. IEEE Trans. on Pattern Anal. Mach. Intell. 35 (1), pp. 221–231. Cited by: §1.
  • [17] S. Johnson and M. Everingham (2011) Learning effective human pose estimation from inaccurate annotation. In CVPR, Cited by: §4.1.
  • [18] J. Lee, J. Chai, P. S. Reitsma, J. K. Hodgins, and N. S. Pollard (2002) Interactive control of avatars animated with human motion data. In ACM Trans. on Graphics, Vol. 21, pp. 491–500. Cited by: §1.
  • [19] H. Lin and T. Chen (2010) Augmented reality with human body interaction based on monocular 3d pose estimation. In ACIVS, Cited by: §1.
  • [20] Y. Luo, J. Ren, Z. Wang, W. Sun, J. Pan, J. Liu, J. Pang, and L. Lin (2018) LSTM pose machines. In CVPR, Cited by: §1, §2, §3.1, §3.1, §4.1, §4.1, §4.1, §4.2, §4.3, §4.3, §4.3, §4.3, §4.3, §4.3, Table 3, Table 4, footnote 1.
  • [21] X. Nie, C. Xiong, and S. Zhu (2015) Joint action recognition and pose estimation from video. In CVPR, Cited by: §1, Table 3, Table 4.
  • [22] X. Nie, J. Feng, and S. Yan (2018) Mutual learning to adapt for joint human parsing and pose estimation. In ECCV, Cited by: §3.2.
  • [23] D. Park and D. Ramanan (2011) N-best maximal decoders for part models. In ICCV, Cited by: Table 3, Table 4.
  • [24] A. Paszke, S. Gross, and S. Chintala (2017) PyTorch. Cited by: §4.1.
  • [25] J. Song, L. Wang, L. Van Gool, and O. Hilliges (2017) Thin-slicing network: a deep structured model for pose estimation in videos. In CVPR, Cited by: §1, §2, §3.1, §3.1, §4.1, §4.1, §4.1, §4.3, Table 3, Table 4.
  • [26] T. Tieleman and G. Hinton (2012) Lecture 6.5-RMSProp: divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning. Cited by: §4.1.
  • [27] J. Valmadre, L. Bertinetto, J. Henriques, A. Vedaldi, and P. H. Torr (2017) End-to-end representation learning for correlation filter based tracking. In CVPR, Cited by: §3.1.
  • [28] S. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh (2016) Convolutional pose machines. In CVPR, Cited by: §4.1, §4.3.
  • [29] B. Xiao, H. Wu, and Y. Wei (2018) Simple baselines for human pose estimation and tracking. In ECCV, Cited by: §3.2.
  • [30] S. Xingjian, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo (2015) Convolutional lstm network: a machine learning approach for precipitation nowcasting. In NIPS, Cited by: §1, §1.
  • [31] W. Yang, W. Ouyang, H. Li, and X. Wang (2016) End-to-end learning of deformable mixture of parts and deep convolutional neural networks for human pose estimation. In CVPR, Cited by: §4.1.
  • [32] Y. Yang and D. Ramanan (2013) Articulated human detection with flexible mixtures of parts. IEEE Trans. on Pattern Anal. Mach. Intell. 35 (12), pp. 2878–2890. Cited by: §4.1.
  • [33] W. Zhang, M. Zhu, and K. G. Derpanis (2013) From actemes to action: a strongly-supervised representation for detailed action understanding. In ICCV, Cited by: §1, §4.1.