15 Keypoints Is All You Need

12/05/2019 ∙ by Michael Snower, et al. ∙ Brown University 28

Pose tracking is an important problem that requires identifying unique human pose-instances and matching them temporally across different frames of a video. However, existing pose tracking methods are unable to accurately model temporal relationships and require significant computation, often computing the tracks offline. We present an efficient Multi-person Pose Tracking method, KeyTrack, that only relies on keypoint information without using any RGB or optical flow information to track human keypoints in real-time. Keypoints are tracked using our Pose Entailment method, in which, first, a pair of pose estimates is sampled from different frames in a video and tokenized. Then, a Transformer-based network makes a binary classification as to whether one pose temporally follows another. Furthermore, we improve our top-down pose estimation method with a novel, parameter-free, keypoint refinement technique that improves the keypoint estimates used during the Pose Entailment step. We achieve state-of-the-art results on the PoseTrack'17 and the PoseTrack'18 benchmarks while using only a fraction of the computation required by most other methods for computing the tracking information.




1 Introduction

Multi-person Pose Tracking is an important problem for human action recognition and video understanding. It occurs in two steps: first, estimation, where keypoints of individual persons are localized; second, tracking, where each keypoint is assigned to a unique person. Pose tracking methods rely on deep convolutional neural networks for the first step [toshev2014deeppose, tompson2015efficient, yang2017learning, wei2016convolutional], but approaches to the second step vary. Tracking is challenging because tracks must be created for each unique person while overcoming occlusion and complex motion; moreover, individuals may appear visually similar, for instance because they wear the same uniform. It is also important for tracking to be performed online. Commonly used learned methods, such as optical flow and graph convolutional networks (GCNs), are effective at modeling spatio-temporal keypoint relationships [HRNet, ning2019lighttrack], but depend on high spatial resolution, making them computationally costly. Non-learning-based methods, such as spatial consistency, are faster than the convolution-based methods, but are not as accurate.

Figure 1: They look alike, how do we decide who's who? In the Pose Entailment framework, given a video frame, we track individuals by comparing pairs of poses, using temporal motion cues to determine who’s who. Using a novel tokenization scheme to create pose pair inputs interpretable by Transformers [vaswani2017attention], our network divides its attention equally between both poses in matching pairs, and focuses more on a single pose in non-matching pairs because motion cues between keypoints are not present. We visualize this above; bright red keypoints correspond to high attention.

To address the above limitations, we propose an efficient pose tracking method, KeyTrack, that leverages temporal relationships to improve multi-person pose estimation and tracking. KeyTrack follows the tracking-by-detection approach: it first localizes humans and estimates human pose keypoints, then encodes the keypoint information in a novel entailment setting built on transformer blocks [vaswani2017attention]. Similar to the textual entailment task, where one must predict whether one sentence follows another, we propose the Pose Entailment task, where the model learns to make a binary classification of whether two keypoint poses temporally follow, or entail, each other. Hence, rather than extracting information from a high-dimensional image representation using deep CNNs, we extract information from a sentence of 15 tokens, where each token corresponds to a keypoint on a pose. Similar to how BERT tokenizes words [devlin2018bert], we propose an embedding scheme for pose data that captures spatio-temporal relationships, and feed our transformer network these embeddings. Since these embeddings contain information beyond spatial location, our network outperforms convolution-based approaches in terms of accuracy and speed, particularly at very low resolutions.

Additionally, in order to improve the keypoint estimates used by the transformer network, we propose a Temporal Object Keypoint Similarity (TOKS) method. TOKS refines the pose estimation output by augmenting missed detections and thresholding low quality keypoint estimates using a keypoint similarity metric. TOKS adds no learned parameters to the estimation step, and is superior to existing bounding box propagation methods that often rely on NMS and optical flow. KeyTrack makes the following contributions:

1. KeyTrack introduces Pose Entailment, where a binary classification is made as to whether two poses from different timesteps are the same person. We model this task in a transformer-based network which learns temporal pose relationships even in datasets with complex motion. Furthermore, we present a tokenization scheme for pose information that allows transformers to outperform convolutions at low spatial resolutions when tracking keypoints.

2. KeyTrack introduces a temporal method for improving keypoint estimates. TOKS is more accurate than bounding box propagation, faster than a detector ensemble, and does not require learned parameters.

Using the above methods, we develop an efficient multi-person pose tracking pipeline that sets a new SOTA on the PoseTrack test set. We achieve 61.2% tracking accuracy on the PoseTrack'17 Test set and 66.6% on the PoseTrack'18 Val set using a model with just 0.43M parameters in the tracking step, making this portion of our pipeline 500X more efficient than the leading optical flow method [HRNet]. Our training is performed on a single NVIDIA 1080Ti GPU. Because it does not rely on RGB or optical flow information in the tracking step, our model is also suitable for pose tracking with non-visual pose estimation sensors that provide only 15 keypoints per person [alarifi2016ultra].

2 Related Work

We are inspired by related work on pose estimation and tracking methods, and recent work on applying the transformer network to video understanding.

Figure 2: a) Keypoints are estimated with HRNet. b) TOKS improves detection accuracy. c) Pose pairs are collected from multiple past timesteps. Poses of the same color have the same track id, the color black indicates the track id is unknown. d) Each pair is tokenized independently from the other pairs. e) Our Transformer Matching Network calculates match scores independently for each pair. f) The maximum match score is greedily chosen and the corresponding track id is assigned.

Pose estimation

Early work on pose estimation focused on using graphical models to learn spatial correlations and interactions between various joints [andriluka2009pictorial, felzenszwalb2005pictorial]. These models often perform poorly under occlusion and long-range temporal relationships, which need to be explicitly modeled in this framework [dantone2013human, sigal2006measure, wang2008multiple]. More recent work uses convolutional neural networks (CNNs) either to directly regress Cartesian coordinates of the joints [toshev2014deeppose] or to generate heatmaps of the probability of a joint being at a specific location [tompson2015efficient, yang2017learning, wei2016convolutional]. A majority of the convolutional approaches can be classified into top-down and bottom-up methods. Top-down methods use a separate detection step to identify person candidates [he2017mask, papandreou2017towards, chen2018cascaded, huang2017coarse], then perform single-person pose estimation on each candidate. Bottom-up methods calculate keypoints from all candidates and then group these keypoints into individual human poses [xia2017joint, hwang2019pose]. The latter approach is more efficient, since all keypoints are calculated in a single step, while the former is more accurate, since the object detection step limits the regression boundaries. Top-down methods, however, work poorly on small objects, and recent work (HRNet) [HRNet] uses parallel networks at different resolutions to maximize spatial information. PoseWarper [bertasius2019learning] uses a pair of labeled and unlabeled frames to predict human pose by learning the pose warping with deformable convolutions. Finally, since the earliest applications of deep learning to pose estimation [toshev2014deeppose], iterative predictions have improved accuracy: pose estimation benefits from cascaded predictions [chen2018cascaded], and pose-refinement methods [fieraru2018learning, moon2019posefix] refine the pose estimates of previous stages with a separate post-processing network. In that spirit, KeyTrack relies on HRNet to generate keypoints and refines keypoint estimates by temporally aggregating them and suppressing low-confidence keypoints with TOKS, instead of the commonly used bounding box propagation approaches.

| | Method | Estimation | Detection Improvement | Tracking |
|---|---|---|---|---|
| Top-Down | Ours | HRNet | Temporal OKS | Pose Entailment |
| | HRNet [HRNet] | HRNet | BBox Prop. | Optical Flow |
| | POINet [Ruan:2019:PPO:3343031.3350984] | VGG, T-VGG | - | Ovonic Insight Net |
| | MDPN [Guo_2019] | MDPN | Ensemble | Optical Flow |
| | LightTrack [ning2019lighttrack] | Simple Baselines | Ensemble/BBox Prop. | GCN |
| | ProTracker [girdhar2018detect] | 3D Mask RCNN | - | IoU |
| | Affinity Fields [raaj2019efficient] | VGG/STFields | - | STFields |
| Bottom-Up | STEmbeddings [jin2019multi] | STEmbeddings | - | STEmbeddings |
| | JointFlow | Siamese CNN | - | Flow Fields |

Table 1: How different approaches address each step of the Pose Tracking problem. Our contributions are in bold.

Pose tracking methods

Pose tracking methods assign unique IDs to individual keypoints, estimated with the techniques described in the previous subsection, in order to track them through time [PoseTrack, insafutdinov2017arttrack, iqbal2017posetrack, PoseTrack2017Leaderboard]. Some methods perform tracking by learning spatio-temporal pose relationships across video frames using convolutions [wang2019learning, Ruan:2019:PPO:3343031.3350984, ning2019lighttrack]. POINet [Ruan:2019:PPO:3343031.3350984] predicts track ids in an end-to-end fashion with embedded visual features from its estimation step, making predictions in multiple temporal directions, and LightTrack [ning2019lighttrack] uses a GCN to track poses based on spatio-temporal keypoint relationships. These networks require high spatial resolutions. In contrast, we create keypoint embeddings from the keypoint's spatial location and other information, which makes our network less reliant on spatial resolution, and thus more efficient, and gives it the ability to model more fine-grained spatio-temporal relationships.

Among non-learned tracking methods, optical flow is effective: poses are propagated from one frame to the next with optical flow to determine which pose in the next frame they are most similar to [HRNet, Guo_2019]. This improves over spatial consistency, which measures the IoU between bounding boxes of poses from temporally adjacent frames [girdhar2018detect]. Other methods use graph-partitioning approaches to group pose tracks [insafutdinov2017arttrack, iqbal2017posetrack, jin2017towards]. Another method, PoseFlow [xiu2018pose], uses inter/intra-frame pose distance and NMS to construct pose flows. These non-learned methods depend on hard-coded parameters, which limits their ability to model scenes with complex motion and requires time-intensive manual tuning; in contrast, our method requires no hard-coded parameters during inference. Table 1 shows top-down methods similar to our work as well as competitive bottom-up methods.

Transformer Models

Recently, there have been successful implementations of transformer-based models for image and video input modalities, often substituting for convolutions and recurrence mechanisms. These methods can efficiently model higher-order relationships between various scene elements, unlike pair-wise methods [dai2017detecting, hu2016modeling, santoro2017simple, xu2019spatial]. They have been applied to image classification [ramachandran2019stand], visual question answering [li2019visualbert, lu2019vilbert, tan2019lxmert, zhou2019unified], action recognition [huangdynamic, ma2018attend], video captioning [sun2019contrastive, zhou2018end], and other video problems. Video-Action Transformer [girdhar2019video] solves the action localization problem using transformers by learning the context and interactions for every person in the video. BERT [BERT] uses transformers by pretraining a transformer-based network in a multi-task transfer-learning scheme over the unsupervised tasks of predicting missing words and next sentences. Instead, in a supervised setting, KeyTrack uses transformers to learn spatio-temporal keypoint relationships for the visual problem of pose tracking.

3 Method

Figure 3: Orange box: Visualizations to intuitively explain our tokenization. In the Position column, the matching poses are spatially closer together than the non-matching ones, because their spatial locations in the image are similar. The axis limits reflect the downsampled image resolution (24x18 in our experiments). In the following column, the matching contours are similar, since the poses are in similar orientations. The Segment axis in the last column represents the temporal distance of the pair. Green box: A series of transformers (Tx) compute self-attention, extracting the temporal relationship between the pair. Binary classification follows.

3.1 Overview of Our Approach

We now describe the keypoint estimation and tracking approach used in KeyTrack, as shown in Figure 2. For frame F_t at timestep t, we wish to assign a track id to the j-th pose P_t^j. First, each of the pose's keypoints is detected: a bounding box is localized around each pose with an object detector, and keypoint locations are then estimated in the box. Keypoint predictions are improved with temporal OKS (TOKS); please see 3.3 for more details. From here, this pose with no track id, P_t^j, is assigned its appropriate one, based on its similarity to a pose in a previous timestep that already has an id, P_{t-δ}^k. Similarity is measured with a match score, m, using Pose Entailment (3.2).

False negatives are an inevitable problem in keypoint detection, and they hurt the downstream tracking step because poses with the correct track id may appear to no longer be in the video. We mitigate this by calculating match scores for poses in not just one previous frame, but multiple frames t-1, ..., t-δ_max. Thus, we compare P_t^j to each pose P_{t-δ}^k, where 1 ≤ δ ≤ δ_max. In practice, we limit the number of poses we compare to in a given frame to the n spatially nearest poses; this is just as accurate as comparing to everyone in the frame and bounds our runtime to O(n·δ_max). This gives us a set of match scores M, and we assign the track id corresponding to the maximum match score m* = max(M). Thus, the pose P_t^j receives the track id of its best match.
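To make the assignment concrete, here is a minimal Python sketch of the greedy track-id assignment described above. `match_score` is a hypothetical stand-in for the Pose Entailment network, poses are simplified to lists of (x, y) keypoints, and the centroid-distance nearest-pose filter is an illustrative choice:

```python
def centroid(pose):
    """Mean (x, y) of a pose given as a list of keypoints."""
    xs, ys = zip(*pose)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def spatial_dist(a, b):
    """Euclidean distance between two pose centroids."""
    (ax, ay), (bx, by) = centroid(a), centroid(b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def assign_track_id(pose, prev_frames, match_score, n_nearest=3):
    """Assign `pose` the track id of its best match among the poses in
    previous frames. `prev_frames` is a list (most recent first) of
    [(pose, track_id), ...]; `match_score(a, b)` returns the probability
    that poses a and b are the same person (the Pose Entailment score)."""
    candidates = []
    for frame in prev_frames:
        # Compare only to the n spatially nearest poses in this frame,
        # which bounds runtime without hurting accuracy.
        nearest = sorted(frame, key=lambda pt: spatial_dist(pose, pt[0]))[:n_nearest]
        for prev_pose, tid in nearest:
            candidates.append((match_score(pose, prev_pose), tid))
    # Greedily take the track id attached to the maximum match score.
    return max(candidates)[1]
```

A learned match score replaces the toy `match_score` callable in the real pipeline; the greedy maximum mirrors step f) of Figure 2.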

3.2 Pose Entailment

To effectively solve the multi-person pose tracking problem, we need to understand how human poses move through time, based on spatial joint configurations, in the presence of multiple persons and occluding objects. Hence, to correctly learn temporal transformations, we need to learn whether a pose in timestep t can be inferred from a pose in timestep t-δ. Textual entailment provides a similar framework in the NLP domain, where one must understand whether one sentence can be implied from another. More specifically, a textual entailment model classifies whether a premise sentence implies a hypothesis sentence in a sentence pair [bowman2015large]. The typical approach first projects the pair of sentences to an embedding space, then feeds them through a neural network that outputs a binary classification for the sentence pair.

Hence, we propose the Pose Entailment problem. More formally, we seek to classify whether a pose in timestep t-δ, i.e. the premise, and a pose in timestep t, i.e. the hypothesis, are the same person. To solve this problem, instead of using visual-feature-based similarity that incurs large computational cost, we use the set of human keypoints detected by our pose estimator. These are computationally efficient to use because there are a limited number of them (in our case, 15), and they are not affected by unexpected visual variations, such as lighting changes, in the tracking step. In addition, as we show in the next section, keypoints are amenable to tokenization. Thus, during the tracking stage, we use only the keypoints estimated by the detector as our pose representation.

Tokenizing Pose Pairs

The goal of tokenization is to transform pose information into a representation that facilitates learning spatio-temporal human pose relationships. To achieve this goal, each pose token needs to provide (i) the spatial location of each keypoint in the scene, to allow the network to spatially correlate keypoints across frames; (ii) the type of each keypoint (i.e. head, shoulder, etc.), to learn spatial joint relationships in each human pose; and (iii) the temporal location index of each keypoint within a temporal window of δ_max frames, to learn temporal keypoint transitions. Hence, we use three different types of tokens for each keypoint, as shown in Figure 3. There are 2 poses per pair, and thus 30 tokens of each type. Each token is linearly projected to an embedding of size H, where H is the transformer hidden size. Embeddings are a learned lookup table. We now describe the individual tokens in detail:

Position Token: The absolute spatial location of each keypoint is the Position token, and its values fall in the range [0, w·h), where w×h is the frame resolution. In practice, the absolute spatial location in a downsampled version of the original frame is used. This not only improves the efficiency of our method, but also makes it more accurate, as discussed in 5.2. A general expression for the Position tokens of a pose pair, where p_i corresponds to the Position token of the i-th keypoint, is:

(p_1^t, ..., p_15^t, p_1^{t-δ}, ..., p_15^{t-δ})
Type Token: The Type token corresponds to the unique type of the keypoint: e.g. the head, left shoulder, right ankle, etc. The Type tokens fall in the range [1, 15]. They add information about the orientation of the pose and are crucial for achieving high accuracy at low resolution, when keypoints have similar spatial locations. A general expression for the Type tokens of a pose pair, where c_i corresponds to the Type token of the i-th keypoint, is:

(c_1, ..., c_15, c_1, ..., c_15)
Segment Token: The Segment token indicates the number of timesteps the pose is from the current one. It lies in the range [0, δ_max], where δ_max is a chosen constant (for our purposes, we set δ_max to 4). This also allows our method to adapt to irregular frame rates: if a person is not detected in a frame, we can look back two timesteps, conditioning our model on a temporal token value of 2 instead of 1.


After each token is embedded, we sum the three embeddings, E = E_position + E_type + E_segment, to combine the information from each class of token. This is fed to our Transformer Matching Network.
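The tokenization above can be sketched as follows. The exact index conventions (position token = y·w + x, segment value 0 for the current frame and δ for the past frame) are assumptions for illustration, and the embedding tables are random stand-ins for the learned lookup tables:

```python
import numpy as np

W, H, K, HIDDEN = 24, 18, 15, 128  # downsampled frame size, keypoints, hidden size

def tokenize_pair(pose_a, pose_b, delta):
    """Build the three token sequences for a pose pair.
    pose_a is from the current frame, pose_b from `delta` timesteps back;
    each pose is a list of K (x, y) keypoints in downsampled coordinates."""
    position = [y * W + x for x, y in pose_a] + [y * W + x for x, y in pose_b]
    ktype    = list(range(K)) * 2        # head, shoulder, ... for each pose
    segment  = [0] * K + [delta] * K     # temporal distance of each token
    return position, ktype, segment

# Learned lookup tables in the real model; random stand-ins here.
rng = np.random.default_rng(0)
E_pos  = rng.standard_normal((W * H, HIDDEN))
E_type = rng.standard_normal((K, HIDDEN))
E_seg  = rng.standard_normal((5, HIDDEN))   # delta_max = 4

def embed(position, ktype, segment):
    # Sum the three embeddings per token -> (30, HIDDEN) transformer input.
    return E_pos[position] + E_type[ktype] + E_seg[segment]
```

The summed (30, HIDDEN) array is what the Transformer Matching Network consumes, one row per keypoint token of the pair.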

| Tracking Method | Detection Method | AP Total | % IDSW Head | Shou | Elb | Wri | Hip | Knee | Ankl | Total | MOTA Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Pose Entailment | GT Boxes, GT Keypoints | 100 | 0.7 | 0.7 | 0.6 | 0.6 | 0.6 | 0.7 | 0.7 | 0.7 | 99.3 |
| GCN | | | 1.4 | 1.4 | 1.4 | 1.5 | 1.4 | 1.6 | 1.6 | 1.5 | 98.5 |
| Optical Flow | | | 1.1 | 1.2 | 1.2 | 1.2 | 1.2 | 1.3 | 1.4 | 1.2 | 98.7 |
| Pose Entailment | GT Boxes, Predicted Keypoints | 86.7 | 0.9 | 0.9 | 0.8 | 0.8 | 0.7 | 0.8 | 0.8 | 0.8 | 72.2 |
| GCN | | | 1.6 | 1.6 | 1.6 | 1.6 | 1.3 | 1.5 | 1.4 | 1.5 | 71.6 |
| Optical Flow | | | 1.2 | 1.2 | 1.2 | 1.1 | 1.0 | 1.1 | 1.1 | 1.1 | 71.8 |
| Pose Entailment | Predicted Boxes, Predicted Keypoints | 81.6 | 0.9 | 1.0 | 0.9 | 0.8 | 0.7 | 0.8 | 0.8 | 0.8 | 66.6 |
| GCN | | | 1.7 | 1.7 | 1.7 | 1.7 | 1.4 | 1.5 | 1.4 | 1.6 | 65.9 |
| Optical Flow | | | 1.3 | 1.2 | 1.2 | 1.2 | 1.1 | 1.1 | 1.1 | 1.1 | 66.3 |

Figure 4: Comparison of tracking methods on the PoseTrack 18 Val set, given the same keypoints. GT stands for ground truth; "predicted" means a neural net is used. Lower % IDSW is better; higher MOTA is better. "Total" averages all joint scores.

Transformer Matching Network:

The goal of our network is to learn motion cues indicative of whether a pose pair matches. The self-attention mechanism of transformers allows us to accomplish this by learning which temporal relationships between the keypoints are representative of a match. Transformers compute scaled dot-product attention over a set of Queries (Q), Keys (K), and Values (V), each of which is a linear projection of the input embeddings. We compute the softmax attention with respect to every keypoint embedding in the pair, so the input to the softmax operation has dimensions 30 × 30. In fact, we can generate heatmaps from the attention distribution over the pair's keypoints, as displayed in 5.3. In practice, we use multi-headed attention, which leads to the heads specializing, also visualized there.

Additionally, we use an attention mask to account for keypoints that are not visible due to occlusion. It is implemented exactly as the attention mask in [vaswani2017attention], so that no attention is paid to occluded keypoints. The attention equation is as follows, where M is the additive mask and d_k the key dimension (we detail each operation in a single transformer in Table 5):

Attention(Q, K, V) = softmax(QK^T / sqrt(d_k) + M) V
After computing self-attention through a series of stacked transformers, similar to BERT, we feed the representation to a Pooler, which "pools" the input by selecting the first token in the sequence and passing it through a learned linear projection. This is fed to another linear layer, functioning as a binary classifier, which outputs the likelihood that two given poses match. We govern training with a binary cross-entropy loss, providing our network only with the supervision of whether the pose pair is a match. See Figure 3 for more details.
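The masked scaled dot-product attention at the core of each transformer block can be sketched in NumPy. This follows the standard formulation of [vaswani2017attention]: the 30 rows and columns correspond to the pair's keypoint tokens, and occluded keypoints receive a large negative additive mask so they attract no attention (the shapes and the single-head setup are illustrative simplifications):

```python
import numpy as np

def masked_attention(Q, K, V, visible):
    """Q, K, V: (n, d) linear projections of the n keypoint embeddings.
    visible: length-n boolean array; occluded keypoints get no attention."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (n, n) attention logits
    scores = np.where(visible[None, :], scores, -1e9)  # mask occluded keypoints
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)              # softmax over keys
    return w @ V, w

rng = np.random.default_rng(0)
n, d = 30, 32                       # 30 tokens for the pose pair
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
visible = np.ones(n, dtype=bool)
visible[5] = False                  # e.g. one occluded keypoint
out, w = masked_attention(Q, K, V, visible)
```

The returned weight matrix `w` is exactly the 30 × 30 distribution that the attention heatmaps in 5.3 visualize.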

3.3 Improved Multi-Frame Pose Estimation

We now describe how we improve keypoint estimation. Top-down methods suffer from two primary classes of errors from the object detector: (1) missed bounding boxes and (2) imperfect bounding boxes. We combat these issues by using box detections from adjacent timesteps, in addition to the one from the current timestep, to make pose predictions. This is based on the intuition that the spatial location of each person does not change dramatically from frame to frame when the frame rate is relatively high, as is typical in most modern datasets and cameras. Thus, pasting the bounding box of a person in frame t-1, B_{t-1}, at the same spatial location in frame t is a good approximation of that person's true bounding box in frame t. Bounding boxes are enlarged by a small factor to account for changes in spatial location from frame to frame. Previous approaches, such as [xiao2018simple], use standard non-maximal suppression (NMS) to choose which of these boxes to input into the estimator. Though this addresses the first issue of missed boxes, it does not fully address the second, because NMS relies on the confidence scores of the boxes. Instead, we make pose predictions for both the box in the current frame and the temporally adjacent boxes, then use object-keypoint similarity (OKS) to determine which poses should be kept. This is more accurate than NMS because we use the confidence scores of the keypoints, not the bounding boxes. The steps of TOKS are enumerated below:

  1. Retrieve the bounding box B_{t-1} enclosing pose P_{t-1}, and dilate it by a small factor λ
  2. Estimate a new pose P'_t in frame t from the dilated box B_{t-1}
  3. Use OKS between P'_t and the poses detected in frame t to determine which pose to keep
Algorithm 1 Temporal OKS
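The steps above can be sketched as follows. The dilation factor, the single OKS falloff constant `kappa` (in place of COCO's per-keypoint constants), and the total-confidence tie-break are illustrative assumptions; the 0.35 default threshold is our best-performing value from Table 2:

```python
import math

def dilate(box, factor=1.25):
    """Enlarge a bounding box (x0, y0, x1, y1) about its center;
    `factor` stands in for the small dilation constant lambda."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * factor, (y1 - y0) * factor
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def oks(pose_a, pose_b, scale, kappa=0.1):
    """Object-keypoint similarity between two poses (lists of (x, y));
    `scale` is the object scale. A single falloff constant is used here
    instead of COCO's per-keypoint constants."""
    sims = [math.exp(-((ax - bx) ** 2 + (ay - by) ** 2)
                     / (2 * scale ** 2 * kappa ** 2))
            for (ax, ay), (bx, by) in zip(pose_a, pose_b)]
    return sum(sims) / len(sims)

def toks_select(pose_cur, pose_prop, conf_cur, conf_prop, scale, thresh=0.35):
    """If the current and box-propagated poses overlap (OKS above the
    threshold), keep the one with higher total keypoint confidence;
    otherwise they are distinct people and both are kept."""
    if oks(pose_cur, pose_prop, scale) > thresh:
        return [pose_cur if sum(conf_cur) >= sum(conf_prop) else pose_prop]
    return [pose_cur, pose_prop]
```

Because the selection compares keypoint confidences rather than box confidences, a propagated box whose pose is estimated well can override a poorly detected box in the current frame.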

4 Experiments

4.1 The PoseTrack Dataset

The PoseTrack 2017 training, validation, and test sets consist of 250, 50, and 208 videos, respectively. Annotations for the test set are held out. We evaluate on the PoseTrack 17 Test set because the PoseTrack 18 Test set has yet to be released; we use the official evaluation server, which accepts up to 4 submissions [PoseTrack, PoseTrack2017Leaderboard]. We conduct the rest of our comparisons on the PoseTrack ECCV 2018 Challenge Validation Set, a superset of PoseTrack 17 with 550 training, 74 validation, and 375 test videos [PoseTrackECCVChallenge].

Metrics: Per-joint Average Precision (AP) is used to evaluate keypoint estimation, based on the formulation in [MPII]. Multi-Object Tracking Accuracy (MOTA) [bernardin2008evaluating, MOTA] scores tracking, penalizing False Negatives (FN), False Positives (FP), and ID Switches (IDSW); its formulation for keypoint j is given below, where t is the current timestep in the video and GT is the number of ground-truth keypoints. Our final MOTA is the average over all keypoints j:

MOTA^j = 1 - Σ_t (FN_t^j + FP_t^j + IDSW_t^j) / Σ_t GT_t^j
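As a concrete check of the metric, here is a direct Python translation of the standard per-keypoint MOTA formula; the counts are purely illustrative:

```python
def mota(fn, fp, idsw, gt):
    """MOTA for one keypoint type from per-timestep counts:
    MOTA_j = 1 - sum_t(FN_t + FP_t + IDSW_t) / sum_t(GT_t)."""
    return 1.0 - (sum(fn) + sum(fp) + sum(idsw)) / sum(gt)

# Illustrative counts over 3 timesteps for one keypoint type:
score = mota(fn=[1, 0, 2], fp=[0, 1, 0], idsw=[0, 0, 1], gt=[10, 10, 10])
```

Note that MOTA can go negative when errors outnumber ground-truth annotations, which is why near-perfect inputs (as in the first block of Figure 4) yield scores close to 100.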

Our approach assigns track ids and estimates keypoints independently of one another; the same is true of competing methods with MOTA scores closest to ours. In light of this, we use the same keypoint estimates to compare our Pose Entailment based tracking assignment to competing methods in 4.2. This makes IDSW the only component of the MOTA metric that changes, and we report % IDSW, the number of ID switches as a percentage of the total number of ground-truth keypoints. In 4.3, we compare our estimation method to others without evaluating tracking. Finally, in 4.4, we compare our entire tracking pipeline to other pipelines.

4.2 Improving Tracking with Pose Entailment

PoseTrack 2018 ECCV Challenge Val Set

| No. | Method | Extra Data | AP_T | AP | FPS | MOTA |
|---|---|---|---|---|---|---|
| 1. | KeyTrack (ours) | | 74.3 | 81.6 | 1.0 | 66.6 |
| 2. | MIPAL [hwang2019pose] | | 74.6 | - | - | 65.7 |
| 3. | LightTrack (offline) [ning2019lighttrack] | | 71.2 | 77.3 | E | 64.9 |
| 4. | LightTrack (online) [ning2019lighttrack] | | 72.4 | 77.2 | 0.7 | 64.6 |
| 5. | Miracle [yu2018multi] | | - | 80.9 | E | 64.0 |
| 6. | OpenSVAI [Ning_2019] | | 69.7 | 76.3 | - | 62.4 |
| 7. | STAF [raaj2019efficient] | | 70.4 | - | 3 | 60.9 |
| 8. | MDPN [Guo_2019] | | 71.7 | 75.0 | E | 50.6 |

PoseTrack 2017 Test Set Leaderboard

| No. | Method | Extra Data | AP | FPS | MOTA |
|---|---|---|---|---|---|
| 1. | KeyTrack (ours) | | 74.0 | 1.0 | 61.2 |
| 2. | POINet [Ruan:2019:PPO:3343031.3350984] | | 72.5 | - | 58.4 |
| 3. | LightTrack [ning2019lighttrack] | | 66.7 | E | 58.0 |
| 4. | HRNet [HRNet] | | 75.0 | 0.2 | 57.9 |
| 5. | FlowTrack [xiao2018simple] | | 74.6 | 0.2 | 57.8 |
| 6. | MIPAL [hwang2019pose] | | 68.8 | - | 54.5 |
| 7. | STAF [raaj2019efficient] | | 70.3 | 2 | 53.8 |
| 8. | JointFlow [doering2018joint] | | 63.6 | 0.2 | 53.1 |

Figure 5: Top scores on the PoseTrack leaderboards. E indicates that an ensemble of detectors is used, which makes the method offline. A check indicates external training data is used beyond COCO and PoseTrack. A "-" indicates the information has not been made publicly available. FPS calculations for JointFlow and FlowTrack are taken from [zhang2019fastpose]. HRNet FPS is approximated from FlowTrack since the methods are very similar. The AP column reports the best AP score; AP_T is the AP score after tracking post-processing.
Figure 6: Qualitative results of KeyTrack, on the PoseTrack 18 Validation Set (top row) and PoseTrack 17 Test Set (bottom row).

We compare with the optical flow tracking method [xiao2018simple] and the Graph Convolutional Network (GCN) [ning2019lighttrack], as shown in Figure 4. We do not compare with IoU because our other baselines, the GCN and optical flow [ning2019lighttrack, xiao2018simple], have been shown to outperform it; nor do we compare to the network from [Ruan:2019:PPO:3343031.3350984], because it is trained in an end-to-end fashion. We follow the method in [xiao2018simple] for optical flow and use the pre-trained GCN provided by [ning2019lighttrack]. IDSW is calculated with three sets of keypoints. Regardless of the keypoint AP, we find that KeyTrack's Pose Entailment maintains a consistent improvement over the other methods: we incur approximately half as many ID switches as the GCN and 30% fewer than optical flow.

Our improvement over GCN stems from the fact that it relies only on keypoint spatial locations. By using additional information beyond the spatial location of each keypoint, our model can make better inferences about the temporal relationship of poses. The optical flow CNNs are not specific to pose tracking and require manual tuning. For example, to scale the CNN’s raw output, which is normalized from -1 to 1, to pixel flow offsets, a universal constant, given by the author of the original optical flow network (not [xiao2018simple]), must be applied. However, we found that this constant did not produce good results and required manual adjustment. In contrast, our learned method requires no tuning during inference.

4.3 Improving Detection with TOKS

| Detection Method | AP Head | Shou | Elb | Wri | Hip | Knee | Ankl | Total |
|---|---|---|---|---|---|---|---|---|
| GT | 90.2 | 91.4 | 88.7 | 83.6 | 81.4 | 86.1 | 83.7 | 86.7 |
| Det. | 68.8 | 72.8 | 73.1 | 68.4 | 68.0 | 72.4 | 69.8 | 70.4 |
| Det. + Box Prop. | 79.3 | 82.0 | 80.8 | 75.6 | 72.4 | 76.5 | 72.4 | 77.1 |
| Det. + TOKS@0.3 | 83.6 | 86.6 | 84.9 | 78.9 | 76.4 | 80.2 | 76.2 | 81.1 |
| Det. + TOKS@0.35 (ours) | 84.1 | 87.2 | 85.3 | 79.2 | 77.1 | 80.6 | 76.5 | 81.6 |
| Det. + TOKS@0.5 | 83.9 | 87.2 | 85.2 | 79.1 | 77.1 | 80.7 | 76.4 | 81.5 |

Table 2: Per-joint AP when the pose estimator is conditioned on different boxes. GT indicates ground-truth boxes are used and serves as an upper bound on accuracy. Det. indicates a detector was used to estimate boxes. @x is the OKS threshold used.

Table 2 shows that TOKS offers a greater improvement in keypoint detection quality than other methods. In the absence of any bounding box improvement, AP is 6.6% lower, highlighting the issue of false negatives. The further improvement from TOKS emphasizes the usefulness of estimating every pose: by using NMS, bounding box propagation methods miss the opportunity to use the confidence scores of the keypoints, which lead to better pose selection.

4.4 Tracking Pipeline Comparison to the SOTA

Now that we have analyzed the benefits of Pose Entailment and TOKS, we put them together and compare to other approaches. Figure 5 shows that we achieve the highest MOTA score. We improve over the original HRNet paper by 3.3 MOTA points on the Test set. [hwang2019pose], nearest our score on the 2018 Validation set, is much further away on the 2017 Test set. Additionally, our FPS is higher than that of all methods with similar MOTA scores, many of which are offline due to their use of detector ensembles. (Frames per second (FPS) is calculated by dividing the number of frames in the dataset by the runtime of the approach.) Moreover, our method outperforms all others in terms of AP, showing the benefits of TOKS. We also report AP_T, the AP score after tracking post-processing has been applied; this post-processing benefits the MOTA score but lowers AP. See A.3 for more details. As we have the highest AP but not the highest AP_T, the effect of tracking post-processing appears to vary from paper to paper. Only AP_T is given on the test set because each paper is allowed 4 submissions, and these are used to optimize MOTA rather than AP.


Our tracking approach is efficient, relying on neither optical flow nor RGB data. When processing an image at our optimal resolution, 24x18, we reduce the GFLOPS required from the 52.7 of optical flow, which processes images at full size, to 0.1. [ning2019lighttrack]'s GCN does not capture higher-order interactions over keypoints and can be more efficient than our network due to its local convolutions; however, this translates to only a 1ms difference in GPU runtime. In fact, with other optimizations, our tracking pipeline demonstrates a 30% improvement in end-to-end runtime over [ning2019lighttrack]. We have the fastest FPS of the top-down approaches. Bottom-up models, such as STAF, are more efficient but have poor accuracy. Also, we do not rely on optical flow to improve bounding box propagation as [xiao2018simple, HRNet] do; instead we use TOKS, which contributes to our 5x FPS improvement over [xiao2018simple, HRNet]. Further details on the parameters and FLOPS of the GCN, the optical flow network, and our Transformer Matching Network are in Table 6.

5 Analysis

5.1 Tracking Pipeline

| Num Tx | Hidden Size | Int. Size | Num Heads | Parameters (M) | % IDSW |
|---|---|---|---|---|---|
| 2 | 128 | 512 | 4 | 0.40 | 1.0 |
| 4 | 128 | 512 | 4 | 0.43 | 0.8 |
| 6 | 128 | 512 | 4 | 1.26 | 1.1 |
| 4 | 64 | 256 | 4 | 0.23 | 0.9 |
| 4 | 128 | 512 | 4 | 0.43 | 0.8 |
| 4 | 256 | 1024 | 4 | 3.31 | 1.1 |
| 4 | 128 | 128 | 4 | 0.43 | 0.8 |
| 4 | 128 | 512 | 4 | 0.86 | 0.8 |
| 4 | 128 | 128 | 2 | 0.43 | 0.9 |
| 4 | 128 | 128 | 4 | 0.43 | 0.8 |
| 4 | 128 | 128 | 6 | 0.43 | 0.8 |

Figure 7: Left: Transformer network hyper-parameters are varied. Right: A plot of IDSW rate vs. image resolution; the input to each method is shown, and the conv+visual input is blurry because the images are downsampled.

Varying Tokenization Schemes and Transformer Hyper-parameters We examine the benefits of each embedding. As evident in Table 3, Segment embeddings are crucial because they enable the network to distinguish between the poses being matched. Token embeddings give the network information about the orientation of a pose and help it interpret keypoints in close spatial proximity, i.e., keypoints that have the same or similar position embedding. We also train a model that uses the relative keypoint distance from the pose center rather than the absolute position of the keypoint in the entire image, and find that match accuracy deteriorates with this embedding. This is likely because many people in the PoseTrack dataset perform the same activity, such as running, giving them nearly identical relative pose positions. We vary the number of transformer blocks, the hidden size in the transformer block, and the number of heads in the table in Figure 7. Decreasing the number of transformer blocks, the hidden size, or the number of attention heads hurts performance.
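As a concrete illustration, the summed-embedding input described above can be sketched as follows. This is a minimal sketch: the table sizes follow the 24x18 resolution and 15-keypoint setup described in the paper, while the random values and variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: a 24x18 position grid, 15 keypoint types, and
# 2 segments (one per pose in the pair), with hidden size 128.
NUM_POS, NUM_TYPE, NUM_SEG, HIDDEN = 24 * 18, 15, 2, 128

pos_table = rng.normal(size=(NUM_POS, HIDDEN))
type_table = rng.normal(size=(NUM_TYPE, HIDDEN))
seg_table = rng.normal(size=(NUM_SEG, HIDDEN))

def embed(pos_ids, type_ids, seg_ids):
    # Each token's input embedding is the sum of its position, type
    # (keypoint identity), and segment (which pose) embeddings.
    return pos_table[pos_ids] + type_table[type_ids] + seg_table[seg_ids]

# A pair of poses contributes 15 + 15 = 30 tokens.
pos_ids = rng.integers(0, NUM_POS, size=30)
type_ids = np.tile(np.arange(NUM_TYPE), 2)
seg_ids = np.repeat(np.arange(NUM_SEG), NUM_TYPE)
tokens = embed(pos_ids, type_ids, seg_ids)  # shape (30, 128)
```

Replacing `pos_table` with a table indexed by quantized offsets from the pose center would give the relative-position variant discussed above.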

Abs. Position Type Segment Rel. Position Match % Accuracy
93.2 (ours)
Table 3: Match accuracies for various embedding schemes.
Figure 8: Attention heatmaps from two of our network's attention heads are shown. These are the 0th and 3rd heads from our final transformer. The two pairs above the dotted line are a matching pair, while the pair below the dotted line is not a match (its poses are also drawn from separate videos). The frame timestep is indicated for each pose.

Number of Timesteps and Other Factors We find that reducing the number of timesteps adversely affects the MOTA score; it drops by up to 0.3 points when using only a single timestep because we are less robust to detection errors. Also, in place of our greedy algorithm, we experimented with the Hungarian algorithm used in [girdhar2018detect]. This algorithm is effective with ground-truth information, but is not accurate when using detected poses.
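The greedy alternative can be sketched as follows. This is a minimal sketch with hypothetical names: `match_scores[i][j]` stands for the Transformer's match score between current pose i and previous pose j.

```python
import numpy as np

def greedy_assign(match_scores, prev_track_ids):
    # Repeatedly take the highest remaining match score, assign that
    # previous pose's track ID to the current pose, then remove both
    # poses from consideration.
    scores = np.asarray(match_scores, dtype=float).copy()
    assigned = [-1] * scores.shape[0]
    for _ in range(min(scores.shape)):
        i, j = np.unravel_index(np.argmax(scores), scores.shape)
        if not np.isfinite(scores[i, j]):
            break
        assigned[i] = prev_track_ids[j]
        scores[i, :] = -np.inf  # current pose i is now taken
        scores[:, j] = -np.inf  # previous pose j is now taken
    return assigned
```

In the full pipeline, a current pose left unassigned would receive a fresh track ID.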

5.2 Comparing Self-Attention to Convolutions

We compare transformers and CNNs by replacing our Transformer Matching Network with two separate convolution-based networks. One takes visual features from bounding-box pose pairs as input, while the other takes only keypoints as input, where each unique keypoint is colored via a linear interpolation, a visual version of our Type tokens. Both approaches use identical CNNs, sharing an architecture inspired by VGG [simonyan2014deep], and have approximately 4x more parameters than our transformer-based model because this was required for stable training. See A.5 for details.

Transformers outperform CNNs for the tracking task, as shown in Figure 7. However, we find two areas where CNNs can be competitive. First, at higher resolutions, transformers often need a large number of parameters to match CNN performance; a similar behavior is observed in NLP, where transformers need multiple layers to perform well with large vocabularies. Second, convolutions optimize more quickly than transformers, reaching their lowest number of ID switches within the first 2 epochs of training. Intuitively, CNNs more easily take advantage of spatial proximity. The transformers receive spatial information via the position embeddings, which are 1D linear projections of 2D locations; this could be improved with positional embedding schemes that better preserve spatial information.


In summary, CNNs are accurate at high resolutions, given useful properties such as translation invariance, but they carry an extra computational cost. The extra information in our keypoint embeddings beyond spatial location, coupled with the transformer's ability to model higher-order interactions, allows our network to function surprisingly well at very low resolutions. Thus, the advantage of CNNs is diminished, and our transformer-based network outperforms them in the low-resolution case.

5.3 Visualizing Attention Heatmaps

We visualize our network’s attention heatmaps in Fig. 8. When our network classifies a pair as non-matching, its attention is heavily placed on one of the poses over the other. Also, we find it interesting that one of our attention heads primarily places its attention on keypoints near the person’s head. This specialization suggests different attention heads are attuned to specific keypoint motion cues.

6 Conclusion

In this paper, we present an efficient Multi-person Pose Tracking method. Our proposed Pose Entailment method achieves SOTA performance on the PoseTrack datasets by using only keypoint information in the tracking step, without the need for optical flow or CNNs. KeyTrack also benefits from improved keypoint estimates produced by our temporal refinement method, which outperforms commonly used bounding box propagation methods. Finally, we demonstrate how to tokenize and embed human pose information for the transformer architecture in a way that can be re-used for other tasks, such as pose-based action recognition.


Appendix A Supplementary Material for KeyTrack

a.1 Test Set Scores

We submitted to the PoseTrack 2017 test set twice. We first achieved a 60.1 MOTA score, then decreased the TOKS box expansion value, which increased our score to 61.2. The same expansion value was used on the 2018 Validation Set.

a.2 Additional Qualitative Results

We provide additional qualitative results of our model on the PoseTrack 18 Validation Set in Figure 10.

a.3 Keypoint Postprocessing

The post-processing performed when evaluating AP and MOTA differs. Specifically, we use a different keypoint confidence threshold, keeping keypoints above the threshold and ignoring those below it. The confidence metric is the per-keypoint confidence score from the pose estimator. The threshold optimal for MOTA is much higher than the one optimal for AP. Interestingly, ID switches are not much worse at the lower thresholds, indicating that the majority of the error stems from the estimation step. Results are in Table 4.

Confidence Threshold AP % IDSW MOTA
0.05 81.6 1.0 42.0
0.35 79.6 0.9 63.3
0.5 76.7 0.9 66.5
0.57 74.3 0.8 66.6
0.6 72.8 0.9 66.0
Table 4: Effect of postprocessing on the 2018 Validation Set.
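The thresholding itself can be sketched as follows (a minimal sketch with hypothetical names; the confidences are the per-keypoint scores from the pose estimator):

```python
def filter_keypoints(keypoints, confidences, threshold):
    # Keep a keypoint only when its estimator confidence clears the
    # threshold; per Table 4, a higher threshold favors MOTA while a
    # lower one favors AP.
    return [kp for kp, conf in zip(keypoints, confidences)
            if conf >= threshold]
```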

a.4 Implementation Details


To fine-tune the detector, separate models are fine-tuned on the PoseTrack 17 and 18 datasets for 1 epoch with a batch size of 4. Training was conducted on 4 NVIDIA GTX Titans. To fine-tune the pose estimator, originally trained on COCO, we follow [HRNet].

During tracking training, we use a linear warm-up schedule for the learning rate, warming up over the first 0.01 fraction of total training steps, then linearly decaying to 0 over 25 epochs. Batch size is 32. Cross-entropy loss is used to train the model. Since there are more non-matching poses than matching poses in a given pair of frames, we use PyTorch's WeightedRandomSampler to sample from the matching and non-matching classes equally, accounting for the class imbalance. When assigning a track ID to a pose, we choose the maximum match score from the previous 4 timesteps. All models are trained on 1 NVIDIA GTX 1080Ti GPU.
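The warm-up/decay schedule can be sketched as follows. This is a sketch under assumptions: `peak_lr` stands in for the peak learning rate, whose value is not preserved in this copy.

```python
def lr_at_step(step, total_steps, peak_lr, warmup_frac=0.01):
    # Linear warm-up over the first warmup_frac of steps, then linear
    # decay to 0 over the remaining steps.
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    remaining = total_steps - warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / remaining)
```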


The detector processes images with a batch size of 1. The detections are fed to the pose estimator, which processes all of the bounding box detections for a frame in a single batch. Flip testing is used. Temporal OKS is computed for every frame with an OKS threshold of 0.35. The bounding box scores are ignored when computing OKS. Bounding boxes are thresholded at a minimum confidence score of 0.2, and keypoints at a minimum confidence score of 0.1. We found that decreasing the bounding box and keypoint confidence thresholds to 0 did not improve AP, but hurt runtime. Boxes are enlarged by a fixed expansion factor. All code is written in Python, and we use 1 NVIDIA GTX 1080Ti. As done by [HRNet, girdhar2018detect], we train on the PoseTrack 2017 Train and Validation sets before evaluating on the held-out Test Set.
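The similarity score underlying Temporal OKS can be sketched with the standard COCO-style OKS formula. This is a sketch: the per-keypoint fall-off constants `kappas` and the exact normalization used in KeyTrack's implementation are assumptions on our part.

```python
import math

def oks(pose_a, pose_b, area, kappas):
    # COCO-style Object Keypoint Similarity: a Gaussian fall-off on
    # per-keypoint squared distances, normalized by object scale.
    total = 0.0
    for (xa, ya), (xb, yb), k in zip(pose_a, pose_b, kappas):
        d2 = (xa - xb) ** 2 + (ya - yb) ** 2
        total += math.exp(-d2 / (2.0 * area * k ** 2))
    return total / len(pose_a)
```

In the pipeline described above, a propagated pose and a detected pose would be considered the same person when this score exceeds the 0.35 threshold.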

Details of the Tracking Pipeline Analysis

The ablation studies from 5.1 were conducted using the predicted keypoints and predicted boxes with our best model on the PoseTrack 2018 Validation Set. Match accuracy, the metric we use in Table 3, is effectively the complement of the ID switch rate, i.e., the fraction of IDs which are not switched. The methods would rank in the same order if measured with IDSW.

a.5 Architecture Details

Detector and Estimator

We use the implementation of the COCO-pretrained Hybrid Task Cascade Network [chen2019hybrid] with Deformable Convolutions and Multi Scale predictions from [mmdetection]. For our pose estimator, we use the most accurate model from [HRNet], HRNetW48-384x288.

Transformer Matching Network

We use an effective image resolution of 24x18, for a total of 432 unique Position tokens. There are 15 Type tokens (one per keypoint) and 2 Segment tokens (one per pose in the pair).

Each pair of poses has 30 tokens in total. These are projected to embeddings whose dimension is the transformer hidden size (which is also the transformer intermediate size). The sum of each token's embeddings is input to our Transformer Matching Network. The network's backbone consists of 4 transformers in series, each with 4 attention heads. We use a dropout probability of 0.1, applying it throughout our network as in [BERT]. Weights are initialized from a standard normal distribution. The output is pooled, then fed to a binary classification layer. The network has a total of 0.41M parameters; we adapt code from [HuggingFace]. Table 5 gives details of our transformer, which follows the original architecture. The inputs are the hidden states and an attention mask; the extra dimensions in the attention mask are for broadcasting in matrix multiplication. The FLOP counts for our Transformer Matching Network are in Table 6.

element 1 op element 2 output
hidden states x W_Q Q
hidden states x W_K K
hidden states x W_V V
Q resize - multi-Q
K resize - multi-K
V resize - multi-V
multi-Q x multi-K S
S + attention mask S
S softmax - A
A dropout - A
A x multi-V context
context resize - context

Table 5: A look inside our transformer. x is matrix multiplication. Q, K, and V are the query, key, and value, respectively; W_Q, W_K, and W_V are the learned weights corresponding to the query, key, or value. "multi" refers to a multi-headed representation. S are the raw attention scores, and A is the distribution of attention scores resulting from the softmax operation. 32 is the batch size.
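The operations in Table 5 amount to standard scaled dot-product attention, which can be sketched for a single head as follows (a sketch; the weight shapes, names, and omission of batching/multi-head reshaping are ours):

```python
import numpy as np

def attention_head(hidden, mask, w_q, w_k, w_v):
    # Q, K, V are linear projections of the hidden states; the raw
    # scores are Q K^T (scaled) plus the additive attention mask,
    # softmaxed into a distribution and applied to V ("context").
    q, k, v = hidden @ w_q, hidden @ w_k, hidden @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores = scores + mask  # padding positions hold large negatives
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

rng = np.random.default_rng(0)
h = rng.normal(size=(30, 128))   # 30 tokens, hidden size 128
mask = np.zeros((30, 30))        # no padding in this example
w = [rng.normal(size=(128, 32)) for _ in range(3)]
context = attention_head(h, mask, *w)
```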

Network Module Parameters (M) FLOPS (M)
Embeddings 0.06 0.35
Transformers (x4) 0.40 5.84
Pooler 0.02 0.015
Classifier 0.01 0.02
Transformer Matching Network 0.41 6.20
GCN 0.1 1.30
Optical Flow 38.7

Table 6: FLOP and parameter comparison of our Transformer Matching Network with alternative tracking methods. The first four rows give details of each component of our network. (M) indicates millions. Our network's computational cost is similar to the GCN's, amounting to only a 1ms increase on the GPU, and it is much more efficient than Optical Flow. As we showed earlier, our method is more accurate than both alternatives.


Figure 9: Two videos which highlight the limitations of our model. In the top example, the individuals are very near each other and are moving in a synchronized fashion; as a result, our model assigns incorrect IDs to people in the middle of the group. In the bottom row, a man walks in front of boys on trampolines; they are occluded for a few frames and are given incorrect IDs after he walks away from them. Some of the individuals in the back are also given incorrect IDs because they are small, in close proximity, and moving in similar fashions.

We also give details about the other tracking methods we compare to in Table 6. Though our method is slightly more computationally expensive than the GCN, it is much more accurate. Both transformers and the GCN are far less computationally expensive than optical flow.

CNN Pose Entailment Networks

The input is projected to 64 channels in the first layer of the CNN. All convolutions use kernel size 3 and padding 1. BatchNorm is applied after each convolutional layer. The input is downsampled via a max-pooling operation with a stride of 2, and the number of filters is doubled after each downsampling. Two linear layers complete the network. The hidden size depends on the resolution of the input image, and the second layer outputs a binary classification corresponding to the likelihood of the poses being a match or non-match.

The number of convolution layers scales with the long edge of the input image. The batch size, learning rate, and number of training epochs are the same as those we used for the transformers. We experimented with other learning rates, but did not see improvement.

The "visual tokens" are colored by fixing the Hue and Saturation, then adjusting the Value via a linear interpolation in fixed increments, so that each keypoint type receives a unique color.
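A sketch of this coloring scheme, assuming 15 keypoint types and an arbitrary fixed Hue and Saturation (the exact constants are not specified here):

```python
import colorsys

def keypoint_colors(num_keypoints=15, hue=0.0, saturation=1.0):
    # Fix Hue and Saturation, then linearly interpolate Value from 0
    # to 1 so each keypoint type maps to a distinct color.
    step = 1.0 / (num_keypoints - 1)
    return [colorsys.hsv_to_rgb(hue, saturation, i * step)
            for i in range(num_keypoints)]
```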

a.6 Limitations

Our approach can struggle to track people who are in close proximity and are moving in similar patterns. This is similar to how CNNs struggle with people in close proximity who look visually similar, such as the case where they are wearing the same uniform. Another challenging case for our model is people who are hidden for long periods of time. It is difficult to re-identify them without visual features, and we would need to take longer video clips into context than we currently do to successfully re-identify these individuals. We visualize both these failure modes in Figure 9.


Figure 10: Additional qualitative results of our model succeeding despite occlusions, unconventional poses, and varied lighting conditions. Every 4th frame is sampled so that more extensive motion can be shown. Solid circles represent predicted keypoints. Hollow squares represent keypoint predictions that are not used due to low confidence.