Keras-tensorflow implementation of PersonLab (https://arxiv.org/abs/1803.08225)
We present a box-free bottom-up approach for the tasks of pose estimation and instance segmentation of people in multi-person images using an efficient single-shot model. The proposed PersonLab model tackles both semantic-level reasoning and object-part associations using part-based modeling. Our model employs a convolutional network which learns to detect individual keypoints and predict their relative displacements, allowing us to group keypoints into person pose instances. Further, we propose a part-induced geometric embedding descriptor which allows us to associate semantic person pixels with their corresponding person instance, delivering instance-level person segmentations. Our system is based on a fully-convolutional architecture and allows for efficient inference, with runtime essentially independent of the number of people present in the scene. Trained on COCO data alone, our system achieves COCO test-dev keypoint average precision of 0.665 using single-scale inference and 0.687 using multi-scale inference, significantly outperforming all previous bottom-up pose estimation systems. We are also the first bottom-up method to report competitive results for the person class in the COCO instance segmentation task, achieving a person category average precision of 0.417.
The rapid recent progress in computer vision has allowed the community to move beyond classic tasks such as bounding box-level face and body detection towards more detailed visual understanding of people in unconstrained environments. In this work we tackle in a unified manner the tasks of multi-person detection, 2-D pose estimation, and instance segmentation. Given a potentially cluttered and crowded ‘in-the-wild’ image, our goal is to identify every person instance, localize its facial and body keypoints, and estimate its instance segmentation mask. A host of computer vision applications such as smart photo editing, person and activity recognition, virtual or augmented reality, and robotics can benefit from progress in these challenging tasks.
There are two main approaches for tackling multi-person detection, pose estimation and segmentation. The top-down approach starts by identifying and roughly localizing individual person instances by means of a bounding box object detector, followed by single-person pose estimation or binary foreground/background segmentation in the region inside the bounding box. By contrast, the bottom-up
approach starts by localizing identity-free semantic entities (individual keypoint proposals or semantic person segmentation labels, respectively), followed by grouping them into person instances. In this paper, we adopt the latter approach. We develop a box-free fully convolutional system whose computational cost is essentially independent of the number of people present in the scene and only depends on the cost of the CNN feature extraction backbone.
In particular, our approach first predicts all keypoints for every person in the image in a fully convolutional way. We also learn to predict the relative displacement between each pair of keypoints, proposing a novel recurrent scheme which greatly improves the accuracy of long-range predictions. Once we have localized the keypoints, we use a greedy decoding process to group them into instances. Our approach starts from the most confident detection, as opposed to always starting from a distinguished landmark such as the nose, so it works well even in clutter.
In addition to predicting the sparse keypoints, our system also predicts dense instance segmentation masks for each person. For this purpose, we train our network to predict instance-agnostic semantic person segmentation maps. For every person pixel we also predict offset vectors to each of the keypoints of the corresponding person instance. The corresponding vector fields can be thought of as a geometric embedding representation and induce basins of attraction around each person instance, leading to an efficient association algorithm: for each pixel, we predict the locations of all keypoints of the person instance that the pixel belongs to; we then compare this prediction to all candidate detected people (in terms of average keypoint distance, weighted by the keypoint detection probability); if this distance is low enough, we assign the pixel to that person instance.
We train our model on the standard COCO keypoint dataset [keypointchallenge], which annotates multiple people with 12 body and 5 facial keypoints. We significantly outperform the best previous bottom-up approach to keypoint localization [newell2017associative], improving the keypoint AP from 0.655 to 0.687. In addition, we are the first bottom-up method to report competitive results on the person class for the COCO instance segmentation task. We get a mask AP of 0.417, which outperforms the strong top-down FCIS method of [dai2017fully], which gets 0.386. Furthermore, our method is very simple and hence fast, since it does not require any second-stage box-based refinement or clustering algorithm. We believe it will therefore be quite useful for a variety of applications, especially since it lends itself to deployment on mobile phones.
Prior to the recent trend towards deep convolutional networks [LeCun1998, krizhevsky2012imagenet], early successful models for human pose estimation centered around inference mechanisms on part-based graphical models [Fischler73, FelzenszwalbDPM], representing a person by a collection of configurable parts. Following this work, many methods have been proposed to develop tractable inference algorithms for solving the energy minimization that captures rich dependencies among body parts [andriluka2009pictorial, BetterAppearancePic, Sapp2010, yang11cvpr, dantone13cvpr, johnson11cvpr, pishchulin13cvpr, modec2013, armlets2013]. While the forward inference mechanism of our work differs from these early DPM-based models, we similarly propose a bottom-up approach for grouping part detections into person instances.
Recently, models based on modern large scale convolutional networks have achieved state-of-the-art performance on both single-person pose estimation [deeppose, jainiclr2014, tompsonnips2014, Chen_NIPS14, tompson2015efficient, stackedhourglass, andriluka14cvpr, bulat2016, zisserman2016, chain16] and multi-person pose estimation [deepcut, deeper_cut, insafutdinov2016articulated, iqbal2016multi, wei2016convolutional, cmu_mscoco, papandreou2017towards, he2017mask]. Broadly speaking, there are two main approaches to pose estimation in the literature: top-down (person first) and bottom-up (parts first). Examples of the former include G-RMI [papandreou2017towards], CFN [huang2017coarse], RMPE [fang2017rmpe], Mask R-CNN [he2017mask], and CPN [chen2017cascaded]. These methods all predict keypoint locations within person bounding boxes obtained by a person detector (e.g., Fast-RCNN [girshick2015fast], Faster-RCNN [ren2015faster] or R-FCN [dai2016rfcn]).
In the bottom-up approach, we first detect body parts and then group these parts to human instances. Pishchulin et al. [deepcut], Insafutdinov et al. [deeper_cut, insafutdinov2016articulated], and Iqbal et al. [iqbal2016multi]
formulate the problem of multi-person pose estimation as part grouping and labeling via a Linear Program. Cao et al. [cmu_mscoco] incorporate the unary joint detector modified from [wei2016convolutional] with a part affinity field and greedily generate person instance proposals. Newell et al. [newell2017associative] propose associative embedding to identify keypoint detections from the same person.
The approaches for instance segmentation can also be categorized into the same two paradigms: top-down and bottom-up.
Top-down methods exploit state-of-the-art detection models to either classify mask proposals [carreira2012cpmc, arbelaez2014multiscale, hariharan2014simultaneous, pinheiro2015learning, dai2015convolutional, pinheiro2016learning, dai2016instancesensitive] or to obtain mask segmentation results by refining the bounding box proposals [dai2016instance, dai2017fully, he2017mask, peng2018megdet, masklab2018, liu2018path].
Ours is a bottom-up approach, in which we associate pixel-level predictions to each object instance. Many recent models propose similar forms of instance-level bottom-up clustering. For instance, Liang et al. use a proposal-free network [liang2015proposal] to cluster semantic segmentation results to obtain instance segmentation. Uhrig et al. [uhrig2016pixel] first predict each pixel’s direction towards its instance center and then employ template matching to decode and cluster the instance segmentation result. Zhang et al. [zhang2015monocular, zhang2016instance] predict instance ID by encoding the object depth ordering within a patch and use this depth ordering to cluster instances. Wu et al. [wu2016bridging] use a prediction network followed by a Hough transform-like approach to perform prediction instance clustering. In this work, we similarly perform a Hough voting of multiple predictions. In a slightly different formulation, Liu et al. [liu2016multi] segment and aggregate segmentation results from dense multi-scale patches, and aggregate localized patches into complete object instances. Levinkov et al. [levinkov2017joint]
formulate the instance segmentation problem as a combinatorial optimization problem that consists of graph decomposition and node labeling, and propose efficient local search algorithms to iteratively refine an initial solution. InstanceCut [kirillov2017instancecut] and the work of [jin2016object] propose to predict object boundaries to separate instances. [newell2017associative, fathi2017semantic, de2017semantic] group pixel predictions that have similar values in the learned embedding space to obtain instance segmentation results. Bai and Urtasun [bai2017deep]
propose a Watershed Transform Network which produces an energy map where object instances are represented as basins. Liu et al. [liu2017sgn] propose the Sequential Grouping Network which decomposes the instance segmentation problem into several sub-grouping problems.
Figure 1 gives an overview of our system, which we describe in detail next.
We develop a box-free bottom-up approach for person detection and pose estimation. It consists of two sequential steps, detection of keypoints, followed by grouping them into person instances. We train our network in a supervised fashion, using the ground truth annotations of the face and body parts in the COCO dataset.
The goal of this stage is to detect, in an instance-agnostic fashion, all visible keypoints belonging to any person in the image.
For this purpose, we follow the hybrid classification and regression approach of [papandreou2017towards], adapting it to our multi-person setting. We produce heatmaps (one channel per keypoint) and offsets (two channels per keypoint for displacements in the horizontal and vertical directions). Let $x_i$ be the 2-D position in the image, where $i = 1, \ldots, N$ indexes the position in the image and $N$ is the number of pixels. Let $\mathcal{D}_R(y)$ be a disk of radius $R$ centered around $y$. Also let $y_{j,k}$ be the 2-D position of the $k$-th keypoint of the $j$-th person instance, with $k = 1, \ldots, K$, where $j = 1, \ldots, M$ and $M$ is the number of person instances in the image.
For every keypoint type $k \in \{1, \ldots, K\}$, we set up a binary classification task as follows. We predict a heatmap $p_k(x)$ such that $p_k(x) = 1$ if $x \in \mathcal{D}_R(y_{j,k})$ for any person instance $j$, and $p_k(x) = 0$ otherwise. We thus have $K$ independent dense binary classification tasks, one for each keypoint type. Each amounts to predicting a disk of radius $R$ around a specific keypoint type of any person in the image. The disk radius value is set to $R = 32$ pixels for all experiments reported in this paper and is independent of the person instance scale. We have deliberately opted for a disk radius which does not scale with the instance size in order to equally weigh all person instances in the classification loss. During training, we compute the heatmap loss as the average logistic loss along image positions and we back-propagate across the full image, only excluding areas that contain people that have not been fully annotated with keypoints (person crowd areas and small scale person segments in the COCO dataset).
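As an illustration, the binary disk targets described above can be generated as follows. This is a minimal NumPy sketch under our own conventions (the function name, the `(num_people, num_keypoint_types, 2)` keypoint layout, and NaN marking missing annotations are illustrative), not the paper's TensorFlow implementation:

```python
import numpy as np

def make_heatmap_targets(keypoints, height, width, radius=32):
    """Binary classification targets: 1 inside a disk of fixed radius
    around any instance's keypoint of that type, 0 elsewhere."""
    keypoints = np.asarray(keypoints, dtype=np.float64)
    num_kp = keypoints.shape[1]
    ys, xs = np.mgrid[0:height, 0:width]
    targets = np.zeros((num_kp, height, width), dtype=np.float32)
    for k in range(num_kp):
        for person in keypoints:
            x, y = person[k]
            if np.isnan(x):
                continue  # keypoint not annotated for this person
            inside = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
            targets[k][inside] = 1.0
    return targets
```

Disks of different instances may overlap; the target stays binary, which matches the instance-scale-independent radius described above.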
In addition to the heatmaps, we also predict short-range offset vectors whose purpose is to improve the keypoint localization accuracy. At each position $x$ within the keypoint disks and for each keypoint type $k$, the short-range 2-D offset vector $S_k(x) = y_{j,k} - x$ points from the image position $x$ to the $k$-th keypoint of the closest person instance $j$, as illustrated in Fig. 1. We generate $K$ such vector fields, solving a 2-D regression problem at each image position and keypoint independently. During training, we penalize the short-range offset prediction errors with the $L_1$ loss, averaging and back-propagating the errors only at the positions in the keypoint disks. We divide the errors in the short-range offsets (and all other regression tasks described in the paper) by the radius $R = 32$ pixels in order to normalize them and make their dynamic range commensurate with the heatmap classification loss.
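The corresponding short-range offset targets can be sketched in the same style; here the "closest instance" rule is resolved with a running best-distance map (again an illustrative NumPy sketch with our own interface, not the paper's implementation):

```python
import numpy as np

def make_short_range_offsets(keypoints, height, width, radius=32):
    """At every pixel inside a keypoint's disk, a 2-D vector pointing to
    that keypoint of the *closest* instance.  keypoints:
    (num_people, num_kp_types, 2) in (x, y); NaN marks missing.
    Returns offsets (num_kp_types, H, W, 2) and a validity mask."""
    keypoints = np.asarray(keypoints, dtype=np.float64)
    num_kp = keypoints.shape[1]
    ys, xs = np.mgrid[0:height, 0:width]
    offsets = np.zeros((num_kp, height, width, 2), dtype=np.float32)
    valid = np.zeros((num_kp, height, width), dtype=bool)
    best = np.full((num_kp, height, width), np.inf)  # squared distance so far
    for k in range(num_kp):
        for person in keypoints:
            x, y = person[k]
            if np.isnan(x):
                continue
            d2 = (xs - x) ** 2 + (ys - y) ** 2
            closer = (d2 <= radius ** 2) & (d2 < best[k])
            best[k][closer] = d2[closer]
            offsets[k, ..., 0][closer] = (x - xs)[closer]
            offsets[k, ..., 1][closer] = (y - ys)[closer]
            valid[k] |= closer
    return offsets, valid
```

During training, the loss would be averaged only where the validity mask is set, mirroring the disk-restricted $L_1$ penalty described above.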
We aggregate the heatmap and short-range offsets via Hough voting into 2-D Hough score maps $h_k(x)$, $k = 1, \ldots, K$, using independent Hough accumulators for each keypoint type. Each image position casts a vote to each keypoint channel $k$ with weight equal to its activation probability:
$$h_k(x) = \frac{1}{\pi R^2} \sum_{i=1}^{N} p_k(x_i)\, B(x_i + S_k(x_i) - x)\,, \qquad (1)$$
where $B(\cdot)$ denotes the bilinear interpolation kernel. The resulting highly localized Hough score maps $h_k(x)$ are illustrated in Fig. 1.
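The vote accumulation of Eq. 1 can be sketched in NumPy for a single keypoint channel; the bilinear kernel spreads each vote over the four neighbouring grid points (function name and array layout are our own assumptions):

```python
import numpy as np

def hough_vote(prob, short_offsets, radius=32):
    """Aggregate one keypoint heatmap and its short-range offsets into
    a Hough score map.  prob: (H, W) activation probabilities;
    short_offsets: (H, W, 2) predicted (dx, dy) vectors.  Each pixel
    votes at (x + dx, y + dy) with weight equal to its probability,
    split bilinearly over the four neighbouring grid points."""
    h, w = prob.shape
    score = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[0:h, 0:w]
    tx = xs + short_offsets[..., 0]
    ty = ys + short_offsets[..., 1]
    x0, y0 = np.floor(tx).astype(int), np.floor(ty).astype(int)
    fx, fy = tx - x0, ty - y0
    corners = ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
               (0, 1, (1 - fx) * fy), (1, 1, fx * fy))
    for dx, dy, wgt in corners:
        cx, cy = x0 + dx, y0 + dy
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(score, (cy[ok], cx[ok]), (prob * wgt)[ok])  # unbuffered scatter-add
    return score / (np.pi * radius ** 2)
```

`np.add.at` is used instead of fancy-index assignment so that multiple votes landing on the same accumulator cell add up rather than overwrite each other.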
The local maxima in the score maps serve as candidate positions for person keypoints, yet they carry no information about instance association. When multiple person instances are present in the image, we need a mechanism to “connect the dots” and group together the keypoints belonging to each individual instance. For this purpose, we add to our network a separate pairwise mid-range 2-D offset field output $M_{k,l}(x)$ designed to connect pairs of keypoints. We compute $2(K-1)$ such offset fields, one for each directed edge connecting pairs of keypoints $(k, l)$ which are adjacent to each other in a tree-structured kinematic graph of the person, see Figs. 1 and 2. Specifically, the supervised training target for the pairwise offset field from the $k$-th to the $l$-th keypoint is given by $M_{k,l}(x) = y_{j,l} - x$, for $x \in \mathcal{D}_R(y_{j,k})$, since its purpose is to allow us to move from the $k$-th to the $l$-th keypoint of the same person instance $j$. During training, this target regression vector is only defined if both keypoints are present in the training example. We compute the average $L_1$ loss of the regression prediction errors over the source keypoint disks and back-propagate through the network.
Particularly for large person instances, the edges of the kinematic graph connect pairs of keypoints such as RightElbow and RightShoulder which may be several hundred pixels away in the image, making it hard to generate accurate regressions. We have successfully addressed this important issue by recurrently refining the mid-range pairwise offsets using the more accurate short-range offsets, specifically:
$$M_{k,l}(x) \leftarrow x' + S_l(x') - x\,, \quad x' = x + M_{k,l}(x)\,, \qquad (2)$$
as illustrated in Fig. 2. We repeat this refinement step twice in our experiments. We employ bilinear interpolation to sample the short-range offset field at the intermediate position $x'$ and back-propagate the errors through it along both the mid-range and short-range input offset branches. We perform offset refinement at the resolution of CNN output activations (before upsampling to the original image resolution), making the process very fast. The offset refinement process drastically decreases the mid-range regression errors, as illustrated in Fig. 2. This is a key novelty in our method, which greatly facilitates grouping and significantly improves results compared to previous papers [cmu_mscoco, deeper_cut] which also employ pairwise displacements to associate keypoints.
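The refinement recurrence of Eq. 2 can be sketched for a single pixel as follows; `bilinear_sample` stands in for the differentiable bilinear warp used in the network (a NumPy illustration assuming (H, W, 2) offset fields in (dx, dy) order, not the paper's code):

```python
import numpy as np

def bilinear_sample(field, x, y):
    """Bilinearly sample a (H, W, 2) vector field at continuous (x, y)."""
    h, w = field.shape[:2]
    x = np.clip(x, 0, w - 1 - 1e-6)
    y = np.clip(y, 0, h - 1 - 1e-6)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * field[y0, x0] +
            fx * (1 - fy) * field[y0, x0 + 1] +
            (1 - fx) * fy * field[y0 + 1, x0] +
            fx * fy * field[y0 + 1, x0 + 1])

def refine_mid_range(mid, short, x, y, steps=2):
    """Eq. 2 at pixel (x, y): follow the current mid-range offset to the
    landing point x', add the short-range correction there, and express
    the result again as an offset from (x, y)."""
    m = mid[int(y), int(x)].astype(np.float64)
    for _ in range(steps):
        tx, ty = x + m[0], y + m[1]  # intermediate position x'
        m = np.array([tx, ty]) + bilinear_sample(short, tx, ty) - np.array([x, y])
    return m
```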
We have developed an extremely fast greedy decoding algorithm to group keypoints into detected person instances. We first create a priority queue, shared across all keypoint types, in which we insert the position $x_i$ and keypoint type $k$ of all local maxima in the Hough score maps $h_k$ which have score above a threshold value (set to 0.01 in all reported experiments). These points serve as candidate seeds for starting a detection instance. We then pop elements out of the queue in descending score order. At each iteration, if the position $x_i$ of the current candidate detection seed of type $k$ is within a disk $\mathcal{D}_r(y_{z,k})$ of the corresponding keypoint of a previously detected person instance $z$, then we reject it; for this we use a non-maximum suppression radius of $r = 10$ pixels. Otherwise, we start a new detection instance $z$ with the $k$-th keypoint at position $y_{z,k} = x_i$ serving as seed. We then follow the mid-range displacement vectors along the edges of the kinematic person graph to greedily connect pairs $(k, l)$ of adjacent keypoints, setting $y_{z,l} = y_{z,k} + M_{k,l}(y_{z,k})$.
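The greedy decoding loop above can be sketched with a standard heap. The candidate lists, the callable mid-range offset lookups, and the requirement that `edges` contain both directions of each kinematic edge are illustrative simplifications of ours:

```python
import heapq
import math

def greedy_decode(candidates, mid_offsets, edges, nms_radius=10.0):
    """candidates: dict keypoint_type -> list of (score, (x, y)) local
    maxima.  mid_offsets: dict (k_from, k_to) -> callable (x, y) ->
    (dx, dy).  edges: directed kinematic-tree edges (both directions
    present).  Returns instances as dicts keypoint_type -> (x, y)."""
    queue = [(-score, k, pos) for k, cands in candidates.items()
             for score, pos in cands]
    heapq.heapify(queue)  # pops the highest-scoring seed first
    instances = []
    while queue:
        _, k, (x, y) = heapq.heappop(queue)
        # Reject seeds near the same keypoint of an existing instance.
        if any(math.hypot(inst[k][0] - x, inst[k][1] - y) <= nms_radius
               for inst in instances if k in inst):
            continue
        inst = {k: (x, y)}
        frontier = [k]
        while frontier:  # follow mid-range offsets outward from the seed
            src = frontier.pop()
            for a, b in edges:
                if a == src and b not in inst:
                    dx, dy = mid_offsets[(a, b)](*inst[a])
                    inst[b] = (inst[a][0] + dx, inst[a][1] + dy)
                    frontier.append(b)
        instances.append(inst)
    return instances
```

Because any keypoint type can seed an instance, the tree is traversed from wherever the strongest detection happens to be, matching the seed-agnostic behaviour described above.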
It is worth noting that our decoding algorithm does not treat any keypoint type preferentially, in contrast to other techniques that always use the same keypoint type (e.g. Torso or Nose) as seed for generating detections. Although we have empirically observed that the majority of detections in frontal facing person instances start from the more easily localizable facial keypoints, our approach can also handle robustly cases where a large portion of the person is occluded.
We have experimented with different methods to assign a keypoint- and instance-level score to the detections generated by our greedy decoding algorithm. Our first keypoint-level scoring method follows [papandreou2017towards] and assigns to each keypoint a confidence score equal to the value of its peak in the Hough score map. A drawback of this approach is that the well-localizable facial keypoints typically receive much higher scores than poorly localizable keypoints like the hip or knee. Our second approach attempts to calibrate the scores of the different keypoint types. It is motivated by the object keypoint similarity (OKS) evaluation metric used in the COCO keypoints task [keypointchallenge], which uses different accuracy thresholds to penalize localization errors for different keypoint types.
Specifically, consider a detected person instance $z$ with keypoint coordinates $y_{z,k}$, $k = 1, \ldots, K$. Let $\lambda_z$ be the square root of the area of the bounding box tightly containing all detected keypoints of the $z$-th person instance. We define the Expected-OKS score for the $k$-th keypoint by
$$\tilde{s}_{z,k} = \mathbb{E}\{\mathrm{OKS}\} = \sum_{x_i \in \mathcal{D}_r(y_{z,k})} \hat{h}_k(x_i)\, \exp\!\left(-\frac{\|x_i - y_{z,k}\|^2}{2 \lambda_z^2 \kappa_k^2}\right),$$
where $\hat{h}_k(x_i)$ is the Hough score normalized in $\mathcal{D}_r(y_{z,k})$ and $\kappa_k$ is the per-keypoint OKS falloff constant. The expected OKS keypoint-level score is the product of our confidence that the keypoint is present, times the OKS localization accuracy confidence, given the keypoint's presence.
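Under this formulation, the Expected-OKS score is a probability-weighted average of the OKS Gaussian kernel over the disk; a sketch follows (the Gaussian form and per-keypoint constant follow the COCO OKS definition, while the interface is our own assumption):

```python
import numpy as np

def expected_oks(positions, hough_scores, keypoint_xy, scale, kappa):
    """positions: (n, 2) sample points inside the disk; hough_scores:
    (n,) raw Hough scores; keypoint_xy: detected keypoint position;
    scale: instance scale lambda_z; kappa: per-keypoint OKS constant."""
    p = np.asarray(hough_scores, dtype=np.float64)
    p = p / p.sum()  # normalize the Hough scores within the disk
    d2 = np.sum((np.asarray(positions, dtype=np.float64)
                 - np.asarray(keypoint_xy, dtype=np.float64)) ** 2, axis=1)
    return float(np.sum(p * np.exp(-d2 / (2.0 * scale ** 2 * kappa ** 2))))
```

If all the Hough mass sits exactly on the detected keypoint the score is 1; mass far from the keypoint (relative to the instance scale and the keypoint's falloff constant) drags it towards 0.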
We use the average of the keypoint scores as instance-level score $s_z$, followed by non-maximum suppression (NMS). We have experimented both with hard OKS-based NMS [papandreou2017towards] as well as a soft-NMS scheme adapted for the keypoints task from [bodla2017soft], where we use as final instance-level score the sum of the scores of the keypoints that have not already been claimed by higher scoring instances, normalized by the total number of keypoints $K$:
$$s_z = \frac{1}{K} \sum_{k=1}^{K} \tilde{s}_{z,k}\, \big[\, y_{z,k}\ \text{not claimed by a higher scoring instance} \,\big]\,.$$
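The soft-NMS instance scoring can be sketched as follows: instances are visited in descending score order, each surviving keypoint claims a disk, and keypoints claimed by earlier (stronger) instances contribute nothing (pure-Python sketch; the data layout is illustrative):

```python
def soft_keypoint_nms(instances, radius=10.0):
    """instances: list (sorted by descending instance score) of lists
    of (keypoint_score, (x, y)), one entry per keypoint type.
    Returns the final per-instance scores."""
    num_kp = len(instances[0])
    claimed = [[] for _ in range(num_kp)]  # positions claimed per keypoint type
    final_scores = []
    for inst in instances:
        kept = 0.0
        for k, (score, (x, y)) in enumerate(inst):
            if any((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
                   for cx, cy in claimed[k]):
                continue  # already claimed by a higher-scoring instance
            kept += score
            claimed[k].append((x, y))
        final_scores.append(kept / num_kp)  # normalize by total keypoint count
    return final_scores
```

Unlike hard NMS, weaker overlapping instances are down-weighted rather than discarded outright, which interacts well with the non-exclusive pixel-to-instance assignment used later for segmentation.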
Given the set of keypoint-level person instance detections, the task of the instance segmentation stage is to identify pixels that belong to people (recognition) and associate them with the detected person instances (grouping). We describe next the respective semantic segmentation and association modules, illustrated in Fig. 4.
We treat semantic person segmentation in the standard fully-convolutional fashion [long2014fully, chen2016deeplab]. We use a simple semantic segmentation head consisting of a single 1x1 convolutional layer that performs dense logistic regression and compute at each image pixel $x_i$ the probability $p_S(x_i)$ that it belongs to at least one person. During training, we compute and backpropagate the average of the logistic loss over all image regions that have been annotated with person segmentation maps (in the case of COCO we exclude the crowd person areas).
The task of this module is to associate each person pixel identified by the semantic segmentation module with the keypoint-level detections produced by the person detection and pose estimation module.
Similar to [newell2017associative, fathi2017semantic, de2017semantic], we follow the embedding-based approach for this task. In this framework, one computes an embedding vector
at each pixel location, followed by clustering to obtain the final object instances. In previous works, the representation is typically learned by computing pairs of embedding vectors at different image positions and using a loss function designed to attract the two embedding vectors if they both come from the same object instance and repel them if they come from different person instances. This typically leads to embedding representations which are difficult to interpret and involves solving a hard learning problem which requires careful selection of the loss function and tuning several hyper-parameters such as the pair sampling protocol.
Here, we opt instead for a considerably simpler, geometric approach. At each image position $x$ inside the segmentation mask of an annotated person instance $j$ with 2-D keypoint positions $y_{j,k}$, we define the long-range offset vector $L_k(x) = y_{j,k} - x$, which points from the image position $x$ to the position of the $k$-th keypoint of the corresponding instance $j$. (This is very similar to the short-range prediction task, except the dynamic range is different, since we require the network to predict from any pixel inside the person, not just from inside a disk near the keypoint. Thus these are like two "specialist" networks. Performance is worse when we use the same network for both kinds of tasks.) We compute $K$ such 2-D vector fields, one for each keypoint type. During training, we penalize the long-range offset regression errors using the $L_1$ loss, averaging and back-propagating the errors only at image positions which belong to a single person object instance. We ignore background areas, crowd regions, and pixels which are covered by two or more person masks.
The long-range prediction task is challenging, especially for large object instances that may cover the whole image. As in Sec. 3.1, we recurrently refine the long-range offsets, twice by themselves and then twice by the short-range offsets:
$$L_k(x) \leftarrow x' + L_k(x') - x\,, \quad x' = x + L_k(x)\,; \qquad L_k(x) \leftarrow x' + S_k(x') - x\,, \quad x' = x + L_k(x)\,,$$
back-propagating through the bilinear warping function during training. Similarly to the mid-range offset refinement in Eq. 2, recurrent long-range offset refinement dramatically improves the long-range offset prediction accuracy.
In Fig. 3 we illustrate the long-range offsets corresponding to the Nose keypoint as computed by our trained CNN for an example image. We see that the long-range vector field effectively partitions the image plane into basins of attraction for each person instance. This motivates us to define as embedding representation for our instance association task the $2K$-dimensional vector $G(x)$ with components $G_k(x) = x + L_k(x)$, $k = 1, \ldots, K$.
Our proposed embedding vector has a very simple geometric interpretation: at each image position semantically recognized as a person instance, the embedding represents our local estimate for the absolute position of every keypoint of the person instance it belongs to, i.e., it represents the predicted shape of the person. This naturally suggests shape metrics as candidates for computing distances in our proposed embedding space. In particular, in order to decide if the person pixel $x$ belongs to the $z$-th person instance, we compute the embedding distance metric
$$g(x, z) = \frac{1}{\sum_k p_{z,k}} \sum_{k=1}^{K} p_{z,k}\, \frac{1}{\lambda_z}\, \big\| x + L_k(x) - y_{z,k} \big\|\,,$$
where $y_{z,k}$ is the position of the $k$-th detected keypoint in the $z$-th instance and $p_{z,k}$ is the probability that it is present. Weighing the errors by the keypoint presence probability allows us to discount discrepancies in the two shapes due to missing keypoints. Normalizing the errors by the detected instance scale $\lambda_z$ allows us to compute a scale invariant metric. We set $\lambda_z$ equal to the square root of the area of the bounding box tightly containing all detected keypoints of the $z$-th person instance. We emphasize that because we only need to compute the distance metric between the $N$ pixels and the $M$ person instances, our algorithm is very fast in practice, having complexity $O(NM)$ instead of the $O(N^2)$ of standard embedding-based segmentation techniques which, at least in principle, require computation of embedding vector distances for all pixel pairs.
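The embedding distance reduces to a few lines: the predicted keypoint positions $x + L_k(x)$ are compared to the detected ones, weighted by presence probability and normalized by the instance scale (a NumPy sketch with our own interface, not the paper's code):

```python
import numpy as np

def embedding_distance(pixel_xy, long_offsets, keypoints, probs, scale):
    """pixel_xy: (x, y); long_offsets: (K, 2) predicted long-range
    offsets at that pixel; keypoints: (K, 2) detected keypoint
    positions of one instance; probs: (K,) presence probabilities;
    scale: instance scale (sqrt of the keypoint bounding-box area)."""
    pixel = np.asarray(pixel_xy, dtype=np.float64)
    predicted = pixel + np.asarray(long_offsets, dtype=np.float64)  # G_k(x)
    err = np.linalg.norm(predicted - np.asarray(keypoints, dtype=np.float64), axis=1)
    p = np.asarray(probs, dtype=np.float64)
    return float(np.sum(p * err) / (np.sum(p) * scale))
```

Missing keypoints (low presence probability) barely move the weighted average, so a pixel is not penalized for disagreeing with a keypoint the detector never found.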
To produce the final instance segmentation result: (1) We find all positions marked as person in the semantic segmentation map, i.e., those pixels $x_i$ whose semantic segmentation probability $p_S(x_i)$ exceeds a fixed threshold. (2) We associate each person pixel $x_i$ with every detected person instance $z$ for which the embedding distance metric satisfies $g(x_i, z) \leq t$; we use the same relative distance threshold $t$ for all reported experiments. It is important to note that the pixel-instance assignment is non-exclusive: each person pixel may be associated with more than one detected person instance (which is particularly important when doing soft-NMS in the detection stage) or it may remain an orphan (e.g., a small false positive region produced by the segmentation module). We use the same instance-level score produced by the previous person detection and pose estimation stage to also evaluate on the COCO segmentation task and obtain average precision performance numbers.
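Steps (1) and (2) combine into a single vectorized assignment. The threshold values below (0.4 on the segmentation probability, 0.25 on the relative distance), like the precomputed distance tensor, are assumptions for illustration:

```python
import numpy as np

def assign_pixels(seg_prob, distances, seg_thresh=0.4, rel_thresh=0.25):
    """seg_prob: (H, W) person probability map; distances: (M, H, W)
    embedding distance of each pixel to each of M detected instances.
    Returns a boolean (M, H, W) array of instance masks.  Assignment is
    non-exclusive: a pixel may land in several masks, or in none."""
    person = np.asarray(seg_prob) >= seg_thresh
    return (np.asarray(distances) <= rel_thresh) & person[None, ...]
```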
The standard COCO dataset does not contain keypoint annotations in the training set for the small person instances, and ignores them during model evaluation. However, it contains segmentation annotations and evaluates mask predictions for those small instances. Since training our geometric embeddings requires keypoint annotations, we have run the single-person pose estimator of [papandreou2017towards]
(trained on COCO data alone) in the COCO training set on image crops around the ground truth box annotations of those small person instances to impute those missing keypoint annotations. We treat those imputed keypoints as regular training annotations during our PersonLab model training. Naturally, this missing keypoint imputation step is particularly important for our COCO instance segmentation performance on small person instances, as shown in Appendix 0.A. We emphasize that, unlike [radosavovic2017data], we do not use any data beyond the COCO train split images and annotations in this process. Data distillation on additional images as described in [radosavovic2017data] may yield further improvements.
We evaluate the proposed PersonLab system on the standard COCO keypoints task [keypointchallenge] and on COCO instance segmentation [lin2014microsoft]
for the person class alone. For all reported results we only use COCO data for model training (in addition to Imagenet pretraining). Our train set is the subset of the 2017 COCO training set images that contain people (64115 images). Our val set coincides with the 2017 COCO validation set (5000 images). We only use train for training and evaluate on either val or the test-dev split (20288 images).
We report experimental results with models that use either ResNet-101 or ResNet-152 CNN backbones [He2016ResNets] pretrained on the Imagenet classification task [imagenet2015]. We discard the last Imagenet classification layer and add 1x1 convolutional layers for each of our model-specific outputs. During model training, we randomly resize a square box tightly containing the full image by a uniform random scale factor between 0.5 and 1.5, randomly translate it along the horizontal and vertical directions, and left-right flip it with probability 0.5. We sample and resize the image crop contained under the resulting perturbed box to an 801x801 image that we feed into the network. We use a batch size of 8 images distributed across 8 Nvidia Tesla P100 GPUs in a single machine and perform synchronous training for 1M steps with stochastic gradient descent with constant learning rate equal to 1e-3, momentum value set to 0.9, and Polyak-Ruppert model parameter averaging. We employ batch normalization [ioffe2015batch]
but fix the statistics of the ResNet activations to their Imagenet values. Our ResNet CNN network backbones have nominal output stride (i.e., ratio of the input image size to the output activations size) equal to 32, but we reduce it to 16 during training and 8 during evaluation using atrous convolution [chen2016deeplab]. During training we also make model predictions using as features activations from a layer in the middle of the network, which we have empirically observed to accelerate training. To balance the different loss terms we use fixed scalar weights for the heatmap, segmentation, short-range, mid-range, and long-range offset losses in our model. For evaluation we report both single-scale results (image resized to have larger side 1401 pixels) and multi-scale results (pyramid with images having larger side 601, 1201, 1801, 2401 pixels). We have implemented our system in Tensorflow [tensorflow2015-whitepaper]. All reported numbers have been obtained with a single model without ensembling.
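The augmentation schedule described above can be sketched as a parameter sampler. The translation range, expressed here as a fraction of the box size, is an assumption of ours; the scale range, flip probability, and 801x801 crop size come from the text:

```python
import numpy as np

def augment_params(rng, out_size=801, scale_range=(0.5, 1.5), flip_prob=0.5):
    """Sample one training crop's random-augmentation parameters:
    uniform scale, random translation, and left-right flip."""
    scale = rng.uniform(*scale_range)
    tx, ty = rng.uniform(-0.5, 0.5, size=2)  # fraction of the box (assumed range)
    flip = bool(rng.random() < flip_prob)
    return {"scale": scale, "translate": (float(tx), float(ty)),
            "flip": flip, "out": out_size}
```

Keypoint, offset, and segmentation targets would then be regenerated under the sampled geometric transform, with left/right keypoint labels swapped on flips.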
Table 1 shows our system’s person keypoints performance on COCO test-dev. Our single-scale inference result is already better than the results of the CMU-Pose [cmu_mscoco] and Associative Embedding [newell2017associative] bottom-up methods, even when they perform multi-scale inference and refine their results with a single-person pose estimation system applied on top of their bottom-up detection proposals. Our results also outperform top-down methods like Mask-RCNN [he2017mask] and G-RMI [papandreou2017towards]. Our best result with 0.687 AP is attained with a ResNet-152 based model and multi-scale inference. Our result is still behind the winners of the 2017 keypoints challenge (Megvii) [chen2017cascaded] with 0.730 AP, but they used a carefully tuned two-stage, top-down model that also builds on a significantly more powerful CNN backbone.
| Method | AP | AP^50 | AP^75 | AP^M | AP^L | AR | AR^50 | AR^75 | AR^M | AR^L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CMU-Pose [cmu_mscoco] (+refine) | 0.618 | 0.849 | 0.675 | 0.571 | 0.682 | 0.665 | 0.872 | 0.718 | 0.606 | 0.746 |
| Assoc. Embed. [newell2017associative] (multi-scale) | 0.630 | 0.857 | 0.689 | 0.580 | 0.704 | - | - | - | - | - |
| Assoc. Embed. [newell2017associative] (mscale, refine) | 0.655 | 0.879 | 0.777 | 0.690 | 0.752 | 0.758 | 0.912 | 0.819 | 0.714 | 0.820 |
| G-RMI COCO-only [papandreou2017towards] | 0.649 | 0.855 | 0.713 | 0.623 | 0.700 | 0.697 | 0.887 | 0.755 | 0.644 | 0.771 |
Tables 2 and 3 show our person instance segmentation results on COCO test-dev and val, respectively. We use the small-instance missing keypoint imputation technique of Sec. 3.3 for the reported instance segmentation experiments, which significantly increases our performance for small objects. Our results without missing keypoint imputation are shown in Appendix 0.A.
Our method only produces segmentation results for the person class, since our system is keypoint-based and thus cannot be applied to the other COCO classes. The standard COCO instance segmentation evaluation allows for a maximum of 100 proposals per image for all 80 COCO classes. For a fair comparison with previous works, we report test-dev results of our method with a maximum of 20 person proposals per image, which is the convention also adopted in the standard COCO person keypoints evaluation protocol. For reference, we also report the val results of our best model when allowed to produce 100 proposals.
We compare our system with the person category results of top-down instance segmentation methods. As shown in Table 2, our method on the test split outperforms FCIS [dai2017fully] in both single-scale and multi-scale inference settings. As shown in Table 3, our performance on the val split is similar to that of Mask-RCNN [he2017mask] on medium and large person instances, but worse on small person instances. However, we emphasize that our method is the first box-free, bottom-up instance segmentation method to report experiments on the COCO instance segmentation task.
Table 2: Person instance segmentation results on COCO test-dev.

| Method | AP | AP50 | AP75 | APS | APM | APL | AR1 | AR10 | AR100 | ARS | ARM | ARL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FCIS (baseline) [dai2017fully] | 0.334 | 0.641 | 0.318 | 0.090 | 0.411 | 0.618 | 0.153 | 0.372 | 0.393 | 0.139 | 0.492 | 0.688 |
| FCIS (multi-scale) [dai2017fully] | 0.386 | 0.693 | 0.410 | 0.164 | 0.481 | 0.621 | 0.161 | 0.421 | 0.451 | 0.221 | 0.562 | 0.690 |
| ResNet101 (1-scale, 20 prop) | 0.377 | 0.659 | 0.394 | 0.166 | 0.480 | 0.595 | 0.162 | 0.415 | 0.437 | 0.207 | 0.536 | 0.690 |
| ResNet152 (1-scale, 20 prop) | 0.385 | 0.668 | 0.404 | 0.172 | 0.488 | 0.602 | 0.164 | 0.422 | 0.444 | 0.215 | 0.544 | 0.698 |
| ResNet101 (mscale, 20 prop) | 0.411 | 0.686 | 0.445 | 0.215 | 0.496 | 0.626 | 0.169 | 0.453 | 0.489 | 0.278 | 0.571 | 0.735 |
| ResNet152 (mscale, 20 prop) | 0.417 | 0.691 | 0.453 | 0.223 | 0.502 | 0.630 | 0.171 | 0.461 | 0.497 | 0.287 | 0.578 | 0.742 |

Table 3: Person instance segmentation results on COCO val.

| Method | AP | AP50 | AP75 | APS | APM | APL | AR1 | AR10 | AR100 | ARS | ARM | ARL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet101 (1-scale, 20 prop) | 0.382 | 0.661 | 0.397 | 0.164 | 0.476 | 0.592 | 0.162 | 0.416 | 0.439 | 0.204 | 0.532 | 0.681 |
| ResNet152 (1-scale, 20 prop) | 0.387 | 0.667 | 0.406 | 0.169 | 0.483 | 0.595 | 0.163 | 0.423 | 0.446 | 0.213 | 0.539 | 0.686 |
| ResNet101 (mscale, 20 prop) | 0.414 | 0.684 | 0.447 | 0.213 | 0.492 | 0.621 | 0.170 | 0.454 | 0.492 | 0.278 | 0.566 | 0.728 |
| ResNet152 (mscale, 20 prop) | 0.418 | 0.688 | 0.455 | 0.219 | 0.497 | 0.621 | 0.170 | 0.460 | 0.497 | 0.284 | 0.573 | 0.730 |
| ResNet152 (mscale, 100 prop) | 0.429 | 0.711 | 0.467 | 0.235 | 0.511 | 0.623 | 0.170 | 0.460 | 0.539 | 0.346 | 0.612 | 0.741 |
In Fig. 5 we show representative person pose and instance segmentation results on COCO val images produced by our model with single-scale inference.
We have developed a bottom-up model which jointly addresses the problems of person detection, pose estimation, and instance segmentation using a unified part-based modeling approach. We have demonstrated its effectiveness on the challenging COCO person keypoint and instance segmentation tasks. A key limitation of the proposed method is its reliance on keypoint-level annotations for training on the instance segmentation task. In future work we plan to explore ways to overcome this limitation via weakly supervised part discovery.
We perform a series of ablation experiments examining the effect of different model choices on the system's performance. In all corresponding tables we indicate in boldface the model variant used for the results reported in Sec. 4. All ablation experiments use a ResNet-101 model and single-scale inference.
Given a trained PersonLab model, two key knobs control its speed/accuracy tradeoff. Table 4 shows our system's person keypoints performance on COCO val as we vary the input image size (we resize the input image so that its largest side equals the specified value) and the output activation stride (controlled by atrous convolution: a larger output stride yields faster inference, while a smaller one improves accuracy).
We observe that model performance increases significantly when we compute output activations more densely, using atrous convolution to decrease the output stride from 32 down to 16 pixels. Decreasing the output stride further from 16 down to 8 pixels brings a further small performance improvement, yet it significantly increases the model’s computation cost. For large person instances, we get reasonably good keypoint AP performance for as small as 601 or 801 pixels input image size. However, accurately capturing small person instances requires us to use higher resolution input images.
Measured on a Titan X GPU with an 801×529 input image, end-to-end inference (producing both the final keypoint and instance segmentation outputs) takes 341 ms at output stride 32, 355 ms at output stride 16, and 464 ms at output stride 8. Output stride 16 thus strikes an excellent speed/accuracy tradeoff.
[Table 4: keypoint performance for output strides 32, 16, and 8 at varying input sizes; numeric entries not recovered.]
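The mechanism behind the output-stride knob can be illustrated in one dimension. This is a hedged toy sketch, not the authors' code: the model applies 2-D atrous convolutions inside a ResNet backbone, but the principle is the same, spacing the filter taps `rate` samples apart enlarges the receptive field without downsampling, so the output grid stays dense.

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """Naive 1-D atrous (dilated) convolution with 'valid' padding.
    Taps of w are spaced `rate` samples apart, so the receptive field
    grows to (len(w) - 1) * rate + 1 while the output stays at the
    input sampling density (no downsampling)."""
    k = len(w)
    span = (k - 1) * rate + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(10.0)             # linear ramp
w = np.array([1.0, -2.0, 1.0])  # second-difference filter

# Both outputs are all zeros (second difference of a ramp), but the
# rate-2 version covers a receptive field of 5 samples instead of 3,
# at the same output density.
y1 = atrous_conv1d(x, w, rate=1)  # 8 samples, receptive field 3
y2 = atrous_conv1d(x, w, rate=2)  # 6 samples, receptive field 5
```

In the full model, swapping a strided convolution for a stride-1 convolution with a matching dilation rate halves the output stride (e.g. 32 to 16) at the cost of computing activations on a denser grid.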
We examine the effect of the two keypoint scoring mechanisms described in Sec. 3.1: sampling the Hough scores at the keypoint positions as in [papandreou2017towards] vs. the proposed Expected-OKS scoring of Eq. 3. We also compare hard-NMS (the OKS-based scheme of [papandreou2017towards] with threshold 0.5) against the proposed soft-NMS of Eq. 4.
We show the results for the four alternative model configurations in Table 5. Both proposed components, Expected-OKS keypoint scoring and soft-NMS, bring significant improvements in AP over their alternatives from [papandreou2017towards] and work well together.
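The soft-NMS idea can be sketched as follows. This is a generic Gaussian-decay variant in the style of Bodla et al., not necessarily Eq. 4 of the paper, and the simplified OKS function (a single `kappa` instead of COCO's per-keypoint constants) is an assumption for illustration: instead of discarding a pose candidate whose OKS overlap with a higher-scoring pose exceeds a hard threshold, its score is merely decayed, which tends to retain correct detections in crowded scenes.

```python
import numpy as np

def oks(pose_a, pose_b, scale, kappa=0.1):
    """Toy object-keypoint-similarity: Gaussian in normalized keypoint
    distance, averaged over keypoints. Real COCO OKS uses per-keypoint
    falloff constants; a single kappa is a simplifying assumption."""
    d2 = np.sum((pose_a - pose_b) ** 2, axis=-1)   # per-keypoint squared dist
    return np.mean(np.exp(-d2 / (2 * (scale * kappa) ** 2)))

def soft_nms(poses, scores, scales, sigma=0.5):
    """Gaussian soft-NMS: each selected pose decays the scores of all
    lower-ranked poses by exp(-OKS^2 / sigma) instead of removing them."""
    scores = np.array(scores, dtype=float)
    order, remaining = [], list(range(len(poses)))
    while remaining:
        i = max(remaining, key=lambda j: scores[j])  # next best candidate
        order.append(i)
        remaining.remove(i)
        for j in remaining:
            scores[j] *= np.exp(-oks(poses[i], poses[j], scales[j]) ** 2 / sigma)
    return order, scores

# Two near-duplicate 2-keypoint poses and one distinct pose.
poses = np.array([[[0, 0], [10, 10]],
                  [[1, 1], [11, 11]],
                  [[50, 50], [60, 60]]], dtype=float)
order, final = soft_nms(poses, [0.9, 0.8, 0.7], scales=[14.0, 14.0, 14.0])
# order == [0, 2, 1]: the duplicate pose 1 is heavily down-weighted,
# while the distant pose 2 keeps its original score of 0.7.
```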
We examine the effect of mid- and long-range offset refinement on the quality of the keypoint and segmentation results. For this purpose, we build a version of our model with offset refinement disabled during both training and evaluation. Results on the COCO val split for the keypoints and segmentation tasks are shown in Tables 6 and 7, respectively. We see that offset refinement improves model keypoint AP by 3.3% and segmentation AP by 2.2%. In both cases, the largest improvement can be observed for large object instances, +5.4% for keypoints and +9.1% for segmentation, since large objects span a significant portion of the image for which accurate regression without refinement is challenging.
Table 6: Effect of offset refinement on COCO val keypoint results.

| Model | AP | AP50 | AP75 | APM | APL | AR | AR50 | AR75 | ARM | ARL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Without offset refinement | 0.632 | 0.856 | 0.689 | 0.603 | 0.678 | 0.679 | 0.883 | 0.736 | 0.639 | 0.735 |
| With offset refinement | 0.665 | 0.862 | 0.719 | 0.623 | 0.732 | 0.707 | 0.887 | 0.757 | 0.656 | 0.779 |

Table 7: Effect of offset refinement on COCO val segmentation results.

| Model | AP | AP50 | AP75 | APS | APM | APL | AR1 | AR10 | AR100 | ARS | ARM | ARL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Without offset refinement | 0.355 | 0.646 | 0.354 | 0.166 | 0.461 | 0.501 | 0.146 | 0.393 | 0.417 | 0.209 | 0.525 | 0.597 |
| With offset refinement | 0.382 | 0.661 | 0.397 | 0.164 | 0.476 | 0.592 | 0.162 | 0.416 | 0.439 | 0.204 | 0.532 | 0.681 |
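The refinement step itself can be sketched in numpy. This is a hedged illustration in the spirit of the paper's recurrent refinement, not the exact implementation: a coarse long-range offset field L is corrected by following it and adding the locally more accurate short-range field S at the landing point, L(x) <- L(x) + S(x + L(x)). The dense field shapes and the nearest-neighbour lookup (rather than bilinear sampling) are simplifying assumptions.

```python
import numpy as np

def refine_offsets(long_off, short_off, steps=2):
    """Recurrently refine a long-range offset field with a short-range one.
    long_off, short_off: (H, W, 2) offset fields in pixels (dy, dx)."""
    H, W, _ = long_off.shape
    ys, xs = np.mgrid[0:H, 0:W]
    base = np.stack([ys, xs], axis=-1).astype(float)  # pixel coordinates
    L = long_off.copy()
    for _ in range(steps):
        target = base + L  # where each pixel's current offset points
        # Nearest-neighbour lookup of the short-range field at the landing
        # point (the paper would use a differentiable sampling here).
        ty = np.clip(np.round(target[..., 0]).astype(int), 0, H - 1)
        tx = np.clip(np.round(target[..., 1]).astype(int), 0, W - 1)
        L = L + short_off[ty, tx]  # correct by the local field at the target
    return L

# Demo: short-range field points exactly at a keypoint at (5, 5); even a
# long-range field initialized to zero converges to it after refinement.
ys, xs = np.mgrid[0:8, 0:8]
base = np.stack([ys, xs], axis=-1).astype(float)
short = np.array([5.0, 5.0]) - base
refined = refine_offsets(np.zeros((8, 8, 2)), short, steps=2)
# base + refined now equals (5, 5) at every pixel.
```

This composition explains why the gains in Tables 6 and 7 concentrate on large instances: a single long-range hop across a big person is inaccurate, but each refinement step replaces that error with the error of a much shorter, easier regression.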
We examine the effect of imputing the keypoints of small COCO person instances and using them for model training.
When evaluating the model on the COCO keypoints task, keypoint imputation slightly decreases performance by 0.8%, as seen in Table 8. The reason is that the COCO keypoints evaluation protocol does not include the small person instances in the evaluation.
However, when evaluating the model on the COCO segmentation task, keypoint imputation significantly improves performance by 4.4%, as shown in Table 9. As expected, most of the performance improvement comes for small objects, whose AP more than doubles, increasing from 7.6% to 16.4%.