UPSNet: A Unified Panoptic Segmentation Network

In this paper, we propose a unified panoptic segmentation network (UPSNet) for tackling the newly proposed panoptic segmentation task. On top of a single backbone residual network, we first design a deformable convolution based semantic segmentation head and a Mask R-CNN style instance segmentation head which solve these two subtasks simultaneously. More importantly, we introduce a parameter-free panoptic head which solves the panoptic segmentation via pixel-wise classification. It first leverages the logits from the previous two heads and then innovatively expands the representation for enabling prediction of an extra unknown class which helps better resolve the conflicts between semantic and instance segmentation. Additionally, it handles the challenge caused by the varying number of instances and permits back propagation to the bottom modules in an end-to-end manner. Extensive experimental results on Cityscapes, COCO and our internal dataset demonstrate that our UPSNet achieves state-of-the-art performance with much faster inference.


page 8

page 12

page 13

page 14


Panoptic Segmentation with a Joint Semantic and Instance Segmentation Network

We present an end-to-end method for the task of panoptic segmentation. T...

Nuclear Instance Segmentation using a Proposal-Free Spatially Aware Deep Learning Framework

Nuclear segmentation in histology images is a challenging task due to si...

Panoptic SegFormer

We present Panoptic SegFormer, a general framework for end-to-end panopt...

Hierarchical Mask Calibration for Unified Domain Adaptive Panoptic Segmentation

Domain adaptive panoptic segmentation aims to mitigate data annotation c...

SUNet: Scale-aware Unified Network for Panoptic Segmentation

Panoptic segmentation combines the advantages of semantic and instance s...

SOGNet: Scene Overlap Graph Network for Panoptic Segmentation

The panoptic segmentation task requires a unified result from semantic a...

Amodal Panoptic Segmentation

Humans have the remarkable ability to perceive objects as a whole, even ...

Code Repositories


UPSNet: A Unified Panoptic Segmentation Network

view repo



view repo

1 Introduction

Relying on the advances in deep learning, computer vision systems have been substantially improved, especially in tasks such as semantic segmentation [39] and instance segmentation [15]. The former focuses on segmenting amorphous image regions which share similar texture or material, such as grass, sky and road, whereas the latter focuses on segmenting countable objects, such as people, bicycles and cars. Since both tasks aim at understanding the visual scene at the pixel level, a shared model or representation could arguably be beneficial. However, the dichotomy of these two tasks leads to very different modeling strategies despite the inherent connections between them. For example, fully convolutional neural networks [26] are often adopted for semantic segmentation, while proposal based detectors [30] are frequently exploited for instance segmentation.

As an effort to leverage the possible complementarity of these two tasks and push segmentation systems further towards real-world application, Kirillov et al. [17] unified them and proposed the so-called panoptic segmentation task. It is interesting to note that tasks with the same spirit were studied under various names before deep learning became popular. Notable ones include image parsing [32], scene parsing [32] and holistic scene understanding [35]. In panoptic segmentation, countable objects (those that map well to instance segmentation) are called things, whereas amorphous and uncountable regions (those that map better to semantic segmentation) are called stuff. For any pixel, if it belongs to stuff, the goal of a panoptic segmentation system is simply to predict its class label within the stuff classes. Otherwise the system needs to decide both which instance and which thing class it belongs to. The challenge of this task lies in the fact that the system has to give a unique answer for each pixel.

In this paper, we propose a unified panoptic segmentation network (UPSNet) to approach the panoptic segmentation problem as stated above. Unlike previous methods [17, 19] which have two separate branches designed for semantic and instance segmentation individually, our model exploits a single network as the backbone to provide shared representations. We then design two heads on top of the backbone for solving these tasks simultaneously. Our semantic head builds upon deformable convolution [9] and leverages multi-scale information from feature pyramid networks (FPN) [22]. Our instance head follows the Mask R-CNN [15] design and outputs a mask segmentation, a bounding box and its associated class. As shown in experiments, these two lightweight heads along with the single backbone provide good semantic and instance segmentations, comparable to separate models. More importantly, we design a panoptic head which predicts the final panoptic segmentation via pixel-wise classification, in which the number of classes per image can vary. It exploits the logits from the above two heads and adds a new channel of logits corresponding to an extra unknown class. By doing so, it provides a better way of resolving the conflicts between semantic and instance segmentation. Moreover, our parameter-free panoptic head is very lightweight and can be used with various backbone networks. It facilitates end-to-end training, which is not the case for previous methods [17, 19]. To verify the effectiveness of our UPSNet, we perform extensive experiments on two public datasets: Cityscapes [6] and COCO [23]. Furthermore, we test it on our internal dataset which is similar in spirit to Cityscapes (i.e., images are captured from ego-centric driving scenarios) but significantly larger in size. Results on these three datasets show that our UPSNet achieves state-of-the-art performance and enjoys much faster inference compared to recent competitors.

2 Related Work

Semantic Segmentation: Semantic segmentation is one of the fundamental computer vision tasks and has a long history. Earlier work [11, 27] focused on introducing datasets for this task and showed the importance of global context by demonstrating the gains in Bayesian frameworks, whether structured or free-form. Recent semantic segmentation methods that exploit deep convolutional feature extraction mainly approach this problem from either a multi-scale feature aggregation [39, 26, 5, 14] or an end-to-end structured prediction [2, 40, 1, 25, 4] perspective. As context is crucial for semantic segmentation, one notable improvement to most convolutional models emerged from dilated convolutions [36, 37], which allow for a larger receptive field without the need for more free parameters. The pyramid scene parsing network (PSPNet) [39], which uses dilated convolutions in its backbone, and its faster variant [38] for real-time applications are widely utilized in practice. Based on FPN and PSPNet, a multi-task framework was proposed in [34] and demonstrated to be versatile in segmenting a wide range of visual concepts.

Instance Segmentation: Instance segmentation deals not only with identifying the semantic class a pixel is associated with, but also the specific object instance it belongs to. Beginning with the introduction of the region-based CNN (R-CNN) [12], many early deep learning approaches cast instance segmentation as a two stage problem where a number of segment proposals are made, followed by a voting among those proposals to choose the best [33, 7, 8, 13, 14]. The common denominator of these methods is that the segmentation comes before the classification, and they are therefore slower. Li et al. [21] proposed a fully convolutional instance-aware segmentation method, where instance mask proposals [7] are married with fully convolutional networks [26]. Most recently, Mask R-CNN [15] introduced a joint approach to both mask prediction and recognition where one of two parallel heads is dedicated to each task.

Panoptic Segmentation: Instance segmentation methods that focus on detection bounding box proposals, as mentioned above, ignore the classes that are not well suited to detection, e.g., sky, street. On the other hand, semantic segmentation does not provide instance boundaries for classes like pedestrian and bicycle in a given image. The panoptic segmentation task, first coined by Kirillov et al. [17], unifies these tasks and defines an ideal output: instance segmentations for thing classes and semantic segmentation for stuff classes. The baseline panoptic segmentation method introduced in [17] processes the input independently for semantic segmentation via a PSPNet and for instance segmentation via a Mask R-CNN [15], followed by simple heuristic decisions to produce a single void, stuff, or thing instance label per pixel. Recently, Li et al. [19] introduced a weakly- and semi-supervised panoptic segmentation method where they relieve some of the ground truth constraints by supervising thing classes using bounding boxes and stuff classes using image level tags. De Geus et al. [10] use a single feature extraction backbone for the pyramid semantic segmentation head [39] and the instance segmentation head [15], followed by heuristics for merging pixel level annotations, effectively introducing an end-to-end version of [17] due to the shared backbone for the two task networks. Li et al. [20] propose the attention-guided unified network (AUNet), which leverages proposal and mask level attention to better segment the background. Similar post-processing heuristics as in [17] are used to generate the final panoptic segmentation. Li et al. [18] propose the things and stuff consistency network (TASCNet), which constructs a binary mask predicting things vs. stuff for each pixel. An extra loss is added to enforce consistency between the things and stuff predictions.

In contrast to most of these methods, we use a single backbone network to provide both semantic and instance segmentation results. More importantly, we develop a simple yet effective panoptic head which helps accurately predict the instance and class label.

Figure 1: Overall architecture of our UPSNet.

3 Unified Panoptic Segmentation Network

In this section, we first introduce our model and then explain the implementation details. Following the convention of [17], we divide the semantic class labels into stuff and thing. Specifically, thing refers to the set of labels of instances (e.g., pedestrian, bicycle), whereas stuff refers to the rest of the labels that represent semantics without clear instance boundaries (e.g., street, sky). We denote the number of stuff and thing classes as N_stuff and N_thing respectively.

3.1 UPSNet Architecture

UPSNet consists of a shared convolutional feature extraction backbone and multiple heads on top of it. Each head is a sub-network which leverages the features from the backbone and serves a specific design purpose that is explained in further detail below. The overall model architecture is shown in Fig. 1.

Backbone: We adopt the original Mask R-CNN [15] backbone as our convolutional feature extraction network. This backbone exploits a deep residual network (ResNet) [16] with a feature pyramid network (FPN) [22].

Instance Segmentation Head:

The instance segmentation head follows the Mask R-CNN design with a bounding box regression output, a classification output, and a segmentation mask output. The goal of the instance head is to produce instance aware representations that could identify thing classes better. Ultimately these representations are passed to the panoptic head to contribute to the logits for each instance.

Semantic Segmentation Head:

The goal of the semantic segmentation head is to segment all semantic classes without discriminating instances. It could help improve instance segmentation where it achieves good results on thing classes. Our semantic head consists of a deformable convolution [9] based sub-network which takes the multi-scale features from FPN as input. In particular, we use the P2, P3, P4 and P5 feature maps of FPN, which contain 256 channels and are 1/4, 1/8, 1/16 and 1/32 of the original scale respectively. These feature maps first go through the same deformable convolution network independently and are subsequently upsampled to the 1/4 scale. We then concatenate them and apply 1 × 1 convolutions with softmax to predict the semantic class. The architecture is shown in Fig. 2. As will be experimentally verified later, the deformable convolution along with the multi-scale feature concatenation provides semantic segmentation results as good as a separate model, e.g., the PSPNet adopted in [17]. The semantic segmentation head is associated with the regular pixel-wise cross entropy loss. To put more emphasis on foreground objects such as pedestrians, we also incorporate a RoI loss. During training, we use the ground truth bounding box of the instance to crop the logits map after the 1 × 1 convolution and then resize it to 28 × 28 following Mask R-CNN. The RoI loss is then the cross entropy computed over the 28 × 28 patch, which amounts to penalizing pixels within instances more heavily for incorrect classification. As demonstrated in the ablation study later, we empirically found that this RoI loss helps improve the performance of panoptic segmentation without harming the semantic segmentation.
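The multi-scale merge performed by the semantic head (each FPN level processed, upsampled to the 1/4 scale, then concatenated along channels) can be sketched in NumPy. This is a minimal illustration under toy shapes, not the released implementation: nearest-neighbor upsampling stands in for the learned deformable sub-network and interpolation.

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbor upsampling of a (C, H, W) feature map."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def merge_fpn_features(feats, scales, target_scale=4):
    """Bring every FPN level to the target scale (1/4 of the input image)
    and concatenate along the channel dimension, as the semantic head does
    before its final 1x1 convolution."""
    merged = [upsample_nearest(f, s // target_scale) for f, s in zip(feats, scales)]
    return np.concatenate(merged, axis=0)

# Toy FPN maps with 8 channels at 1/4, 1/8, 1/16, 1/32 scale of a 64x64 image.
scales = [4, 8, 16, 32]
feats = [np.random.rand(8, 64 // s, 64 // s) for s in scales]
merged = merge_fpn_features(feats, scales)
print(merged.shape)  # (32, 16, 16)
```

The concatenated tensor then feeds a 1 × 1 convolution and softmax to produce per-pixel class logits.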

Panoptic Segmentation Head:

Given the semantic and instance segmentation results from the above described two heads, we combine their outputs (specifically per pixel logits) in the panoptic segmentation head.

The logits from the semantic head are denoted as X, of which the channel size, height and width are N_stuff + N_thing, H and W respectively. X can then be divided along the channel dimension into two tensors, X_stuff and X_thing, which are the logits corresponding to stuff and thing classes. For any image, we determine the number of instances N_inst according to the number of ground truth instances during training. During inference, we rely on a mask pruning process to determine N_inst, which is explained in Section 3.2. N_stuff is fixed since the number of stuff classes is constant across different images, whereas N_inst is not constant since the number of instances per image can differ. The goal of our panoptic segmentation head is to first produce a logit tensor Z of size (N_stuff + N_inst) × H × W and then uniquely determine both the class and the instance ID for each pixel.

We first assign X_stuff to the first N_stuff channels of Z to provide the logits for classifying stuff. For any instance i, we have its mask logits Y_i from the instance segmentation head, which are of size 28 × 28. We also have its box B_i and class ID C_i. During training, B_i and C_i are the ground truth box and class ID, whereas during inference they are predicted by Mask R-CNN. Therefore, we can obtain another representation of the i-th instance from the semantic head, X_mask_i, by taking only the values inside box B_i from the channel of X_thing corresponding to C_i. X_mask_i is of size H × W and its values outside box B_i are zero. We then interpolate Y_i back to the scale of B_i via bilinear interpolation and pad zeros outside the box to achieve a shape compatible with X_mask_i, denoted as Y_mask_i. The final representation of the i-th instance is Z_{N_stuff + i} = X_mask_i + Y_mask_i. Once we fill in Z with the representations of all instances, we perform a softmax along the channel dimension to predict the pixel-wise class. In particular, if the maximum value falls into the first N_stuff channels, the pixel belongs to one of the stuff classes. Otherwise, the index of the maximum value tells us the instance ID. The architecture is shown in Fig. 3. During training, we generate the ground truth instance IDs following the order of the ground truth boxes we used to construct the panoptic logits. The panoptic segmentation head is then associated with the standard pixel-wise cross entropy loss.
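The panoptic head's logit construction reduces to a few tensor operations once each instance's semantic-head representation X_mask_i and interpolated, zero-padded mask logits Y_mask_i are available. Below is a minimal NumPy sketch under that assumption; it is an illustration, not the authors' code.

```python
import numpy as np

def build_panoptic_logits(x_stuff, x_mask_list, y_mask_list):
    """Stack stuff logits with one channel per instance, where each
    instance channel is X_mask_i + Y_mask_i.
    Shapes: x_stuff (N_stuff, H, W); each X_mask_i / Y_mask_i (H, W)."""
    inst = [xm + ym for xm, ym in zip(x_mask_list, y_mask_list)]
    return np.concatenate([x_stuff, np.stack(inst)], axis=0) if inst else x_stuff

def panoptic_decode(z, n_stuff):
    """Per-pixel argmax over Z: values below n_stuff are stuff class IDs,
    values at or above n_stuff index instances (ID = value - n_stuff)."""
    amax = z.argmax(axis=0)
    return amax, amax < n_stuff

# Toy example: 3 stuff classes, 2 instances on a 4x4 image.
rng = np.random.default_rng(0)
x_stuff = rng.standard_normal((3, 4, 4))
x_masks = [rng.standard_normal((4, 4)) for _ in range(2)]
y_masks = [rng.standard_normal((4, 4)) for _ in range(2)]
z = build_panoptic_logits(x_stuff, x_masks, y_masks)
print(z.shape)  # (5, 4, 4)
```

At inference the same argmax decoding applies, except that N_inst comes from the mask pruning process rather than the ground truth.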

During inference, once we predict the instance IDs following the above procedure, we still need to determine the class ID of each instance. One can either use the class ID predicted by Mask R-CNN, C_inst, or the one predicted by the semantic head, C_sem. As shown later in the ablation study, we resort to a better heuristic rule. Specifically, for any instance, we know which pixels correspond to it, i.e., those for which the argmax of Z along the channel dimension equals its instance ID. Among these pixels, we first check whether C_inst and C_sem are consistent. If so, we assign the class ID as C_inst. Otherwise, we compute the mode of their C_sem values, denoted as Ĉ_sem. If the frequency of the mode is larger than 0.5 and Ĉ_sem belongs to the stuff classes, then the predicted class ID is Ĉ_sem. Otherwise, we assign the class ID as C_inst. In short, when facing an inconsistency, we trust the majority decision made by the semantic head only if it prefers a stuff class. The justification of such a conflict resolution heuristic is that the semantic head typically achieves very good segmentation results over stuff classes.
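The class-assignment heuristic can be written as a short function. The following NumPy sketch assumes class IDs below n_stuff are stuff classes and uses 0.5 as the majority-frequency threshold (an assumed value); it is illustrative, not the released implementation.

```python
import numpy as np

def assign_instance_class(c_inst, c_sem_pixels, n_stuff, thresh=0.5):
    """Resolve the class of one instance. c_inst: class predicted by the
    instance head; c_sem_pixels: semantic-head argmax at the pixels
    assigned to this instance; classes below n_stuff are stuff."""
    vals, counts = np.unique(c_sem_pixels, return_counts=True)
    mode = int(vals[counts.argmax()])
    freq = counts.max() / len(c_sem_pixels)
    if mode == c_inst:                    # the two heads agree
        return c_inst
    if freq > thresh and mode < n_stuff:  # strong majority for a stuff class
        return mode
    return c_inst                         # otherwise trust Mask R-CNN

# An instance the semantic head mostly labels as stuff class 1:
pixels = np.array([1, 1, 1, 7, 1])
print(assign_instance_class(7, pixels, n_stuff=3))  # 1
```

In the example, the semantic head's stuff majority (80% of pixels) overrides the instance head's thing prediction, mirroring the rule described above.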

Figure 2: Architecture of our semantic segmentation head.

Unknown Prediction:

In this section, we explain a novel mechanism that allows UPSNet to classify a pixel as the unknown class instead of making a wrong prediction. To motivate our design, consider a case where a pedestrian is instead predicted as a bicycle. Since the prediction missed the pedestrian, the false negative (FN) count of the pedestrian class increases by 1. At the same time, predicting it as a bicycle increases the false positive (FP) count of the bicycle class by 1. Recall that the panoptic quality (PQ) [17] metric for panoptic segmentation is defined as

PQ = (Σ_{(p, g) ∈ TP} IoU(p, g) / |TP|) × (|TP| / (|TP| + ½|FP| + ½|FN|)),

which consists of two parts: the first term is the segmentation quality (SQ) and the second is the recognition quality (RQ). It is clear that increasing either FN or FP degrades this measurement. This phenomenon extends to wrong predictions of the stuff classes as well. Therefore, if a wrong prediction is inevitable, predicting such a pixel as unknown is preferred, since it increases the FN of one class by 1 without affecting the FP of the other class.
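The PQ computation above can be checked with a small generic function (a plain implementation of the standard metric definition, not the official evaluation code):

```python
def panoptic_quality(tp_ious, n_fp, n_fn):
    """PQ = SQ * RQ, where SQ averages IoU over matched (TP) segment
    pairs and RQ = |TP| / (|TP| + 0.5|FP| + 0.5|FN|)."""
    n_tp = len(tp_ious)
    if n_tp == 0:
        return 0.0
    sq = sum(tp_ious) / n_tp
    rq = n_tp / (n_tp + 0.5 * n_fp + 0.5 * n_fn)
    return sq * rq

# Two matched segments with IoU 0.8 and 0.6, one FP and one FN:
print(round(panoptic_quality([0.8, 0.6], 1, 1), 4))  # 0.4667
```

Moving the wrong pixel to unknown (ignored in evaluation) leaves the FP count untouched, which is exactly why the heuristic above prefers the unknown label.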

To alleviate this issue, we compute the logits of the extra unknown class as X_unknown = max(X_thing) − max(X_mask), where X_mask is the concatenation of X_mask_i over all instances along the channel dimension and is of shape N_inst × H × W; both maxima are taken along the channel dimension. The rationale behind this is that, for any pixel, if the maximum of X_thing is larger than the maximum of X_mask, then it is highly likely that we are missing some instance (FN). The construction of the logits is shown in Fig. 3. To generate ground truth for the unknown class, we randomly sample a subset of the ground truth masks and set them as unknown during training. In evaluating the metric, any pixel belonging to unknown is ignored, i.e., set to void, so that it does not contribute to the results.
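The unknown-class logit is a per-pixel difference of two channel-wise maxima; a minimal NumPy sketch (an illustration with toy shapes, not the released code):

```python
import numpy as np

def unknown_logits(x_thing, x_mask):
    """X_unknown = per-pixel max over the thing channels of X_thing minus
    the per-pixel max over the instance channels of X_mask. A pixel where
    some thing logit is high but no kept instance covers it gets a large
    unknown logit (a likely missed instance)."""
    return x_thing.max(axis=0) - x_mask.max(axis=0)

# One thing channel fires strongly at a pixel no instance mask covers:
x_thing = np.array([[[5.0, 0.0]]])   # shape (N_thing=1, H=1, W=2)
x_mask = np.array([[[0.5, 0.2]]])    # shape (N_inst=1, H=1, W=2)
u = unknown_logits(x_thing, x_mask)
print(float(u[0, 0]))  # 4.5
```

The first pixel gets a large unknown logit (4.5) because the thing evidence is unexplained by any instance, while the second stays negative.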

Figure 3: Architecture of our panoptic segmentation head.

3.2 Implementation Details

In this section, we explain the implementation details of UPSNet. We follow most of the settings and hyper-parameters of Mask R-CNN, which are introduced in the supplementary material. Hereafter, we only explain those that are different.


We implement our model in PyTorch [28] and train it with multiple GPUs using the distributed training framework Horovod [31]. Images are preprocessed following [15]. Each mini-batch has 1 image per GPU. As mentioned, we use the ground truth box, mask and class label to construct the logits of the panoptic head during training. Our region proposal network (RPN) is trained end-to-end with the backbone, whereas it was trained separately in [15]. Due to the high resolution of images, e.g., 1024 × 2048 in Cityscapes, logits from the semantic head and panoptic head are downsampled to 1/4 of the original resolution. Although we do not fine-tune batch normalization (BN) layers within the backbone for simplicity, we still achieve results comparable with state-of-the-art semantic segmentation networks like PSPNet. Based on common practice in semantic [4, 39] and instance segmentation [29, 24], we expect the performance to be further improved with BN layers fine-tuned. Our UPSNet contains 8 loss functions in total: semantic segmentation head (whole image and RoI based pixel-wise classification losses), panoptic segmentation head (whole image based pixel-wise classification loss), RPN (box classification and box regression) and instance segmentation head (box classification, box regression and mask segmentation). Different weighting schemes for these multi-task loss functions can lead to very different training results. As shown in the ablation study, we found that the loss balance strategy, i.e., assuring that the scales of all losses are roughly on the same order of magnitude, works well in practice.


During inference, once we have obtained output boxes, masks and predicted class labels from the instance segmentation head, we apply a mask pruning process to determine which masks will be used for constructing the panoptic logits. In particular, we first perform class-agnostic non-maximum suppression with a box IoU threshold of 0.5 to filter out overlapping boxes. Then we sort the predicted class probabilities of the remaining boxes and keep those whose probability is larger than 0.6. For each class, we create a canvas of the same size as the image. We then interpolate the masks of that class to the image scale and paste them onto the corresponding canvas one by one in decreasing order of probability. Each time we paste a mask, if its intersection with the already pasted masks, divided by the area of the current mask, is larger than a threshold, we discard this mask; otherwise we copy the non-intersecting part onto the canvas. The threshold of this intersection over itself is set to 0.3 in our experiments. Logits from the semantic segmentation head and panoptic segmentation head are at the original scale of the input image during inference.

4 Experiments

In this section, we present the experimental results on COCO [23], Cityscapes [6] and our internal dataset.

COCO We follow the setup of the COCO 2018 panoptic segmentation task, which consists of 80 and 53 classes for thing and stuff respectively. We use the train2017 and val2017 subsets, which contain approximately 118k training images and 5k validation images.

Cityscapes Cityscapes has 5000 images of ego-centric driving scenarios in urban settings, which are split into 2975, 500 and 1525 for training, validation and testing respectively. It consists of 8 and 11 classes for thing and stuff.

Our Dataset We also use an internal dataset which is similar to Cityscapes and consists of training, validation and test splits of ego-centric driving scenarios. It includes thing classes (e.g., car, bus) and stuff classes (e.g., building, road).

Models PQ SQ RQ PQ^Th PQ^St mIoU AP
JSIS-Net [10] 26.9 72.4 35.7 29.3 23.3 - -
RN50-MR-CNN 38.6 76.4 47.5 46.2 27.1 - -
MR-CNN-PSP 41.8 78.4 51.3 47.8 32.8 53.9 34.2
Ours 42.5 78.0 52.4 48.5 33.4 54.3 34.3
Multi-scale PQ SQ RQ PQ^Th PQ^St mIoU AP
MR-CNN-PSP-M 42.2 78.5 51.7 47.8 33.8 55.3 34.2
Ours-M 43.2 79.2 52.9 49.1 34.1 55.8 34.3
Table 1: Panoptic segmentation results on COCO. Superscripts Th and St stand for thing and stuff. ‘-‘ means inapplicable.
Models backbone PQ SQ RQ PQ^Th SQ^Th RQ^Th PQ^St SQ^St RQ^St
Megvii (Face++) ensemble model 53.2 83.2 62.9 62.2 85.5 72.5 39.5 79.7 48.5
Caribbean ensemble model 46.8 80.5 57.1 54.3 81.8 65.9 35.5 78.5 43.8
PKU_360 ResNeXt-152-FPN 46.3 79.6 56.1 58.6 83.7 69.6 27.6 73.6 35.6
JSIS-Net [10] ResNet-50 27.2 71.9 35.9 29.6 71.6 39.4 23.4 72.3 30.6
AUNet [20] ResNeXt-152-FPN 46.5 81.0 56.1 55.9 83.7 66.3 32.5 77.0 40.7
Ours ResNet-101-FPN 46.6 80.5 56.9 53.2 81.5 64.6 36.7 78.9 45.3
Table 2: Panoptic segmentation results on MS-COCO 2018 test-dev. The top rows contain results of top models taken from the official leaderboard.

Experimental Setup For all datasets, we report results on the validation set. To evaluate the performance, we adopt panoptic quality (PQ), recognition quality (RQ) and segmentation quality (SQ) [17] as the metrics. We also report the average precision (AP) of mask prediction, the mean IoU of semantic segmentation on both stuff and thing, and the inference run time for comparison. Finally, we show results of an ablation study on various design components of our model. Full results with all model variants are shown in the supplementary material.

We set the learning rate and weight decay as and for all datasets. For COCO, we train for K iterations and decay the learning rate by a factor of at K and K iterations. For Cityscapes, we train for K iterations and apply the same learning rate decay at K iterations. For our dataset, we train for K iterations and apply the same learning rate decay at K and K iterations. Loss weights of semantic head are , and on COCO, Cityscapes and ours respectively. RoI loss weights are one fifth of those of semantic head. Loss weights of panoptic head are , and on COCO, Cityscapes and ours respectively. All other loss weights are set to .

We mainly compare with the combined method used in [17]. For a fair comparison, we adopt the model which uses a Mask R-CNN with a ResNet-50-FPN and a PSPNet with a ResNet-50 as the backbone and apply the combine heuristics to compute the panoptic segmentation. We denote our implementation of the combined method as ‘MR-CNN-PSP’ and its multi-scale testing version as ‘MR-CNN-PSP-M’. Unless specified otherwise, the combined method hereafter refers to ‘MR-CNN-PSP’. For PSPNet, we use ‘poly’ learning rate schedule as in [3] and train K, K and K on COCO, Cityscapes and our dataset with mini-batch size . We test all available models with the multi-scale testing. Specifically, we average the multi-scale logits of PSPNet for the combined method and the ones of semantic segmentation head for our UPSNet. For simplicity, we just use single scale testing on Mask R-CNN of the combined method and our instance segmentation head. During evaluation, due to the sensitivity of PQ with respect to RQ, we predict all stuff segments of which the areas are smaller than a threshold as unknown. The thresholds on COCO, Cityscapes and our dataset are , and respectively. To be fair, we apply this area thresholding to all methods.

Models PQ SQ RQ PQ^Th PQ^St mIoU AP
Li et al. [19] 53.8 - - 42.5 62.1 71.6 28.6
MR-CNN-PSP 58.0 79.2 71.8 52.3 62.2 75.2 32.8
TASCNet [18] 55.9 - - 50.5 59.8 - -
Ours 59.3 79.7 73.0 54.6 62.7 75.2 33.3
TASCNet-COCO [18] 59.2 - - 56.0 61.5 - -
Ours-COCO 60.5 80.9 73.5 57.0 63.0 77.8 37.8
Multi-scale PQ SQ RQ PQ^Th PQ^St mIoU AP
Kirillov et al. [17] 61.2 80.9 74.4 54.0 66.4 80.9 36.4
MR-CNN-PSP-M 59.2 79.7 73.0 52.3 64.2 76.9 32.8
Ours-M 60.1 80.3 73.5 55.0 63.7 76.8 33.3
Ours-101-M-COCO 61.8 81.3 74.8 57.6 64.8 79.2 39.0
Table 3: Panoptic segmentation results on Cityscapes. ‘-COCO’ means the model is pretrained on COCO. ‘-101’ means the model uses ResNet-101 as the backbone. Unless specified, all models use ResNet-50 as the backbone and are pretrained on ImageNet.

4.1 COCO

We compare with several recently published methods including JSIS-Net [10], RN50-MR-CNN and the combined method [17] on COCO. Since the authors of [17] do not report results on COCO, we use our MR-CNN-PSP model as the alternative in these experiments. JSIS-Net uses a ResNet-50, whereas RN50-MR-CNN uses two separate ResNet-50-FPNs as the backbone. Our UPSNet adopts a single ResNet-50-FPN as the backbone. In order to better leverage context information, we apply global average pooling over the feature map of the last layer in the 5-th stage of ResNet (‘res5’), reduce its dimension to 256 and add it back to the FPN before producing the P5 feature map. Table 1 shows the results for all metrics. The mIoU metric is computed over the 133 classes of stuff and thing in the COCO 2018 panoptic segmentation task, which differ from the previous 171 classes of COCO-Stuff. We are among the first to evaluate mIoU on this 133-class subset. From the table, we can see that our UPSNet achieves better performance on all metrics except SQ. It is typically the case that an increase in RQ leads to a slight decrease in SQ, since we include more TP segments which could have lower IoU. Note that even with multi-scale testing, MR-CNN-PSP is still worse than ours on PQ. Moreover, from the mIoU column, we can see that the performance of our semantic head is even better than a separate PSPNet, which verifies its effectiveness. With multi-scale testing, both MR-CNN-PSP and UPSNet improve, and ours remains better. We also add comparisons on the test-dev of MS-COCO 2018 in Table 2. Although we just use ResNet-101 as the backbone, we achieve slightly better results compared to the recent AUNet [20], which uses ResNeXt-152. We also list the top three results on the leaderboard, which use ensembles and other tricks. It is clear from the table that we are on par with the second best model without using any such tricks.
In terms of model size, our UPSNet is significantly lighter than both RN50-MR-CNN and MR-CNN-PSP. We show visual examples of panoptic segmentation on this dataset in the first two rows of Fig. 4. From the 1st row of the figure, we can see that the combined method completely ignores the cake and other objects on the table, whereas ours successfully segments them. This is due to the inherent limitation of the combine heuristic, which first pastes the high confidence segment, i.e., the table, and then ignores all highly overlapped objects thereafter.

Models PQ SQ RQ PQ^Th PQ^St mIoU AP
MR-CNN-PSP 45.5 77.1 57.6 40.1 48.7 70.5 29.6
Ours 47.1 77.1 59.4 43.8 49.0 70.8 30.4
Table 4: Panoptic segmentation results on our dataset.

4.2 Cityscapes

We compare our model on Cityscapes with Li et al. [19], the combined method [17] and TASCNet [18]. Note that the method in [19] uses a ResNet-101 as the backbone, whereas all other reported methods use ResNet-50 within their backbones. We use deformable convolution layers for the semantic head. The results are reported in Table 3. It is clear from the table that both our UPSNet and MR-CNN-PSP significantly outperform the method in [19], especially on PQ^Th. This may be caused by the fact that their CRF based instance subnetwork performs worse than Mask R-CNN on instance segmentation. Under the same single scale testing, our model achieves better performance than the combined method. Although multi-scale testing significantly improves both the combined method and UPSNet, ours is still slightly better. Results reported in [17] differ from those of our MR-CNN-PSP-M since: 1) they use ResNet-101 as the backbone for PSPNet; 2) they pre-train Mask R-CNN on COCO and PSPNet on extra coarsely labeled data. We also have a model variant, denoted ‘Ours-101-M-COCO’, which adopts ResNet-101 as the backbone and is pre-trained on COCO. As can be seen, it outperforms the reported metrics of the combined method. We show visual examples of panoptic segmentation on this dataset in the 3rd and 4th rows of Fig. 4. From the 3rd row of the figure, we can see that the combined method tends to produce large black areas, i.e., unknown, whenever the instance and semantic segmentations conflict with each other. In contrast, our UPSNet resolves the conflicts better. Moreover, it is interesting to note that some unknown predictions of our model have vertical or horizontal boundaries. This is caused by the fact that the instance head predicts nothing whereas the semantic head predicts something for these out-of-box areas. The logits of the unknown class then stand out by design to avoid contributing to both FP and FN, as described in Section 3.1.

4.3 Our Dataset

Last but not least, we compare our model on our own dataset with the combined method [17]. All reported methods use ResNet-50 within their backbones. We use deformable convolution layers for the semantic head. The results are reported in Table 4. We observe that, similar to COCO, our model performs significantly better than the combined method on all metrics except SQ. We show visual examples of panoptic segmentation on this dataset in the last two rows of Fig. 4, from which we make similar observations as on COCO and Cityscapes.

Model          COCO             Cityscapes          Our Dataset
Img. Size
MR-CNN-PSP     188 (186 + 2)    1105 (583 + 522)    1016 (598 + 418)
Ours           171              236                 264
Speedup        10%              368%                285%
Table 5: Run time (ms) comparison. Note the bracket in the MR-CNN-PSP row contains the breakdown into network inference (left) and the combine heuristic (right); the total is their sum. 'Speedup' is the relative speedup of UPSNet in percent.
(a) Image
(b) Ground truth
(c) Combined method [17]
(d) Ours
Figure 4: Visual examples of panoptic segmentation. 1st and 2nd rows are from COCO. 3rd and 4th rows are from Cityscapes. 5th and 6th rows are from our internal dataset.

4.4 Run Time Comparison

We compare the run time of inference on all three datasets in Table 5. We use a single NVIDIA GeForce GTX 1080 Ti GPU and an Intel Xeon E5-2687W CPU (3.00GHz). All entries are averaged over multiple runs on the same image with single-scale testing. For COCO, the PSPNet within the combined method uses the original scale. We also list the image size for each dataset. It is clear from the table that the larger the image, the larger our run-time advantage: on Cityscapes, for example, the combined method takes 1105 ms versus our 236 ms, i.e., roughly 4.7x as long.
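The speedup row of Table 5 follows directly from the per-dataset timings; a quick sanity check (timings in ms; the combined method's total is network inference plus the combine heuristic):

```python
def speedup_percent(t_baseline_ms, t_ours_ms):
    """Relative speedup in percent: how much longer the baseline takes."""
    return round((t_baseline_ms - t_ours_ms) / t_ours_ms * 100)

# Timings taken from Table 5.
timings = {
    "COCO":        (186 + 2,   171),
    "Cityscapes":  (583 + 522, 236),
    "Our Dataset": (598 + 418, 264),
}
for name, (t_combined, t_ours) in timings.items():
    print(name, speedup_percent(t_combined, t_ours))
```

Running this reproduces the 10% / 368% / 285% speedups reported in the table.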

4.5 Ablation Study

We perform extensive ablation studies on the COCO dataset to verify our design choices, as listed in Table 6. Empty cells in the table indicate that the corresponding component is not used. All evaluation metrics are computed over the output of the panoptic head on the validation set.
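All metrics follow the standard panoptic quality definition of [17]: PQ decomposes into segmentation quality (SQ) and recognition quality (RQ), computed over predicted and ground-truth segments matched at IoU > 0.5. A minimal reference implementation:

```python
def pq_sq_rq(matched_ious, num_fp, num_fn):
    """Panoptic quality and its decomposition (Kirillov et al.).

    matched_ious: IoU of each matched (true-positive) segment pair, each > 0.5.
    num_fp / num_fn: counts of unmatched predicted / ground-truth segments.
    """
    tp = len(matched_ious)
    if tp == 0:
        return 0.0, 0.0, 0.0
    sq = sum(matched_ious) / tp                   # average IoU of matches
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)  # an F1-style recognition score
    return sq * rq, sq, rq

# Three matches with one spurious and one missed segment:
pq, sq, rq = pq_sq_rq([0.9, 0.8, 0.7], num_fp=1, num_fn=1)  # pq = 0.8 * 0.75 = 0.6
```

In the tables, PQ^Th and PQ^St apply the same formula restricted to thing and stuff classes respectively.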

Panoptic Head: Since the inference of our panoptic head is applicable as long as we have outputs from both semantic and instance segmentation heads, we can first train these two heads simultaneously and then directly evaluate the panoptic head. We compare the results with the ones obtained by training all three heads. By doing so, we can verify the gain of training the panoptic head over treating it as a post processing procedure. From the first two rows of Table 6, we can see that training the panoptic head does improve the PQ metric.
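As a rough sketch of why this inference is always applicable (variable names are illustrative, and the per-instance logits are assumed to be precomputed from the two heads as in section 3.1), the parameter-free head simply stacks logits, appends an unknown channel, and classifies each pixel by argmax:

```python
import numpy as np

def panoptic_logits(stuff_logits, instance_logits, thing_logits):
    """Parameter-free fusion by per-pixel classification (simplified sketch).

    stuff_logits:    (N_stuff, H, W) semantic logits of stuff classes
    instance_logits: (N_inst,  H, W) per-instance logits (N_inst varies per image)
    thing_logits:    (N_thing, H, W) semantic logits of thing classes
    """
    # Unknown channel: large where the semantic head sees a thing class
    # but no instance channel claims the pixel.
    unknown = thing_logits.max(axis=0, keepdims=True) \
            - instance_logits.max(axis=0, keepdims=True)
    z = np.concatenate([stuff_logits, instance_logits, unknown], axis=0)
    # Per-pixel label: a stuff class, an instance id, or unknown (last channel).
    return z.argmax(axis=0)
```

Because the concatenation accepts any number of instance channels and involves no learned parameters, the same operation serves both as a post-processing step and as a trainable head with a pixel-wise classification loss.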

Instance Class Assignment: Here, we focus on different alternatives for assigning the instance class. We compare our heuristic, described in section 3.1, with one that only trusts the predicted class given by the instance segmentation head. As the 2nd and 3rd rows of Table 6 show, our instance class assignment performs better.
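The exact rule is given in section 3.1; the sketch below only illustrates the general idea of consulting both heads, via a majority vote of the semantic head's thing prediction inside the instance mask, with an assumed agreement threshold:

```python
import numpy as np

def assign_instance_class(mask, semantic_pred, inst_class, thing_ids,
                          agree_thresh=0.5):
    """Illustrative class assignment: take the majority thing label predicted
    by the semantic head inside the instance mask; fall back to the instance
    head's class when the vote covers too little of the mask.

    mask:          (H, W) boolean instance mask
    semantic_pred: (H, W) per-pixel class ids from the semantic head
    inst_class:    class id predicted by the instance head
    thing_ids:     list of valid thing class ids
    """
    labels = semantic_pred[mask]
    labels = labels[np.isin(labels, thing_ids)]  # ignore stuff pixels
    if labels.size == 0:
        return inst_class
    vals, counts = np.unique(labels, return_counts=True)
    if counts.max() / mask.sum() >= agree_thresh:
        return int(vals[counts.argmax()])
    return inst_class
```

The threshold and the fallback behavior are assumptions for the sketch, not the paper's exact heuristic.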

Loss Balance: We investigate the weighting scheme of the loss functions. Recall that, even without the proposed RoI loss, our UPSNet already contains multiple loss functions. To weight them, we follow the principle of loss balance, i.e., making sure their values are roughly on the same order of magnitude. In particular, with loss balance we down-weight the semantic and panoptic losses relative to the remaining ones, whereas without loss balance all losses are weighted uniformly. The 3rd and 4th rows of Table 6 show that introducing loss balance improves the performance.
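In code, the scheme is just a weighted sum; the loss names and weights below are placeholders (the paper's actual values are chosen so that all terms end up with similar magnitude):

```python
def total_loss(losses, weights=None):
    """Weighted sum of named losses.

    losses:  dict mapping loss name -> scalar value
    weights: dict mapping loss name -> weight (defaults to 1.0 per loss)
    """
    weights = weights or {}
    return sum(weights.get(name, 1.0) * value for name, value in losses.items())

# Hypothetical values: the semantic and panoptic losses are an order of
# magnitude larger, so loss balance down-weights them.
balanced = total_loss(
    {"semantic": 2.0, "panoptic": 1.5, "rpn": 0.3, "rcnn": 0.25, "mask": 0.4},
    weights={"semantic": 0.2, "panoptic": 0.1},
)
```

Without the `weights` argument, every term contributes with weight 1.0, which corresponds to the unbalanced setting in the ablation.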

RoI Loss & Unknown Prediction: Here, we investigate the effectiveness of our RoI loss function over the semantic head and of the unknown prediction. From the 4th and 5th rows of Table 6, one can conclude that adding this new loss function slightly boosts PQ^St. From the 5th and 6th rows of Table 6, we see that, together with the RoI loss, predicting the unknown class further improves the metrics.

Oracle Results: We also explore the room for improvement of the current system by replacing some inference results with the ground truth (GT) ones. Specifically, we study the box, instance class assignment and semantic segmentation results, which correspond to the GT Box, GT ICA and GT Seg. columns in Table 6. It is clear from the table that using GT boxes along with our predicted class probabilities improves PQ from 42.5 to 46.7, which indicates that better region proposals are required to achieve higher recall. On top of the GT boxes, using the GT class assignment greatly improves PQ, from 46.7 to 53.0. The still-imperfect PQ^Th indicates that our mask segmentation is not good enough. Moreover, using the GT semantic segmentation gives the largest gain in PQ, from 53.0 to 72.0, which highlights the importance of improving semantic segmentation. PQ^St remains imperfect since we resize images during inference, which causes misalignment with the labels. It is worth noticing that better semantic segmentation also benefits the instance segments. This is because our model leverages semantic segments while producing instance segments; this is not the case for the combined method, as its predicted instance segments rely only on the instance segmentation network.

Pano.  ICA  Loss Bal.  RoI Loss  Unk.  GT Box  GT ICA  GT Seg.    PQ    PQ^Th  PQ^St
                                                                  41.2  47.6   31.6
✓                                                                 41.6  47.6   32.5
✓      ✓                                                          41.7  47.7   32.8
✓      ✓    ✓                                                     42.3  48.4   33.1
✓      ✓    ✓          ✓                                          42.3  48.3   33.2
✓      ✓    ✓          ✓         ✓                                42.5  48.5   33.4
✓      ✓    ✓          ✓         ✓     ✓                          46.7  55.3   33.6
✓      ✓    ✓          ✓         ✓     ✓       ✓                  53.0  64.6   35.5
✓      ✓    ✓          ✓         ✓     ✓       ✓       ✓          72.0  58.6   92.3
Table 6: Ablation study on the COCO dataset. ‘Pano.’, ‘Loss Bal.’, ‘Unk.’ and ‘ICA’ stand for training with panoptic loss, loss balance, unknown prediction and instance class assignment respectively.

5 Conclusion

In this paper, we proposed UPSNet, which provides a unified framework for panoptic segmentation. It exploits a single backbone network and two lightweight heads to predict semantic and instance segmentation in one shot. More importantly, our parameter-free panoptic head leverages the logits from the above two heads and has the flexibility to predict an extra unknown class. It handles the varying number of instances per image and enables back propagation to the bottom representation learning in an end-to-end manner. Empirical results on three large datasets show that our UPSNet achieves state-of-the-art performance with significantly faster inference than other methods. In the future, we would like to explore more powerful backbone networks and smarter parameterization of the panoptic head.


6 Supplementary Material

We first explain the hyper-parameters and then provide full experimental results on all three datasets.

6.1 Hyperparameters

For all our experiments, we exploit a warm-up phase during which the learning rate gradually ramps up from a small value. We also initialize all models with ImageNet pre-trained weights released by MSRA.


COCO:

We resize the images such that the shorter edge has a fixed length and the longer edge does not exceed a maximum length. We do not utilize multi-scale training. For testing, we feed multi-scale images for all models: we resize the images to multiple scales of the shorter edge and also add left-right flipping. Finally, we average the semantic segmentation logits across scales. For PSPNet, we do a sliding-window test with cropped images.
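A sketch of this test-time protocol, where `model` and `resize` are assumed interfaces (the model maps a (C, H, W) image to (C, H, W) logits, and `resize` would be bilinear interpolation in practice):

```python
import numpy as np

def multiscale_logits(model, image, scales, resize, flip_axis=2):
    """Average semantic logits over scales and left-right flips.

    model(image)         -> logits of shape (C, H, W)   [assumed interface]
    resize(x, (H, W))    -> x rescaled to (H, W)        [assumed interface]
    image: (C, H, W); flip_axis=2 flips along the width dimension.
    """
    h, w = image.shape[-2:]
    acc = None
    for s in scales:
        scaled = resize(image, (int(h * s), int(w * s)))
        # Logits for the scaled image, mapped back to the original resolution.
        logits = resize(model(scaled), (h, w))
        # Logits for the left-right flipped image, flipped back before averaging.
        flipped = resize(model(np.flip(scaled, axis=flip_axis).copy()), (h, w))
        logits = 0.5 * (logits + np.flip(flipped, axis=flip_axis))
        acc = logits if acc is None else acc + logits
    return acc / len(scales)
```

The final prediction is then the per-pixel argmax of the averaged logits.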


Cityscapes:

We do multi-scale training where we resize the input image such that the length of the shorter edge is randomly sampled from a predefined set. For multi-scale testing, we use the same protocol as COCO except with a different set of scales. For PSPNet, we do a sliding-window test with cropped images.

Our Dataset:

We utilize multi-scale training where the length of the shorter edge is randomly sampled from a predefined set. We do not perform multi-scale testing.

6.2 Full Experimental Results

We show the full experimental results including run time in Table 7, Table 8 and Table 9. We also add two more variants of our model as baselines, i.e., UPSNet-C and UPSNet-CP. UPSNet-C is UPSNet without the panoptic head: we train it with just the semantic and instance segmentation losses, and during test we use the same combine heuristics as in [17] to generate the final prediction. UPSNet-CP is the same model as UPSNet and is trained in the same way; during test, however, we likewise use the combine heuristics of [17] instead of the panoptic head to generate the final prediction.
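For reference, the combine heuristics of [17] roughly amount to pasting instance masks in order of decreasing confidence and then filling the remaining pixels with the semantic head's stuff prediction; a simplified sketch (the overlap threshold and label offset are illustrative, not the exact values of [17]):

```python
import numpy as np

def combine_heuristic(instances, semantic_pred, stuff_ids,
                      overlap_thresh=0.5, unknown=-1):
    """Combine instance and semantic predictions, in the spirit of [17].

    instances:     list of (score, mask) pairs; masks are boolean (H, W)
    semantic_pred: (H, W) per-pixel class ids from the semantic head
    stuff_ids:     class ids treated as stuff
    """
    out = np.full(semantic_pred.shape, unknown, dtype=np.int64)
    next_id = 0
    for score, mask in sorted(instances, key=lambda x: -x[0]):
        free = mask & (out == unknown)
        if mask.sum() == 0 or free.sum() / mask.sum() < overlap_thresh:
            continue  # mostly covered by higher-confidence instances: drop it
        out[free] = next_id
        next_id += 1
    # Fill leftover pixels with stuff predictions (offset keeps ids disjoint).
    stuff = (out == unknown) & np.isin(semantic_pred, stuff_ids)
    out[stuff] = semantic_pred[stuff] + 1000
    return out
```

Pixels that neither an instance nor a stuff class claims keep the unknown label, which is exactly the black-area behavior discussed for Fig. 4.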

Models PQ SQ RQ PQ^Th SQ^Th RQ^Th PQ^St SQ^St RQ^St mIoU AP^box AP^mask Run Time (ms)
JSIS-Net [10] 26.9 72.4 35.7 29.3 72.1 39.2 23.3 73.0 30.4 - - - -
RN50-MR-CNN 38.6 76.4 47.5 46.2 80.2 56.2 27.1 70.8 34.5 - - - -
MR-CNN-PSP 41.8 78.4 51.3 47.8 81.3 58.0 32.8 74.1 41.1 53.9 38.1 34.2 186
UPSNet-C 41.5 79.1 50.9 47.5 81.2 57.7 32.6 76.1 40.6 54.5 38.2 34.4 153
UPSNet-CP 41.5 79.2 50.8 47.3 81.3 57.4 32.8 76.0 40.9 54.3 37.8 34.3
UPSNet 42.5 78.0 52.5 48.6 79.4 59.6 33.4 75.9 41.7 54.3 37.8 34.3 167
Multi-scale PQ SQ RQ PQ^Th SQ^Th RQ^Th PQ^St SQ^St RQ^St mIoU AP^box AP^mask Run Time (ms)
MR-CNN-PSP-M 42.2 78.5 51.7 47.8 81.3 58.0 33.8 74.3 42.2 55.3 38.1 34.2 3,624
UPSNet-M 43.0 79.1 52.8 48.9 79.7 59.7 34.1 78.2 42.3 55.7 37.8 34.3 2,433
Table 7: Panoptic segmentation results on COCO. Superscripts Th and St stand for thing and stuff. ‘-’ means inapplicable. The run time of UPSNet-CP is omitted since it is the same as that of UPSNet-C.
Models PQ SQ RQ PQ^Th SQ^Th RQ^Th PQ^St SQ^St RQ^St mIoU AP^box AP^mask Run Time (ms)
Li et al. [19] 53.8 - - 42.5 - - 62.1 - - 71.6 - 28.6 -
MR-CNN-PSP 58.0 79.2 71.8 52.3 77.9 66.9 62.2 80.1 75.4 75.2 37.7 32.8 583
UPSNet-C 58.1 79.0 72.0 52.2 78.0 66.6 62.3 79.8 75.9 75.3 37.9 33.2 198
UPSNet-CP 58.7 79.1 72.8 53.1 77.7 68.1 62.7 80.0 76.3 75.2 39.1 33.3
UPSNet 59.3 79.7 73.0 54.6 79.3 68.7 62.7 80.1 76.2 75.2 39.1 33.3 202
Multi-scale PQ SQ RQ PQ^Th SQ^Th RQ^Th PQ^St SQ^St RQ^St mIoU AP^box AP^mask Run Time (ms)
Kirillov et al. [17] 61.2 80.9 74.4 54.0 - - 66.4 - - 80.9 - 36.4
MR-CNN-PSP-M 59.2 79.7 73.0 52.3 77.9 66.9 64.2 81.0 77.4 76.9 37.7 32.8 18,063
UPSNet-50-M 60.1 80.3 73.6 54.9 79.5 68.9 63.7 80.8 76.9 76.8 39.1 33.3 2,607
UPSNet-101-M 61.8 81.3 74.8 57.6 81.3 70.5 64.8 81.4 77.8 79.2 43.9 39.0 4,126
Table 8: Panoptic segmentation results on Cityscapes.
Models PQ SQ RQ PQ^Th SQ^Th RQ^Th PQ^St SQ^St RQ^St mIoU AP^box AP^mask Run Time (ms)
MR-CNN-PSP 45.5 77.1 57.6 40.1 77.2 52.1 48.7 77.1 60.9 70.5 32.2 29.6 598
UPSNet-C 46.4 76.7 58.8 43.1 77.2 55.4 48.3 76.4 60.8 70.6 33.0 30.5 224
UPSNet-CP 46.7 76.8 59.1 42.9 77.1 55.1 48.9 76.7 61.5 70.8 33.2 30.4
UPSNet 47.1 77.1 59.4 43.8 77.7 55.5 49.0 76.7 61.7 70.8 33.2 30.4 225
Table 9: Panoptic segmentation results on our dataset.

6.3 Visual Examples

We show more visual examples of our panoptic segmentation on COCO, Cityscapes and our dataset in Fig. 5, Fig. 6 and Fig. 7 respectively.

(a) Image
(b) Ground truth
(c) Combined method [17]
(d) Ours
Figure 5: Visual examples of panoptic segmentation on COCO.
(a) Image
(b) Ground truth
(c) Combined method [17]
(d) Ours
Figure 6: Visual examples of panoptic segmentation on Cityscapes.
(a) Image
(b) Ground truth
(c) Combined method [17]
(d) Ours
Figure 7: Visual examples of panoptic segmentation on our internal dataset.