Semantic segmentation and instance segmentation are two prominent pixel-level scene understanding tasks. The former focuses on segmenting amorphous image regions which share similar texture or material, such as grass, sky and road, whereas the latter focuses on segmenting countable objects, such as people, bicycles and cars. Since both tasks aim at understanding the visual scene at the pixel level, a shared model or representation could arguably be beneficial. However, the dichotomy of these two tasks leads to very different modeling strategies despite the inherent connections between them. For example, fully convolutional neural networks are often adopted for semantic segmentation while proposal-based detectors are frequently exploited for instance segmentation.
As an effort to leverage the possible complementarity of these two tasks and push segmentation systems further towards real-world application, Kirillov et al. unified them and proposed the so-called panoptic segmentation task. It is interesting to note that tasks in the same spirit were studied under various names before deep learning became popular; notable ones include image parsing, scene parsing and holistic scene understanding. In panoptic segmentation, countable objects (those that map well to instance segmentation) are called things, whereas amorphous and uncountable regions (those that map better to semantic segmentation) are called stuff. For any pixel that belongs to stuff, the goal of a panoptic segmentation system is simply to predict its class label within the stuff classes. Otherwise, the system needs to decide both which instance and which thing class the pixel belongs to. The challenge of this task lies in the fact that the system has to give a unique answer for each pixel.
In this paper, we propose a unified panoptic segmentation network (UPSNet) to approach the panoptic segmentation problem as stated above. Unlike previous methods [17, 19] which have two separate branches designed for semantic and instance segmentation individually, our model exploits a single network as the backbone to provide shared representations. We then design two heads on top of the backbone for solving these tasks simultaneously. Our semantic head builds upon deformable convolution and leverages multi-scale information from feature pyramid networks (FPN). Our instance head follows the Mask R-CNN design and outputs a mask segmentation, a bounding box and its associated class. As shown in the experiments, these two lightweight heads along with the single backbone provide semantic and instance segmentations comparable to separate models. More importantly, we design a panoptic head which predicts the final panoptic segmentation via pixel-wise classification whose number of classes per image may vary. It exploits the logits from the above two heads and adds a new channel of logits corresponding to an extra unknown class. By doing so, it provides a better way of resolving the conflicts between semantic and instance segmentation. Moreover, our parameter-free panoptic head is very lightweight and could be used with various backbone networks. It facilitates end-to-end training, which is not the case for previous methods [17, 19]. To verify the effectiveness of our UPSNet, we perform extensive experiments on two public datasets: Cityscapes and COCO. Furthermore, we test it on our internal dataset which is similar in spirit to Cityscapes (i.e., images are captured from ego-centric driving scenarios) but of significantly larger size. Results on these three datasets show that our UPSNet achieves state-of-the-art performance and enjoys much faster inference compared to recent competitors.
2 Related Work
Semantic Segmentation: Early work on semantic segmentation focused on introducing datasets for this task and showed the importance of global context by demonstrating the gains of Bayesian frameworks, whether structured or free-form. Recent semantic segmentation methods that exploit deep convolutional feature extraction mainly approach this problem from either multi-scale feature aggregation [39, 26, 5, 14] or end-to-end structured prediction [2, 40, 1, 25, 4] perspectives. As context is crucial for semantic segmentation, one notable improvement to most convolutional models emerged from dilated convolutions [36, 37], which allow for a larger receptive field without the need for more free parameters. Pyramid scene parsing network (PSPNet), which uses dilated convolutions in its backbone, and its faster variant for real-time applications are widely utilized in practice. Based on FPN and PSPNet, a multi-task framework has been proposed and demonstrated to be versatile in segmenting a wide range of visual concepts.
Instance Segmentation: Instance segmentation deals not only with identifying the semantic class a pixel is associated with, but also the specific object instance it belongs to. Beginning with the introduction of region-based CNN (R-CNN), many early deep learning approaches to instance segmentation cast it as a two-stage pipeline in which a number of segment proposals are made, followed by a vote among those proposals to choose the best [33, 7, 8, 13, 14]. The common denominator of these methods is that segmentation comes before classification, and they are therefore slower. Li et al. proposed a fully convolutional instance-aware segmentation method, where instance mask proposals are married with fully convolutional networks. Most recently, Mask R-CNN introduced a joint approach to mask prediction and recognition, with one of two parallel heads dedicated to each task.
Panoptic Segmentation: Instance segmentation methods that focus on detection bounding box proposals, as mentioned above, ignore the classes that are not well suited for detection, e.g., sky and street. On the other hand, semantic segmentation does not provide instance boundaries for classes like pedestrian and bicycle in a given image. The panoptic segmentation task, first coined by Kirillov et al., unifies these tasks and defines an ideal output: instance segmentations for thing classes and semantic segmentation for stuff classes. The baseline panoptic segmentation method introduced therein processes the input independently for semantic segmentation via a PSPNet and for instance segmentation via a Mask R-CNN, followed by simple heuristic decisions to produce a single void, stuff, or thing-instance label per pixel. Recently, Li et al. introduced a weakly- and semi-supervised panoptic segmentation method in which they relieve some of the ground-truth constraints by supervising thing classes using bounding boxes and stuff classes using image-level tags. de Geus et al. use a single feature extraction backbone for a pyramid semantic segmentation head and an instance segmentation head, followed by heuristics for merging pixel-level annotations, effectively introducing an end-to-end version of the baseline due to the shared backbone for the two task networks. Li et al. propose the attention-guided unified network (AUNet), which leverages proposal- and mask-level attention to better segment the background; similar post-processing heuristics as in the baseline are used to generate the final panoptic segmentation. Li et al. propose the things and stuff consistency network (TASCNet), which constructs a binary mask predicting things vs. stuff for each pixel; an extra loss enforces consistency between the things and stuff predictions.
In contrast to most of these methods, we use a single backbone network to provide both semantic and instance segmentation results. More importantly, we develop a simple yet effective panoptic head which helps accurately predict the instance and class label.
3 Unified Panoptic Segmentation Network
In this section, we first introduce our model and then explain the implementation details. Following convention, we divide the semantic class labels into stuff and thing. Specifically, thing refers to the set of labels of instances (e.g., pedestrian, bicycle), whereas stuff refers to the rest of the labels that represent semantics without clear instance boundaries (e.g., street, sky). We denote the number of stuff and thing classes as N_stuff and N_thing respectively.
3.1 UPSNet Architecture
UPSNet consists of a shared convolutional feature extraction backbone and multiple heads on top of it. Each head is a sub-network which leverages the features from the backbone and serves a specific design purpose that is explained in further detail below. The overall model architecture is shown in Fig. 1.
Instance Segmentation Head:
The instance segmentation head follows the Mask R-CNN design with a bounding box regression output, a classification output, and a segmentation mask output. The goal of the instance head is to produce instance aware representations that could identify thing classes better. Ultimately these representations are passed to the panoptic head to contribute to the logits for each instance.
Semantic Segmentation Head:
The goal of the semantic segmentation head is to segment all semantic classes without discriminating instances. It could also help improve instance segmentation where it achieves good results on thing classes. Our semantic head consists of a deformable-convolution-based sub-network which takes the multi-scale features from FPN as input. In particular, we use the P2, P3, P4 and P5 feature maps of FPN, which contain 256 channels and are at 1/4, 1/8, 1/16 and 1/32 of the original scale respectively. These feature maps first go through the same deformable convolution network independently and are subsequently upsampled to the 1/4 scale. We then concatenate them and apply a 1x1 convolution with softmax to predict the semantic class. The architecture is shown in Fig. 2. As will be experimentally verified later, the deformable convolution along with the multi-scale feature concatenation provides semantic segmentation results as good as a separate model, e.g., the PSPNet adopted by the baseline. The semantic segmentation head is associated with the regular pixel-wise cross-entropy loss. To put more emphasis on foreground objects such as pedestrians, we also incorporate a RoI loss. During training, we use the ground-truth bounding box of each instance to crop the logits map after the 1x1 convolution and then resize it to 28x28 following Mask R-CNN. The RoI loss is then the cross entropy computed over the 28x28 patch, which amounts to penalizing incorrect classification more heavily for pixels within instances. As demonstrated in the ablation study later, we empirically found that this RoI loss helps improve panoptic segmentation without harming semantic segmentation.
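As an illustration of the structure just described, the following numpy sketch mimics the semantic head's multi-scale fusion: per-level FPN feature maps are upsampled to a common scale, concatenated, and mixed by a 1x1 convolution with softmax. It is a simplified stand-in (nearest-neighbour upsampling instead of bilinear, a plain channel mixing instead of the per-level deformable-convolution sub-network), and all names are ours, not from any released code.

```python
import numpy as np

def upsample(x, factor):
    """Nearest-neighbour upsampling; a stand-in for the bilinear
    interpolation used in practice."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def semantic_head(fpn_feats, conv_1x1):
    """Multi-scale fusion of the semantic head (simplified).
    fpn_feats: list of (C, H/s, W/s) arrays at scales 1/4, 1/8, 1/16, 1/32.
    conv_1x1: (num_classes, 4*C) weight matrix; a 1x1 convolution is
    exactly a per-pixel channel mixing."""
    base_h = fpn_feats[0].shape[1]
    ups = [upsample(f, base_h // f.shape[1]) for f in fpn_feats]
    x = np.concatenate(ups, axis=0)             # (4*C, H/4, W/4)
    logits = np.tensordot(conv_1x1, x, axes=1)  # (num_classes, H/4, W/4)
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)     # per-pixel class probabilities
```

The per-pixel cross-entropy loss (and the RoI loss over cropped patches) would then be computed on these probabilities.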
Panoptic Segmentation Head:
Given the semantic and instance segmentation results from the above described two heads, we combine their outputs (specifically per pixel logits) in the panoptic segmentation head.
The logits from the semantic head are denoted as X, whose channel size, height and width are N_stuff + N_thing, H and W respectively.
X can then be divided along the channel dimension into two tensors, X_stuff and X_thing, which are the logits corresponding to stuff and thing classes. For any image, during training we set the number of instances N_inst according to the number of ground-truth instances. During inference, we rely on a mask pruning process to determine N_inst, which is explained in Section 3.2. N_stuff is fixed since the number of stuff classes is constant across images, whereas N_inst is not, since the number of instances per image can differ. The goal of our panoptic segmentation head is to first produce a logit tensor Z of size (N_stuff + N_inst) x H x W and then uniquely determine both the class and instance ID for each pixel.
We first assign X_stuff to the first N_stuff channels of Z to provide the logits for classifying stuff. For any instance i, we have its mask logits Y_i from the instance segmentation head, which are of size 28x28. We also have its box B_i and class ID C_i. During training, B_i and C_i are the ground-truth box and class ID, whereas during inference they are predicted by Mask R-CNN. We can therefore obtain another representation of the i-th instance from the semantic head, denoted X_mask_i, by taking only the values inside box B_i from the channel of X_thing corresponding to C_i. X_mask_i is of size H x W and its values outside box B_i are zero. We then interpolate Y_i back to the same scale as X_mask_i via bilinear interpolation and pad zeros outside the box to achieve a compatible shape, denoted Y_mask_i. The final representation of the i-th instance is then the sum X_mask_i + Y_mask_i, which fills channel N_stuff + i of Z. Once we fill in Z with the representations of all instances, we perform a softmax along the channel dimension to predict the pixel-wise class. In particular, if the maximum value falls into the first N_stuff channels, the pixel belongs to one of the stuff classes. Otherwise, the index of the maximum value tells us the instance ID. The architecture is shown in Fig. 3. During training, we generate the ground-truth instance IDs following the order of the ground-truth boxes used to construct the panoptic logits. The panoptic segmentation head is then associated with the standard pixel-wise cross-entropy loss.
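To make the assembly of the panoptic logits concrete, here is a minimal numpy sketch. Here x_stuff holds the stuff logits from the semantic head, and each instance contributes the sum of its semantic-head crop (zero outside its box) and its resized, zero-padded Mask R-CNN mask logits. Variable and function names are ours.

```python
import numpy as np

def panoptic_logits(x_stuff, x_masks, y_masks):
    """Build the (N_stuff + N_inst, H, W) panoptic logit tensor.
    x_stuff: (N_stuff, H, W) stuff logits from the semantic head.
    x_masks: list of (H, W) per-instance logits cropped from the thing
             channels of the semantic head, zero outside the box.
    y_masks: list of (H, W) Mask R-CNN mask logits, interpolated to
             image scale and zero-padded outside the box."""
    inst = [xm + ym for xm, ym in zip(x_masks, y_masks)]
    if not inst:
        return x_stuff
    return np.concatenate([x_stuff, np.stack(inst)], axis=0)

def panoptic_predict(z, n_stuff):
    """Per-pixel argmax over channels: indices below n_stuff are stuff
    classes; the rest encode instance IDs (returned as index - n_stuff,
    with -1 marking stuff pixels)."""
    idx = z.argmax(axis=0)
    inst_id = np.where(idx < n_stuff, -1, idx - n_stuff)
    return idx, inst_id
```

A pixel thus receives either a stuff class or a unique instance ID, never both, which is exactly the uniqueness constraint of the task.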
During inference, once we predict the instance IDs following the above procedure, we still need to determine the class ID of each instance. One can either use the class ID C_inst predicted by Mask R-CNN or the one C_sem predicted by the semantic head. As shown later in the ablation study, we resort to a better heuristic rule. Specifically, for any instance, we know which pixels correspond to it, i.e., those for which the argmax of Z along the channel dimension equals its instance ID. Among these pixels, we first check whether C_inst and C_sem are consistent. If so, we assign the class ID as C_inst. Otherwise, we compute the mode of their C_sem values, denoted C_mode. If the relative frequency of the mode exceeds a threshold and C_mode belongs to stuff, then the predicted class ID is C_mode; otherwise, we assign the class ID as C_inst. In short, when facing inconsistency, we trust the majority decision made by the semantic head only if it prefers a stuff class. The justification of this conflict-resolution heuristic is that the semantic head typically achieves very good segmentation results on stuff classes.
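The class-assignment heuristic above can be sketched in a few lines of Python. Here c_inst is the Mask R-CNN class of the instance and sem_pixels are the semantic-head class predictions at the instance's pixels; the mode-frequency threshold is a placeholder, since the exact value is elided in the text, and all names are ours.

```python
from collections import Counter

def assign_instance_class(c_inst, sem_pixels, stuff_classes, thresh=0.5):
    """Resolve one instance's class ID: keep the Mask R-CNN class unless
    the semantic head's majority vote is confident AND prefers stuff."""
    if all(p == c_inst for p in sem_pixels):
        return c_inst                       # both heads agree
    mode, count = Counter(sem_pixels).most_common(1)[0]
    if count / len(sem_pixels) > thresh and mode in stuff_classes:
        return mode                         # trust the semantic head
    return c_inst
```

Note that a confident thing-class majority from the semantic head is still overruled, reflecting the observation that the semantic head is only more reliable on stuff.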
In this section, we explain a novel mechanism that allows UPSNet to classify a pixel as the unknown class instead of making a wrong prediction. To motivate our design, consider a case where a pedestrian is instead predicted as a bicycle. Since the prediction misses the pedestrian, the false negative (FN) count of the pedestrian class increases by 1. At the same time, predicting it as a bicycle increases the false positive (FP) count of the bicycle class, also by 1. Recall that the panoptic quality (PQ) metric for panoptic segmentation is defined as

PQ = \frac{\sum_{(p, g) \in TP} \mathrm{IoU}(p, g)}{|TP|} \times \frac{|TP|}{|TP| + \frac{1}{2}|FP| + \frac{1}{2}|FN|},

which consists of two parts: segmentation quality (SQ, the first factor) and recognition quality (RQ, the second). It is clear that increasing either FN or FP degrades this metric, and the phenomenon extends to wrong predictions of the stuff classes as well. Therefore, if a wrong prediction is inevitable, predicting such a pixel as unknown is preferred, since it increases the FN count of one class by 1 without affecting the FP count of the other.
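For reference, the metric can be computed directly from the matched-segment IoUs and the unmatched counts. A minimal sketch of our own (not the official evaluation code):

```python
def panoptic_quality(tp_ious, num_fp, num_fn):
    """PQ = SQ * RQ per class, following the definition above.
    tp_ious: IoUs of matched (true positive) segment pairs.
    num_fp, num_fn: counts of unmatched predicted / ground-truth segments."""
    tp = len(tp_ious)
    if tp + num_fp + num_fn == 0:
        return 0.0, 0.0, 0.0
    sq = sum(tp_ious) / tp if tp else 0.0           # mean IoU of matches
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)    # F1-style recognition term
    return sq * rq, sq, rq
```

For example, two matched segments with IoUs 0.8 and 0.6 plus one FP and one FN give SQ = 0.7 and RQ = 2/3.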
To alleviate this issue, we compute the logits of the extra unknown class as Z_unknown = max(X_thing) - max(X_mask), where X_mask is the concatenation of the X_mask_i of all instances along the channel dimension and is of shape N_inst x H x W; both maxima are taken along the channel dimension. The rationale is that, for any pixel, if the maximum of X_thing is larger than the maximum of X_mask, it is highly likely that we are missing some instance (an FN). The construction of the logits is shown in Fig. 3. To generate ground truth for the unknown class, we randomly sample a portion of the ground-truth masks and set them as unknown during training. In evaluation, any pixel belonging to unknown is ignored, i.e., set to void, so it does not contribute to the results.
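In the notation above (x_thing: the thing logits of the semantic head; x_masks: the stacked per-instance crops X_mask_i), the unknown logit is a per-pixel difference of channel-wise maxima. A one-line numpy sketch, with names of our choosing:

```python
import numpy as np

def unknown_logits(x_thing, x_masks):
    """Z_unknown = max(X_thing) - max(X_mask), maxima over channels.
    x_thing: (N_thing, H, W) thing logits from the semantic head.
    x_masks: (N_inst, H, W) per-instance logits cropped from x_thing.
    Where the semantic head sees a thing that no instance accounts for,
    the difference (and hence the unknown logit) is large."""
    return x_thing.max(axis=0) - x_masks.max(axis=0)
```

This single channel is appended to Z before the softmax, so a pixel can win as unknown instead of being forced into an incorrect class.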
3.2 Implementation Details
In this section, we explain the implementation details of UPSNet. We follow most of the settings and hyper-parameters of Mask R-CNN, which are listed in the supplementary material; hereafter, we only explain those that differ.
We implement our model in PyTorch and train it on multiple GPUs using the distributed training framework Horovod, with 1 image per GPU in each mini-batch. As mentioned, we use the ground-truth box, mask and class label to construct the logits of the panoptic head during training. Our region proposal network (RPN) is trained end-to-end with the backbone, whereas it was trained separately in the baseline. Due to the high resolution of images, e.g., 1024 x 2048 in Cityscapes, logits from the semantic and panoptic heads are downsampled to 1/4 of the original resolution. Although we do not fine-tune batch normalization (BN) layers within the backbone for simplicity, we still achieve results comparable to state-of-the-art semantic segmentation networks like PSPNet. Based on common practice in semantic [4, 39] and instance segmentation [29, 24], we expect the performance to improve further with BN layers fine-tuned. Our UPSNet contains 8 loss functions in total: semantic segmentation head (whole-image and RoI-based pixel-wise classification losses), panoptic segmentation head (whole-image pixel-wise classification loss), RPN (box classification and box regression) and instance segmentation head (box classification, box regression and mask segmentation). Different weighting schemes over these multi-task losses can lead to very different training results. As shown in the ablation study, we found that a loss-balance strategy, i.e., ensuring that the scales of all losses are roughly of the same order of magnitude, works well in practice.
During inference, once we have obtained boxes, masks and predicted class labels from the instance segmentation head, we apply a mask pruning process to determine which masks are used for constructing the panoptic logits. In particular, we first perform class-agnostic non-maximum suppression with a box IoU threshold to filter out overlapping boxes. We then sort the predicted class probabilities of the remaining boxes and keep those whose probability is larger than a confidence threshold. For each class, we create a canvas of the same size as the image. We then interpolate the masks of that class to the image scale and paste them onto the corresponding canvas one by one in decreasing order of probability. Each time we paste a mask, if the intersection between the current mask and those already pasted, divided by the area of the current mask, is larger than a threshold, we discard this mask; otherwise, we copy its non-intersecting part onto the canvas. The threshold on this intersection-over-itself is kept fixed across our experiments. Logits from the semantic and panoptic segmentation heads are at the original scale of the input image during inference.
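The pruning loop above can be sketched as follows. The masks are assumed to be already sorted by decreasing class probability and interpolated to image scale; the intersection-over-itself threshold is a placeholder, since the exact value is elided in the text, and all names are ours.

```python
import numpy as np

def prune_masks(masks, overlap_thresh=0.3):
    """Greedy per-class pruning of overlapping instance masks.
    masks: (N, H, W) boolean masks, sorted by decreasing probability.
    Returns kept masks with their overlapping parts removed."""
    canvas = np.zeros(masks.shape[1:], dtype=bool)
    kept = []
    for m in masks:
        inter = (m & canvas).sum()
        if m.sum() == 0 or inter / m.sum() > overlap_thresh:
            continue                     # mostly covered already: discard
        kept.append(m & ~canvas)         # copy only the non-intersecting part
        canvas |= m
    return kept
```

The greedy order means a high-confidence mask can never be partially overwritten by a lower-confidence one, matching the paste-in-decreasing-probability rule.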
COCO: We follow the setup of the COCO 2018 panoptic segmentation task, which consists of 80 thing and 53 stuff classes. We use the train2017 and val2017 subsets, which contain approximately 118k training and 5k validation images.
Cityscapes: Cityscapes has 5000 images of ego-centric driving scenarios in urban settings, which are split into 2975, 500 and 1525 for training, validation and testing respectively. It consists of 8 thing and 11 stuff classes.
Our Dataset: We also use an internal dataset which is similar to Cityscapes and consists of training, validation and test splits of ego-centric driving scenes. It covers both thing (e.g., car, bus) and stuff (e.g., building, road) classes.
| Megvii (Face++) | ensemble model | 53.2 | 83.2 | 62.9 | 62.2 | 85.5 | 72.5 | 39.5 | 79.7 | 48.5 |
Experimental Setup: For all datasets, we report results on the validation set. To evaluate performance, we adopt panoptic quality (PQ), recognition quality (RQ) and segmentation quality (SQ) as the metrics. We also report the average precision (AP) of mask prediction, the mean IoU of semantic segmentation on both stuff and thing classes, and the inference run time. Finally, we show results of ablation studies on various design components of our model. Full results with all model variants are given in the supplementary material.
We use the same learning rate and weight decay across all datasets. For COCO, we train for a fixed number of iterations and decay the learning rate by a constant factor twice late in training; for Cityscapes and our dataset, we follow the same decay scheme with schedules adapted to the dataset size. Loss weights of the semantic head are set per dataset, RoI loss weights are one fifth of those of the semantic head, and loss weights of the panoptic head are likewise set per dataset. All other loss weights share a common value.
We mainly compare with the combined method of the panoptic segmentation baseline. For a fair comparison, we adopt a model which uses a Mask R-CNN with a ResNet-50-FPN and a PSPNet with a ResNet-50 as backbones, and apply the combine heuristics to compute the panoptic segmentation. We denote our implementation of the combined method as ‘MR-CNN-PSP’ and its multi-scale testing version as ‘MR-CNN-PSP-M’. Unless specified otherwise, the combined method hereafter refers to ‘MR-CNN-PSP’. For PSPNet, we use the ‘poly’ learning rate schedule and train on COCO, Cityscapes and our dataset with the same mini-batch size. We test all available models with multi-scale testing. Specifically, we average the multi-scale logits of PSPNet for the combined method and those of the semantic segmentation head for our UPSNet. For simplicity, we use single-scale testing for the Mask R-CNN of the combined method and for our instance segmentation head. During evaluation, due to the sensitivity of PQ with respect to RQ, we predict all stuff segments whose areas are smaller than a threshold as unknown; the area threshold is set per dataset. To be fair, we apply this area thresholding to all methods.
| Li et al. | 53.8 | - | - | 42.5 | 62.1 | 71.6 | 28.6 |
| Kirillov et al. | 61.2 | 80.9 | 74.4 | 54.0 | 66.4 | 80.9 | 36.4 |
Panoptic segmentation results on Cityscapes. ‘-COCO’ means the model is pretrained on COCO. ‘-101’ means the model uses ResNet-101 as the backbone. Unless specified, all models use ResNet-50 as the backbone and are pretrained on ImageNet.
We compare with several recently published methods on COCO, including JSIS-Net, RN50-MR-CNN (https://competitions.codalab.org/competitions/19507#results) and the combined method. Since the authors of the baseline do not report results on COCO, we use our MR-CNN-PSP model as the alternative in these experiments. JSIS-Net uses a ResNet-50 whereas RN50-MR-CNN uses two separate ResNet-50-FPNs as the backbone; our UPSNet adopts a single ResNet-50-FPN. In order to better leverage context information, we apply global average pooling over the feature map of the last layer in the 5th stage of ResNet (‘res5’), reduce its dimension and add it back to FPN before producing the P5 feature map. Table 7 shows the results on all metrics. The mIoU metric is computed over the 133 stuff and thing classes of the COCO 2018 panoptic segmentation task, which differs from the previous 171 classes of COCO-Stuff; we are among the first to evaluate mIoU on this 133-class subset. From the table, we can see that our UPSNet achieves better performance on all metrics except SQ. It is typically the case that an increase in RQ leads to a slight decrease in SQ, since we include more TP segments which may have lower IoU. Note that even with multi-scale testing, MR-CNN-PSP is still worse than ours on PQ. Moreover, from the mIoU column, we can see that our semantic head performs even better than a separate PSPNet, which verifies its effectiveness. With multi-scale testing, both MR-CNN-PSP and UPSNet improve, and ours remains better. We also add comparisons on the test-dev of COCO 2018 in Table 2. Although we use only ResNet-101 as the backbone, we achieve slightly better results than the recent AUNet, which uses ResNeXt-152. We also list the top three results on the leaderboard, which use ensembles and other tricks. It is clear from the table that we are on par with the second-best model without using any such tricks.
In terms of model size, our UPSNet contains considerably fewer parameters than both RN50-MR-CNN and MR-CNN-PSP, making it significantly lighter. We show visual examples of panoptic segmentation on this dataset in the first two rows of Fig. 4. From the 1st row of the figure, we can see that the combined method completely ignores the cake and other objects on the table, whereas ours successfully segments them. This is due to an inherent limitation of the combine heuristic, which first pastes the high-confidence segment, i.e., the table, and then ignores all highly overlapping objects thereafter.
We compare our model on Cityscapes with Li et al., the combined method and TASCNet. Note that Li et al. use a ResNet-101 backbone whereas all other reported methods use ResNet-50 within their backbones. We use deformable convolution layers in the semantic head. The results are reported in Table 8. It is clear from the table that both our UPSNet and MR-CNN-PSP significantly outperform the method of Li et al., especially on thing classes. This may be caused by the fact that their CRF-based instance subnetwork performs worse than Mask R-CNN on instance segmentation. Under the same single-scale testing, our model achieves better performance than the combined method. Although multi-scale testing significantly improves both the combined method and UPSNet, ours remains slightly better. Results reported for the original combined method differ from those of our MR-CNN-PSP-M since: 1) they use ResNet-101 as the backbone of PSPNet; 2) they pre-train Mask R-CNN on COCO and PSPNet on extra coarsely labeled data. We also have a model variant, denoted ‘Ours-101-M’, which adopts ResNet-101 as the backbone and pre-trains on COCO; it outperforms the reported metrics of the combined method. We show visual examples of panoptic segmentation on this dataset in the 3rd and 4th rows of Fig. 4. From the 3rd row of the figure, we can see that the combined method tends to produce large black areas, i.e., unknown, whenever the instance and semantic segmentations conflict with each other. In contrast, our UPSNet resolves the conflicts better. Moreover, it is interesting to note that some unknown predictions of our model have vertical or horizontal boundaries. This is caused by the fact that the instance head predicts nothing whereas the semantic head predicts something for these out-of-box areas; the logits of the unknown class then stand out by design, avoiding contributions to both FP and FN as described in Section 3.1.
4.3 Our Dataset
Last but not least, we compare our model on our own dataset with the combined method. All reported methods use ResNet-50 within their backbones, and we use deformable convolution layers in the semantic head. The results are reported in Table 9. We observe that, similar to COCO, our model performs significantly better than the combined method on all metrics except SQ. We show visual examples of panoptic segmentation on this dataset in the last two rows of Fig. 4, where observations similar to those on COCO and Cityscapes hold.
4.4 Run Time Comparison
We compare inference run time on all three datasets in Table 5. We use a single NVIDIA GeForce GTX 1080 Ti GPU and an Intel Xeon E5-2687W CPU (3.00 GHz). All entries are averaged over repeated runs on the same image with single-scale testing. For COCO, the PSPNet within the combined method uses the original scale. We also list the image size of each dataset. It is clearly seen in the table that as the image size increases, our UPSNet becomes significantly faster; on the largest images, the combined method takes considerably longer than ours.
4.5 Ablation Study
We perform extensive ablation studies on the COCO dataset to verify our design choices, as listed in Table 6. Empty cells in the table indicate that the corresponding component is not used. All evaluation metrics are computed over the output of the panoptic head on the validation set.
Panoptic Head: Since inference with our panoptic head is applicable as long as we have outputs from both the semantic and instance segmentation heads, we can first train these two heads simultaneously and then directly evaluate the panoptic head. We compare the results with those obtained by training all three heads together. By doing so, we can verify the gain of training the panoptic head over treating it as a post-processing procedure. From the first two rows of Table 6, we can see that training the panoptic head does improve the PQ metric.
Instance Class Assignment: Here, we focus on different alternatives for assigning instance classes. We compare our heuristic described in Section 3.1 with one that only trusts the predicted class given by the instance segmentation head. As can be seen from the 2nd and 3rd rows of Table 6, our instance class assignment is better.
Loss Balance: We investigate the weighting scheme of the loss functions. Recall that without the proposed RoI loss, our UPSNet contains 7 loss functions. In order to weight them, we follow the principle of loss balance, i.e., making sure their values are roughly of the same order of magnitude. In particular, with loss balance we assign smaller weights to the semantic and panoptic losses than to the rest, whereas without loss balance all losses share the same weight. The 3rd and 4th rows of Table 6 show that introducing loss balance improves performance.
RoI Loss & Unknown Prediction: Here, we investigate the effectiveness of our RoI loss on the semantic head and of the unknown prediction. From the 4th and 5th rows of Table 6, one can conclude that adding this loss function does slightly boost performance. From the 5th and 6th rows of Table 6, along with the RoI loss, predicting the unknown class improves the metrics significantly.
Oracle Results: We also explore the room for improvement of the current system by replacing some inference results with the ground truth (GT) ones. Specifically, we study the box, instance class assignment and semantic segmentation results, which correspond to the GT Box, GT ICA and GT Seg. columns in Table 6. It is clear from the table that using GT boxes along with our predicted class probabilities improves PQ, which indicates that better region proposals are needed to achieve higher recall. On top of the GT boxes, using the GT class assignment further improves PQ substantially; the still-imperfect result indicates that our mask segmentation could be improved. Moreover, using the GT semantic segmentation gives the largest gain in PQ, which highlights the importance of improving semantic segmentation. The result remains imperfect since we resize images during inference, which causes misalignment with the labels. It is worth noting that improving semantic segmentation also boosts AP. This is because our model leverages semantic segments while producing instance segments, which is not the case for the combined method, whose predicted instance segments rely only on the instance segmentation network.
In this paper, we proposed UPSNet, which provides a unified framework for panoptic segmentation. It exploits a single backbone network and two lightweight heads to predict semantic and instance segmentation in one shot. More importantly, our parameter-free panoptic head leverages the logits from the above two heads and has the flexibility to predict an extra unknown class. It handles the varying number of classes per image and enables back-propagation for the underlying representation learning. Empirical results on three large datasets show that our UPSNet achieves state-of-the-art performance with significantly faster inference than other methods. In the future, we would like to explore more powerful backbone networks and smarter parameterization of the panoptic head.
-  A. Arnab, S. Jayasumana, S. Zheng, and P. H. Torr. Higher order conditional random fields in deep neural networks. In ECCV, 2016.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062, 2014.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE TPAMI, 40(4):834–848, 2018.
-  L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
-  L.-C. Chen, Y. Yang, J. Wang, W. Xu, and A. L. Yuille. Attention to scale: Scale-aware semantic image segmentation. In CVPR, 2016.
-  M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, pages 3213–3223, 2016.
-  J. Dai, K. He, Y. Li, S. Ren, and J. Sun. Instance-sensitive fully convolutional networks. In ECCV, pages 534–549, 2016.
-  J. Dai, K. He, and J. Sun. Convolutional feature masking for joint object and stuff segmentation. In CVPR, 2015.
-  J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In ICCV, pages 764–773, 2017.
-  D. de Geus, P. Meletis, and G. Dubbelman. Panoptic segmentation with a joint semantic and instance segmentation network. arXiv preprint arXiv:1809.02110, 2018.
-  M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. IJCV, 88(2):303–338, 2010.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
-  B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous detection and segmentation. In ECCV, 2014.
-  B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
-  K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In ICCV, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
-  A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollár. Panoptic segmentation. arXiv preprint arXiv:1801.00868, 2018.
-  J. Li, A. Raventos, A. Bhargava, T. Tagawa, and A. Gaidon. Learning to fuse things and stuff. arXiv preprint arXiv:1812.01192, 2018.
-  Q. Li, A. Arnab, and P. H. Torr. Weakly-and semi-supervised panoptic segmentation. In ECCV, pages 102–118, 2018.
-  Y. Li, X. Chen, Z. Zhu, L. Xie, G. Huang, D. Du, and X. Wang. Attention-guided unified network for panoptic segmentation. arXiv preprint arXiv:1812.03904, 2018.
-  Y. Li, H. Qi, J. Dai, X. Ji, and Y. Wei. Fully convolutional instance-aware semantic segmentation. In CVPR, 2017.
-  T.-Y. Lin, P. Dollár, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie. Feature pyramid networks for object detection. In CVPR, volume 1, page 4, 2017.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740–755. Springer, 2014.
-  S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia. Path aggregation network for instance segmentation. In CVPR, pages 8759–8768, 2018.
-  Z. Liu, X. Li, P. Luo, C.-C. Loy, and X. Tang. Semantic image segmentation via deep parsing network. In ICCV, 2015.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
-  R. Mottaghi, X. Chen, X. Liu, N.-G. Cho, S.-W. Lee, S. Fidler, R. Urtasun, and A. Yuille. The role of context for object detection and semantic segmentation in the wild. In CVPR, 2014.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. In NIPS Workshop, 2017.
-  C. Peng, T. Xiao, Z. Li, Y. Jiang, X. Zhang, K. Jia, G. Yu, and J. Sun. Megdet: A large mini-batch object detector. arXiv preprint arXiv:1711.07240, 7, 2017.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, pages 91–99, 2015.
-  A. Sergeev and M. D. Balso. Horovod: fast and easy distributed deep learning in TensorFlow. arXiv preprint arXiv:1802.05799, 2018.
-  Z. Tu, X. Chen, A. L. Yuille, and S.-C. Zhu. Image parsing: Unifying segmentation, detection, and recognition. IJCV, 63(2):113–140, 2005.
-  J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders. Selective search for object recognition. IJCV, 104(2):154–171, 2013.
-  T. Xiao, Y. Liu, B. Zhou, Y. Jiang, and J. Sun. Unified perceptual parsing for scene understanding. In ECCV, 2018.
-  J. Yao, S. Fidler, and R. Urtasun. Describing the scene as a whole: Joint object detection, scene classification and semantic segmentation. In CVPR, pages 702–709. IEEE, 2012.
-  F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
-  F. Yu, V. Koltun, and T. A. Funkhouser. Dilated residual networks. In CVPR, 2017.
-  H. Zhao, X. Qi, X. Shen, J. Shi, and J. Jia. Icnet for real-time semantic segmentation on high-resolution images. In ECCV, 2018.
-  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.
-  S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. Torr. Conditional random fields as recurrent neural networks. In ICCV, 2015.
6 Supplementary Material
We first explain the hyper-parameters and then provide full experimental results on all three datasets.
6.1 Hyper-parameters
For all our experiments, we exploit a warm-up phase in which the learning rate gradually increases from a small initial value to the base learning rate. We also initialize all models with the ImageNet pre-trained weights released by MSRA (https://github.com/KaimingHe/deep-residual-networks).
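The warm-up schedule described above can be sketched as a simple linear ramp. The function name and the example values (500 iterations, base learning rate 0.02) are illustrative assumptions, as the paper's exact numbers are not reproduced in this excerpt.

```python
def warmup_lr(step, warmup_steps, base_lr, start_lr):
    """Linearly increase the learning rate from start_lr to base_lr
    over the first warmup_steps iterations, then hold it at base_lr.
    (Illustrative values only; the actual schedule is dataset-specific.)"""
    if step >= warmup_steps:
        return base_lr
    alpha = step / warmup_steps
    return start_lr + (base_lr - start_lr) * alpha
```

Warm-up of this kind is commonly used to stabilize the early iterations of large-batch detector training (cf. MegDet), which is why it appears across all three dataset configurations here.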
COCO: We resize each image such that its shorter edge has a fixed length and its longer edge does not exceed a maximum length. We do not utilize multi-scale training. For testing, we feed multi-scale images to all models. Specifically, we resize the images to multiple scales defined by a set of shorter-edge lengths, and also add left-right flipping. Finally, we average the semantic segmentation logits over the different scales. For PSPNet, we perform a sliding-window test with cropped images.
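The multi-scale testing step above averages semantic logits across scales (after flipped predictions are un-flipped and all predictions are resized back to a common resolution). A minimal sketch with plain nested lists, assuming hypothetical helper names:

```python
def unflip(logits):
    """Undo a left-right flip on H x W x C logits
    (reverse each row along the width axis)."""
    return [row[::-1] for row in logits]

def average_multiscale_logits(logits_list):
    """Average per-pixel semantic logits from multiple test-time scales.
    Each entry is an H x W x C nested list, assumed already resized
    back to a common resolution and un-flipped where necessary."""
    n = len(logits_list)
    h = len(logits_list[0])
    w = len(logits_list[0][0])
    c = len(logits_list[0][0][0])
    return [[[sum(l[i][j][k] for l in logits_list) / n
              for k in range(c)]
             for j in range(w)]
            for i in range(h)]
```

Averaging in logit space before the argmax is what lets the flipped and multi-scale passes act as a simple test-time ensemble.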
Cityscapes: We perform multi-scale training, resizing the input image such that the length of the shorter edge is randomly sampled from a predefined set. For multi-scale testing, we use the same protocol as COCO except for the set of scales. For PSPNet, we perform a sliding-window test with cropped images.
Internal dataset: We utilize multi-scale training where the length of the shorter edge is randomly sampled from a predefined set. We do not perform multi-scale testing.
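The multi-scale training resize used for Cityscapes and the internal dataset can be sketched as follows. The shorter-edge candidate set and the longer-edge cap are hypothetical placeholders, since the paper's exact values are not reproduced in this excerpt.

```python
import random

def sample_train_size(img_w, img_h, shorter_edges, max_longer):
    """Randomly pick a shorter-edge target from shorter_edges and
    compute the resized (width, height), capping the longer edge at
    max_longer so extreme aspect ratios do not blow up memory.
    (Edge sets and cap are illustrative, not the paper's values.)"""
    target = random.choice(shorter_edges)
    scale = target / min(img_w, img_h)
    if max(img_w, img_h) * scale > max_longer:
        scale = max_longer / max(img_w, img_h)
    return round(img_w * scale), round(img_h * scale)
```

Sampling a fresh scale per image is what makes this "multi-scale training": the network sees objects at varying resolutions without any change to the architecture.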
6.2 Full Experimental Results
We show the full experimental results, including run time, in Table 7, Table 8 and Table 9. We also add two more variants of our model as baselines, i.e., UPSNet-C and UPSNet-CP. UPSNet-C is UPSNet without the panoptic head; we train it with just the semantic and instance segmentation losses, and during testing we use the same combine heuristics as in prior work to generate the final prediction. UPSNet-CP has the same model as UPSNet and is trained in the same way; during testing, we again use the same combine heuristics to generate the final prediction.
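The combine heuristics used by the UPSNet-C and UPSNet-CP baselines follow the spirit of the panoptic segmentation baseline of Kirillov et al.: paste instance masks in descending confidence order, drop instances that are mostly occluded by earlier ones, then fill the remaining pixels from the semantic (stuff) prediction. The sketch below is a simplified pixel-list version with an assumed overlap threshold; it is not the paper's exact implementation.

```python
def combine(instances, sem_seg, overlap_thresh=0.5):
    """Greedy merge of instance and semantic predictions.

    instances: list of (score, mask) pairs, mask a flat 0/1 list.
    sem_seg:   flat list of per-pixel stuff class labels.
    Returns a flat panoptic map of ('thing', id) / ('stuff', label).
    (overlap_thresh is an illustrative assumption.)"""
    n = len(sem_seg)
    pan = [None] * n
    seg_id = 0
    for score, mask in sorted(instances, key=lambda x: -x[0]):
        area = sum(mask)
        occluded = sum(1 for p in range(n) if mask[p] and pan[p] is not None)
        if area == 0 or occluded / area > overlap_thresh:
            continue  # mostly hidden behind higher-confidence instances
        seg_id += 1
        for p in range(n):
            if mask[p] and pan[p] is None:
                pan[p] = ('thing', seg_id)
    # unclaimed pixels fall back to the semantic (stuff) prediction
    for p in range(n):
        if pan[p] is None:
            pan[p] = ('stuff', sem_seg[p])
    return pan
```

Because this merge is a fixed post-hoc procedure, conflicts between the two heads cannot influence training, which is exactly the limitation the learned panoptic head of the full UPSNet is designed to remove.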
[Tables 7–9: PQ, SQ, RQ, mIoU and run time (ms) for each model under both single-scale and multi-scale testing, with the methods of Li et al. and Kirillov et al. included as baselines.]