A key task in computer vision is image recognition, for which the ultimate goal is to recognize all elements in an image. At a high level, these elements can be divided into two categories: things and stuff. Things are usually countable objects, such as vehicles, persons and furniture items. Stuff, on the other hand, is the set of remaining elements, usually not countable, such as sky, road and water. Within image recognition, many tasks have been introduced to identify these elements in images. Instance segmentation and semantic segmentation are two such tasks that have become very prominent, with state-of-the-art methods [6, 10] and [2, 14], respectively. The first task, instance segmentation, focuses on the detection and segmentation of things. If an object is detected, a pixel mask is predicted for this object, and the output of such a method is a set of pixel masks. By design, this method does not account for all elements in an image, as it does not consider stuff classes. The second task, semantic segmentation, does consider all elements, as the aim is to make a class prediction for each pixel in an image, for both things and stuff classes. However, the semantic segmentation output does not differentiate between different instances of things classes. As a result, both methods lack the ability to fully describe the contents of an image.
To fill this gap, the task of panoptic segmentation is introduced in . For this task, each pixel of an image must be assigned a class label and an instance id. For things classes, the instance id is used to distinguish between different objects. For the stuff classes, on the other hand, it is not necessary – and sometimes not even possible – to distinguish between different instances. Therefore, pixels in these classes always get the same instance id. In , a baseline method for this task is presented, which takes the outputs of the best-scoring instance segmentation and semantic segmentation networks and combines them using basic heuristics to generate an output in the panoptic format.
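To make this output format concrete, the sketch below builds a toy panoptic map in NumPy. The class ids and the single-map encoding are illustrative assumptions, not part of this report.

```python
import numpy as np

# Toy panoptic output for a 2x3 image: each pixel carries a class label and
# an instance id. The class ids (0 = a stuff class such as sky, 1 = a things
# class such as car) are purely illustrative.
class_map = np.array([[0, 0, 1],
                      [0, 1, 1]])
instance_map = np.array([[0, 0, 1],
                         [0, 2, 1]])  # stuff pixels share id 0; two cars get ids 1 and 2

# One convenient single-map encoding (a hypothetical choice here) stores both
# values in one integer per pixel as class_label * 1000 + instance_id.
panoptic_map = class_map * 1000 + instance_map
```

With this encoding, the class label and instance id can be recovered by integer division and modulo by 1000, respectively.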
Before the task of panoptic segmentation was formally introduced, there were already some publications that focused on exactly this task. In , depth layering and direction predictions are used to detect different instances of objects in a semantic segmentation map. In , a Dynamically Instantiated Network is used to combine the outputs from an external object detector and an internal semantic segmentation network to form a panoptic-like output.
In this report, we present a single end-to-end network that makes both instance segmentation and semantic segmentation predictions, using a shared feature extractor. These predictions are combined to form panoptic segmentation outputs using heuristics, following . The main contribution of our approach is the fact that we apply end-to-end learning to jointly make semantic segmentation and instance segmentation predictions to finally predict a panoptic segmentation output.
We propose a Joint Semantic and Instance Segmentation Network (JSIS-Net) for panoptic segmentation. This method consists of two main sections: 1) a Convolutional Neural Network (CNN) that jointly predicts semantic segmentation and instance segmentation outputs (Section 2.1) and 2) heuristics that are used to merge these outputs to generate panoptic segmentation predictions (Section 2.2).
2.1 Network architecture
We propose a CNN that jointly predicts semantic segmentation and instance segmentation outputs. The base of the network is a ResNet-50 feature extractor , which is shared by the semantic segmentation and instance segmentation branch. This is depicted in Figure 2.
The semantic segmentation branch first applies a Pyramid Pooling Module to the generated feature map, as presented in , and uses hybrid upsampling to reshape the predictions to the size of the input image . This hybrid upsampling first applies a deconvolution operation and then bilinearly resizes the predictions to the dimensions of the input image. The output of this branch is a pixel map where each entry corresponds to the predicted class label for that pixel in the input image.
The instance segmentation branch is based on Mask R-CNN . First, a Region Proposal Network (RPN) is used to generate region proposals for potential objects in the image. The features corresponding to these proposals are then extracted from the feature map and subjected to the final layers of ResNet-50. Finally, these features are used to make three different parallel predictions: a classification score, bounding box coordinates, and an instance mask. After applying non-maximum suppression, the output of this branch is a set of pixel clusters with class labels predicted to correspond to the location of different objects in the image. With post-processing, these pixel clusters are transformed to form per-object normalized instance masks with the dimensions of the input image.
2.1.1 Loss balancing for end-to-end learning
To enable end-to-end learning for this network, a single loss function is formed. This means that the various loss functions from the different network branches have to be combined and balanced. This total loss, L_total, has the following form:

L_total = w_1 L_RPN-obj + w_2 L_RPN-reg + w_3 L_cls + w_4 L_bbox + w_5 L_mask + w_6 L_segm + w_7 L_L2

Here, L_RPN-obj is the softmax cross-entropy objectness loss function for the RPN, L_RPN-reg is the smooth L1 regression loss function for the RPN , L_cls is the softmax cross-entropy classification loss function for object detection, L_bbox is the smooth L1 regression loss function for the object bounding boxes, L_mask is the sigmoid cross-entropy loss on the instance masks, and L_segm is the sparse softmax cross-entropy segmentation loss on the semantic segmentation outputs. Finally, L_L2 is the L2 regularization on the model parameters. The weights w_1, ..., w_7 are the tuning parameters that are used to balance the losses.
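As a minimal illustration of this balancing, the snippet below combines hypothetical per-branch loss values through a weighted sum. The loss names, values and unit weights are placeholders for illustration only, not the settings used in this report (those are in Table 2).

```python
def total_loss(losses, weights):
    """Weighted sum of the per-branch losses (names are placeholders)."""
    return sum(weights[name] * value for name, value in losses.items())

# Hypothetical loss values for a single training step.
losses = {
    "rpn_objectness": 0.8,         # softmax cross-entropy for the RPN
    "rpn_regression": 0.5,         # smooth L1 for the RPN
    "classification": 1.2,         # softmax cross-entropy for detection
    "bbox_regression": 0.6,        # smooth L1 for bounding boxes
    "mask": 0.9,                   # sigmoid cross-entropy on instance masks
    "semantic_segmentation": 1.1,  # sparse softmax cross-entropy
    "l2_regularization": 0.3,      # L2 on the model parameters
}
weights = {name: 1.0 for name in losses}  # placeholder balancing weights
loss = total_loss(losses, weights)
```

In practice the weights would be tuned so that no single branch dominates the gradient signal during joint training.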
2.2 Merging heuristics
Because our network outputs two separate predictions in parallel, these outputs have to be processed in order to generate a panoptic segmentation prediction. For panoptic segmentation, two values have to be predicted for each pixel: a class label and an instance id. There are essentially two conflicts that need to be solved before being able to generate this output: overlapping instance masks and conflicting predictions for things classes.
2.2.1 Overlapping instance masks
Because the instance segmentation prediction is essentially based on an object detector and many overlapping region proposals, there is usually overlap between different predicted instance masks. This could be solved by applying non-maximum suppression to all overlapping instance masks, but this would remove many true predictions. Instead, we chose to leverage the per-instance probability maps to resolve conflicting sections. In the case that two or more instance masks predict that a certain pixel belongs to their object, the pixel is assigned to the instance mask with the highest probability at that specific pixel. These probabilities are predicted by the instance segmentation branch for each pixel in an instance mask. As a result of this heuristic, all output pixels are assigned to only one object.
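A minimal sketch of this heuristic, assuming the per-instance probability maps have been stacked into a single NumPy array (with zeros outside each mask):

```python
import numpy as np

def resolve_overlaps(prob_maps):
    """prob_maps: (N, H, W) per-instance probability maps, zero outside each mask.
    Returns an (H, W) map of winning instance indices, -1 where no instance
    claims the pixel."""
    winners = prob_maps.argmax(axis=0)          # highest probability wins
    winners[prob_maps.max(axis=0) <= 0] = -1    # unclaimed pixels
    return winners

# Two 1x3 instance masks that overlap at the middle pixel:
probs = np.array([[[0.9, 0.6, 0.0]],
                  [[0.0, 0.8, 0.7]]])
resolved = resolve_overlaps(probs)  # the middle pixel goes to instance 1 (0.8 > 0.6)
```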
2.2.2 Conflicting predictions for things classes
Unlike the stuff classes, which are only considered in the semantic segmentation branch, the things classes are part of the prediction of both the semantic segmentation and the instance segmentation branch. As a result, there are inevitably things prediction conflicts between the two outputs. Because the semantic segmentation output does not distinguish between different instances of objects, the two outputs cannot directly be compared. For this reason, we opt for the merging heuristics used in : we remove all things classes from the semantic segmentation output, and replace them with the most probable stuff class according to the semantic segmentation prediction. This leads to a segmentation map consisting of only stuff class labels. Subsequently, the instance segmentation output is used to replace stuff predictions at pixels where things are predicted. Hence, the instance segmentation output is prioritized over the semantic segmentation output. As a second heuristic, any predicted stuff class with a total pixel count below 4096 is removed from the output as well, and replaced by the next best scoring stuff class above this threshold. This is done because it is very unlikely that a stuff class consists of such a limited number of pixels, if it is present in an image.
After resolving these conflicts, all detected objects get a unique id, and the network has the desired output: a class label and an instance id for all pixels in the input image.
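The merging heuristics above can be sketched as follows. `merge_outputs` is a hypothetical helper, the small-stuff filter (the 4096-pixel threshold) is omitted for brevity, and the instance masks are assumed to have already been made non-overlapping as described in Section 2.2.1.

```python
import numpy as np

def merge_outputs(semantic_probs, stuff_ids, instance_masks, instance_classes):
    """semantic_probs: (C, H, W) class probabilities from the semantic branch.
    stuff_ids: indices of the stuff classes. instance_masks: non-overlapping
    boolean (H, W) masks, with matching things class ids in instance_classes."""
    # 1) drop all things classes; keep the most probable stuff class per pixel
    stuff_probs = semantic_probs[stuff_ids]
    panoptic_class = np.asarray(stuff_ids)[stuff_probs.argmax(axis=0)]
    panoptic_instance = np.zeros(panoptic_class.shape, dtype=int)
    # 2) paste the instance predictions on top: things win over stuff
    for inst_id, (mask, cls) in enumerate(zip(instance_masks, instance_classes), start=1):
        panoptic_class[mask] = cls
        panoptic_instance[mask] = inst_id
    return panoptic_class, panoptic_instance

# 2x2 toy image, stuff classes {0, 1}, one detected object of things class 2.
sem = np.array([[[0.6, 0.2], [0.5, 0.1]],   # class 0 (stuff)
                [[0.3, 0.7], [0.4, 0.8]],   # class 1 (stuff)
                [[0.1, 0.1], [0.1, 0.1]]])  # class 2 (things, ignored in step 1)
mask = np.array([[False, False], [True, True]])
cls_map, inst_map = merge_outputs(sem, [0, 1], [mask], [2])
```

Note how the bottom row takes the things class from the instance branch even though the semantic branch preferred a stuff class there, reflecting the prioritization of the instance segmentation output.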
|Dataset|PQ|SQ|RQ|PQ (things)|SQ (things)|RQ (things)|PQ (stuff)|SQ (stuff)|RQ (stuff)|
|Mapillary Vistas val|17.6|55.9|23.5|10.0|47.6|14.1|27.5|66.9|35.8|
During training, the entire network is trained end-to-end, using a stochastic gradient descent optimizer with a momentum of 0.9. The initial learning rate is 0.075, and the learning rate is decreased twice by a factor of 2. The time steps at which this happens depend on the dataset that is trained on. The loss and regularization weights are provided in Table 2. The network is initialized using weights pre-trained on the ImageNet dataset, and it is always trained on a single Titan Xp GPU. All presented results are from a single model. For both datasets we use slightly different hyperparameters.
Mapillary Vistas. For Mapillary Vistas, the feature extractor has input dimensions of 512 x 1024 pixels, and we use a batch size of 2. All input images are resized to these dimensions before being fed to the network. The network is trained for 19 epochs, and the learning rate is decreased after 7 and 13 epochs. This dataset consists of three splits: training, validation and testing. The network is trained on the training set and the hyperparameters are tuned by evaluating on the validation set. In this report, the performance on the validation set is presented. The performance on the test set will be known when the results of the Mapillary Vistas Panoptic Segmentation Challenge are published.
COCO. Because the COCO images are much smaller than the Mapillary Vistas images, the input dimensions of the feature extractor are decreased to 400 x 400 pixels. Again, all input images are resized to these dimensions. In this case, a batch size of 5 is used. Because the amount of training images is much larger for COCO, the network is trained for 8 epochs, and the learning rate is decreased after 4 and 7 epochs. This dataset consists of four splits: training, validation, test-dev and test-challenge. The network is trained on the training set and the hyperparameters are tuned by evaluating on the validation set. In this report, the performance on both the validation and the test-dev set is presented. The performance on the test-challenge set will be known when the results of the COCO Panoptic Segmentation Challenge are published.
The results on the aforementioned datasets have been submitted to the COCO and Mapillary Joint Recognition Challenge 2018. At the time of publication, the results on the challenge test sets for both datasets have not yet been announced.
To enable proper evaluation of panoptic segmentation methods, a new metric is introduced as the main evaluation criterion for the challenge, called Panoptic Quality (PQ) . This metric is designed to assess both the segmentation and recognition performance of the different methods, and it can be divided into the Segmentation Quality (SQ) and the Recognition Quality (RQ). The best overall results on the two datasets are shown in Table 1. Examples of model outputs can be seen in Figures 1 and 3. Especially in Figure 1, it can clearly be seen that the predictions by both methods are combined to generate predictions for every pixel in the input image, while differentiating between different things.
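For reference, PQ as defined by Kirillov et al. matches predicted and ground-truth segments (a match requires an IoU greater than 0.5) and factors into SQ and RQ:

```latex
\mathrm{PQ}
  = \frac{\sum_{(p,g)\in\mathit{TP}} \mathrm{IoU}(p,g)}
         {|\mathit{TP}| + \tfrac{1}{2}|\mathit{FP}| + \tfrac{1}{2}|\mathit{FN}|}
  = \underbrace{\frac{\sum_{(p,g)\in\mathit{TP}} \mathrm{IoU}(p,g)}{|\mathit{TP}|}}_{\text{SQ}}
    \times
    \underbrace{\frac{|\mathit{TP}|}{|\mathit{TP}| + \tfrac{1}{2}|\mathit{FP}| + \tfrac{1}{2}|\mathit{FN}|}}_{\text{RQ}}
```

Here TP, FP and FN are the matched, unmatched predicted and unmatched ground-truth segments, respectively; the metric is computed per class and then averaged.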
4.1 Joint training vs. independent training
The major difference between our method and the baseline method proposed in  is the fact that we jointly learn the semantic segmentation and instance segmentation branches, instead of using two independently trained models. To evaluate the effectiveness of this joint approach, we compare joint training of our network with independent training of the instance segmentation and semantic segmentation branches of our network. We use the same hyperparameters for all training runs, we train on Mapillary Vistas for 17 epochs, and we evaluate on the Mapillary Vistas validation set. To generate a panoptic output, we merge the results from the independently trained networks using the same heuristics used for the joint approach.
As evaluation criteria, we use not only the PQ, but also metrics to assess the segmentation and instance segmentation performance. For semantic segmentation, we use the commonly used mean Intersection over Union (mIoU), and for instance segmentation we use the mean Average Precision at an IoU threshold of 0.5 (mAP0.5). The results are provided in Table 3.
|Training setup|mIoU|mAP0.5|PQ|
|Semantic segmentation only|33.6|-|15.0|
|Instance segmentation only|-|6.5|15.0|
From the results it is clear that the jointly trained network outperforms the independently trained networks on all evaluated metrics. It should be noted that all experiments are conducted with the same learning rate and regularization weight, which may be better suited to the joint task than to the independent tasks.
4.2 Detecting small objects
From the results in Table 1, it is clear that there is a large gap in PQ between things and stuff classes on the Mapillary Vistas dataset. When looking at the network output qualitatively, it appears that the instance segmentation branch of the network is not able to detect small and oddly-shaped objects very well.
Since we suspect the cause of this problem to be a sub-optimally performing RPN, we evaluate the performance of the RPN on the two different datasets. We do this by assessing the mean recall, which is the mean of the recall values of all images in a given image set. Recall is defined as

R = TP / (TP + FN),

where TP is the number of true positives and FN is the number of false negatives. We define a true positive as a ground-truth object bounding box that has an IoU of at least 0.5 with a region proposal generated by the RPN. A false negative is a ground-truth object bounding box that does not meet this requirement. Essentially, the recall is the fraction of object bounding boxes in the ground truth that is covered by the region proposals generated by the RPN.
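Under these definitions, the per-image recall is a simple ratio; the snippet below is a trivial sketch, assuming the matching of proposals to ground-truth boxes at IoU >= 0.5 has already been done:

```python
def recall(num_true_positives, num_false_negatives):
    """Fraction of ground-truth boxes covered by at least one region proposal."""
    total = num_true_positives + num_false_negatives
    return num_true_positives / total if total else 0.0

# e.g. 36 of 100 ground-truth boxes matched by a proposal at IoU >= 0.5:
per_image = recall(36, 64)
```

The mean recall reported below is then the average of this value over all images in the evaluation set.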
|Dataset|RPN mean recall|
|Mapillary Vistas val|0.363|
The RPN mean recall for the two different datasets is given in Table 4. From this, it becomes clear that the mean recall for the Mapillary Vistas dataset is much lower than for COCO. On average, only 36.3% of the objects are covered by the region proposals. As a result, it is nearly impossible for the remaining part of the instance segmentation branch to detect the majority of objects in an image. This indicates that the RPN is currently a bottleneck in our approach.
We presented a method that is able to generate panoptic segmentation predictions by merging outputs from a Joint Semantic and Instance Segmentation Network. As is clear from Section 4.1, this joint approach has the potential to outperform independently trained networks.
Although this method works, the performance is worse than the baseline presented in . For this report, we have only tested the architecture with one feature extractor, instance segmentation method and semantic segmentation method. There is no reason, however, why this architecture would not work with other individual methods. As a consequence, the performance of the method depends greatly on the performance and complexity of the individual sub-methods that are used, and on the resources required to apply them. The hyperparameters that we used for training could also be sub-optimal. After submitting the results to the challenge, we were able to achieve better results for PQ, but the performance is still not as desired.
In future work, we want to further explore the potential benefits of joint learning for panoptic segmentation.
-  A. Arnab and P. H. S. Torr. Pixelwise Instance Segmentation with a Dynamically Instantiated Network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 879–888, July 2017.
-  L. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. arXiv preprint arXiv:1802.02611, Feb. 2018.
-  J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, June 2009.
-  D. A. Forsyth, J. Malik, M. M. Fleck, H. Greenspan, T. Leung, S. Belongie, C. Carson, and C. Bregler. Finding pictures of objects in large collections of images. In J. Ponce, A. Zisserman, and M. Hebert, editors, Object Representation in Computer Vision II, pages 335–360, Berlin, Heidelberg, 1996. Springer Berlin Heidelberg.
-  R. Girshick. Fast R-CNN. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 1440–1448, Dec. 2015.
-  K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2980–2988, Oct. 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, June 2016.
-  A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollár. Panoptic Segmentation. arXiv preprint arXiv:1801.00868, Jan. 2018.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common Objects in Context. In D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, editors, Computer Vision – ECCV 2014, pages 740–755, Cham, 2014. Springer International Publishing.
-  S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia. Path Aggregation Network for Instance Segmentation. arXiv preprint arXiv:1803.01534, Mar. 2018.
-  P. Meletis and G. Dubbelman. Training of Convolutional Networks on Multiple Heterogeneous Datasets for Street Scene Semantic Segmentation. arXiv preprint arXiv:1803.05675, Mar. 2018.
-  G. Neuhold, T. Ollmann, S. R. Bulò, and P. Kontschieder. The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 5000–5009, Oct. 2017.
-  J. Uhrig, M. Cordts, U. Franke, and T. Brox. Pixel-Level Encoding and Depth Layering for Instance-Level Semantic Labeling. arXiv preprint arXiv:1604.05096, Apr. 2016.
-  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid Scene Parsing Network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6230–6239, July 2017.