Implementation of Panoptic Segformer, in Pytorch
We present Panoptic SegFormer, a general framework for end-to-end panoptic segmentation with Transformers. The proposed method extends Deformable DETR with a unified mask prediction workflow for both things and stuff, making the panoptic segmentation pipeline concise and effective. With a ResNet-50 backbone, our method achieves 50.0% PQ on the COCO test-dev split, surpassing previous state-of-the-art methods by significant margins without bells and whistles. Using a more powerful PVTv2-B5 backbone, Panoptic SegFormer achieves new records of 54.1% PQ and 54.4% PQ on the COCO val and test-dev splits with single-scale input.
Semantic segmentation and instance segmentation are two important and correlated vision problems. Their underlying connections recently motivated panoptic segmentation as a unification of both tasks. In panoptic segmentation, image contents are divided into two types: things and stuff. Things are countable instances (e.g., person, car, and bicycle), and each instance has a unique id to distinguish it from the other instances. Stuff refers to amorphous and uncountable regions (e.g., sky, grassland, and snow) and has no instance id.
The differences between things and stuff also lead to different ways of handling their predictions. A number of works simply decompose panoptic segmentation into an instance segmentation task for things and a semantic segmentation task for stuff [16, 15, 26, 39, 25]. However, such a separated strategy tends to increase model complexity and introduce undesired artifacts. Several works further consider bottom-up (proposal-free) instance segmentation approaches but still maintain similar separate strategies [41, 12, 2, 7, 33]. Some recent methods try to simplify the panoptic segmentation pipeline by processing things and stuff within a unified framework. For example, several works [30, 38, 19, 42] achieve this with fully convolutional frameworks. These frameworks share a similar “top-down meets bottom-up” two-branch design, where a kernel branch encodes object/region information and is dynamically convolved with an image-level feature branch to generate the object/region masks.
[Figure 1: PQ (%) vs. #Param (M) for Panoptic SegFormer and prior methods on COCO.]
Recently, Vision Transformers have been widely introduced to instance localization and recognition tasks [3, 43, 36, 23]. Vision Transformers generally divide an input image into crops and encode them as tokens. For object detection problems, both DETR and Deformable DETR represent the object proposals with a set of learnable queries, which are used to predict bounding boxes and are dynamically matched with object ground truths via a bipartite graph matching loss. The role of query features is similar to RoI features in conventional detection architectures, thus inspiring several methods [3, 8, 32] with two-branch designs similar to Panoptic FCN.
In this work, we propose Panoptic SegFormer, a concise and effective framework for end-to-end panoptic segmentation with Vision Transformers. Specifically, Panoptic SegFormer contains three key designs:
A query set to represent things and stuff uniformly, where the stuff classes are considered as a special type of things, each with a single instance id;
A location decoder which focuses on leveraging the location information of things and stuff to improve the segmentation quality;
A mask-wise post-processing strategy to equally merge the segmentation results of things and stuff.
Benefiting from these three designs, Panoptic SegFormer achieves state-of-the-art panoptic segmentation performance with high efficiency.
To verify our framework, we conduct extensive experiments on the COCO dataset. As shown in Figure 1, our smallest model, Panoptic SegFormer (PVTv2-B0), achieves 49.0% PQ on the COCO val2017 split with only 22.2M parameters, surpassing prior arts such as MaskFormer and Max-Deeplab, whose parameter sizes are twice and three times larger. Panoptic SegFormer (PVTv2-B5) further achieves a state-of-the-art PQ of 54.1%, which is 3% PQ higher than Max-Deeplab (51.1% PQ) and 1.4% PQ higher than MaskFormer (52.7% PQ), respectively, while our method still enjoys significantly fewer parameters. It is worth mentioning that Panoptic SegFormer achieves 54.4% PQ on COCO test-dev with single-scale input, outperforming competition-level methods including Innovation, which uses plenty of tricks such as model ensembles and multi-scale testing. Currently, Panoptic SegFormer (PVTv2-B5) holds the 1st place on the COCO Panoptic Segmentation leaderboard (https://competitions.codalab.org/competitions/19507#results).
The panoptic segmentation literature mainly treats this problem as a joint task of instance segmentation and semantic segmentation, where things and stuff are handled separately. Kirillov et al. proposed the concept and benchmark of panoptic segmentation together with a baseline that directly combines the outputs of individual instance segmentation and semantic segmentation models. Since then, models such as Panoptic FPN, UPSNet, and AUNet have improved the accuracy and reduced the computational overhead by combining instance segmentation and semantic segmentation into a single model. However, these methods still approximate the target task by solving surrogate sub-tasks, therefore introducing undesired model complexity and sub-optimal performance.
Recently, efforts have been made toward unified frameworks for panoptic segmentation. Li et al. proposed Panoptic FCN, where the panoptic segmentation pipeline is simplified with a “top-down meets bottom-up” two-branch design similar to CondInst. In their work, things and stuff are jointly modeled by an object/region-level kernel branch and an image-level feature branch. Several recent works represent things and stuff as queries and perform end-to-end panoptic segmentation via transformers. DETR predicts the bounding boxes of things and stuff and combines the attention maps of the transformer decoder with the feature maps of ResNet to perform panoptic segmentation. Max-Deeplab directly predicts object categories and masks through a dual-path transformer, regardless of whether the category is a thing or stuff. On top of DETR, MaskFormer uses an additional pixel decoder to refine high-resolution spatial features and generates the masks by multiplying queries with features from the pixel decoder. Due to the computational complexity of multi-head attention, both DETR and MaskFormer use feature maps with limited spatial resolutions for panoptic segmentation, which hurts the performance and requires combining additional high-resolution feature maps in the final mask prediction. These methods have provided unified frameworks for predicting things and stuff in panoptic segmentation. However, there is still a noticeable performance gap between these methods and the top leaderboard methods with separated prediction strategies [4, 34].
The recent popularity of end-to-end object detection frameworks has inspired many related works. DETR is arguably the most representative end-to-end object detector among these methods. DETR models the object detection task as a dictionary lookup problem with learnable queries and employs an encoder-decoder transformer to predict bounding boxes without extra post-processing. DETR greatly simplifies the conventional detection framework and removes many hand-crafted components such as NMS [27, 21] and anchors. Zhu et al. proposed Deformable DETR, which further reduces the memory and computational cost of DETR through deformable attention layers. Despite these advantages, the attention maps of the deformable attention layers are sparse and cannot be directly used for dense prediction in panoptic segmentation.
Mask R-CNN has been one of the most representative two-stage instance segmentation methods, first extracting RoIs and then predicting the final results conditioned on these RoIs. One-stage methods such as CondInst and SOLOv2 further simplify this pipeline by employing dynamic filters (conditional convolutions) with a kernel branch. Recently, SOLQ and QueryInst perform instance segmentation in an end-to-end paradigm without involving NMS. QueryInst is based on the end-to-end object detector Sparse R-CNN and predicts masks through corresponding bounding boxes and queries. By encoding masks into vectors, SOLQ predicts mask vectors in a regressive manner and outputs the final masks by decoding these vectors. The proposed Panoptic SegFormer can also handle end-to-end instance segmentation by predicting only the thing classes.
As illustrated in Figure 2, Panoptic SegFormer consists of three key modules: transformer encoder, location decoder, and mask decoder, where (1) the transformer encoder is applied to refine the multi-scale feature maps given by the backbone, (2) the location decoder is designed to capture objects' location clues, and (3) the mask decoder is used for the final classification and segmentation.
During the forward phase, we first feed the input image to the backbone network and obtain the feature maps C3, C4, and C5 from the last three stages, whose resolutions are 1/8, 1/16, and 1/32 of the input image, respectively. We then project the three feature maps to 256 channels with a fully connected (FC) layer and flatten them into feature tokens T3, T4, and T5. Defining L_i as the token length of T_i, the shapes of T3, T4, and T5 are L3×256, L4×256, and L5×256, respectively. Next, using the concatenated feature tokens as input, the transformer encoder outputs the refined feature tokens F of size (L3+L4+L5)×256. After that, we use N randomly initialized queries to uniformly describe things and stuff, and we embed the location clues (i.e., center location and scale, the size of the mask) into these queries through the location decoder. Finally, we adopt a mask-wise strategy to merge the predicted masks into the panoptic segmentation result, which will be introduced in detail in Section 3.6.
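To make this concrete, below is a minimal PyTorch sketch of the tokenization step, assuming a 512×512 input and ResNet-style channel widths (512/1024/2048); since the FC projection acts per spatial position, a 1×1 convolution is an equivalent implementation. All tensor names are illustrative.

```python
import torch
import torch.nn as nn

# Illustrative backbone outputs for a 512x512 input: C3, C4, C5 at
# 1/8, 1/16, and 1/32 resolution with ResNet-style channel widths.
c3 = torch.randn(1, 512, 64, 64)
c4 = torch.randn(1, 1024, 32, 32)
c5 = torch.randn(1, 2048, 16, 16)

# Project each scale to 256 channels. The per-pixel FC layer described
# above is equivalent to a 1x1 convolution.
proj = [nn.Conv2d(c, 256, kernel_size=1) for c in (512, 1024, 2048)]

tokens = []
for p, feat in zip(proj, (c3, c4, c5)):
    x = p(feat)                                   # (B, 256, H_i, W_i)
    tokens.append(x.flatten(2).transpose(1, 2))   # (B, L_i, 256), L_i = H_i * W_i

# Concatenated feature tokens fed to the transformer encoder.
feature_tokens = torch.cat(tokens, dim=1)         # (B, L3 + L4 + L5, 256)
print(feature_tokens.shape)                       # torch.Size([1, 5376, 256])
```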
High-resolution and multi-scale feature maps are important for the segmentation task [15, 38, 19]. Due to the high computational cost of the multi-head attention layer, previous transformer-based methods [3, 8] can only process a low-resolution feature map (e.g., the 1/32-scale C5 of ResNet) in their encoders, which limits the segmentation performance. In contrast, our encoder refines the high-resolution multi-scale feature maps at an affordable cost by building on deformable attention.
Location information plays an important role in distinguishing things with different instance ids in the panoptic segmentation task [37, 30, 38]. Inspired by this, we design a location decoder to introduce the location information (i.e., center location and scale) of things and stuff into the learnable queries.
Specifically, given the randomly initialized queries and the refined feature tokens F generated by the transformer encoder, the location decoder outputs location-aware queries. In the training phase, we apply an auxiliary MLP head on top of the location-aware queries to predict the center locations and scales of the target objects, and we supervise the prediction with a location loss L_loc. Note that the MLP head is an auxiliary branch that can be discarded during the inference phase. Since the location decoder does not need to predict the segmentation mask, we implement it with computation- and memory-efficient deformable attention.
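A sketch of such an auxiliary head is shown below, under the assumption that the location-aware queries have shape (batch, N, 256) and that centers and scales are predicted jointly as a normalized 4-vector; the class and its name are ours, not from the released code.

```python
import torch
import torch.nn as nn

class AuxLocationHead(nn.Module):
    """Hypothetical auxiliary head: predicts a normalized center (cx, cy)
    and scale (w, h) per location-aware query. It is supervised by the
    location loss during training and discarded at inference."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, 4),  # (cx, cy, w, h), squashed to [0, 1]
        )

    def forward(self, queries: torch.Tensor) -> torch.Tensor:
        return self.mlp(queries).sigmoid()

location_aware = torch.randn(1, 400, 256)            # N = 400 queries
centers_scales = AuxLocationHead()(location_aware)   # (1, 400, 4)
```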
As shown in Figure 3, the mask decoder predicts the object category and mask for the given queries. The queries of the mask decoder are the location-aware queries from the location decoder, and the keys and values are the refined feature tokens F from the transformer encoder. We first pass the queries through 4 decoder layers, and then fetch the attention maps A ∈ R^{N×h×L} and the refined queries from the last decoder layer, where N is the query number, h is the head number of the multi-head attention layer, and L is the length of the feature tokens F.
At the same time, to predict the object masks, we first split and reshape the attention maps A into attention maps A1, A2, and A3, which have the same spatial resolutions as C3, C4, and C5. This process can be formulated as:

A1, A2, A3 = Split(A),   (1)

where Split(·) denotes the split and reshaping operation. After that, we upsample these attention maps to the resolution of A1 (i.e., the 1/8 scale) and concatenate them along the channel dimension, as illustrated in Eqn. 2:

A_fused = Concat(A1, Up×2(A2), Up×4(A3)),   (2)

where Up×2(·) and Up×4(·) denote the 2× and 4× bilinear interpolation operations, respectively, and Concat(·) is the concatenation operation. Finally, based on the fused attention maps A_fused, we predict the binary masks through a 1×1 convolution.
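The following PyTorch sketch illustrates Eqns. 1 and 2 plus the final projection, assuming a 1/8-scale resolution of 64×64 and h = 8 attention heads; all shapes and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, N, h = 1, 400, 8                    # batch, queries, attention heads
H, W = 64, 64                          # resolution of the 1/8-scale map C3
L = H * W + (H // 2) * (W // 2) + (H // 4) * (W // 4)
attn = torch.randn(B, N, h, L)         # attention maps from the last layer

# Eqn. 1: split along the token axis and reshape each chunk to its scale.
sizes = [(H, W), (H // 2, W // 2), (H // 4, W // 4)]
chunks = attn.split([s[0] * s[1] for s in sizes], dim=-1)
maps = [c.reshape(B * N, h, *s) for c, s in zip(chunks, sizes)]

# Eqn. 2: upsample the coarser maps to the 1/8 scale, fuse along channels.
fused = torch.cat([
    maps[0],
    F.interpolate(maps[1], scale_factor=2, mode="bilinear", align_corners=False),
    F.interpolate(maps[2], scale_factor=4, mode="bilinear", align_corners=False),
], dim=1)                              # (B*N, 3h, H, W)

# A 1x1 convolution turns the fused attention maps into one soft mask per query.
to_mask = nn.Conv2d(3 * h, 1, kernel_size=1)
masks = to_mask(fused).reshape(B, N, H, W).sigmoid()
```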
The ground-truth set is padded with ∅ (no object) so that its element number is the same as that of the prediction set. Specifically, we utilize the Hungarian algorithm to search for the permutation σ with the minimum matching cost, which is the sum of the classification loss L_cls and the segmentation loss L_seg.
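Below is a minimal sketch of this matching step, assuming the pairwise classification and segmentation costs have already been computed as (num_predictions × num_targets) matrices; SciPy's linear_sum_assignment implements the Hungarian algorithm.

```python
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(cls_cost: torch.Tensor, seg_cost: torch.Tensor):
    """Find the prediction-to-ground-truth permutation with the minimum
    total matching cost. Both inputs are assumed to be precomputed
    (num_predictions, num_targets) cost matrices."""
    cost = (cls_cost + seg_cost).detach().cpu().numpy()
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return pred_idx, gt_idx

# Hypothetical costs: 400 predictions vs. 7 (padded) ground truths.
pred_idx, gt_idx = hungarian_match(torch.rand(400, 7), torch.rand(400, 7))
```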
The overall loss function of Panoptic SegFormer can be written as:

L = λ_cls·L_cls + λ_seg·L_seg + λ_loc·L_loc,   (3)

where λ_cls, λ_seg, and λ_loc are the weights to balance the three losses. L_cls is the classification loss, implemented with focal loss, and L_seg is the segmentation loss, implemented with dice loss. L_loc is the location loss as formulated in Eqn. 4:

L_loc = Σ_i 1_{c_i ≠ ∅} ( ||x_i − x̂_{σ(i)}||₁ + ||s_i − ŝ_{σ(i)}||₁ ),   (4)
where ||·||₁ is the L1 loss, x̂_{σ(i)} and ŝ_{σ(i)} are the predicted center points and scales from the location decoder, and σ(i) denotes the index in the permutation σ. x_i and s_i indicate the center location and scale (the size of the mask normalized by the size of the image) of the target mask m_i, respectively. The indicator 1_{c_i ≠ ∅} means that only pairs containing a real ground truth are taken into account.
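The sketch below shows how the three terms could be combined, with a common dice-loss formulation and an L1 location term over matched pairs; the focal classification loss (e.g., torchvision.ops.sigmoid_focal_loss) is assumed to be computed elsewhere, and the mapping of the (1, 1, 5) weights to terms is our reading of the implementation details given later.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1.0):
    """Dice loss over flattened soft masks of shape (M, H*W)."""
    inter = 2 * (pred * target).sum(-1)
    union = pred.sum(-1) + target.sum(-1)
    return (1 - (inter + eps) / (union + eps)).mean()

def location_loss(pred_loc: torch.Tensor, gt_loc: torch.Tensor):
    """L1 loss between matched predicted and target (center, scale)
    4-vectors; unmatched (no-object) queries are excluded beforehand."""
    return F.l1_loss(pred_loc, gt_loc)

def total_loss(cls_loss, seg_loss, loc_loss, weights=(1.0, 1.0, 5.0)):
    # Eqn. 3 with (lambda_cls, lambda_seg, lambda_loc); the (1, 1, 5)
    # values follow the implementation details reported below.
    return weights[0] * cls_loss + weights[1] * seg_loss + weights[2] * loc_loss
```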
Panoptic segmentation requires each pixel to be assigned a category label (or void) and an instance id (the id is ignored for stuff). One commonly used post-processing method is the heuristic procedure, which adopts an NMS-like procedure to generate non-overlapping instance segments for things; we call it the mask-wise strategy here. The heuristic procedure also uses a pixel-wise argmax strategy for stuff and resolves overlaps between things and stuff in favor of the thing classes. Recent methods [8, 3, 32] directly use the pixel-wise strategy to uniformly merge the results of things and stuff. Although the pixel-wise argmax strategy is conceptually simple, we observe that it consistently produces noisy results due to abnormally extreme pixel values. To this end, we adopt the mask-wise strategy to generate non-overlapping results for stuff as well, based on the heuristic procedure, instead of taking the pixel-wise strategy. However, we treat things and stuff equally and resolve the overlaps among all masks by their confidence scores, instead of favoring things over stuff, which marks the difference between our approach and the heuristic procedure.
As illustrated in Algorithm 1, the mask-wise merging strategy takes the predicted categories, confidence scores, and segmentation masks as input, and outputs a semantic mask and an instance id mask, which assign a category label and an instance id to each pixel. Specifically, the two output masks are first initialized with zeros. Then, we sort the prediction results in descending order of confidence score and fill the sorted predicted masks into the two output masks. Note that results with confidence scores below a threshold are discarded, and overlapping regions with lower confidence scores are removed to generate non-overlapping panoptic results. In the end, the category labels and instance ids (things only) are assigned.
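Algorithm 1 itself is not reproduced here, but the following is a minimal sketch of the mask-wise merging strategy as described above; the function name, the void label 0, and the is_thing lookup are our assumptions.

```python
import torch

def maskwise_merge(categories, scores, masks, score_thr=0.3, is_thing=None):
    """Sketch of the mask-wise merging strategy.

    categories: (N,) predicted class ids (0 is reserved for void here)
    scores:     (N,) confidence scores
    masks:      (N, H, W) boolean masks
    Returns a semantic map and an instance-id map, both (H, W).
    """
    H, W = masks.shape[-2:]
    semantic = torch.zeros((H, W), dtype=torch.long)  # 0 = void
    instance = torch.zeros((H, W), dtype=torch.long)
    next_id = 1
    # Fill masks from the most to the least confident prediction.
    for i in torch.argsort(scores, descending=True).tolist():
        if scores[i] < score_thr:            # discard low-quality results
            break
        region = masks[i] & (semantic == 0)  # drop already-claimed pixels
        if region.sum() == 0:                # fully overlapped: skip
            continue
        semantic[region] = categories[i]
        if is_thing is not None and is_thing[int(categories[i])]:
            instance[region] = next_id       # unique id per thing
            next_id += 1
    return semantic, instance
```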
Table 1: Panoptic segmentation results on the COCO val set. († marks backbones pre-trained on ImageNet-22K.)

| Method | Backbone | Epochs | PQ | PQ_th | PQ_st | #Params | FLOPs |
|---|---|---|---|---|---|---|---|
| Panoptic FPN | R50-FPN [14, 20] | 36 | 41.5 | 48.5 | 31.1 | - | - |
| Panoptic FCN | R50-FPN | 36 | 43.6 | 49.3 | 35.0 | 37.0M | 244G |
| Panoptic SegFormer | PVTv2-B0 | 50 | 49.6 | 55.5 | 40.6 | 22.2M | 156G |
| Panoptic SegFormer | PVTv2-B2 | 50 | 52.6 | 58.7 | 43.3 | 41.6M | 219G |
| Panoptic SegFormer | PVTv2-B5 | 50 | 54.1 | 60.4 | 44.6 | 100.9M | 391G |
Table 2: Panoptic segmentation results on the COCO test-dev set.

| Method | Backbone | Epochs | PQ | PQ_th | PQ_st | #Params | FLOPs |
|---|---|---|---|---|---|---|---|
| Panoptic FPN | R101-FPN | 36 | 43.5 | 50.8 | 32.5 | - | - |
| Panoptic FCN | R101-FPN | 36 | 45.5 | 51.4 | 36.4 | 56.0M | 310G |
| Max-Deeplab-S | Max-S | 54 | 49.0 | 54.0 | 41.6 | 61.9M | 162G |
| Max-Deeplab-L | Max-L | 54 | 51.3 | 57.2 | 42.4 | 451.0M | 1846G |
| Panoptic SegFormer | PVTv2-B5 | 50 | 54.4 | 61.1 | 44.3 | 100.9M | 391G |
Table 3: Instance segmentation results on the COCO test-dev set.

| Method | Backbone | Epochs | AP | AP_S | AP_M | AP_L |
|---|---|---|---|---|---|---|
| Mask R-CNN | R50-FPN | 36 | 37.5 | 21.1 | 39.6 | 48.3 |
| SOLQ (300 queries) | R50 | 50 | 39.7 | 21.5 | 42.5 | 53.1 |
| QueryInst (300 queries) | R50-FPN | 36 | 40.6 | 23.4 | 42.5 | 52.8 |
| Panoptic SegFormer (300 queries) | R50 | 50 | 41.7 | 21.9 | 45.3 | 56.3 |
Table 4: Model complexity and inference efficiency.

| Method | Backbone | #Params | FLOPs | FPS | Memory |
|---|---|---|---|---|---|
| Deformable DETR* [6, 43] | R50 | 39.8M | 195G | 15 | 4567M |
[Figure 4: Qualitative results on COCO val. From left to right: original image, ours (50.4% PQ), DETR (45.1% PQ), MaskFormer (47.6% PQ), and ground truth.]
We evaluate Panoptic SegFormer on COCO, comparing it with several state-of-the-art methods. We provide the main results of panoptic segmentation along with some visualization results, and we also report instance segmentation results.
We perform experiments on the COCO 2017 dataset without external data. The COCO dataset contains 118K training images and 5K validation images, covering 80 thing classes and 53 stuff classes.
Our settings mainly follow DETR and Deformable DETR for simplicity. Specifically, we use a Channel Mapper to project the dimensions of the backbone's outputs to 256. The location decoder contains 6 deformable attention layers, and the mask decoder contains 4 vanilla cross-attention layers. The hyper-parameters of the deformable attention are the same as in Deformable DETR. We train our models for 50 epochs with a batch size of 1 per GPU and an initial learning rate that is decayed at the 40th epoch by a factor of 0.1 (the learning rate multiplier of the backbone is 0.1). We use a multi-scale training strategy with the maximum image side not exceeding 1333 and the minimum image side varying from 480 to 800. The number of queries is set to 400. λ_cls, λ_seg, and λ_loc in Equation 3 are set to 1, 1, and 5, respectively. We employ a threshold of 0.5 to obtain binary masks from soft masks, and a threshold of 0.3 to filter out low-quality results. The PVTv2 backbone is pre-trained on the ImageNet-1K set. All experiments are trained on one NVIDIA DGX node with 8 Tesla V100 GPUs; for our largest model, Panoptic SegFormer (PVTv2-B5), we use 4 DGX nodes to shorten the training time.
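For reference, the settings above can be collected into a single configuration sketch; the field names are illustrative and not from the released code.

```python
# Illustrative training configuration distilled from the settings above;
# the field names are ours, not from the released code.
config = dict(
    epochs=50,
    lr_drop_epoch=40,            # learning rate decayed by 0.1 here
    backbone_lr_mult=0.1,
    batch_size_per_gpu=1,
    num_queries=400,
    num_location_decoder_layers=6,
    num_mask_decoder_layers=4,
    loss_weights=dict(cls=1.0, seg=1.0, loc=5.0),
    mask_bin_threshold=0.5,      # soft mask -> binary mask
    score_threshold=0.3,         # filter low-quality results
    min_side_range=(480, 800),   # multi-scale training, shorter side
    max_side=1333,
)
```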
We conduct experiments on the COCO val and test-dev sets. In Tables 1 and 2, we report our main results and compare with other state-of-the-art methods. Panoptic SegFormer attains 50.0% PQ on COCO val with ResNet-50 as the backbone and single-scale input, surpassing the previous methods Panoptic FCN and DETR by over 6.4% PQ and 6.6% PQ, respectively. Besides the remarkable accuracy, the training of Panoptic SegFormer is efficient: with a 12-epoch training schedule and ResNet-50 as the backbone, Panoptic SegFormer achieves 46.4% PQ, on par with the 46.5% PQ of MaskFormer trained for 300 epochs. Enhanced by the powerful vision transformer backbone PVTv2-B5, Panoptic SegFormer attains a new record of 54.4% PQ on COCO test-dev without test-time augmentation, surpassing Max-Deeplab by over 3.1% PQ. Our method even surpasses the previous competition-level method Innovation by over 0.8% PQ (we only compare with methods and results that do not use external data). Figure 4 shows some visualization results on the COCO val set. The original images contain highly crowded or occluded scenes, and Panoptic SegFormer still predicts convincing results.
In Table 3, we report our instance segmentation results on the COCO test-dev set. For a fair comparison, we use 300 queries for instance segmentation, and only the thing classes are used. With ResNet-50 as the backbone and single-scale input, Panoptic SegFormer achieves 41.7 mask AP, surpassing the previous state-of-the-art methods HTC and QueryInst by 1.6 AP and 1.1 AP, respectively.
Different from previous methods, our masks are generated from multi-scale, multi-head attention maps. Figure 5 shows some examples of multi-head attention maps. Through the multi-head attention mechanism, different heads of one query learn their own attention preferences: some heads attend to foreground regions, some prefer boundaries, and others prefer background regions. This shows that each mask is generated by aggregating diverse, complementary information from the image.
We show model complexity and inference efficiency in Table 4; Panoptic SegFormer achieves state-of-the-art performance on panoptic segmentation with acceptable inference speed.
We propose a concise model named Panoptic SegFormer that unifies the processing workflow of things and stuff. Panoptic SegFormer surpasses previous methods by a large margin, demonstrating the superiority of treating things and stuff with the same recipe.