Panoptic SegFormer

09/08/2021
by Zhiqi Li, et al.

We present Panoptic SegFormer, a general framework for end-to-end panoptic segmentation with Transformers. The proposed method extends Deformable DETR with a unified mask prediction workflow for both things and stuff, making the panoptic segmentation pipeline concise and effective. With a ResNet-50 backbone, our method achieves 50.0% PQ on the COCO test-dev split, surpassing previous state-of-the-art methods by significant margins without bells and whistles. Using a more powerful PVTv2-B5 backbone, Panoptic SegFormer achieves a new record of 54.1% PQ and 54.4% PQ on the COCO val and test-dev splits with single-scale input.
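For reference, the PQ (panoptic quality) metric reported above matches predicted and ground-truth segments and computes PQ = (sum of IoU over matched pairs) / (|TP| + 0.5|FP| + 0.5|FN|). The sketch below illustrates, in generic PyTorch, the core idea of a unified mask prediction workflow: a single set of queries predicts a class label and a binary mask for both things and stuff. The module name, layer sizes, number of classes, plain TransformerDecoder, and dot-product mask head are illustrative assumptions for this sketch, not the paper's actual Deformable DETR-based architecture.

```python
# Minimal sketch (not the authors' code): one query set predicts classes and
# masks for things and stuff alike, in the spirit of a unified mask workflow.
import torch
import torch.nn as nn

class UnifiedMaskHead(nn.Module):
    def __init__(self, num_queries=100, embed_dim=256, num_classes=133):
        super().__init__()
        # Learnable queries cover both "thing" and "stuff" categories.
        self.queries = nn.Embedding(num_queries, embed_dim)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(embed_dim, nhead=8, batch_first=True),
            num_layers=3,
        )
        # +1 for the "no object" class, as in DETR-style set prediction.
        self.class_head = nn.Linear(embed_dim, num_classes + 1)
        self.mask_embed = nn.Linear(embed_dim, embed_dim)

    def forward(self, feats):
        # feats: (B, C, H, W) feature map from a backbone/encoder.
        b, c, h, w = feats.shape
        memory = feats.flatten(2).transpose(1, 2)            # (B, H*W, C)
        q = self.queries.weight.unsqueeze(0).expand(b, -1, -1)
        q = self.decoder(q, memory)                          # (B, N, C)
        class_logits = self.class_head(q)                    # (B, N, K+1)
        # Dot product between query embeddings and per-pixel features
        # yields one mask logit map per query.
        mask_logits = torch.einsum("bnc,bchw->bnhw", self.mask_embed(q), feats)
        return class_logits, mask_logits

if __name__ == "__main__":
    head = UnifiedMaskHead()
    cls, masks = head(torch.randn(1, 256, 32, 32))
    print(cls.shape, masks.shape)  # (1, 100, 134) and (1, 100, 32, 32)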


Related research

UPSNet: A Unified Panoptic Segmentation Network (01/12/2019)
In this paper, we propose a unified panoptic segmentation network (UPSNe...

Unifying Training and Inference for Panoptic Segmentation (01/14/2020)
We present an end-to-end network to bridge the gap between training and ...

Learning to Fuse Things and Stuff (12/04/2018)
We propose an end-to-end learning approach for panoptic segmentation, a ...

SimpleClick: Interactive Image Segmentation with Simple Vision Transformers (10/20/2022)
Click-based interactive image segmentation aims at extracting objects wi...

ReMaX: Relaxing for Better Training on Efficient Panoptic Segmentation (06/29/2023)
This paper presents a new mechanism to facilitate the training of mask t...

ResNeSt: Split-Attention Networks (04/19/2020)
While image classification models have recently continued to advance, mo...

A New Mask R-CNN Based Method for Improved Landslide Detection (10/04/2020)
This paper presents a novel method of landslide detection by exploiting ...
