Learning to Fuse Things and Stuff

12/04/2018
by Jie Li, et al.

We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict, in a single feed-forward pass, both things and stuff segmentations. We explicitly constrain these two output distributions through a global things-and-stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several panoptic segmentation benchmarks as well as on the individual semantic and instance segmentation tasks.
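The cross-task constraint lends itself to a short illustration: both heads implicitly predict the same global binary "things" mask, so their two views of it can be pulled together with a consistency loss. Below is a minimal PyTorch sketch of that idea. All identifiers (`tasc_loss`, `thing_class_ids`, `instance_masks`) are hypothetical, and the paper's exact fusion of instance masks and choice of distance may differ; this is a sketch of the concept, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def tasc_loss(semantic_logits, thing_class_ids, instance_masks):
    """Hypothetical sketch of a things-and-stuff consistency term.

    semantic_logits: (B, C, H, W) raw scores from the semantic head.
    thing_class_ids: indices of the semantic channels that correspond
                     to 'thing' classes.
    instance_masks:  (B, N, H, W) soft instance masks from the instance
                     head, assumed already pasted to full resolution.
    """
    # Soft 'things' mask from the semantic head: total probability
    # mass assigned to any thing class at each pixel.
    sem_probs = semantic_logits.softmax(dim=1)
    sem_things = sem_probs[:, thing_class_ids].sum(dim=1)      # (B, H, W)

    # Soft 'things' mask from the instance head: union of instance
    # masks, approximated by a clamped sum so it stays differentiable.
    inst_things = instance_masks.sum(dim=1).clamp(max=1.0)     # (B, H, W)

    # Penalize disagreement between the two views of the same mask.
    return F.mse_loss(sem_things, inst_things)
```

The clamped sum is one common differentiable stand-in for a mask union; any soft-union (e.g., a product-of-complements) would serve the same purpose of letting the consistency gradient flow back into both heads.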
