
Learning to Fuse Things and Stuff

by Jie Li, et al.

We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict in a single feed-forward pass both things and stuff segmentations. We explicitly constrain these two output distributions through a global things and stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several benchmarks for panoptic segmentation as well as on the individual semantic and instance segmentation tasks.
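The cross-task consistency idea described above can be sketched in a few lines: derive a binary "things" mask from each head and penalize disagreement between them. The class layout, function names, and the L2 penalty below are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
import numpy as np

# Hypothetical class layout: indices 0..NUM_STUFF-1 are "stuff" classes,
# the remaining indices are "thing" classes. Shapes and names here are
# illustrative, not the paper's exact interface.
NUM_STUFF = 3
NUM_THINGS = 2
H, W = 4, 4

def things_mask_from_semantic(sem_logits):
    """Binary things mask from the semantic head: 1 wherever the
    argmax class is a 'thing' class."""
    labels = sem_logits.argmax(axis=0)            # (H, W) label map
    return (labels >= NUM_STUFF).astype(np.float32)

def things_mask_from_instances(inst_masks):
    """Binary things mask from the instance head: union of all
    predicted (soft) instance masks, thresholded at 0.5."""
    if len(inst_masks) == 0:
        return np.zeros((H, W), dtype=np.float32)
    union = np.max(np.stack(inst_masks), axis=0)
    return (union > 0.5).astype(np.float32)

def consistency_loss(sem_logits, inst_masks):
    """L2 penalty on the disagreement between the two binary masks --
    one simple way to enforce things/stuff cross-task consistency."""
    m_sem = things_mask_from_semantic(sem_logits)
    m_inst = things_mask_from_instances(inst_masks)
    return float(np.mean((m_sem - m_inst) ** 2))
```

When the semantic head labels a pixel as a thing class but no instance mask covers it (or vice versa), the loss is positive; perfect agreement drives it to zero, coupling the two output distributions through the shared binary mask.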
