Open-world Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding

07/18/2022
by Quande Liu, et al.

To bridge the gap between supervised semantic segmentation and real-world applications that require a model to recognize arbitrary new concepts, recent zero-shot segmentation work has attracted considerable attention by exploring the relationships between unseen and seen object categories, yet it still requires large amounts of densely annotated data with diverse base classes. In this paper, we propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any dense-annotation effort, purely by exploiting the image-caption data that naturally exists on the Internet. Our method, Vision-language-driven Semantic Segmentation (ViL-Seg), employs an image encoder and a text encoder to generate visual and text embeddings for the image-caption data, with two core components that endow it with segmentation ability. First, the image encoder is jointly trained with a vision-based contrastive objective and a cross-modal contrastive objective, which encourage the visual embeddings to preserve both the fine-grained semantics and the high-level category information that are crucial for segmentation. Second, an online clustering head is devised over the image encoder; it dynamically segments the visual embeddings into distinct semantic groups, which can then be classified by comparison with the text embeddings of candidate class names, completing the segmentation pipeline. Experiments show that, without using any densely annotated data, our method can directly segment objects of arbitrary categories, outperforming zero-shot segmentation methods that require data labeling on three benchmark datasets.

Related research

08/09/2023
MixReorg: Cross-Modal Mixed Patch Reorganization is a Good Mask Learner for Open-World Semantic Segmentation
Recently, semantic segmentation models trained with image-level text sup...

11/13/2022
Visual Semantic Segmentation Based on Few/Zero-Shot Learning: An Overview
Visual semantic segmentation aims at separating a visual sample into div...

03/23/2023
Zero-guidance Segmentation Using Zero Segment Labels
CLIP has enabled new and exciting joint vision-language applications, on...

03/23/2023
Top-Down Visual Attention from Analysis by Synthesis
Current attention algorithms (e.g., self-attention) are stimulus-driven ...

05/25/2023
Interactive Segment Anything NeRF with Feature Imitation
This paper investigates the potential of enhancing Neural Radiance Field...

04/13/2023
[CLS] Token is All You Need for Zero-Shot Semantic Segmentation
In this paper, we propose an embarrassingly simple yet highly effective ...

09/17/2023
CLIPUNetr: Assisting Human-robot Interface for Uncalibrated Visual Servoing Control with CLIP-driven Referring Expression Segmentation
The classical human-robot interface in uncalibrated image-based visual s...
