Segmenting 3D Hybrid Scenes via Zero-Shot Learning

07/01/2021
by Bo Liu, et al.

This work tackles point cloud semantic segmentation for 3D hybrid scenes under the framework of zero-shot learning. By hybrid we mean that a scene contains both seen-class and unseen-class 3D objects, a more general and realistic setting in practice. To our knowledge, this problem has not been explored in the literature. To this end, we propose PFNet, a network that synthesizes point features for various object classes by leveraging the semantic features of both seen and unseen classes. PFNet employs a GAN architecture to synthesize point features, in which the semantic relationship between seen-class and unseen-class features is consolidated by a new semantic regularizer, and the synthesized features are used to train a classifier that predicts labels for points in the test 3D scene. We also introduce two benchmarks for algorithmic evaluation by re-organizing the public S3DIS and ScanNet datasets under six different data splits. Experimental results on the two benchmarks validate the proposed method, and we hope the benchmarks and methodology will encourage further research in this new direction.
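The zero-shot pipeline described in the abstract (synthesize point features for unseen classes from semantic embeddings, then train a classifier on the synthesized features) can be sketched schematically. The sketch below is illustrative only and is not the authors' PFNet: it replaces the trained GAN generator with a fixed random linear map, omits the semantic regularizer, and uses a nearest-centroid classifier; all shapes, names, and the four-class setup are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 4 classes with 300-d semantic embeddings (e.g. word vectors);
# classes 0-2 are "seen", class 3 is "unseen" (no real point features exist for it).
n_classes, sem_dim, feat_dim = 4, 300, 64
semantic = rng.normal(size=(n_classes, sem_dim))

# Stand-in "generator": a fixed linear map from (semantic embedding + noise)
# to point-feature space. In PFNet this role is played by a trained GAN generator.
W = rng.normal(size=(sem_dim, feat_dim)) / np.sqrt(sem_dim)

def synthesize(class_id, n_points, noise_scale=0.1):
    """Synthesize point features for one class from its semantic embedding."""
    z = rng.normal(scale=noise_scale, size=(n_points, sem_dim))
    return (semantic[class_id] + z) @ W

# Synthesize features for every class, including the unseen one.
feats = {c: synthesize(c, 200) for c in range(n_classes)}

# Train a trivial classifier (nearest class centroid) on the synthesized features.
centroids = np.stack([feats[c].mean(axis=0) for c in range(n_classes)])

def classify(points):
    d = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# "Test-time" points from the unseen class: fresh draws from the same process.
test_pts = synthesize(3, 100)
acc = (classify(test_pts) == 3).mean()
print(f"unseen-class accuracy: {acc:.2f}")
```

The key point the sketch captures is that the classifier never sees real features of the unseen class; it is trained entirely on features generated from that class's semantic embedding.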

Related research

02/27/2019: Zero-shot Learning of 3D Point Cloud Objects
Recent deep learning architectures can recognize instances of 3D point c...

07/20/2023: See More and Know More: Zero-shot Point Cloud Segmentation via Multi-modal Visual Data
Zero-shot point cloud segmentation aims to make deep models capable of r...

03/02/2016: Synthesized Classifiers for Zero-Shot Learning
Given semantic descriptions of object classes, zero-shot learning aims t...

09/29/2022: Prompt-guided Scene Generation for 3D Zero-Shot Learning
Zero-shot learning on 3D point cloud data is a related underexplored pro...

01/05/2022: Learning Semantic Ambiguities for Zero-Shot Learning
Zero-shot learning (ZSL) aims at recognizing classes for which no visual...

05/25/2021: GAN for Vision, KG for Relation: a Two-stage Deep Network for Zero-shot Action Recognition
Zero-shot action recognition can recognize samples of unseen classes tha...

08/14/2021: Exploiting a Joint Embedding Space for Generalized Zero-Shot Semantic Segmentation
We address the problem of generalized zero-shot semantic segmentation (G...
