GTNet: Graph Transformer Network for 3D Point Cloud Classification and Semantic Segmentation

05/24/2023
by Wei Zhou, et al.

Recently, graph-based and Transformer-based deep learning networks have demonstrated excellent performance on various point cloud tasks. Most existing graph methods are based on static graphs, which take a fixed input to establish graph relations. Moreover, many graph methods aggregate neighboring features by max or average pooling, so that either a single neighboring point determines the centroid's feature or all neighboring points influence it equally, ignoring the correlations and differences between points. Most Transformer-based methods extract point cloud features through global attention and lack feature learning on local neighborhoods. To address the problems of these two types of models, we propose a new feature extraction block named Graph Transformer and construct a 3D point cloud learning network called GTNet to learn point cloud features on both local and global patterns. Graph Transformer integrates the advantages of graph-based and Transformer-based methods and consists of a Local Transformer module and a Global Transformer module. The Local Transformer uses a dynamic graph to compute the weights of all neighboring points by intra-domain cross-attention with dynamically updated graph relations, so that every neighboring point can affect the centroid's features with a different weight; the Global Transformer enlarges the receptive field of the Local Transformer through global self-attention. In addition, to avoid vanishing gradients caused by increasing network depth, we apply residual connections to centroid features in GTNet; we also use the features of centroids and their neighbors to generate local geometric descriptors in the Local Transformer, strengthening the model's ability to learn local information. Finally, we apply GTNet to shape classification, part segmentation, and semantic segmentation tasks in this paper.
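The abstract describes the Local Transformer as centroid-to-neighbor cross-attention over a dynamically rebuilt k-NN graph, with a residual connection on the centroid features. Below is a minimal, illustrative PyTorch sketch of that idea only; it is not the authors' implementation, and all names (LocalTransformerSketch, knn_indices, k, etc.) are assumptions made for illustration.

```python
# Hedged sketch: dynamic k-NN graph + centroid-to-neighbor cross-attention,
# so every neighbor contributes to the centroid with its own learned weight.
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_indices(x, k):
    """Indices of the k nearest neighbors of every point. x: (B, N, C)."""
    dists = torch.cdist(x, x)                              # (B, N, N) pairwise distances
    return dists.topk(k, dim=-1, largest=False).indices    # (B, N, k)


def gather_neighbors(feats, idx):
    """Gather neighbor features. feats: (B, N, C), idx: (B, N, k) -> (B, N, k, C)."""
    B, N, C = feats.shape
    k = idx.shape[-1]
    idx_exp = idx.unsqueeze(-1).expand(B, N, k, C)
    return torch.gather(feats.unsqueeze(1).expand(B, N, N, C), 2, idx_exp)


class LocalTransformerSketch(nn.Module):
    """Intra-neighborhood cross-attention on a graph rebuilt from current features."""

    def __init__(self, dim, k=16):
        super().__init__()
        self.k = k
        self.q = nn.Linear(dim, dim)        # query from the centroid
        self.kv = nn.Linear(dim, 2 * dim)   # key/value from each neighbor
        self.out = nn.Linear(dim, dim)

    def forward(self, feats):
        # Dynamic graph: neighbors are recomputed from the current feature space.
        idx = knn_indices(feats, self.k)                    # (B, N, k)
        nbrs = gather_neighbors(feats, idx)                 # (B, N, k, C)

        q = self.q(feats).unsqueeze(2)                      # (B, N, 1, C)
        key, val = self.kv(nbrs).chunk(2, dim=-1)           # (B, N, k, C) each

        # Per-neighbor attention weight with respect to its centroid.
        attn = (q * key).sum(-1) / key.shape[-1] ** 0.5     # (B, N, k)
        attn = F.softmax(attn, dim=-1)

        agg = (attn.unsqueeze(-1) * val).sum(dim=2)         # (B, N, C)
        # Residual connection on the centroid features, as noted in the abstract.
        return feats + self.out(agg)


if __name__ == "__main__":
    pts = torch.randn(2, 1024, 64)                          # (batch, points, channels)
    print(LocalTransformerSketch(dim=64)(pts).shape)        # torch.Size([2, 1024, 64])
```

The Global Transformer described in the abstract would follow the same pattern but attend over all N points rather than the k nearest neighbors, which is what enlarges the receptive field.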


