3DPCT: 3D Point Cloud Transformer with Dual Self-attention

09/21/2022
by Dening Lu, et al.

Transformers have achieved remarkable results in image processing. Inspired by this success, the application of Transformers to 3D point cloud processing has drawn increasing attention. This paper presents a novel point cloud representation learning network, the 3D Point Cloud Transformer with Dual Self-attention (3DPCT), built on an encoder-decoder structure. Specifically, 3DPCT has a hierarchical encoder containing two local-global dual-attention modules for the classification task (three for the segmentation task), each consisting of a Local Feature Aggregation (LFA) block and a Global Feature Learning (GFL) block. The GFL block applies dual self-attention, combining point-wise and channel-wise self-attention to improve feature extraction. Moreover, to better exploit the local information extracted in LFA, a novel point-wise self-attention model, named Point-Patch Self-Attention (PPSA), is designed. Performance is evaluated on classification and segmentation datasets containing both synthetic and real-world data. Extensive experiments demonstrate that the proposed method achieves state-of-the-art results on both tasks.
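To make the dual self-attention idea concrete, below is a minimal PyTorch sketch of a GFL-style block that runs point-wise (N x N) and channel-wise (C x C) self-attention over the same point features and fuses the two outputs. The class name DualSelfAttention, the shared Q/K/V projections, the scaling factor, and the concatenation-based fusion are illustrative assumptions, not the paper's exact design.

# A minimal sketch of the dual self-attention idea, assuming shared Q/K/V
# projections and concatenation-based fusion (illustrative, not the paper's
# exact GFL block).
import torch
import torch.nn as nn


class DualSelfAttention(nn.Module):
    """Point-wise plus channel-wise self-attention over point cloud features
    of shape (B, N, C): B batches, N points, C channels."""

    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Linear(channels, channels, bias=False)
        self.k = nn.Linear(channels, channels, bias=False)
        self.v = nn.Linear(channels, channels, bias=False)
        self.fuse = nn.Linear(2 * channels, channels)
        self.scale = channels ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.q(x), self.k(x), self.v(x)

        # Point-wise branch: an N x N attention map relating every point to
        # every other point (global spatial context).
        attn_p = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        out_p = attn_p @ v                                     # (B, N, C)

        # Channel-wise branch: a C x C attention map relating feature
        # channels to one another (global semantic context).
        attn_c = torch.softmax(q.transpose(1, 2) @ k * self.scale, dim=-1)
        out_c = (attn_c @ v.transpose(1, 2)).transpose(1, 2)   # (B, N, C)

        # Fuse both branches and add a residual connection.
        return x + self.fuse(torch.cat([out_p, out_c], dim=-1))


if __name__ == "__main__":
    feats = torch.randn(2, 1024, 64)   # 2 clouds, 1024 points, 64-dim features
    print(DualSelfAttention(64)(feats).shape)  # torch.Size([2, 1024, 64])

The point-wise branch captures spatial relations among points, while the channel-wise branch captures dependencies among feature channels; fusing the two is what the abstract refers to as dual self-attention.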

Related research

04/27/2021 · Dual Transformer for Point Cloud Analysis
Following the tremendous success of transformer in natural language proc...

03/02/2022 · 3DCTN: 3D Convolution-Transformer Network for Point Cloud Classification
Although accurate and fast point cloud classification is a fundamental t...

04/12/2023 · Multi-scale Geometry-aware Transformer for 3D Point Cloud Classification
Self-attention modules have demonstrated remarkable capabilities in capt...

10/03/2022 · Dual-former: Hybrid Self-attention Transformer for Efficient Image Restoration
Recently, image restoration transformers have achieved comparable perfor...

03/01/2022 · Enhancing Local Feature Learning for 3D Point Cloud Processing using Unary-Pairwise Attention
We present a simple but effective attention named the unary-pairwise att...

05/13/2022 · Local Attention Graph-based Transformer for Multi-target Genetic Alteration Prediction
Classical multiple instance learning (MIL) methods are often based on th...

07/22/2020 · Cloud Transformers
We present a new versatile building block for deep point cloud processin...
