Multi-scale Geometry-aware Transformer for 3D Point Cloud Classification

04/12/2023
by Xian Wei, et al.

Self-attention modules have demonstrated remarkable capabilities in capturing long-range relationships and improving performance on point cloud tasks. However, point cloud objects are typically characterized by complex, disordered, non-Euclidean spatial structures at multiple scales, and their behavior is often dynamic and unpredictable. Current self-attention modules mostly rely on dot-product multiplication and dimension alignment among query, key, and value features, which cannot adequately capture the multi-scale non-Euclidean structure of point cloud objects. To address these problems, this paper proposes a plug-in self-attention module and its variants, the Multi-scale Geometry-aware Transformer (MGT). MGT processes point cloud data with multi-scale local and global geometric information in three stages. First, MGT divides the point cloud into patches at multiple scales. Second, a local feature extractor based on sphere mapping explores the geometry within each patch and generates a fixed-length representation for it. Third, these fixed-length representations are fed into a novel geodesic-based self-attention module that captures the global non-Euclidean geometry between patches. Finally, all modules are integrated into the MGT framework and trained end to end. Experimental results demonstrate that MGT substantially improves the self-attention mechanism's ability to capture multi-scale geometry and achieves strongly competitive performance on mainstream point cloud benchmarks.
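To make the three stages above concrete, the sketch below assembles them into a tiny end-to-end classifier in PyTorch. It is a minimal illustration of the idea rather than the paper's implementation: the names (MGTSketch, SpherePatchEncoder, GeodesicSelfAttention, knn_geodesic), all dimensions, the farthest-point-sampling/k-NN patching, the unit-sphere normalization standing in for the sphere mapping, and the k-NN-graph shortest path standing in for a true geodesic distance are all assumptions.

```python
import torch
import torch.nn as nn


def farthest_point_sample(xyz: torch.Tensor, m: int) -> torch.Tensor:
    """Pick m well-spread patch centers from an (N, 3) point cloud."""
    n = xyz.shape[0]
    idx = torch.zeros(m, dtype=torch.long)
    min_dist = torch.full((n,), float("inf"))
    far = int(torch.randint(n, (1,)))
    for i in range(m):
        idx[i] = far
        min_dist = torch.minimum(min_dist, ((xyz - xyz[far]) ** 2).sum(-1))
        far = int(min_dist.argmax())
    return idx


def knn_geodesic(centers: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Approximate geodesic distance between patch centers as shortest-path
    length on their k-NN graph (Floyd-Warshall; fine for few patches)."""
    d = torch.cdist(centers, centers)
    keep = d.topk(k + 1, largest=False).indices          # k neighbours + self
    g = torch.full_like(d, float("inf"))
    g.scatter_(1, keep, d.gather(1, keep))
    g = torch.minimum(g, g.t())                          # symmetrize the graph
    for j in range(centers.shape[0]):
        g = torch.minimum(g, g[:, j:j + 1] + g[j:j + 1, :])
    cap = g[torch.isfinite(g)].max()                     # disconnected pairs get
    return torch.where(torch.isfinite(g), g, cap)        # the largest finite value


class SpherePatchEncoder(nn.Module):
    """Map each centered patch onto the unit sphere, keep the radius as a
    feature, and max-pool a fixed-length token per patch."""

    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, patch: torch.Tensor) -> torch.Tensor:  # patch: (k, 3)
        r = patch.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        feats = self.mlp(torch.cat([patch / r, r], dim=-1))  # direction + radius
        return feats.max(dim=0).values                       # permutation-invariant


class GeodesicSelfAttention(nn.Module):
    """Single-head self-attention whose logits are penalized by geodesic
    distance, so attention between patches follows the object's surface."""

    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        self.lam = nn.Parameter(torch.tensor(1.0))  # learned bias strength

    def forward(self, tokens: torch.Tensor, geo: torch.Tensor) -> torch.Tensor:
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        logits = (q @ k.t()) * self.scale - self.lam * geo
        return self.proj(logits.softmax(dim=-1) @ v)


class MGTSketch(nn.Module):
    """Multi-scale patches -> sphere tokens -> geodesic attention -> logits."""

    def __init__(self, dim=64, num_classes=40, num_patches=32, scales=(16, 32)):
        super().__init__()
        self.scales, self.num_patches = scales, num_patches
        self.encoder = SpherePatchEncoder(dim)
        self.attn = GeodesicSelfAttention(dim)
        self.head = nn.Linear(dim * len(scales), num_classes)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:  # xyz: (N, 3)
        centers = xyz[farthest_point_sample(xyz, self.num_patches)]
        geo = knn_geodesic(centers)
        dist = torch.cdist(centers, xyz)                 # (patches, N)
        pooled = []
        for k in self.scales:                            # one patch size per scale
            nbr = dist.topk(k, largest=False).indices    # (patches, k)
            patches = xyz[nbr] - centers[:, None, :]     # center each patch
            tokens = torch.stack([self.encoder(p) for p in patches])
            pooled.append(self.attn(tokens, geo).max(dim=0).values)
        return self.head(torch.cat(pooled, dim=-1))


if __name__ == "__main__":
    cloud = torch.randn(1024, 3)          # one synthetic object
    print(MGTSketch()(cloud).shape)       # torch.Size([40])
```

The geodesic bias is the key design choice here: subtracting a learned multiple of the surface distance from the attention logits lets far-apart patches interact only when their features strongly agree, which is one plausible way to inject non-Euclidean structure into standard dot-product attention.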

Related research

09/21/2022 · 3DPCT: 3D Point Cloud Transformer with Dual Self-attention
Transformers have resulted in remarkable achievements in the field of im...

01/05/2022 · Synthesizing Tensor Transformations for Visual Self-attention
Self-attention shows outstanding competence in capturing long-range rela...

04/27/2021 · Cross-Level Cross-Scale Cross-Attention Network for Point Cloud Representation
Self-attention mechanism recently achieves impressive advancement in Nat...

01/06/2023 · Model-Agnostic Hierarchical Attention for 3D Object Detection
Transformers as versatile network architectures have recently seen great...

12/24/2020 · GraNet: Global Relation-aware Attentional Network for ALS Point Cloud Classification
In this work, we propose a novel neural network focusing on semantic lab...

08/30/2022 · MODNet: Multi-offset Point Cloud Denoising Network Customized for Multi-scale Patches
The intricacy of 3D surfaces often results cutting-edge point cloud deno...

03/08/2023 · Point Cloud Classification Using Content-based Transformer via Clustering in Feature Space
Recently, there have been some attempts of Transformer in 3D point cloud...
