DAPnet: A double self-attention convolutional network for segmentation of point clouds

04/18/2020
by Li Chen, et al.

LiDAR point clouds have a complex structure, and their 3D semantic labeling is a challenging task. Existing methods adopt data transformations without fully exploring contextual features, which makes them less efficient and less accurate. In this study, we propose a double self-attention convolutional network, called DAPnet, that combines geometric and contextual features to generate better segmentation results. The double self-attention module, consisting of a point attention module and a group attention module, originates from the self-attention mechanism and extracts contextual features of terrestrial objects with various shapes and scales. The contextual features extracted by these modules represent long-range dependencies in the data and help reduce the scale diversity of point cloud objects. The point attention module selectively enhances features by modeling the interdependencies of neighboring points, while the group attention module emphasizes interdependent groups of points. We evaluate our method on the ISPRS 3D Semantic Labeling Contest dataset and find that our model outperforms the benchmark (85.2%), with improvements over the powerline and car classes of 7.5%. Through an ablation comparison, we find that the point attention module contributes more to the overall improvement of the model than the group attention module, and that incorporating the double self-attention module yields an average 7% improvement in per-class accuracy. Moreover, adopting the double self-attention module requires a training time similar to that of the model without the attention module to reach convergence. The experimental results show the effectiveness and efficiency of DAPnet for the segmentation of LiDAR point clouds. The source code is available at https://github.com/RayleighChen/point-attention.
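The abstract describes the two attention branches only at a high level. Below is a minimal, illustrative PyTorch sketch of what a point attention module (self-attention across the points of a cloud) and a group attention module (self-attention across feature groups) can look like. This is not the authors' implementation; the class names, the (B, C, N) tensor layout, and the channel-reduction factor are assumptions made for clarity, and the actual code lives in the linked repository.

```python
# Illustrative sketch only (not the DAPnet source): a point attention module
# modeling pairwise interdependencies between points, and a group attention
# module re-weighting interdependent groups of features. Shapes are (B, C, N):
# batch, feature channels, number of points.
import torch
import torch.nn as nn


class PointAttention(nn.Module):
    """Self-attention over the N points of a (B, C, N) feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv1d(channels, channels // reduction, 1)
        self.key = nn.Conv1d(channels, channels // reduction, 1)
        self.value = nn.Conv1d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query(x).permute(0, 2, 1)              # (B, N, C//r)
        k = self.key(x)                                 # (B, C//r, N)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)   # (B, N, N) point-to-point weights
        v = self.value(x)                               # (B, C, N)
        out = torch.bmm(v, attn.permute(0, 2, 1))       # aggregate long-range context
        return self.gamma * out + x                     # residual connection


class GroupAttention(nn.Module):
    """Self-attention across feature groups (channels) of the same (B, C, N) map."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.softmax(torch.bmm(x, x.permute(0, 2, 1)), dim=-1)  # (B, C, C)
        out = torch.bmm(attn, x)                        # re-weighted feature groups
        return self.gamma * out + x


if __name__ == "__main__":
    feats = torch.randn(2, 64, 1024)                    # 2 clouds, 64-dim features, 1024 points
    fused = PointAttention(64)(feats) + GroupAttention()(feats)
    print(fused.shape)                                  # torch.Size([2, 64, 1024])
```

Both branches follow the familiar query/key/value self-attention pattern with a learned residual weight (gamma), and their outputs are fused by summation; the real model combines them with its convolutional backbone rather than on raw features as in this toy example.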


Related research

03/19/2022 · Voxel Set Transformer: A Set-to-Set Approach to 3D Object Detection from Point Clouds
Transformer has demonstrated promising performance in many 2D vision tas...

06/21/2022 · Position-prior Clustering-based Self-attention Module for Knee Cartilage Segmentation
The morphological changes in knee cartilage (especially femoral and tibi...

11/29/2020 · Deeper or Wider Networks of Point Clouds with Self-attention?
Prevalence of deeper networks driven by self-attention is in stark contr...

11/24/2020 · SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud based Place Recognition
We tackle the problem of place recognition from point cloud data and int...

03/08/2023 · DANet: Density Adaptive Convolutional Network with Interactive Attention for 3D Point Clouds
Local features and contextual dependencies are crucial for 3D point clou...

10/03/2017 · A Fully Convolutional Network for Semantic Labeling of 3D Point Clouds
When classifying point clouds, a large amount of time is devoted to the ...

06/18/2020 · SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks
We introduce the SE(3)-Transformer, a variant of the self-attention modu...
