Cross-Level Cross-Scale Cross-Attention Network for Point Cloud Representation

04/27/2021
by   Xian-Feng Han, et al.

The self-attention mechanism has recently achieved impressive advances in Natural Language Processing (NLP) and image processing, and its permutation-invariance property makes it ideally suited to point cloud processing. Inspired by this success, we propose an end-to-end architecture, dubbed Cross-Level Cross-Scale Cross-Attention Network (CLCSCANet), for point cloud representation learning. First, a point-wise feature pyramid module hierarchically extracts features at different scales or resolutions. Then, a cross-level cross-attention module is designed to model long-range inter-level and intra-level dependencies. Finally, we develop a cross-scale cross-attention module to capture interactions within and between scales for representation enhancement. Comprehensive experimental evaluation shows that our network achieves competitive performance against state-of-the-art approaches on the challenging tasks of 3D object classification and point cloud segmentation.
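To make the cross-attention idea concrete, the sketch below shows a minimal PyTorch cross-attention block in which queries come from one point-feature set (e.g., one level or scale) and keys/values from another, so each point aggregates context from the other set. This is a hypothetical illustration of the general mechanism, not the authors' released CLCSCANet code; all names and shapes are assumptions.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Generic cross-attention between two point-feature sets.

    Hypothetical sketch: queries are taken from feature set `x`,
    keys/values from feature set `y`, so every point in `x`
    attends to all points in `y`. Not the authors' implementation.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5  # standard dot-product attention scaling

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) query features; y: (B, M, C) key/value features
        q, k, v = self.to_q(x), self.to_k(y), self.to_v(y)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # (B, N, M)
        return x + attn @ v  # residual connection keeps per-point identity


# Toy usage: fuse features from two scales of a point cloud.
if __name__ == "__main__":
    coarse = torch.randn(2, 256, 64)   # 256 points, 64-dim features
    fine = torch.randn(2, 1024, 64)    # 1024 points, 64-dim features
    fused = CrossAttention(64)(coarse, fine)
    print(fused.shape)  # torch.Size([2, 256, 64])
```

In a cross-level or cross-scale setting, such a block would be applied between feature sets drawn from different pyramid levels or sampling resolutions; the exact fusion scheme used in the paper may differ.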
