LighTN: Light-weight Transformer Network for Performance-overhead Tradeoff in Point Cloud Downsampling

02/13/2022
by   Xu Wang, et al.

Compared with traditional task-irrelevant downsampling methods, task-oriented neural networks have shown improved performance in point cloud downsampling. Recently, the Transformer family of networks has demonstrated a more powerful learning capacity on visual tasks. However, Transformer-based architectures can consume excessive resources, which is unacceptable for the low-overhead task networks that downsampling is meant to serve. This paper proposes a novel light-weight Transformer network (LighTN) for task-oriented point cloud downsampling, as an end-to-end and plug-and-play solution. In LighTN, a single-head self-correlation module is presented to extract refined global contextual features; the three projection matrices are eliminated to save resource overhead, and the symmetric correlation matrix keeps the output permutation-invariant. We then design a novel downsampling loss function that guides LighTN to focus on critical point cloud regions with a more uniform distribution and prominent point coverage. Furthermore, we introduce a feed-forward network scaling mechanism that enhances the learnable capacity of LighTN according to an expand-reduce strategy. Extensive experiments on classification and registration tasks demonstrate that LighTN achieves state-of-the-art performance with limited resource overhead.
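As an informal illustration, the sketch below shows the two architectural ideas from the abstract in PyTorch: a single-head self-correlation block whose attention scores come straight from the input features (so the Q/K/V projection matrices disappear) and an expand-reduce feed-forward network. All names, dimensions, and the hard top-k sampling head are assumptions made for readability; the paper's actual layers, downsampling loss, and training details are not reproduced here.

```python
# Minimal sketch (assumption, not the authors' code) of a projection-free
# single-head self-correlation block and an expand-reduce feed-forward network.
import torch
import torch.nn as nn


class SelfCorrelation(nn.Module):
    """Single-head attention whose scores come directly from the features,
    A = softmax(X X^T / sqrt(d)), so the three projection matrices are removed."""

    def __init__(self, dim: int):
        super().__init__()
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, dim)
        attn = torch.softmax(x @ x.transpose(1, 2) * self.scale, dim=-1)
        return x + attn @ x  # residual connection (assumed)


class ExpandReduceFFN(nn.Module):
    """Feed-forward scaling: expand the channel width, then reduce it back."""

    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * expansion),
            nn.ReLU(inplace=True),
            nn.Linear(dim * expansion, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.net(x)


class LighTNSketch(nn.Module):
    """Scores each point from the refined global context and keeps the top-k,
    producing a smaller cloud for a downstream task network (hypothetical head;
    the paper's actual sampling and loss are more involved)."""

    def __init__(self, dim: int = 64, out_points: int = 512):
        super().__init__()
        self.embed = nn.Linear(3, dim)   # per-point embedding (assumed)
        self.attn = SelfCorrelation(dim)
        self.ffn = ExpandReduceFFN(dim)
        self.score = nn.Linear(dim, 1)   # per-point importance score
        self.out_points = out_points

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (batch, num_points, 3)
        feats = self.ffn(self.attn(self.embed(xyz)))
        scores = self.score(feats).squeeze(-1)               # (batch, num_points)
        idx = scores.topk(self.out_points, dim=-1).indices   # keep highest-scoring points
        return torch.gather(xyz, 1, idx.unsqueeze(-1).expand(-1, -1, 3))


if __name__ == "__main__":
    cloud = torch.rand(2, 1024, 3)
    print(LighTNSketch()(cloud).shape)  # torch.Size([2, 512, 3])
```

Because the attention map is computed directly from the input features, the block adds no projection parameters, which is where the resource savings over a standard Transformer layer come from in this sketch.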


Related research

Dual Transformer for Point Cloud Analysis (04/27/2021)
Following the tremendous success of transformer in natural language proc...

Point Transformer V2: Grouped Vector Attention and Partition-based Pooling (10/11/2022)
As a pioneering work exploring transformer architecture for 3D point clo...

PointCAT: Cross-Attention Transformer for point cloud (04/06/2023)
Transformer-based models have significantly advanced natural language pr...

Robust Point Cloud Processing through Positional Embedding (09/01/2023)
End-to-end trained per-point embeddings are an essential ingredient of a...

RegFormer: An Efficient Projection-Aware Transformer Network for Large-Scale Point Cloud Registration (03/22/2023)
Although point cloud registration has achieved remarkable advances in ob...

POEM: 1-bit Point-wise Operations based on Expectation-Maximization for Efficient Point Cloud Processing (11/26/2021)
Real-time point cloud processing is fundamental for lots of computer vis...
