Applying Plain Transformers to Real-World Point Clouds

02/28/2023
by Lanxiao Li, et al.

Due to their lack of inductive bias, transformer-based models usually require a large amount of training data. This problem is especially concerning in 3D vision, as 3D data are harder to acquire and annotate. To overcome it, previous works modify the transformer architecture to incorporate inductive biases, e.g., by applying local attention and down-sampling. Although they have achieved promising results, earlier works on transformers for point clouds have two issues. First, the power of plain transformers is still under-explored. Second, they focus on simple, small point clouds rather than complex real-world ones. This work revisits plain transformers for real-world point cloud understanding. We first take a closer look at some fundamental components of plain transformers, e.g., the patchifier and positional embedding, with respect to both efficiency and performance. To close the performance gap caused by the lack of inductive bias and annotated data, we investigate self-supervised pre-training with a masked autoencoder (MAE). Specifically, we propose drop patch, which prevents information leakage and significantly improves the effectiveness of MAE. Our models achieve SOTA results in semantic segmentation on the S3DIS dataset and object detection on the ScanNet dataset with lower computational costs. Our work provides a new baseline for future research on transformers for point clouds.
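The abstract does not spell out how drop patch interacts with MAE-style masking, but the general idea of MAE pre-training on point clouds is to partition patch tokens into a visible set (fed to the encoder) and a masked set (reconstruction targets). A drop-patch variant additionally removes a fraction of patches from the batch entirely, so their positions never reach the decoder. The sketch below illustrates such a three-way partition; the function name and the ratios are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def partition_patches(num_patches, mask_ratio=0.6, drop_ratio=0.2, seed=None):
    """Partition patch indices for MAE-style pre-training with patch dropping.

    Visible patches go to the encoder; masked patches are reconstruction
    targets; dropped patches are excluded from the batch entirely, so no
    information (e.g., via positional embeddings) can leak about them.
    Ratios here are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_patches)
    n_drop = int(num_patches * drop_ratio)
    n_mask = int(num_patches * mask_ratio)
    dropped = perm[:n_drop]
    masked = perm[n_drop:n_drop + n_mask]
    visible = perm[n_drop + n_mask:]
    return visible, masked, dropped

# Example: 100 patches, 60% masked, 20% dropped, 20% visible.
visible, masked, dropped = partition_patches(100, seed=0)
```

The three index sets are disjoint by construction, so a training loop can embed only the visible patches, reconstruct only the masked ones, and skip the dropped ones altogether.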


Related research

08/19/2021  PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers
Point clouds captured in real-world applications are often incomplete du...

09/30/2022  Transformers for Object Detection in Large Point Clouds
We present TransLPC, a novel detection model for large point clouds that...

09/15/2022  Can We Solve 3D Vision Tasks Starting from A 2D Vision Transformer?
Vision Transformers (ViTs) have proven to be effective, in solving 2D im...

05/04/2023  OctFormer: Octree-based Transformers for 3D Point Clouds
We propose octree-based transformers, named OctFormer, for 3D point clou...

01/24/2023  RangeViT: Towards Vision Transformers for 3D Semantic Segmentation in Autonomous Driving
Casting semantic segmentation of outdoor LiDAR point clouds as a 2D prob...

07/04/2023  DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation
Recent Diffusion Transformers (e.g., DiT) have demonstrated their powerf...

02/03/2023  Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers
We propose a new class of linear Transformers called FourierLearner-Tran...
