SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks

06/18/2020
by   Fabian B. Fuchs, et al.
We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the input data. A positive corollary of equivariance is increased weight-tying within the model, leading to fewer trainable parameters and thus decreased sample complexity (i.e. we need less training data). The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds with varying numbers of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy N-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.
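To make the equivariance property concrete, here is a minimal numpy sketch (a toy illustration, not the SE(3)-Transformer itself): a function f is SE(3)-equivariant if roto-translating its input transforms its output in the corresponding way. The example map below, sending each point to its displacement from the cloud's centroid, is one such equivariant function; translations cancel and rotations pass through.

```python
import numpy as np

def f(points):
    # Displacement of each point toward the cloud's centroid.
    # Translation-invariant and rotation-equivariant by construction.
    return points.mean(axis=0, keepdims=True) - points

def random_rotation(rng):
    # QR decomposition of a random Gaussian matrix gives an orthogonal
    # matrix; flip a column if needed so det = +1 (a proper rotation).
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))          # 5 points in 3D
R = random_rotation(rng)
t = rng.standard_normal(3)

# Equivariance check: f(x R^T + t) == f(x) R^T
lhs = f(x @ R.T + t)
rhs = f(x) @ R.T
assert np.allclose(lhs, rhs)
```

The SE(3)-Transformer guarantees this same commutation property for its learned attention layers, so a rotated input provably yields a correspondingly rotated prediction rather than an arbitrary one.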
