Rotation-Invariant Transformer for Point Cloud Matching

03/14/2023
by Hao Yu, et al.

Intrinsic rotation invariance lies at the core of matching point clouds with handcrafted descriptors, yet it is largely neglected by most recent deep matchers, which instead obtain rotation invariance extrinsically via data augmentation. However, the continuous SO(3) space can never be covered by a finite number of augmented rotations, which makes these models unstable when facing rarely seen rotations. To address this, we introduce RoITr, a Rotation-Invariant Transformer that copes with pose variations in the point cloud matching task. We contribute at both the local and the global level. At the local level, we introduce an attention mechanism embedded with Point Pair Feature (PPF)-based coordinates to describe pose-invariant geometry, upon which a novel attention-based encoder-decoder is constructed. At the global level, we propose a transformer with rotation-invariant cross-frame spatial awareness learned by the self-attention mechanism, which significantly improves feature distinctiveness and makes the model robust to low overlap. Experiments on both rigid and non-rigid public benchmarks show that RoITr outperforms all state-of-the-art models by a considerable margin in low-overlap scenarios. In particular, when rotations are enlarged on the challenging 3DLoMatch benchmark, RoITr surpasses existing methods by at least 13 and 5 percentage points in terms of Inlier Ratio and Registration Recall, respectively.
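The PPF-based coordinates mentioned in the abstract build on the classic Point Pair Feature: two oriented points are summarized by one distance and three angles that depend only on their relative geometry, so the feature is unchanged when the same rigid transform is applied to both points. The sketch below is a minimal NumPy illustration of that standard feature together with a toy invariance check; the function names and the check are ours, not the paper's implementation, and RoITr's actual coordinate construction is not reproduced here.

```python
import numpy as np

def angle(u, v):
    """Unsigned angle between two 3D vectors, robust to numerical noise."""
    u = u / (np.linalg.norm(u) + 1e-12)
    v = v / (np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """Classic PPF of two oriented points: (||d||, angle(n1,d), angle(n2,d), angle(n1,n2)).

    Every component depends only on relative geometry, so the feature is
    invariant to any rotation/translation applied to both points.
    """
    d = p2 - p1
    return np.array([np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2)])

# Toy check: the PPF is identical before and after a random rigid transform.
rng = np.random.default_rng(0)
p1, p2 = rng.normal(size=3), rng.normal(size=3)
n1 = rng.normal(size=3); n1 /= np.linalg.norm(n1)
n2 = rng.normal(size=3); n2 /= np.linalg.norm(n2)

# Random proper rotation via QR decomposition, plus a random translation.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]  # flip one axis so det(Q) = +1
t = rng.normal(size=3)

f_before = point_pair_feature(p1, n1, p2, n2)
f_after = point_pair_feature(Q @ p1 + t, Q @ n1, Q @ p2 + t, Q @ n2)
assert np.allclose(f_before, f_after)
```

Points are transformed with rotation and translation while normals are only rotated, which is exactly why the distance and angle terms survive the transform unchanged.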


Related research

09/27/2022  RIGA: Rotation-Invariant and Globally-Aware Descriptors for Point Cloud Registration
Successful point cloud registration relies on accurate correspondences e...

12/31/2022  Rethinking Rotation Invariance with Point Cloud Registration
Recent investigations on rotation invariance for 3D point clouds have be...

10/07/2020  Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud
We propose a local-to-global representation learning algorithm for 3D po...

08/30/2018  PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors
We present PPF-FoldNet for unsupervised learning of 3D local descriptors...

06/27/2019  Effective Rotation-invariant Point CNN with Spherical Harmonics kernels
We present a novel rotation invariant architecture operating directly on...

06/08/2022  VN-Transformer: Rotation-Equivariant Attention for Vector Neurons
Rotation equivariance is a desirable property in many practical applicat...

10/20/2019  Endowing Deep 3D Models with Rotation Invariance Based on Principal Component Analysis
In this paper, we propose a simple yet effective method to endow deep 3D...