Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need?

12/23/2021
by Ziwei Zhang, et al.

Geometric deep learning, i.e., designing neural networks to handle ubiquitous geometric data such as point clouds and graphs, has achieved great success in the last decade. One critical inductive bias is that the model should maintain invariance towards various transformations such as translation, rotation, and scaling. Existing graph neural network (GNN) approaches can only maintain permutation invariance and fail to guarantee invariance with respect to other transformations. Besides GNNs, other works design sophisticated transformation-invariant layers, which are computationally expensive and difficult to extend. To solve this problem, we revisit why existing neural networks cannot maintain transformation invariance when handling geometric data. Our findings show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance, without the need for sophisticated neural layer designs. Motivated by these findings, we propose Transformation Invariant Neural Networks (TinvNN), a straightforward and general framework for geometric data. Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling before feeding the representations into neural networks. We prove that TinvNN strictly guarantees transformation invariance and is general and flexible enough to be combined with existing neural networks. Extensive experimental results on point cloud analysis and combinatorial optimization demonstrate the effectiveness and general applicability of our proposed method. Based on these results, we advocate that TinvNN should be considered a new starting point and an essential baseline for further studies of transformation-invariant geometric deep learning.
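The key idea, computing initial point representations from pairwise distances via multi-dimensional scaling so that they are unchanged by translation, rotation, and reflection, can be illustrated with a short sketch. The snippet below uses classical MDS and is only a minimal illustration of the principle, not the paper's exact TinvNN construction; the function name tinv_init_representation and the optional scale normalization are assumptions made for this example.

```python
import numpy as np

def tinv_init_representation(points: np.ndarray, dim: int = 3,
                             normalize_scale: bool = False) -> np.ndarray:
    """Classical multi-dimensional scaling (MDS) on pairwise distances.

    Pairwise Euclidean distances are unaffected by translation, rotation,
    and reflection, so coordinates recovered from them are invariant to
    those transformations (up to the sign ambiguity of eigenvectors).
    """
    # Pairwise squared Euclidean distance matrix, shape (n, n).
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)

    # Optional: dividing by the mean squared distance also removes global
    # scale (an illustrative choice, not necessarily the paper's normalization).
    if normalize_scale:
        d2 = d2 / d2.mean()

    # Double centering: B = -0.5 * J D^2 J with J = I - (1/n) * 11^T.
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j

    # The top-`dim` eigenpairs of B give the embedding coordinates.
    eigvals, eigvecs = np.linalg.eigh(b)
    top = np.argsort(eigvals)[::-1][:dim]
    return eigvecs[:, top] * np.sqrt(np.clip(eigvals[top], 0.0, None))

# Usage: a rotated and translated copy of a point cloud has the same pairwise
# distances, so its MDS coordinates match the original up to per-axis sign flips.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(64, 3))
rotation, _ = np.linalg.qr(rng.normal(size=(3, 3)))
moved = cloud @ rotation.T + rng.normal(size=(1, 3))
z1 = tinv_init_representation(cloud)
z2 = tinv_init_representation(moved)
assert np.allclose(np.abs(z1), np.abs(z2), atol=1e-6)
```

Representations produced this way can then be fed into an ordinary point-cloud or graph network, which reflects the division of labor the abstract argues for: invariance comes from the initial representation rather than from specialized layers.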


