Flex-Convolution (Deep Learning Beyond Grid-Worlds)

03/20/2018
by Fabian Groh, et al.

The goal of this work is to enable deep neural networks to learn representations for irregular 3D structures, just as they do in common approaches for 2D images. Unfortunately, current network primitives such as convolution layers are specifically designed to exploit the natural data representation of images: a fixed, regular grid. This limits the transfer of these techniques to more unstructured data such as 3D point clouds or higher-dimensional data. In this work, we propose flex-convolution, a surprisingly natural generalization of the conventional convolution layer, and provide a highly efficient implementation. Compared to very specialized neural network architectures for point-cloud processing, our more generic approach yields competitive results on the rather small standard benchmark sets while using fewer parameters and less memory. Our design even allows raw neural network predictions on point clouds several orders of magnitude larger, providing superior results compared to previous hand-tuned and well-engineered approaches on the 2D-3D-S dataset.
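To make the idea of generalizing convolution beyond a fixed grid concrete, here is a minimal NumPy sketch of a flex-convolution-style layer: the filter weight applied to each neighboring point is a learned linear function of the local point coordinates (here, the relative position to the center point), so the operation reduces to an ordinary convolution when the points happen to lie on a regular grid. Function names, shapes, and the use of relative positions are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def flex_conv(points, features, neighbors, theta, theta_b):
        """Sketch of a flex-convolution layer on a point cloud.

        points:    (N, D)            point positions
        features:  (N, C_in)         per-point input features
        neighbors: (N, K)            indices of the K nearest neighbors of each point
        theta:     (C_in, C_out, D)  learned linear weights over relative positions
        theta_b:   (C_in, C_out)     learned biases
        Returns:   (N, C_out)        per-point output features
        """
        rel = points[neighbors] - points[:, None, :]           # (N, K, D) relative positions
        # Weight for each (point, neighbor, c_in, c_out) is linear in the relative position.
        w = np.einsum('nkd,iod->nkio', rel, theta) + theta_b   # (N, K, C_in, C_out)
        # Aggregate neighbor features with their position-dependent weights.
        return np.einsum('nkio,nki->no', w, features[neighbors])

    # Toy usage with random data (hypothetical sizes).
    rng = np.random.default_rng(0)
    pts = rng.standard_normal((100, 3))
    feats = rng.standard_normal((100, 4))
    nbrs = np.argsort(((pts[:, None] - pts[None]) ** 2).sum(-1), axis=1)[:, :8]
    out = flex_conv(pts, feats, nbrs,
                    0.1 * rng.standard_normal((4, 16, 3)),
                    0.1 * rng.standard_normal((4, 16)))
    print(out.shape)  # (100, 16)

The paper's contribution lies in making this operation efficient at scale (a dedicated GPU implementation rather than the dense einsum used above), which is what enables processing point clouds far larger than standard benchmarks.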
