Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data

02/25/2020 ∙ by Marc Finzi, et al.

The translation equivariance of convolutional layers enables convolutional neural networks to generalize well on image problems. While translation equivariance provides a powerful inductive bias for images, we often additionally desire equivariance to other transformations, such as rotations, especially for non-image data. We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group with a surjective exponential map. Incorporating equivariance to a new group requires implementing only the group exponential and logarithm maps, enabling rapid prototyping. Showcasing the simplicity and generality of our method, we apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems. For Hamiltonian systems, the equivariance of our models is especially impactful, leading to exact conservation of linear and angular momentum.
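As the abstract notes, supporting a new group under this recipe requires implementing only the group exponential and logarithm maps. A minimal sketch of what that pair looks like for SO(2), using NumPy; the function names are illustrative, not the authors' API:

```python
import numpy as np

def so2_exp(theta: float) -> np.ndarray:
    """Exponential map: Lie algebra element (angle theta) -> SO(2) rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def so2_log(R: np.ndarray) -> float:
    """Logarithm map: SO(2) rotation matrix -> angle in (-pi, pi]."""
    return float(np.arctan2(R[1, 0], R[0, 0]))

# Round trip: log(exp(theta)) recovers theta for angles in (-pi, pi].
R = so2_exp(0.3)
print(so2_log(R))  # → 0.3 (up to floating point)
```

For matrix groups without such a closed form, `scipy.linalg.expm` and `scipy.linalg.logm` provide general-purpose substitutes; the paper's requirement of a surjective exponential map ensures every group element has a logarithm.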






Code Repositories

- SE(3)-equivariant point cloud networks for virtual screening
- SE(3)-invariant neural networks for hotspot prediction
- A simple implementation of the LieConv modules in PyTorch Geometric