Rotationally Equivariant Neural Operators for Learning Transformations on Tensor Fields (e.g., 3D Images and Vector Fields)
We introduce equivariant neural operators for learning resolution-invariant, translation-equivariant, and rotation-equivariant transformations between sets of tensor fields. Inputs and outputs may contain arbitrary mixes of scalar fields, vector fields, second-order tensor fields, and higher-order fields. Our tensor field convolution layers emulate any linear operator by learning its impulse response, or Green's function, as the convolution kernel. Our tensor field attention layers emulate pairwise field coupling via local tensor products. Convolutions and their associated adjoints can be evaluated in real or Fourier space, allowing for linear scaling. By unifying concepts from E3NN, TBNN, and FNO, we achieve good predictive performance on a wide range of PDEs and dynamical systems in engineering and quantum chemistry. Code is written in Julia and available upon request from the authors.
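
As a minimal illustrative sketch (not the authors' implementation, which is available only on request), the Julia snippet below shows the Fourier-space convolution idea for a single 1D scalar field: a learned kernel, playing the role of a Green's function, is applied as a pointwise product on a fixed set of low-frequency Fourier coefficients, so the same weights can be reused at any discretization of the input. The struct and function names here are hypothetical.

    # Sketch of a Fourier-space convolution layer for a 1D scalar field.
    # Assumed names (SpectralConv1d, weights, modes) are illustrative only.
    using FFTW

    struct SpectralConv1d
        weights::Vector{ComplexF64}   # learned kernel coefficients for the lowest modes
    end

    # Initialize with random weights on the lowest `modes` Fourier modes.
    SpectralConv1d(modes::Int) = SpectralConv1d(randn(ComplexF64, modes))

    # Apply the layer to a sampled field `u`; the convolution becomes a
    # pointwise product of Fourier coefficients.
    function (layer::SpectralConv1d)(u::Vector{Float64})
        n = length(u)
        uhat = rfft(u) ./ n                       # normalized forward FFT
        m = length(layer.weights)
        vhat = zeros(ComplexF64, length(uhat))
        vhat[1:m] .= layer.weights .* uhat[1:m]   # keep only the learned low modes
        return irfft(vhat .* n, n)                # back to real space
    end

    # Usage: the same layer acts on coarse and fine samplings of the same
    # field, illustrating the resolution-invariance claimed in the abstract.
    layer  = SpectralConv1d(8)
    coarse = layer(sin.(range(0, 2π; length=64)))
    fine   = layer(sin.(range(0, 2π; length=256)))

This sketch handles only scalar fields; the tensor field layers described above additionally carry irreducible-representation structure (as in E3NN/TBNN) so that vector and higher-order fields transform correctly under rotation.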