Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs

06/23/2022
by   Yi-Lun Liao, et al.

3D-related inductive biases, such as translational invariance and rotational equivariance, are indispensable for graph neural networks operating on 3D atomistic graphs such as molecules. Inspired by the success of Transformers in various domains, we study how to incorporate these inductive biases into Transformers. In this paper, we present Equiformer, a graph neural network that leverages the strength of Transformer architectures and incorporates SE(3)/E(3)-equivariant features based on irreducible representations (irreps). Irreps features encode equivariant information in channel dimensions without complicating graph structures. This simplicity enables us to incorporate them directly by replacing the original operations with equivariant counterparts. Moreover, to better adapt Transformers to 3D graphs, we propose a novel equivariant graph attention, which considers both the content and the geometric information, such as relative positions, contained in irreps features. To improve the expressivity of attention, we replace dot product attention with multi-layer perceptron attention and include non-linear message passing. We benchmark Equiformer on two quantum property prediction datasets, QM9 and OC20. For QM9, among models trained with the same data partition, Equiformer achieves the best results on 11 out of 12 regression tasks. For OC20, under the setting of training with IS2RE data and optionally IS2RS data, Equiformer improves upon state-of-the-art models. Code reproducing all main results will be available soon.
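To illustrate the multi-layer perceptron attention mentioned in the abstract, the sketch below shows the core idea in PyTorch: the attention logit for each edge is produced by a small MLP over the message features rather than by a query-key dot product, and the logits are normalized over each node's neighbors. This is a minimal, non-equivariant simplification, not the Equiformer implementation (which operates on SE(3)/E(3)-equivariant irreps features); the names MLPGraphAttention, msg_mlp, attn_mlp, and the plain scalar feature shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn


def scatter_softmax(logits: torch.Tensor, index: torch.Tensor, num_nodes: int) -> torch.Tensor:
    """Softmax over edge logits, normalized across edges that share the same
    destination node (given by `index`, shape [num_edges])."""
    # Subtract the per-destination maximum for numerical stability.
    node_max = torch.full((num_nodes,), float("-inf"), device=logits.device)
    node_max = node_max.scatter_reduce(0, index, logits, reduce="amax", include_self=True)
    exp = (logits - node_max[index]).exp()
    denom = torch.zeros(num_nodes, device=logits.device).scatter_add(0, index, exp)
    return exp / (denom[index] + 1e-12)


class MLPGraphAttention(nn.Module):
    """Simplified, non-equivariant sketch: attention logits come from an MLP
    over per-edge message features instead of a query-key dot product."""

    def __init__(self, dim: int, edge_dim: int):
        super().__init__()
        # Non-linear transform of concatenated source/destination node features
        # and edge attributes into a per-edge message.
        self.msg_mlp = nn.Sequential(
            nn.Linear(2 * dim + edge_dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        # MLP mapping each message to a scalar attention logit.
        self.attn_mlp = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, 1))
        self.value = nn.Linear(dim, dim)

    def forward(self, x, edge_index, edge_attr):
        # x: [num_nodes, dim], edge_index: [2, num_edges] (source, destination),
        # edge_attr: [num_edges, edge_dim] (e.g. embedded relative positions/distances).
        src, dst = edge_index
        msg = self.msg_mlp(torch.cat([x[dst], x[src], edge_attr], dim=-1))
        logits = self.attn_mlp(msg).squeeze(-1)          # MLP attention, not dot product
        alpha = scatter_softmax(logits, dst, x.size(0))  # normalize over each node's neighbors
        # Weighted sum of transformed messages aggregated into destination nodes.
        return torch.zeros_like(x).index_add(0, dst, alpha.unsqueeze(-1) * self.value(msg))
```

In this sketch, computing messages with a non-linear MLP before aggregation stands in for the paper's non-linear message passing; in Equiformer itself both the messages and the attention weights are built from equivariant irreps features rather than plain feature vectors.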

