Neural Message Passing on High Order Paths

Daniel Flam-Shepherd et al. ∙ 02/24/2020

Graph neural networks have achieved impressive results in predicting molecular properties, but they do not directly account for local and hidden structures in the graph, such as functional groups and molecular geometry. At each propagation step, GNNs aggregate only over first order neighbours, ignoring important information contained in subsequent neighbours as well as the relationships between those higher order connections. In this work, we generalize graph neural nets to pass messages and aggregate across higher order paths. This allows information to propagate over various levels and substructures of the graph. We demonstrate our model on several tasks in molecular property prediction.


1 Introduction and Motivation

Graph Neural Networks (GNNs) are a powerful tool for representation learning across different domains involving relational data such as molecules Duvenaud et al. (2015) or social and biological networks Hamilton et al. (2017). These models learn node embeddings in a message passing framework Gilmer et al. (2017) by passing and aggregating node and edge feature information across the graph using neural networks. The learned node representations can then be used for any downstream procedure such as node or graph classification or regression. In particular, GNNs have been used to drastically reduce the computation time for predicting molecular properties Gilmer et al. (2017).

However, current GNN models still suffer from limitations: they only propagate information across neighbouring edges and, after propagation, use simple pooling of the final node embeddings Duvenaud et al. (2015); Li et al. (2015). This means that, in most models, nodes only learn about the larger neighbourhood surrounding them over many propagation steps, which makes higher order graph structure difficult for GNNs to learn at all, and impossible to learn within a single propagation layer. Yet such long range correlations are important for many domains, in particular when learning chemical properties that depend on rings, branches, functional groups or molecular geometry.

The only way to directly account for higher order graph properties is to pass messages over additional neighbours in every propagation layer of the GNN. Notice how much larger an atom's neighbourhood becomes once second and third order neighbours are considered (Figure 1). This work focuses on generalizing message passing neural networks to accomplish this.

Figure 1: the neighbourhood of one atom comprised of a) first, b) second and c) third order neighbours

1.1 Motivations

There are many factors pertaining to molecular graphs that motivate the development of our model. In this section we discuss, in more depth, the limitations of GNNs with respect to specific aspects of molecules that motivate our model. These include molecular substructures like rings and functional groups, molecular geometry as characterized by internal coordinates, and stereochemistry.

Figure 2: Cyclohexanol

molecular substructures play an important role in determining molecular properties; for example, functional groups are responsible for the chemical reactions a molecule undergoes. By only aggregating over neighbours, GNNs cannot learn about these larger substructures in a single propagation layer. On the other hand, by passing messages over larger neighbourhoods in every layer, we could directly learn about these structures. Furthermore, we could directly indicate whether the path a message travels on contains a simple functional group like an alcohol (ROH) or passes through a larger functional group. For example, in Figure 2, atoms in the neighbourhood of OH could receive messages of length two or more indicating that an alcohol group is in their neighbourhood.

molecular geometry is the three dimensional arrangement of atoms in a molecule and influences several properties, including the reactivity, polarity and biological activity of the molecule. An important application of GNNs is predicting quantum mechanical properties of molecules, which are heavily dependent on the geometry of the molecule. The 3D configuration of a molecule can be fully specified by 1) bond lengths, the distance between two bonded atoms; 2) bond angles, the angle formed between three neighbouring atoms; and 3) dihedral angles between four consecutive atoms. In fact, the potential energy is typically modeled as a sum of terms involving each of these three. Current GNN approaches to quantum chemistry incorporate neighbouring geometry by using bond distances as edge features Gilmer et al. (2017), but do not directly account for the relative orientation of neighbouring atoms and bonds; a framework that could do so would be advantageous.
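To make the internal coordinate description concrete, here is a small sketch (our illustration, not code from the paper) computing all three quantities from Cartesian atom positions; the four positions at the bottom are made-up values:

```python
import numpy as np

def bond_length(a, b):
    """Distance between two bonded atoms a and b (3D coordinates)."""
    return np.linalg.norm(b - a)

def bond_angle(a, b, c):
    """Angle (radians) at atom b formed by three neighbouring atoms a-b-c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def dihedral_angle(a, b, c, d):
    """Dihedral (radians) between the planes (a, b, c) and (b, c, d)."""
    b1, b2, b3 = b - a, c - b, d - c
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.arctan2(np.dot(m, n2), np.dot(n1, n2))

# Illustrative coordinates for four consecutive atoms.
a, b, c, d = map(np.array, ([0.0, 0, 0], [1.5, 0, 0], [2.2, 1.3, 0], [3.7, 1.3, 0.8]))
print(bond_length(a, b), bond_angle(a, b, c), dihedral_angle(a, b, c, d))
```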

Figure 3: rotation of a bond

stereochemistry involves the relative spatial arrangement of atoms in molecules, specifically stereoisomers: molecules with the same discrete graph but different three-dimensional orientations of atoms. Examples include enantiomers, which are mirror images of each other, and cis-trans isomers, which differ only through the rotation of a functional group. Even if they use interatomic distances as edge features, GNNs have limited ability to distinguish stereoisomers, since these molecules differ only through the relative orientation of their atoms. In general, at every propagation step, GNNs should learn representations over each node's extended neighbourhood that encode the relationships between nodes in that neighbourhood.

1.2 Approach and Contributions

We generalize MPNNs to aggregate across larger neighbourhoods by passing messages along simple paths of higher order neighbours. We describe the general framework in Section 3. We experiment with various molecular property prediction tasks and a node classification task in citation networks. Our specific contributions are two-fold:

  • We devise a simple extension to any message passing neural network that learns representations over larger node neighbourhoods within each propagation layer, by simply augmenting the message function to aggregate over additional neighbours.

  • By summing over additional neighbours, we enable the use of path features, such as bond angles for paths of length two and dihedral angles for paths of length three, thereby encoding the full molecular geometry and orientation so that MPNNs can distinguish isomers.

2 Related Work and Background

2.1 Background

Message passing neural networks operate on graphs $G$ whose nodes $v$ each carry a feature vector $x_v$ that specifies what kind of atom the node is, among other possible features. There are also edge feature vectors $e_{vw}$ that specify the bond type between atoms $v$ and $w$. The forward pass has two phases, a message passing phase and a readout phase.

The message passing phase runs for $T$ propagation steps and is defined in terms of message functions $M_t$ and node update functions $U_t$. During the message passing phase, hidden states $h_v^t$ at each node in the graph are updated based on messages $m_v^{t+1}$ according to

$$m_v^{t+1} = \sum_{w \in N(v)} M_t\big(h_v^t,\, h_w^t,\, e_{vw}\big), \qquad h_v^{t+1} = U_t\big(h_v^t,\, m_v^{t+1}\big)$$

The message $m_v^{t+1}$ that node $v$ receives aggregates over its neighbours $N(v)$, in this case by simple summation. We then read out predictions based on the final node embeddings $h_v^T$.
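A minimal numpy sketch of one such propagation step (our illustration of the framework, not the authors' code; the ReLU update and all sizes are assumptions):

```python
import numpy as np

def mpnn_step(h, adj, edge_feat, W_msg, W_upd):
    """One propagation step: m_v = sum over w in N(v) of M(h_v, h_w, e_vw),
    then h_v <- U(h_v, m_v). M and U are single dense layers here."""
    n = h.shape[0]
    m = np.zeros((n, W_msg.shape[1]))
    for v in range(n):
        for w in np.nonzero(adj[v])[0]:
            m[v] += np.concatenate([h[v], h[w], edge_feat[v, w]]) @ W_msg
    # Update: concatenate node state with incoming message, dense + ReLU.
    return np.maximum(np.concatenate([h, m], axis=1) @ W_upd, 0.0)

# Toy sizes: 5 nodes, 8-dim states, 4-dim edge features (all made up).
rng = np.random.default_rng(0)
n, d, de = 5, 8, 4
A = np.triu(rng.random((n, n)) < 0.4, k=1).astype(int)
A = A + A.T                                   # symmetric adjacency
h = rng.normal(size=(n, d))
e = rng.normal(size=(n, n, de))
h_next = mpnn_step(h, A, e, rng.normal(size=(2 * d + de, d)),
                   rng.normal(size=(2 * d, d)))
```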

2.2 Related Work

The first graph neural network model was proposed by Scarselli et al. (2008) and many variants have recently been proposed Li et al. (2015); Veličković et al. (2017); Kipf and Welling (2016). Our focus is on the general framework of neural message passing from Gilmer et al. (2017). We review relevant GNN models and their use in molecular deep learning in this section.

Molecular Deep Learning

Recently, GNNs have superseded machine learning methods that rely on hand-crafted feature representations for predicting molecular properties. For example, neural fingerprints Duvenaud et al. (2015) generalize standard molecular fingerprints with a differentiable variant that achieves better predictive accuracy. Another model, SchNet Schütt et al. (2017), defines a continuous-filter convolutional neural network for modeling quantum interactions and achieves state-of-the-art results.

Higher Order GNNs. Recent work has generalized graph convolutional networks (GCNs) Kipf and Welling (2016) to higher order structure by repeatedly mixing feature representations of neighbours at various distances Abu-El-Haija et al. (2019), or by casting GCNs into a general framework inspired by the path integral formulation of quantum mechanics Ma et al. (2019). Both of these works are based on powers of the adjacency matrix and do not directly account for the relationships between higher order neighbours. Another work Morris et al. (2019) proposes k-dimensional GNNs in order to take higher order graph structure at multiple scales into account. Neither GNNs nor these higher order GNNs incorporate the relationships between higher order neighbours, which would allow for features that depend on those relationships, namely 'path features'.
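To see what is lost with adjacency powers, note that the $(v, w)$ entry of $A^k$ only counts walks of length $k$ between $v$ and $w$; a tiny check:

```python
import numpy as np

# Path graph 0-1-2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
A2 = A @ A
print(A2[0, 2])  # 1: one walk of length 2 from node 0 to node 2 ...
# ... but A2 cannot say the walk went through node 1, so features defined
# on the path itself (e.g. the angle at node 1) are unavailable.
print(A2[0, 0])  # 1: walks also backtrack (0-1-0), unlike simple paths.
```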

path augmented transformer. Another model, based on the transformer architecture Chen et al. (2019), accounts for long range dependencies in molecular graphs by augmenting the edge feature tensor to include some (shortest) path features such as bond type, conjugacy, inter-atomic distance and ring membership.

structured transformer. A few recently proposed graph neural networks have incorporated directional information. The first, Ingraham et al. (2019), builds a model for proteins that considers the local change in the coordinate system of each atom in the chain.

3D GCN. Cho et al. (2014) build a three-dimensional graph convolutional network for predicting molecular properties and biochemical activities from 3D molecular graphs, augmenting the standard GCN layer with relative atomic position vectors.

directional message passing. Klicpera et al. (2020) embed the messages passed between atoms such that each message is associated with a direction in coordinate space; the messages are rotationally equivariant since the associated directions rotate with the molecule. Their message passing scheme transforms messages based on the angle between them in order to encode direction.

Figure 4: Message function and path features for a) a standard MPNN and b) an MPNN passing messages on paths of length 3 in a molecule, with path features involving molecular geometry

3 Neural Message Passing on Paths

We extend the message passing framework by propagating information from every node's higher order neighbours instead of aggregating messages from only the nearest neighbours. The message passing phase is augmented such that hidden states at each node in the graph are updated based on messages over all simple paths of length up to $K$ starting from that node:

$$m_v^{t+1} = \sum_{k=1}^{K} \; \sum_{p \,\in\, \mathcal{P}_v^k} M_t\big(h_v^t,\, h_{w_1}^t, \dots, h_{w_k}^t,\, e_p,\, s_p\big), \qquad h_v^{t+1} = U_t\big(h_v^t,\, m_v^{t+1}\big) \qquad (1)$$

where we define $p = (v, w_1, \dots, w_k)$ to be a path in $\mathcal{P}_v^k$, the set of all simple paths starting from node $v$ with length $k$, $e_p$ to be the edge features along $p$, and $s_p$ to be the path features along path $p$ from node $v$ to node $w_k$. We only sum over simple paths, excluding loops and multiple inclusions of the same node.
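As an illustration of the set $\mathcal{P}_v^k$ that the sum in equation (1) ranges over, the following sketch (ours, not the authors' code) enumerates simple paths by depth-first search; the toy adjacency list loosely mimics cyclohexanol from Figure 2:

```python
def simple_paths(adj, v, max_len):
    """All simple paths (v, w_1, ..., w_k) with 1 <= k <= max_len,
    where adj maps each node to its first order neighbours."""
    paths, stack = [], [(v, [v])]
    while stack:
        node, path = stack.pop()
        if len(path) > 1:
            paths.append(tuple(path))
        if len(path) <= max_len:
            for w in adj[node]:
                if w not in path:  # simple: no repeated nodes, no loops
                    stack.append((w, path + [w]))
    return paths

# Toy cyclohexanol-like graph: a 6-ring (0..5) with an O substituent (6).
adj = {0: [1, 5, 6], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0], 6: [0]}
print(simple_paths(adj, 6, 3))  # paths of length up to 3 from the O atom
```

For molecules the branching factor is small, so this enumeration stays cheap; for denser graphs one would sample from the returned set, as discussed in the next subsection.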

3.1 Path features

For graphs with a large number of nodes and edges, passing messages along paths becomes very expensive and, as in GraphSAGE Hamilton et al. (2017), sampling a subset of paths to higher order neighbours becomes necessary. However, for molecules, where the number of neighbours is usually small, this is not necessary. Furthermore, one can include domain specific path features in the message function. We describe two examples of these path features below.

molecular substructures. We can incorporate whether the path travels through a molecular substructure by considering paths of at least length 2, where we have a message function that sums over 2 neighbouring atoms $w_1, w_2$. Along with their node and edge features, the possible path features include ring features, i.e. a one-hot indication of whether any atoms are in (specific) rings, as well as whether the path is a functional group (ROH) or lies within a larger functional group.

$$m_v^{t+1} = \sum_{w_1 \in N(v)} \; \sum_{w_2 \in N(w_1) \setminus \{v\}} M_t\big(h_v^t,\, h_{w_1}^t,\, h_{w_2}^t,\, e_{v w_1},\, e_{w_1 w_2},\, s_{v w_1 w_2}\big) \qquad (2)$$
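One possible way to realize such substructure features, sketched with RDKit (a library the paper does not mention; the feature layout and the R-OH test are our assumptions):

```python
from rdkit import Chem

def path_substructure_features(mol, path):
    """Toy path features for a path of atom indices: per-atom ring
    membership plus a flag for whether the path ends in an R-OH pattern."""
    in_ring = [int(mol.GetAtomWithIdx(i).IsInRing()) for i in path]
    symbols = [mol.GetAtomWithIdx(i).GetSymbol() for i in path]
    # An alcohol oxygen carries at least one implicit hydrogen and is
    # bonded to a carbon, so a C -> O(-H) path end flags R-OH.
    end = mol.GetAtomWithIdx(path[-1])
    is_roh = int(symbols[-1] == "O" and end.GetTotalNumHs() > 0
                 and symbols[-2] == "C")
    return in_ring + [is_roh]

mol = Chem.MolFromSmiles("OC1CCCCC1")  # cyclohexanol
print(path_substructure_features(mol, [2, 1, 0]))  # ring C -> ring C -> OH
```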

molecular geometry. Considering paths of length 3, we have a message function that sums over 3 neighbouring atoms $w_1, w_2, w_3$. Along paths of length three, additional features include the two bond angles $\theta_1, \theta_2$ and the dihedral angle $\phi$ between the planes defined by the atoms $(v, w_1, w_2)$ and $(w_1, w_2, w_3)$. Effectively, messages passed over 3 consecutive neighbours contain information about the entire molecular geometry (see Figure 4).

$$m_v^{t+1} = \sum_{w_1 \in N(v)} \; \sum_{w_2 \in N(w_1) \setminus \{v\}} \; \sum_{w_3 \in N(w_2) \setminus \{v, w_1\}} M_t\big(h_v^t,\, h_{w_1}^t,\, h_{w_2}^t,\, h_{w_3}^t,\, e_{v w_1},\, e_{w_1 w_2},\, e_{w_2 w_3},\, \theta_1, \theta_2, \phi\big) \qquad (3)$$

Figure 5: molecules from the datasets considered

4 Experiments

4.1 Datasets

We compare the performance of our model against a few baselines on a variety of molecular property prediction tasks involving different datasets. These tasks include:

  • ESOL: Delaney (2004) predicting the aqueous solubility of 1144 molecules.

  • QM8: Ruddigkeit et al. (2012) predicting 16 electronic spectra values calculated using density functional theory for 21786 organic molecules that have 8 or fewer heavy atoms (C, O, N and F)

  • CEP: predicting the photovoltaic efficiency of 20000 organic molecules from the Harvard Clean Energy Project Hachmann et al. (2011)

4.2 Model design

We use the following basic MPNN model, augmented along the lines of Section 3 in order to pass messages over paths (Path MPNN).

This uses graph attention Veličković et al. (2017) as the aggregation method and the message function from the interaction networks model of Battaglia et al. (2016), which is a simple concatenation of node and edge features. The node update function concatenates incoming messages with the current node state and feeds the result through a dense layer. After propagation through the message passing layers, we use the set2set model Vinyals et al. (2015) as the readout function to combine the node hidden features into a fixed-size hidden vector. For QM8 we pass messages over paths of length three and use the path features for molecular geometry specified in equation (3). For ESOL and CEP we pass messages over paths of length two and use the path features for molecular substructures specified in equation (2). The models are trained with a root mean squared error (RMSE) loss. Model evaluation uses the mean absolute error (MAE) of the molecular properties for QM8, RMSE for ESOL, and RMSE in percent for CEP.
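A condensed PyTorch sketch of such a layer (our illustration, not the released model): each incoming path message is scored with a learned attention weight, in the spirit of graph attention, with a dense message and update. The single attention head, ReLU activations, and the restriction of message inputs to the path endpoints' hidden states plus a precomputed path feature vector are all our simplifications; the set2set readout is omitted.

```python
import torch
import torch.nn as nn

class PathMessageLayer(nn.Module):
    """One propagation layer: message = dense([h_v; h_wk; path features]),
    aggregated with attention over incoming paths, then a dense update."""
    def __init__(self, d_node, d_path, d_hidden):
        super().__init__()
        self.msg = nn.Linear(2 * d_node + d_path, d_hidden)
        self.att = nn.Linear(d_hidden, 1)
        self.upd = nn.Linear(d_node + d_hidden, d_node)

    def forward(self, h, paths, path_feats):
        # paths: list of (start, end) node index pairs, one per simple path;
        # path_feats: per-path features (edge features plus angles or
        # substructure flags along the path), shape (num_paths, d_path).
        v = torch.tensor([p[0] for p in paths])
        w = torch.tensor([p[1] for p in paths])
        m = torch.relu(self.msg(torch.cat([h[v], h[w], path_feats], dim=1)))
        # Unnormalized attention; a max-subtracted softmax would be used
        # in practice for numerical stability.
        a = torch.exp(self.att(m))
        num = torch.zeros(h.size(0), m.size(1)).index_add_(0, v, a * m)
        den = torch.zeros(h.size(0), 1).index_add_(0, v, a) + 1e-9
        return torch.relu(self.upd(torch.cat([h, num / den], dim=1)))
```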

4.3 Results

Baselines. We use the top performing model from MoleculeNet Wu et al. (2018) (Molnet) for each dataset. We also benchmark against the differentiable version of circular fingerprints from Duvenaud et al. (2015) (neural fingerprints). To highlight the importance of path features, we also compare against the MPNN model we used without passing messages on paths. The last benchmark is the Path-Augmented Graph Transformer Network (PAGTN) Chen et al. (2019), since this model is similarly built to model longer-range dependencies in molecular graphs. As can be seen in Table 1, for QM8, ESOL and CEP, passing messages over paths leads to a substantial improvement in predictive accuracy.

5 Comparison with other Higher Order GNNs

In a separate experiment, we compare the path MPNN with other GNNs that use higher order neighbours and do not use edge features. We consider a standard task of semi-supervised node classification with the CORA dataset.

5.1 The dataset

The CORA dataset contains sparse bag-of-words feature vectors for each document and a list of citation links between documents, which we use as undirected edges in the adjacency matrix. Each document has a class label. Altogether, the network has 2,708 nodes and 5,429 edges, with 7 classes and 1,433 features.

Model                               Test accuracy
GCN Kipf and Welling (2016)         81.5
MixHop Abu-El-Haija et al. (2019)   81.9
PAN Ma et al. (2019)                82.0
Path GCN                            82.4

5.2 Model

We use the experimental setup of Kipf and Welling (2016). We sum over paths of length 3 while uniformly sampling a single second order and third order neighbour. Our base MPNN is a GCN Kipf and Welling (2016) that has message function

$$m_v = \sum_{p \,\in\, \mathcal{P}_v} f\big(\big[\, h_{w_1};\, h_{w_2};\, h_{w_3} \,\big]\big)$$

where $f$ is a dense layer with sigmoid activation. For a citation network the path features are just the node features of, and the edges connecting to, the nodes that are $k$ hops away, i.e. $s_p = (x_{w_1}, \dots, x_{w_k})$.
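A sketch of this sampled-path message computation (our reconstruction under the assumptions above; the weight matrix W and all sizes are illustrative):

```python
import numpy as np

def path_gcn_message(h, adj, v, W, rng):
    """Message for node v: for each first order neighbour w1, uniformly
    sample one second order (w2) and one third order (w3) neighbour,
    concatenate their hidden states and apply a dense sigmoid layer."""
    msgs = []
    for w1 in adj[v]:
        pool2 = [u for u in adj[w1] if u != v]
        if not pool2:
            continue
        w2 = rng.choice(pool2)                    # sampled 2nd order neighbour
        pool3 = [u for u in adj[w2] if u not in (v, w1)]
        if not pool3:
            continue
        w3 = rng.choice(pool3)                    # sampled 3rd order neighbour
        z = np.concatenate([h[w1], h[w2], h[w3]]) @ W
        msgs.append(1.0 / (1.0 + np.exp(-z)))     # sigmoid dense layer
    return np.sum(msgs, axis=0) if msgs else np.zeros(W.shape[1])
```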

5.3 Results

We compare with two other higher order GCN variants: MixHop Abu-El-Haija et al. (2019) and PAN Ma et al. (2019) (path integral graph convolution); both use powers of the adjacency matrix to aggregate GCN layers over higher order neighbours. As the results table above shows, our model achieves accuracy similar to these baselines.

6 Conclusion and Discussion

Limitations. In this work we only considered very simple message functions; in general, it is not straightforward to construct message functions over paths. For example, the message function from Gilmer et al. (2017) maps edge features to a square matrix using a neural net; incorporating more neighbours and their edge and path features into this kind of message function introduces many design challenges.

We introduce a general GNN framework based on message passing over simple paths of higher order neighbours. This allows us to use path features in addition to node and edge features, which is very useful in molecular graphs, as many informative features are characterized by the paths between atoms. We benchmarked our framework on molecular property prediction tasks and a node classification task in citation networks.

References

  • [1] S. Abu-El-Haija, B. Perozzi, A. Kapoor, H. Harutyunyan, N. Alipourfard, K. Lerman, G. V. Steeg, and A. Galstyan (2019) MixHop: higher-order graph convolution architectures via sparsified neighborhood mixing. arXiv preprint arXiv:1905.00067.
  • [2] P. Battaglia, R. Pascanu, M. Lai, D. J. Rezende, et al. (2016) Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, pp. 4502–4510.
  • [3] B. Chen, R. Barzilay, and T. Jaakkola (2019) Path-augmented graph transformer network. arXiv preprint arXiv:1905.12712.
  • [4] K. Cho, B. Van Merriënboer, D. Bahdanau, and Y. Bengio (2014) On the properties of neural machine translation: encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
  • [5] J. S. Delaney (2004) ESOL: estimating aqueous solubility directly from molecular structure. Journal of Chemical Information and Computer Sciences 44 (3), pp. 1000–1005.
  • [6] D. Duvenaud, D. Maclaurin, J. Aguilera-Iparraguirre, R. Gómez-Bombarelli, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams (2015) Convolutional networks on graphs for learning molecular fingerprints. In Neural Information Processing Systems.
  • [7] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, pp. 1263–1272.
  • [8] J. Hachmann, R. Olivares-Amaya, S. Atahan-Evrenk, C. Amador-Bedolla, R. S. Sánchez-Carrera, A. Gold-Parker, L. Vogt, A. M. Brockway, and A. Aspuru-Guzik (2011) The Harvard Clean Energy Project: large-scale computational screening and design of organic photovoltaics on the World Community Grid. The Journal of Physical Chemistry Letters 2 (17), pp. 2241–2251.
  • [9] W. Hamilton, Z. Ying, and J. Leskovec (2017) Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024–1034.
  • [10] J. Ingraham, V. Garg, R. Barzilay, and T. Jaakkola (2019) Generative models for graph-based protein design. In Advances in Neural Information Processing Systems, pp. 15794–15805.
  • [11] T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
  • [12] J. Klicpera, J. Groß, and S. Günnemann (2020) Directional message passing for molecular graphs. In International Conference on Learning Representations.
  • [13] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel (2015) Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493.
  • [14] Z. Ma, M. Li, and Y. Wang (2019) PAN: path integral based convolution for deep graph neural networks. arXiv preprint arXiv:1904.10996.
  • [15] C. Morris, M. Ritzert, M. Fey, W. L. Hamilton, J. E. Lenssen, G. Rattan, and M. Grohe (2019) Weisfeiler and Leman go neural: higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4602–4609.
  • [16] L. Ruddigkeit, R. Van Deursen, L. C. Blum, and J. Reymond (2012) Enumeration of 166 billion organic small molecules in the chemical universe database GDB-17. Journal of Chemical Information and Modeling 52 (11), pp. 2864–2875.
  • [17] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini (2008) The graph neural network model. IEEE Transactions on Neural Networks 20 (1), pp. 61–80.
  • [18] K. T. Schütt, P. Kindermans, H. E. Sauceda, S. Chmiela, A. Tkatchenko, and K. Müller (2017) SchNet: a continuous-filter convolutional neural network for modeling quantum interactions. arXiv preprint arXiv:1706.08566.
  • [19] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio (2017) Graph attention networks. arXiv preprint arXiv:1710.10903.
  • [20] O. Vinyals, S. Bengio, and M. Kudlur (2015) Order matters: sequence to sequence for sets. arXiv preprint arXiv:1511.06391.
  • [21] Z. Wu, B. Ramsundar, E. N. Feinberg, J. Gomes, C. Geniesse, A. S. Pappu, K. Leswing, and V. Pande (2018) MoleculeNet: a benchmark for molecular machine learning. Chemical Science 9 (2), pp. 513–530.