# Sheaf Neural Networks with Connection Laplacians

A Sheaf Neural Network (SNN) is a type of Graph Neural Network (GNN) that operates on a sheaf, an object that equips a graph with vector spaces over its nodes and edges and linear maps between these spaces. SNNs have been shown to have useful theoretical properties that help tackle issues arising from heterophily and over-smoothing. One complication intrinsic to these models is finding a good sheaf for the task to be solved. Previous works proposed two diametrically opposed approaches: manually constructing the sheaf based on domain knowledge and learning the sheaf end-to-end using gradient-based methods. However, domain knowledge is often insufficient, while learning a sheaf could lead to overfitting and significant computational overhead. In this work, we propose a novel way of computing sheaves drawing inspiration from Riemannian geometry: we leverage the manifold assumption to compute manifold-and-graph-aware orthogonal maps, which optimally align the tangent spaces of neighbouring data points. We show that this approach achieves promising results with less computational overhead when compared to previous SNN models. Overall, this work provides an interesting connection between algebraic topology and differential geometry, and we hope that it will spark future research in this direction.

## 1 Introduction

Graph Neural Networks (GNNs) (Scarselli et al., 2008) have shown encouraging results in a wide range of applications, ranging from drug design (Stokes et al., 2020) to guiding discoveries in pure mathematics (Davies et al., 2021). One advantage over traditional neural networks is that they can leverage the extra structure in graph data, such as edge connections.

GNNs, however, do not come without issues. Traditional GNN models, such as Graph Convolutional Networks (GCNs) (Kipf and Welling, 2016), have been shown to work poorly on heterophilic data. In fact, GCNs use homophily as an inductive bias by design, that is, they assume that connected nodes will likely belong to the same class and have similar feature vectors, which is not true in many real-world applications (Zhu et al., 2020a). Moreover, GNNs also suffer from over-smoothing (Oono and Suzuki, 2019), which prevents these models from improving, and may even worsen their performance, as more layers are stacked. These two problems are, from a geometric point of view, intimately connected (Chen et al., 2020a; Bodnar et al., 2022).

Bodnar et al. (2022) showed that when the underlying “geometry” of the graph is too simple, the issues discussed above arise. More precisely, they analysed the geometry of the graph through cellular sheaf theory (Curry, 2014; Hansen, 2020; Hansen and Ghrist, 2019), a subfield of algebraic topology (Hatcher, 2005). A cellular sheaf associates a vector space to each node and edge of a graph, and linear maps between these spaces. A GNN which operates over a cellular sheaf is known as a Sheaf Neural Network (SNN) (Hansen and Gebhart, 2020; Bodnar et al., 2022).

SNNs work by computing a sheaf Laplacian, which recovers the well-known graph Laplacian when the underlying sheaf is trivial, that is, when vector spaces are 1-dimensional and we apply identity maps between them. Hansen and Ghrist (2019) first showed the utility of SNNs in a toy experimental setting, where they used a manually-constructed sheaf Laplacian based on full knowledge of the data generation process. Bodnar et al. (2022) proposed to learn this sheaf Laplacian from data using stochastic gradient descent, making these types of models applicable to any graph dataset. However, this can also lead to computational complexity problems, overfitting and optimisation issues.

This work proposes a novel technique that aims to precompute a sheaf Laplacian from data in a deterministic manner, removing the need to learn it with gradient-based approaches. We do this through the lens of differential geometry, by assuming that the data is sampled from a low-dimensional manifold and optimally aligning the neighbouring tangent spaces via orthogonal transformations (see Figure 1). This idea was first introduced as groundwork for vector diffusion maps by Singer and Wu (2012), but that construction assumed only a point-cloud structure. One of our contributions is the computation of these optimal alignments over a graph structure. We find that our proposed technique performs well, while reducing the computational overhead involved in learning the sheaf.

In Section 2, we present a brief overview of cellular sheaf theory and neural sheaf diffusion (Bodnar et al., 2022). Next, in Section 3, we give details of our new procedure used to pre-compute the sheaf Laplacian before the model-training phase, which we refer to as Neural Sheaf Diffusion with Connection Laplacians (Conn-NSD). We then, in Section 4, evaluate this technique on various datasets with varying homophily levels. We believe that this work is a promising attempt at connecting ideas from algebraic topology and differential geometry with machine learning, and hope that it will spark further research at their intersection.

## 2 Background

We briefly overview the necessary background, starting with GNNs and cellular sheaf theory and concluding with neural sheaf diffusion. The curious reader may refer to Curry (2014); Hansen (2020); Hansen and Ghrist (2019) for a more in-depth insight into cellular sheaf theory, and to Bodnar et al. (2022) for the full theoretical results of neural sheaf diffusion.

### 2.1 Graph Neural Networks

GNNs are a family of neural network architectures that generalise neural networks to arbitrarily structured graphs. A graph $G = (V, E)$ is a tuple consisting of a set of nodes $V$ and a set of edges $E \subseteq V \times V$. We can represent each node in the graph with a $d$-dimensional feature vector $\mathbf{x}_v \in \mathbb{R}^d$ and group all the feature vectors into a matrix $\mathbf{X} \in \mathbb{R}^{|V| \times d}$. We represent the set of edges with an adjacency matrix $\mathbf{A}$. A GNN layer then takes these two matrices as input to produce a new set of (latent) feature vectors for each node:

$$\mathbf{H}^{(l)} = f\big(\mathbf{H}^{(l-1)}, \mathbf{A}\big). \tag{1}$$

In the case of a multi-layer GNN, the first layer, $l = 1$, takes as input $\mathbf{H}^{(0)} = \mathbf{X}$, whereas subsequent layers, $l > 1$, take as input $\mathbf{H}^{(l-1)}$, the latent features produced by the GNN layer immediately before them. There are numerous architectures which take this form, with one of the most popular being the Graph Convolutional Network (GCN) (Kipf and Welling, 2016), which implements Equation (1) the following way:

$$\mathbf{H}^{(l)} = \sigma\big(\hat{\mathbf{D}}^{-\frac{1}{2}} \hat{\mathbf{A}} \hat{\mathbf{D}}^{-\frac{1}{2}} \mathbf{H}^{(l-1)} \mathbf{W}^{(l)}\big), \tag{2}$$

where $\sigma$ is a non-linear activation function (e.g. ReLU), $\hat{\mathbf{A}} = \mathbf{A} + \mathbf{I}$ is the adjacency matrix with added self-loops, $\hat{\mathbf{D}}$ is the diagonal node degree matrix of $\hat{\mathbf{A}}$, and $\mathbf{W}^{(l)}$ is a weight matrix. This update propagation is local (due to the adjacency matrix), meaning that each latent feature vector is updated as a function of its local neighbourhood, weighted by a weight matrix and then symmetrically normalised. This kind of model has proven to be extremely powerful in a myriad of tasks. The weight matrix $\mathbf{W}^{(l)}$ at each layer is learnt from the data through back-propagation, by minimising some loss function (e.g. cross-entropy loss).
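A GCN layer in the form of Equation (2) can be sketched in a few lines of NumPy; the toy graph, feature values, and weights below are purely illustrative:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # adjacency with self-loops
    d_inv_sqrt = A_hat.sum(axis=1) ** -0.5    # diagonal of D^-1/2
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)    # ReLU non-linearity

# toy path graph on 3 nodes, 2 input features, 2 hidden units
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [1., 1.]])
W = np.eye(2)
H_next = gcn_layer(H, A, W)
print(H_next.shape)  # (3, 2)
```

Note how each row of the output depends only on the corresponding node's 1-hop neighbourhood, which is the locality property discussed above.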

### 2.2 Cellular Sheaf Theory

###### Definition 2.1.

A cellular sheaf $(G, \mathcal{F})$ on an undirected graph $G = (V, E)$ consists of:

• A vector space $\mathcal{F}(v)$ for each node $v \in V$,

• A vector space $\mathcal{F}(e)$ for each edge $e \in E$,

• A linear map $\mathcal{F}_{v \trianglelefteq e} : \mathcal{F}(v) \to \mathcal{F}(e)$ for each incident node-edge pair $v \trianglelefteq e$.

The vector spaces of the nodes and edges are called stalks, while the linear maps are called restriction maps. It is then natural to group these spaces: the space formed by the node stalks is called the space of $0$-cochains, while the space formed by the edge stalks is called the space of $1$-cochains.

###### Definition 2.2.

Given a sheaf $(G, \mathcal{F})$, we define the space of $0$-cochains $C^0(G; \mathcal{F})$ as the direct sum over the vertex stalks, $C^0(G; \mathcal{F}) = \bigoplus_{v \in V} \mathcal{F}(v)$. Similarly, we define the space of $1$-cochains $C^1(G; \mathcal{F})$ as the direct sum over the edge stalks, $C^1(G; \mathcal{F}) = \bigoplus_{e \in E} \mathcal{F}(e)$.

Defining the spaces $C^0(G; \mathcal{F})$ and $C^1(G; \mathcal{F})$ allows us to construct a linear co-boundary map $\delta : C^0(G; \mathcal{F}) \to C^1(G; \mathcal{F})$. From an opinion dynamics perspective (Hansen and Ghrist, 2021), the node stalks may be thought of as spaces of private opinions and the edge stalks as the spaces in which these opinions are shared in a public discourse. The co-boundary map then measures the disagreement between all the nodes.

###### Definition 2.3.

Given some arbitrary orientation $e = u \to v$ for each edge $e \in E$, we define the co-boundary map as $(\delta \mathbf{x})_e = \mathcal{F}_{v \trianglelefteq e}\,\mathbf{x}_v - \mathcal{F}_{u \trianglelefteq e}\,\mathbf{x}_u$. Here $\mathbf{x} \in C^0(G; \mathcal{F})$ is a $0$-cochain and $\mathbf{x}_v$ is the vector of $\mathbf{x}$ at the node stalk $\mathcal{F}(v)$.

The co-boundary map allows us to construct the sheaf Laplacian operator over a sheaf.

###### Definition 2.4.

The sheaf Laplacian of a sheaf $(G, \mathcal{F})$ is a map $L_{\mathcal{F}} : C^0(G; \mathcal{F}) \to C^0(G; \mathcal{F})$ defined as $L_{\mathcal{F}} = \delta^\top \delta$.

The sheaf Laplacian is a symmetric positive semi-definite (by construction) block matrix. The diagonal blocks are $L_{\mathcal{F}, vv} = \sum_{v \trianglelefteq e} \mathcal{F}_{v \trianglelefteq e}^\top \mathcal{F}_{v \trianglelefteq e}$, while the off-diagonal blocks are $L_{\mathcal{F}, vu} = -\mathcal{F}_{v \trianglelefteq e}^\top \mathcal{F}_{u \trianglelefteq e}$.

###### Definition 2.5.

The normalised sheaf Laplacian is defined as $\Delta_{\mathcal{F}} = D^{-\frac{1}{2}} L_{\mathcal{F}} D^{-\frac{1}{2}}$, where $D$ is the block-diagonal of $L_{\mathcal{F}}$.

Although stalk dimensions are arbitrary, we work with node and edge stalks which are all $d$-dimensional for simplicity. This means that each restriction map is $d \times d$, and therefore so is each block in the sheaf Laplacian. With $n$ we denote the number of nodes in the underlying graph $G$, which results in our sheaf Laplacian having dimensions $nd \times nd$.

If we construct a trivial sheaf where each stalk is isomorphic to $\mathbb{R}$ and the restriction maps are identity maps, then we recover the well-known graph Laplacian from the sheaf Laplacian. This effectively means that the sheaf Laplacian generalises the graph Laplacian by considering a non-trivial sheaf on $G$.
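The block structure above, and the reduction to the graph Laplacian for the trivial sheaf, can be checked directly. The following NumPy sketch (function and variable names are illustrative) assembles $L_{\mathcal{F}}$ from given restriction maps and verifies the trivial case on a triangle graph:

```python
import numpy as np

def sheaf_laplacian(edges, restrictions, n, d):
    """Assemble the nd x nd sheaf Laplacian from restriction maps.
    edges: list of (u, v); restrictions[(u, v)] = (F_u_e, F_v_e)."""
    L = np.zeros((n * d, n * d))
    for (u, v) in edges:
        Fu, Fv = restrictions[(u, v)]
        # diagonal blocks: sum of F^T F over incident edges
        L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu
        L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
        # off-diagonal blocks: -F_u^T F_v (and its transpose)
        L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv
        L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu
    return L

# trivial sheaf (d = 1, identity maps) on a triangle recovers the graph Laplacian
edges = [(0, 1), (1, 2), (0, 2)]
I1 = np.eye(1)
R = {e: (I1, I1) for e in edges}
L = sheaf_laplacian(edges, R, n=3, d=1)
print(L)  # equals D - A for the triangle: [[2,-1,-1],[-1,2,-1],[-1,-1,2]]
```

Swapping in non-identity $d \times d$ restriction maps yields a genuinely block-structured Laplacian of size $nd \times nd$, as in the definitions above.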

###### Definition 2.6.

The orthogonal (Lie) group of dimension $d$, denoted $O(d)$, is the group of $d \times d$ orthogonal matrices together with matrix multiplication.

If we constrain the restriction maps in the sheaf to belong to the orthogonal group (i.e., $\mathcal{F}_{v \trianglelefteq e} \in O(d)$), the sheaf becomes a discrete $O(d)$-bundle and can be thought of as a discretised version of a tangent bundle on a manifold. The sheaf Laplacian of the $O(d)$-bundle is equivalent to the connection Laplacian used by Singer and Wu (2012). The orthogonal restriction maps describe how vectors are rotated when transported between stalks, in a way analogous to the transportation of tangent vectors on a manifold.

Orthogonal restriction maps are advantageous because orthogonal matrices have fewer free parameters, making them more efficient to work with. The Lie group $O(d)$ has a $\frac{d(d-1)}{2}$-dimensional manifold structure (compared to the $d^2$-dimensional general linear group $GL(d)$ of all invertible $d \times d$ matrices). In $O(2)$, for instance, rotation matrices have only one free parameter (the rotation angle).
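A quick sketch of the $O(2)$ case: one angle $\theta$ determines a rotation, and orthogonality holds by construction (the angle value here is arbitrary):

```python
import numpy as np

# O(d) is a d(d-1)/2-dimensional manifold; for d = 2 this means a single
# continuous free parameter, the rotation angle theta.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Q is orthogonal
```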

### 2.3 Neural Sheaf Diffusion

We now discuss the existing sheaf-based machine learning models and their theoretical properties. Consider a graph $G = (V, E)$ where each node $v$ has a $d$-dimensional feature vector $\mathbf{x}_v \in \mathcal{F}(v)$. We construct an $nd$-dimensional vector $\mathbf{x} \in C^0(G; \mathcal{F})$ by column-stacking the individual vectors $\mathbf{x}_v$. Allowing for $f$ feature channels, we produce the feature matrix $\mathbf{X} \in \mathbb{R}^{nd \times f}$. The columns of $\mathbf{X}$ are vectors in $C^0(G; \mathcal{F})$, one for each of the $f$ channels.

Sheaf diffusion is a process on $(G, \mathcal{F})$ governed by the following differential equation:

$$\mathbf{X}(0) = \mathbf{X}, \qquad \dot{\mathbf{X}}(t) = -\Delta_{\mathcal{F}} \mathbf{X}(t),$$

which is discretised via the explicit Euler scheme with unit step size:

$$\mathbf{X}(t+1) = \mathbf{X}(t) - \Delta_{\mathcal{F}} \mathbf{X}(t) = (\mathbf{I}_{nd} - \Delta_{\mathcal{F}})\, \mathbf{X}(t).$$
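The Euler discretisation can be simulated directly. A minimal sketch for the trivial sheaf, where $\Delta_{\mathcal{F}}$ reduces to the normalised graph Laplacian, on a toy triangle graph (graph and initial signal are illustrative):

```python
import numpy as np

# Unit-step Euler sheaf diffusion X_{t+1} = (I - Delta) X_t, shown for the
# trivial sheaf on a triangle, where Delta is the normalised graph Laplacian.
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
D_inv_sqrt = np.diag(A.sum(axis=1) ** -0.5)
Delta = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt

X = np.array([[1.0], [0.0], [0.0]])   # one feature channel
for _ in range(100):
    X = X - Delta @ X                  # explicit Euler, unit step size

print(np.round(X.ravel(), 6))          # converges to the harmonic (constant) part
```

For this trivial sheaf the diffusion converges to the harmonic space of $\Delta$, which is the smoothing behaviour discussed in Section 1; a non-trivial sheaf changes which signals are preserved.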

The model used by Bodnar et al. (2022) for experimental validation was of the form

$$\dot{\mathbf{X}}(t) = -\sigma\big(\Delta_{\mathcal{F}(t)} (\mathbf{I}_n \otimes \mathbf{W}_1^t)\, \mathbf{X}(t)\, \mathbf{W}_2^t\big), \tag{3}$$

where $\mathbf{W}_1^t$ and $\mathbf{W}_2^t$ are weight matrices, and the restriction maps defining $\Delta_{\mathcal{F}(t)}$ are computed by a learnable parametric matrix-valued function $\Phi$, on which additional constraints (e.g., diagonal or orthogonal structure) can be imposed. Equation (3) was discretised as

$$\mathbf{X}_{t+1} = \mathbf{X}_t - \sigma\big(\Delta_{\mathcal{F}(t)} (\mathbf{I}_n \otimes \mathbf{W}_1^t)\, \mathbf{X}_t\, \mathbf{W}_2^t\big). \tag{4}$$

It is important to note that both the sheaf and the weights in Equation (4) are time-dependent, meaning that the underlying “geometry” evolves over time.

## 3 Connection Sheaf Laplacians

The sheaf Laplacian arises from the sheaf $(G, \mathcal{F})$ built upon the graph $G$, which in turn is determined by constructing the individual restriction maps $\mathcal{F}_{v \trianglelefteq e}$. Instead of learning a parametric function as done by Bodnar et al. (2022), we compute the restriction maps in a non-parametric manner at pre-processing time. In doing so, we avoid learning the maps by backpropagation. In particular, the restriction maps we compute are orthogonal. We work with this class because it was shown by Bodnar et al. (2022) to be more efficient than other models at the same stalk width, and because of the geometric analogy to parallel transport on manifolds.

### 3.1 Local PCA & Alignment for Point Clouds

We adapt a procedure to learn orthogonal transformations on point clouds, presented by Singer and Wu (2012). Their construction relies on the so-called “manifold assumption”, positing that even though data lives in a high-dimensional space $\mathbb{R}^p$, the correlation between dimensions suggests that in reality the data points lie on a $d$-dimensional Riemannian manifold $\mathcal{M}$ embedded in $\mathbb{R}^p$ (with significantly lower dimension, $d \ll p$).

Assume the manifold is sampled at $n$ points $\mathbf{x}_1, \ldots, \mathbf{x}_n$. At every point $\mathbf{x}_i$, $\mathcal{M}$ has a tangent space $T_{\mathbf{x}_i}\mathcal{M}$ (which is analogous to our stalk $\mathcal{F}(v_i)$) that intuitively contains all the vectors at $\mathbf{x}_i$ that are tangent to the manifold. A mechanism allowing one to transport vectors between two tangent spaces $T_{\mathbf{x}_i}\mathcal{M}$ and $T_{\mathbf{x}_j}\mathcal{M}$ at nearby points is a connection (or parallel transport, which corresponds to our transport maps between $\mathcal{F}(v_i)$ and $\mathcal{F}(v_j)$).

Computing a connection on the discretised manifold is a two-step procedure. First, orthonormal bases of the tangent spaces for each data point are constructed via local PCA. Next, the tangent spaces are optimally aligned via orthogonal transformations, which can be thought of as mappings from one tangent space to a neighbouring one. Singer and Wu (2012) computed an $\epsilon$-neighbourhood ball of points for each point $\mathbf{x}_i$, denoted $N_i$. This forms a set of neighbouring points $\mathbf{x}_{i_1}, \ldots, \mathbf{x}_{i_k}$. Then the matrix $\mathbf{X}_i = [\mathbf{x}_{i_1} - \mathbf{x}_i, \ldots, \mathbf{x}_{i_k} - \mathbf{x}_i]$ is obtained, which centres all of the neighbours at $\mathbf{x}_i$. Next, a diagonal weighting matrix $\mathbf{D}_i$ is constructed, giving more importance to neighbours closer to $\mathbf{x}_i$. This allows us to compute the matrix $\mathbf{B}_i = \mathbf{X}_i \mathbf{D}_i$. Then Singular Value Decomposition (SVD) is used on $\mathbf{B}_i$ such that $\mathbf{B}_i = \mathbf{U}_i \boldsymbol{\Sigma}_i \mathbf{V}_i^\top$. Assuming that the singular values are in decreasing order, the first $d$ left singular vectors are kept (the first $d$ columns of $\mathbf{U}_i$), forming the matrix $\mathbf{O}_i$. Note that the columns of $\mathbf{O}_i$ are orthonormal by construction and they span a $d$-dimensional subspace of $\mathbb{R}^p$. This basis constitutes our approximation to the basis of the tangent space $T_{\mathbf{x}_i}\mathcal{M}$.

To compute the orthogonal matrix $\mathbf{O}_{ij}$, which represents our orthogonal transformation between the tangent spaces at $\mathbf{x}_i$ and $\mathbf{x}_j$, it suffices to first compute the SVD $\mathbf{O}_i^\top \mathbf{O}_j = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^\top$ and then set $\mathbf{O}_{ij} = \mathbf{U} \mathbf{V}^\top$. $\mathbf{O}_{ij}$ is the orthogonal transformation which optimally aligns the tangent spaces $T_{\mathbf{x}_i}\mathcal{M}$ and $T_{\mathbf{x}_j}\mathcal{M}$ based on their bases $\mathbf{O}_i$ and $\mathbf{O}_j$. Whenever $\mathbf{x}_i$ and $\mathbf{x}_j$ are “nearby”, Singer and Wu (2012) show that $\mathbf{O}_{ij}$ is an approximation to the parallel transport operator.

### 3.2 Local PCA & Alignment for Graphs

The technique has many valuable theoretical properties, but was originally designed for point clouds. In our case, we also wish to leverage the valuable edge information at our disposal. To do this, instead of computing the $\epsilon$-neighbourhood $N_i$, we take the 1-hop neighbourhood of $v_i$. A problem is then encountered when computing the weighting matrix $\mathbf{D}_i$, which gives different weightings dependent on the distance to the centroid of the neighbourhood. We assume that $\mathbf{D}_i$ is an identity matrix, giving the same weighting to each node in the neighbourhood, as they are all at a 1-hop distance from the reference feature vector. This means that in our approach $\mathbf{B}_i = \mathbf{X}_i$.

Following this modification, the technique matches the procedure proposed by Singer and Wu (2012): we compute the SVD of $\mathbf{X}_i$ to extract $\mathbf{O}_i$ from the left singular vectors, and finally compute the orthogonal transport maps from the SVD of $\mathbf{O}_i^\top \mathbf{O}_j$. This gives a modified version of the alignment procedure that is now graph-aware. To the best of our knowledge, this is a novel technique to operate over graphs. A diagram of the newly proposed approach is displayed in Figure 1.

Estimating $d$, the dimension of the tangent space (in our case, of the stalks), is non-trivial. In fact, we are assuming that every neighbourhood contains at least $d$ points, or else $\mathbf{X}_i$ would have fewer than $d$ singular vectors, and our construction would be ill-defined. This is clearly not always the case for all graphs. While Singer and Wu (2012) proposed to estimate $d$ directly from the data, we leave $d$ as a tunable hyper-parameter.

To solve the problem for nodes which have fewer than $d$ neighbours, we take the closest neighbours in terms of Euclidean distance which are not in the 1-hop neighbourhood. In other words, when there are fewer than $d$ neighbours, we pick the remaining neighbours following the original procedure by Singer and Wu (2012). We note that one could instead consider a larger multi-hop neighbourhood, in a similar fashion to the $\epsilon$-ball in the original technique. Still, this comes with a larger computational overhead and complications related to the weightings; furthermore, a disconnected node would still be an issue. In practice, $d$ is kept small such that most nodes have at least $d$ edge-neighbours.

Algorithm 1 shows the pseudo-code for our technique. In principle, the LocalNeighbourhood function selects the neighbours based on the 1-hop neighbourhood; if the number of these neighbours is less than the stalk dimension, we pick the closest neighbours based on Euclidean distance which are not in the 1-hop neighbourhood. Assuming unit cost for SVD, the run-time increases linearly with the number of data points. Moreover, since the approach described here is performed at pre-processing time, we are able to compute the sheaf Laplacian deterministically, in constant time, during training. This removes the overhead of backpropagating through the sheaf Laplacian to learn the parametric function $\Phi$. It also helps counter overfitting, especially as the stalk dimension increases, since we remove the additional parameters which come with $\Phi$, reducing model complexity.
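The pre-processing step described in this section can be sketched end-to-end as follows. This is a hedged illustration of the Conn-NSD pipeline, not the authors' implementation: function names, the adjacency-list format, and the toy graph are our own, and the nearest-neighbour top-up mirrors the fallback described above:

```python
import numpy as np

def local_neighbourhood(i, adj_list, feats, d):
    """1-hop neighbours; if fewer than d, top up with nearest non-neighbours."""
    nbrs = list(adj_list[i])
    if len(nbrs) < d:
        others = [j for j in range(len(feats)) if j != i and j not in adj_list[i]]
        dists = np.linalg.norm(feats[others] - feats[i], axis=1)
        nbrs += [others[k] for k in np.argsort(dists)[: d - len(nbrs)]]
    return nbrs

def conn_restriction_maps(adj_list, feats, d):
    """Pre-compute orthogonal restriction maps for every directed edge."""
    n = len(feats)
    bases = []
    for i in range(n):
        nbrs = local_neighbourhood(i, adj_list, feats, d)
        X_i = (feats[nbrs] - feats[i]).T           # centred neighbours, D_i = I
        U, _, _ = np.linalg.svd(X_i, full_matrices=False)
        bases.append(U[:, :d])                      # approximate tangent basis O_i
    maps = {}
    for i in range(n):
        for j in adj_list[i]:
            U, _, Vt = np.linalg.svd(bases[i].T @ bases[j])
            maps[(i, j)] = U @ Vt                   # orthogonal transport O_ij
    return maps

# toy path graph 0-1-2 with 3-dimensional features and stalk dimension d = 2
adj = {0: {1}, 1: {0, 2}, 2: {1}}
feats = np.array([[0., 0., 0.], [1., 0.5, 0.], [2., 1., 0.1]])
maps = conn_restriction_maps(adj, feats, d=2)
O = maps[(0, 1)]
print(np.allclose(O @ O.T, np.eye(2)))  # True
```

The resulting maps would then be assembled into the (normalised) sheaf Laplacian once, before training, so no gradients ever flow through this step.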

## 4 Evaluation

We evaluate our model on several datasets, and compare its performance to a variety of models recorded in the literature, as well as to some especially designed baselines. For consistency, we use the same datasets as the ones discussed by Bodnar et al. (2022). These are real-world datasets which aim at evaluating heterophilic learning (Rozemberczki et al., 2021; Pei et al., 2020). They are ordered based on their homophily coefficient $h$, which is higher for more homophilic datasets. Effectively, $h$ is the fraction of edges which connect nodes of the same class label. The results are collected over fixed splits, where 48%, 32%, and 20% of nodes per class are used for training, validation, and testing, respectively. The reported results are chosen from the highest validation score.
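The edge-homophily coefficient used to order the datasets is simple to compute; a minimal sketch (the tiny labelled graph is illustrative):

```python
def edge_homophily(edges, labels):
    """Fraction of edges joining two nodes with the same class label."""
    same = sum(labels[u] == labels[v] for u, v in edges)
    return same / len(edges)

labels = [0, 0, 1, 1]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(edge_homophily(edges, labels))  # 0.5
```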

Table 1 contains accuracy results for a wide range of models, along with ours, Conn-NSD, on node classification tasks. An important baseline is the Multi-Layer Perceptron (MLP), whose result we report in the last row of Table 1. The MLP has access only to the node features, and it provides an idea of how much useful information GNNs can extract from the graph structure. The GNN models in Table 1 can be classified into three main categories:

1. Classical: GCN (Kipf and Welling, 2016), GAT (Velickovic et al., 2017), GraphSAGE (Hamilton et al., 2017),

2. Models for heterophilic settings: GGCN (Yan et al., 2021), Geom-GCN (Pei et al., 2020), H2GCN (Zhu et al., 2020b), GPRGNN (Chien et al., 2020), FAGCN (Bo et al., 2021), MixHop (Abu-El-Haija et al., 2019),

3. Models which address over-smoothing: GCNII (Chen et al., 2020b), PairNorm (Zhao and Akoglu, 2019).

Additionally, we also include the results presented by Bodnar et al. (2022) using sheaf diffusion models, and two random baselines: RandEdge-NSD and RandNode-NSD. RandEdge-NSD generates the sheaf by sampling a Haar-random matrix from $O(d)$ (Meckes, 2019) for each edge. RandNode-NSD instead generates the sheaf by sampling a Haar-random matrix for each node and then computing the transport maps for each edge from the matrices at its two endpoints. These last two baselines help us determine how our sheaf structure performs against a randomly sampled one.

As we can see from the results, sheaf diffusion models tend to perform best for the heterophilic datasets such as Texas, Wisconsin, and Film. On the other hand, their relative performance drops as homophily increases. This is expected since, for example, classical models such as GCN and GAT exploit homophily by construction, whereas sheaf diffusion models are more general, adaptable, and versatile, but at the same time lose the inductive bias provided by classical models for homophilic data.

Conn-NSD, alongside the other original discrete sheaf diffusion methods, consistently beats the random orthogonal sheaf baselines, which shows that our model incorporates meaningful geometric structure. The proposed Conn-NSD model achieves excellent results on the Texas and Film datasets, outperforming Diag-NSD, O(d)-NSD, and Gen-NSD, while using fewer learnable parameters. Furthermore, Conn-NSD also obtains competitive results on Wisconsin, Cornell and Pubmed, and remains close behind on Citeseer and Cora.

It is only on the Squirrel dataset, and to a lesser extent Chameleon, that Conn-NSD is not able to perform as well as the models discussed by Bodnar et al. (2022). The Squirrel dataset contains a large number of nodes and a substantially greater number of edges than all the other datasets. Importantly, the underlying MLP used for classification scores poorly. It may be that the extra flexibility provided by learning the sheaf is especially beneficial in cases in which the underlying MLP achieves low accuracy. Nevertheless, Conn-NSD still convincingly outperforms the random baselines, especially on these last two datasets.

Overall, Conn-NSD performs comparably to learning the sheaf via gradient-based approaches in most cases. It also seems best suited to graphs with a small number of nodes. This may be explained by the fact that Conn-NSD aims to mitigate overfitting, acting as a form of regularisation which allows for faster training with fewer parameters.

#### Runtime performance

Finally, we measure the speedup achieved by moving the computation of the sheaf Laplacian to pre-processing time. Table 2 displays the mean wall-clock time for an epoch measured in seconds, obtained with a NVIDIA TITAN X GPU and an Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz. Conn-NSD achieves significantly faster inference times when compared to its direct counterpart O(d)-NSD from Bodnar et al. (2022). The larger datasets see the most benefit, with Squirrel showing the largest speed-up.

## 5 Conclusion

We proposed and evaluated a novel technique to compute the sheaf Laplacian of a graph deterministically, obtaining promising results. This was done by leveraging existing differential geometry work that constructs orthogonal maps that optimally align tangent spaces between points, relying on the manifold assumption. We crucially adapted this intuition to be graph-aware, leveraging the valuable edge connection information in the graph structure.

We showed that this technique achieves competitive empirical results and it is able to beat or match the performance of the original models by Bodnar et al. (2022) on most datasets, as well as to consistently outperform the random sheaf baselines. This suggests that in some cases it may not be necessary to learn the sheaf through a parametric function, but instead the sheaf can be computed as a pre-processing step. This work may be regarded as a regularisation technique for SNNs, which also reduces the training time as it removes the need to backpropagate through the sheaf.

We believe we have uncovered an exciting research direction which aims to find a way to compute sheaves non-parametrically with an objective that is independent of the downstream task. Furthermore, we are excited by the prospect of further research tying intuition stemming from the fields of algebraic topology and differential geometry to machine learning. We believe that this work forms a promising first step in this direction.

## References

• S. Abu-El-Haija, B. Perozzi, A. Kapoor, N. Alipourfard, K. Lerman, H. Harutyunyan, G. Ver Steeg, and A. Galstyan (2019) Mixhop: higher-order graph convolutional architectures via sparsified neighborhood mixing. In international conference on machine learning, pp. 21–29. Cited by: item 2.
• D. Bo, X. Wang, C. Shi, and H. Shen (2021) Beyond low-frequency information in graph convolutional networks. arXiv preprint arXiv:2101.00797. Cited by: item 2.
• C. Bodnar, F. Di Giovanni, B. P. Chamberlain, P. Lio, and M. M. Bronstein (2022) Neural sheaf diffusion: a topological perspective on heterophily and oversmoothing in gnns. In ICLR 2022 Workshop on Geometrical and Topological Representation Learning, Cited by: §1, §1, §1, §1, §2.3, §2, §3, §4, §4, §4, §4, §5.
• D. Chen, Y. Lin, W. Li, P. Li, J. Zhou, and X. Sun (2020a) Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, pp. 3438–3445. Cited by: §1.
• M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li (2020b) Simple and deep graph convolutional networks. In International Conference on Machine Learning, pp. 1725–1735. Cited by: item 3.
• E. Chien, J. Peng, P. Li, and O. Milenkovic (2020) Joint adaptive feature smoothing and topology extraction via generalized pagerank gnns. arXiv preprint arXiv:2006.07988. Cited by: item 2.
• J. M. Curry (2014) Sheaves, cosheaves and applications. University of Pennsylvania. Cited by: §1, §2.
• A. Davies, P. Veličković, L. Buesing, S. Blackwell, D. Zheng, N. Tomašev, R. Tanburn, P. Battaglia, C. Blundell, A. Juhász, et al. (2021) Advancing mathematics by guiding human intuition with ai. Nature 600 (7887), pp. 70–74. Cited by: §1.
• W. Hamilton, Z. Ying, and J. Leskovec (2017) Inductive representation learning on large graphs. Advances in neural information processing systems 30. Cited by: item 1.
• J. Hansen and T. Gebhart (2020) Sheaf neural networks. arXiv preprint arXiv:2012.06333. Cited by: §1.
• J. Hansen and R. Ghrist (2019) Toward a spectral theory of cellular sheaves. Journal of Applied and Computational Topology 3 (4), pp. 315–358. Cited by: §1, §1, §2.
• J. Hansen and R. Ghrist (2021) Opinion dynamics on discourse sheaves. SIAM Journal on Applied Mathematics 81 (5), pp. 2033–2060. Cited by: §2.2.
• J. Hansen (2020) Laplacians of cellular sheaves: theory and applications. Ph.D. Thesis, University of Pennsylvania. Cited by: §1, §2.
• A. Hatcher (2005) Algebraic topology. Cited by: §1.
• T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §1, §2.1, item 1.
• E. S. Meckes (2019) The random matrix theory of the classical compact groups. Vol. 218, Cambridge University Press. Cited by: §4.
• K. Oono and T. Suzuki (2019) Graph neural networks exponentially lose expressive power for node classification. arXiv preprint arXiv:1905.10947. Cited by: §1.
• H. Pei, B. Wei, K. C. Chang, Y. Lei, and B. Yang (2020) Geom-gcn: geometric graph convolutional networks. arXiv preprint arXiv:2002.05287. Cited by: item 2, §4.
• B. Rozemberczki, C. Allen, and R. Sarkar (2021) Multi-scale attributed node embedding. Journal of Complex Networks 9 (2), pp. cnab014. Cited by: §4.
• F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini (2008) The graph neural network model. IEEE transactions on neural networks 20 (1), pp. 61–80. Cited by: §1.
• A. Singer and H. Wu (2012) Vector diffusion maps and the connection laplacian. Communications on pure and applied mathematics 65 (8), pp. 1067–1144. Cited by: §1, §2.2, §3.1, §3.1, §3.1, §3.2, §3.2, §3.2.
• J. M. Stokes, K. Yang, K. Swanson, W. Jin, A. Cubillos-Ruiz, N. M. Donghia, C. R. MacNair, S. French, L. A. Carfrae, Z. Bloom-Ackermann, V. M. Tran, A. Chiappino-Pepe, A. H. Badran, I. W. Andrews, E. J. Chory, G. M. Church, E. D. Brown, T. Jaakkola, R. Barzilay, and J. J. Collins (2020) A deep learning approach to antibiotic discovery. Cell 180, pp. 688–702.e13. Cited by: §1.
• P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio (2017) Graph attention networks. stat 1050, pp. 20. Cited by: item 1.
• Y. Yan, M. Hashemi, K. Swersky, Y. Yang, and D. Koutra (2021) Two sides of the same coin: heterophily and oversmoothing in graph convolutional neural networks. arXiv preprint arXiv:2102.06462. Cited by: item 2.
• L. Zhao and L. Akoglu (2019) Pairnorm: tackling oversmoothing in gnns. arXiv preprint arXiv:1909.12223. Cited by: item 3.
• J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra (2020a) Beyond homophily in graph neural networks: current limitations and effective designs. Advances in Neural Information Processing Systems 33, pp. 7793–7804. Cited by: §1.
• J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra (2020b) Generalizing graph neural networks beyond homophily. arXiv preprint arXiv:2006.11468. Cited by: item 2.