Information over-squashing is a phenomenon of inefficient information propagation between distant nodes on a network. It is an important problem known to significantly impact the training of graph neural networks (GNNs), since the receptive field of a node grows exponentially with the number of layers. To mitigate this problem, a preprocessing procedure known as rewiring is often applied to the input network. In this paper, we investigate the use of discrete analogues of classical geometric notions of curvature to model information flow on networks and to rewire them. We show that these classical notions achieve state-of-the-art GNN training accuracy on a variety of real-world network datasets. Moreover, compared to the current state-of-the-art, these classical notions exhibit a clear advantage in computational runtime of several orders of magnitude.
Discrete curvature, geometric deep learning, graph neural networks, graph rewiring, information over-squashing.
The abundance of available data has resulted in data captured by structures beyond vectors living in Euclidean space. Much fundamental information is encoded in data that exhibit more complex structures, some with a distinct geometric characterization—such as networks. The driving premise underlying geometric deep learning is that this geometry encompasses information that is crucial to take into consideration when developing machine learning techniques (specifically, deep learning) to handle these data (geometric-learning). Thus, the inherent, non-Euclidean geometry of the data structures, as well as the space they live in, are fundamental aspects to understand and build into deep learning architectures.
In this paper, we consider network data and an important problem associated with training graph neural networks (GNNs). Specifically, we study the problem of information over-squashing (bottleneck; bottleneck-bronstein), which amounts to inefficient information propagation between distant nodes on a graph. This phenomenon is especially significant in tree-like graphs, where multiple nodes lead to a single node—namely, the “bottleneck.” A common approach to mitigate this problem is to perform graph rewiring on the input network data by adding or suppressing edges in the network in order to alleviate such bottlenecks and increase the efficiency of information flow over a network. Recent pioneering work by bottleneck-bronstein models information flow on a network using notions of discrete curvature and uses this network curvature information to perform graph rewiring prior to training GNNs, yielding the current state-of-the-art for GNN training in the presence of bottlenecks. In particular, a novel discrete curvature, the balanced Forman curvature, was introduced and utilized to identify bottlenecks and rewire graphs prior to training for increased efficiency in information propagation over networks (bottleneck-bronstein).
The discretization of classical notions of smooth geometry has been actively studied in recent decades (see for instance, najman-romon), resulting in various definitions of discrete curvature. An important motivation behind such discretizations are for the application of geometric methods to statistics and machine learning tasks for data exhibiting discrete geometric structure, such as network learning by sampling (e.g., barkanass2020geometric; sigbeku2021curved). In this work, we return to these fundamental principles and study the alleviation of information over-squashing by graph rewiring following the procedure used by bottleneck-bronstein. We systematically test and compare the performance of several classical discrete curvature notions against the recently proposed balanced Forman curvature on several benchmarking datasets, and find that these classical discrete curvature notions are able to achieve state-of-the-art performance in terms of accuracy for GNN training. Moreover, the computation of these classical discrete curvatures is much more efficient and runs several orders of magnitude faster than the state-of-the-art.
The remainder of this paper is organized as follows. Section 2 discusses in further detail GNNs and the problem of information over-squashing, and briefly surveys various approaches to reducing over-squashing. In Section 3, we then switch to discussing mathematical details on discrete curvature and formally present the notions studied in this paper; we also present the balanced Forman curvature recently proposed by bottleneck-bronstein. We also overview the procedure for identifying network bottlenecks and performing graph rewiring adopted by bottleneck-bronstein, which uses discretizations of smooth curvature concepts. This is the same procedure that we implement with the various classical discrete curvature notions. In Section 4, we describe the data we study and our experimental design and setup. In Section 5, we demonstrate the method on a wide variety of benchmarking datasets and present the accuracy and computational runtime results. Finally, we close in Section 6 with a discussion and some proposals for future research.
2 Graph Neural Networks and Information Over-Squashing
Neural networks are systems of algorithms that aim to identify underlying relationships in data in a manner similar to how biological neural networks in brains function. They consist of collections of artificial “neurons” and “synapses” that are typically organized into layers. Among deep neural networks, a specific class is adapted to handling graph or network data—collections of vertices connected by edges that mathematically describe a dependency structure—which is the focus of this paper.
The main difference between the traditional deep neural networks and GNNs lies in the functioning of the message passing algorithm (message-passing). Briefly, in message passing, at each layer and for each node, features from the neighboring nodes are aggregated before updating the features of the target node. This is the mechanism by which the network captures the information from the graph structure of the data.
2.1 Information Over-Squashing
Training GNNs presents new issues in comparison to standard neural network training due to the discrete geometric structure of network data. A particularly important challenge that has recently gained much research interest is that of information over-squashing, also known as the bottleneck problem (e.g., bottleneck; bottleneck-bronstein).
In over-squashing, the principal concern is that the influence of certain node features (which may be important) may be too small and eventually have minimal or no impact on features of distant nodes on the network when performing message passing over the GNN. This is particularly problematic in the context of network data, since the receptive field of a graph node is known to grow exponentially (bottleneck).
In a binary tree, let the $k$-jump neighborhood of a node $v$ be the set of nodes in the graph whose shortest path to $v$ has length at most $k$. Then the receptive field of the root doubles with every jump: for any integer $k \geq 0$, there are twice as many nodes at distance exactly $k+1$ from the root as at distance exactly $k$. See Figure 1.
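To make the exponential growth concrete, the following sketch (our own toy example using the networkx library; the helper name `jump_neighborhood` is ours) counts, in a depth-5 binary tree, how many new nodes each additional jump brings into the root's receptive field:

```python
import networkx as nx

def jump_neighborhood(G, root, k):
    """All nodes at shortest-path distance at most k from root."""
    return set(nx.single_source_shortest_path_length(G, root, cutoff=k))

# Complete binary tree of depth 5; node 0 is the root.
G = nx.balanced_tree(r=2, h=5)

# The number of *new* nodes reached by the k-th jump doubles each time.
for k in range(1, 5):
    new_nodes = len(jump_neighborhood(G, 0, k)) - len(jump_neighborhood(G, 0, k - 1))
    print(k, new_nodes)  # 2, 4, 8, 16
```

The doubling frontier is exactly the exponential receptive-field growth that makes distant features hard to preserve under message passing.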
Thus, over-squashing is a crucial issue to take into consideration, especially when the long-range dependencies in the data are important for the learning task when training on graph data. It is primarily caused by the poor propagation of long-distance information by some specific edges in the graph. As an illustration, consider two components that are cliques (or two densely connected, clique-like graphs) connected only by a single edge, illustrated in Figure 1(a). When propagating information from a node in a source component to a node in the target component, over-squashing is likely to happen as the information is crowded or “squashed” together with all other node features from the source component. This happens on the edge connecting the two components which, here, is the main source of over-squashing in the graph and called a bottleneck.
2.2 Mitigating Over-Squashing
Bottlenecks may be alleviated to reduce over-squashing via graph rewiring, which adds or suppresses edges in the graph to obtain a new graph with the same nodes and node features, but a different set of edges. The goal of the rewiring is to better support the bottleneck and give alternative routes of access between components which improves the information flow between components and reduces the risk that features become crowded out (over-squashed). Edges that have little impact on the flow of information in the graph can be deleted to control the size of the graph. An example of a rewired graph with an alleviated bottleneck is shown in Figure 1(b).
Several approaches to bottleneck alleviation have been proposed in the recent literature. For example, digl propose graph diffusion convolution (GDC) as a graph rewiring approach using a discretization of the gas diffusion equation to model the propagation of information on a network. However, this method fails to capture long-range dependencies on networks (bottleneck-bronstein). There also exist other bottleneck alleviation methods that do not entail rewiring. For example, much in the spirit of the work of this paper, cgnn propose a curvature GNN (CGNN) which, instead of adding or deleting edges, assigns specific weights to graph edges as a measure of how much information flows over this edge where the weights are determined by discrete curvature. The work of bottleneck-bronstein uses a hybrid of these two methods where graph rewiring is performed driven by a newly proposed discrete curvature.
While graph rewiring effectively alleviates the over-squashing problem, it does present limitations. First, on some types of data, the rewiring approach may not be applicable at all; an example is in chemical data where the graphs represent molecules and adding or deleting edges changes the molecule under study entirely. Second, rewiring alters the structure (topology) of the graph and changes the information that may be captured from the graph connectivity, which can negatively impact feature recognition (bottleneck). In this case, there is a trade-off between reducing bottlenecks and changing the graph topology to consider. It is therefore important to obtain a measure of severity of the bottleneck—the bottleneckness of the graph—and use it as a guide when performing graph rewiring.
2.3 Quantifying Over-Squashing
In this paper, we use the Jacobian as a measure of the bottleneckness of a graph. Consider a graph with $n$ nodes; take two nodes $i$ and $j$ that are at distance $r$ from each other, where $r$ is the number of edges on the shortest path between these nodes. To quantify over-squashing, we need to measure how much of an impact the feature vector $x_i$ of $i$ has on the feature vector $h_j^{(r)}$ of $j$ after $r$-many forward passes (i.e., message passing is performed $r$ times). The Jacobian
$$\frac{\partial h_j^{(r)}}{\partial x_i} \tag{1}$$
for $r$-distance dependencies quantifies the over-squashing in a graph (bottleneck-bronstein).
bottleneck-bronstein show that the entries of the Jacobian of $r$-distance dependencies are bounded in proportion to the respective $r$th powers of the normalized augmented adjacency matrix $\hat{A} = \hat{D}^{-1/2}(A + I)\hat{D}^{-1/2}$, i.e.,
$$\left| \frac{\partial h_j^{(r)}}{\partial x_i} \right| \leq c \, (\hat{A}^r)_{ji} \tag{2}$$
for a constant $c > 0$. The powers of the normalized augmented adjacency matrix then measure the degree to which a given graph is prone to over-squashing. In other words, bottlenecks are associated with the entries of the powers of the matrix with small values. Note that the relevant entries cannot be zero, since a zero entry indicates that there is no path of the corresponding length between the two nodes.
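As a small illustration of this bound (our own toy construction, using networkx and numpy; the helper name is ours), the following sketch builds the normalized augmented adjacency matrix for two triangles joined by a bridge and compares entries of its third power; the small cross-bridge entries flag the bottleneck:

```python
import networkx as nx
import numpy as np

def normalized_augmented_adjacency(G):
    """A_hat = D_hat^{-1/2} (A + I) D_hat^{-1/2}, where D_hat is the
    degree matrix of the self-loop-augmented adjacency matrix A + I."""
    A = nx.to_numpy_array(G) + np.eye(G.number_of_nodes())
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Two triangles joined by the bridge (2, 3): a toy bottleneck.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)])
P = np.linalg.matrix_power(normalized_augmented_adjacency(G), 3)

# The bound on cross-bridge influence (0 -> 4) is far smaller than the
# within-triangle one (0 -> 1), flagging the bridge as a bottleneck.
print(P[0, 1], P[0, 4])
```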
It is also important to note that in our setting of graphs, the Jacobian (1) is computed as a discrete derivative. In our work, we assume that the Jacobian is computed by numerical approximations; no further details were provided by bottleneck-bronstein.
There also exist other measures to quantify over-squashing. For example, the Cheeger constant is a direct measure of over-squashing that captures how easy or difficult it is to totally disconnect a graph; however, it is known to be NP-hard to compute (cheeger).
3 Discrete Geometry and Curvature
In this section, we turn to the mathematical aspects of discrete curvature, which may be used to model information flow on a network (e.g., cgnn; bottleneck-bronstein). Here, we present the origins of smooth geometry and curvature, and discuss the evolution towards discrete notions. Most importantly, we define all discrete curvatures that will be implemented in our study.
3.1 Ricci Curvature and Ricci Flow
The Ricci curvature of differential geometry is, roughly speaking, a measure that quantifies the extent to which a Riemannian manifold locally differs from a Euclidean space in various tangential directions. In particular, Ricci curvature determines whether two geodesics shot in parallel from two nearby points on a given manifold tend to converge, remain parallel, or diverge along the manifold. The curvature is positive if the geodesics converge to a single point; zero if the geodesics remain parallel; and negative if the geodesics diverge; see Figure 3. The quicker the convergence or divergence, the larger the absolute value of the Ricci curvature.
Ricci curvature can be used to smooth a manifold via the Ricci flow, namely the partial differential equation
$$\partial_t g(t) = -2 \operatorname{Ric}(g(t)), \tag{3}$$
where $g$ denotes the Riemannian metric and $\operatorname{Ric}$ the Ricci curvature (ricci-flow). It should be noted that in most discretizations the 2-dimensional version of the flow is adopted (see, e.g., GuYau). In this dimension, $\operatorname{Ric} = K g$, where $K$ denotes the classical Gauss curvature; thus the Ricci flow becomes
$$\partial_t g(t) = -2 K g(t).$$
In the discrete setting of meshes or networks, the PDE above becomes an ODE; thus the flow is reversible, a fact of practical importance in many applications and, in particular, the one we study in this paper. Also observe that regions where $K > 0$ tend to shrink, while those with $K < 0$ tend to expand.
Consider the example manifold in Figure 4 for an intuition of how Ricci flow may be used to smooth a manifold. In Figure 4(a), the Ricci flow is illustrated by the color and thickness of the arrows, which indicate how much, as well as the direction in which, an expansion or contraction produces the smoothed version of the manifold illustrated in Figure 4(b).
In the above Example 2, the regions of negative Ricci curvature, where the Ricci flow is illustrated with blue arrows, can be seen as a bottleneck of the manifold. This observation motivates a discretization of manifolds to graphs, together with corresponding notions of Ricci curvature and Ricci flow, so that they can be used to model and reduce information over-squashing.
Figure 4: (a) Red and blue arrows correspond to points with positive and negative curvature respectively, and figuratively represent the metric tensor. (b) The manifold at a later time step, expanded and shrunk according to the arrows in (a).
From Manifolds to Graphs.
In certain instances, there is a natural reduction of manifolds to graphs. For example, images can be represented in a discrete manner by meshes, which can be seen as 4-regular graphs, while in graphics, data is encoded as triangular meshes whose 1-skeleta are also graphs.
Concretely, for the three types of curvature discussed above (positive, zero, and negative), there exist natural graph analogies. For a sphere where curvature is positive, a clique is a suitable representation: two parallel geodesics shot from two nearby points on a sphere meet at the top of a sphere, and, likewise, two edges from two adjacent points (connected directly by an edge) in a clique can meet at a common node to create a triangle. For a plane where curvature is flat, a rectangular grid is an appropriate graphical representation: parallel lines on a plane remain parallel forever, and edges from two adjacent points remain parallel. Finally, a hyperbolic manifold with negative curvature may be represented by a binary tree. See Figure 5 for graphical examples of manifolds with positive, zero, and negative curvature.
3.2 Discrete Curvature
With the above intuition of discretizing manifolds to graphs, it is natural to correspondingly define discretized versions of curvature. On graphs, discrete curvatures are traditionally node-based measures; discrete Ricci curvature, however, is an edge-based measure. This is natural, given that classical curvature is a directional measure, hence attached to vectors; it also allows for a better and deeper understanding of networks, which are defined by the relationships between their nodes, i.e., by their edges (najman-romon; discrete-curvature).
In the discrete Ricci curvature, the edge endpoints correspond to the two nearby points on the manifold from which parallel geodesics are shot to determine the Ricci curvature. We note that discrete Ricci curvature can also be defined for graph nodes by aggregating (e.g., averaging) the discrete curvature of incident edges; however, the notion of node curvature does not play a role in our study of over-squashing, which is an edge-specific phenomenon.
There is no single, established definition of discrete curvature. Depending on heuristics, there are many types. Here, we outline the first and best-known discrete curvatures historically proposed for networks. The driving motivation is that, in analogy to the Ricci flow for manifolds, the bottlenecks will have the lowest discrete curvature in the graph.
Note that in our setting, we work with undirected networks and these curvatures are defined for undirected networks. Analogues for directed networks exist, but since the interest of this work is to explore the role of various discrete curvatures on networks and their performance in reducing over-squashing as studied by bottleneck-bronstein, who study the undirected case, we follow suit in our work. Furthermore, we work with unweighted networks, which give rise to combinatorial properties of graphs that lend computational benefits.
1D Forman curvature.
For two nodes $v_1, v_2$ in a graph and an edge $e$ between them, the general 1D Forman curvature of $e$ is given by (forman-curvature):
$$F(e) = w_e \left( \frac{w_{v_1}}{w_e} + \frac{w_{v_2}}{w_e} - \sum_{e_{v_1} \sim e,\; e_{v_2} \sim e} \left[ \frac{w_{v_1}}{\sqrt{w_e w_{e_{v_1}}}} + \frac{w_{v_2}}{\sqrt{w_e w_{e_{v_2}}}} \right] \right), \tag{4}$$
where $e_{v_1}$ and $e_{v_2}$ denote the edges other than $e$ that are adjacent to the nodes $v_1$ and $v_2$ respectively; $w_e$, $w_{e_{v_1}}$, and $w_{e_{v_2}}$ denote the weights of the edges $e$, $e_{v_1}$, and $e_{v_2}$ respectively; and $w_{v_1}$ and $w_{v_2}$ denote the weights of the nodes $v_1$ and $v_2$ respectively.
Recall, however, that here we study unweighted graphs, which means that only combinatorial weights of nodes and edges are considered, i.e., the weights of all nodes and edges are equal to $1$. In this case, (4) becomes simply
$$F(e) = 4 - \deg(v_1) - \deg(v_2), \tag{5}$$
where $\deg(v)$ denotes the degree of the node $v$. Note that the first term is $4$ rather than $2$ because the node $v_1$ is counted as a neighbor in $\deg(v_2)$ and vice versa.
In our setting, the 1D Forman curvature is given by (5), which is a very simple expression that is extremely fast to compute, and is concerned only with the degrees of the endpoints of the edge under consideration. The highest value of the curvature is equal to $2$ and is attained when the edge is disconnected from the rest of the graph. The 1D Forman curvature is negative for the majority of edges in general, as it is always negative when the edge is directly connected to at least $3$ other edges.
The drawback of the simplicity of this curvature is that it is not always very descriptive, even in comparison with the curvature values of other edges in the graph, since in the model case of combinatorial weights, the 1D Forman curvature gives information only about the number of edges directly connected to the edge under consideration. For example, for two clique-like subgraphs connected by one edge, as in Figure 1(a), the bottleneck would be correctly identified as the edge with the lowest curvature. However, the curvature would generally be lower for clique-like components of the graph than for tree-like components, as an edge in a $k$-clique is connected directly to $2(k-2)$ other edges of the clique, while, e.g., an edge in a binary tree is connected directly to at most $4$ other edges. Thus the measure of interest, mainly in the combinatorial case, is the relative curvature in comparison to the curvature of other edges in the graph.
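In code, the combinatorial 1D Forman curvature of (5) is a one-liner; the following sketch (our own toy example using networkx) confirms that it assigns the lowest value to the bridge between two cliques, as in Figure 1(a):

```python
import networkx as nx

def forman_1d(G, u, v):
    """Combinatorial 1D Forman curvature of the edge (u, v)."""
    return 4 - G.degree(u) - G.degree(v)

# Two 4-cliques (nodes 0-3 and 4-7) joined by the bridge (3, 4).
G = nx.disjoint_union(nx.complete_graph(4), nx.complete_graph(4))
G.add_edge(3, 4)

curvatures = {e: forman_1d(G, *e) for e in G.edges}
bottleneck = min(curvatures, key=curvatures.get)
print(bottleneck, curvatures[bottleneck])  # the bridge (3, 4), with curvature -4
```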
Augmented Forman curvature.
The augmented Forman curvature or 2D Forman curvature attempts to solve the above-mentioned drawbacks of 1D Forman curvature.
For two nodes $v_1, v_2$ in a graph and an edge $e$ between them, the augmented Forman curvature or 2D Forman curvature is given by (2d-forman):
$$F^{\#}(e) = w_e \left[ \left( \sum_{f > e} \frac{w_e}{w_f} + \sum_{v < e} \frac{w_v}{w_e} \right) - \sum_{\hat{e} \parallel e} \left| \sum_{f > e,\, f > \hat{e}} \frac{\sqrt{w_e w_{\hat{e}}}}{w_f} - \sum_{v < e,\, v < \hat{e}} \frac{w_v}{\sqrt{w_e w_{\hat{e}}}} \right| \right], \tag{6}$$
where $\hat{e} \parallel e$ denotes that $\hat{e}$ is parallel to $e$, i.e., $e$ and $\hat{e}$ have a common higher- or lower-dimensional graph face (e.g., they are two edges that are both part of the same triangle, which is in turn equivalent to them having a common neighbor); $\sigma < \tau$ denotes that $\sigma$ is a graph face of $\tau$ (e.g., $e$ is an edge and $f$ is a triangle that $e$ is a part of); and the rest of the notation is as in Definition 3 (here the faces $f$ are also weighted, with weight $w_f$).
The mathematical definition of the augmented Forman curvature captured in (6) is significantly more complex than that of the 1D Forman curvature. However, following 2d-forman, we can consider solely $3$-cycles, i.e., triangles, and choose again only combinatorial weights. This reduces (6) to the following form that relates to (5) (2d-forman):
$$F^{\#}(e) = 4 - \deg(v_1) - \deg(v_2) + 3 m(e), \tag{7}$$
where $m(e)$ is the number of triangles containing the edge $e$ under consideration.
The idea is that the curvature (7) increases relative to (5) if an edge is contained in some triangles. More precisely, the factor of $3$ multiplying $m(e)$ in (7) guarantees that edges that create a triangle together with the edge under consideration do not contribute negatively to the curvature, but positively instead. Indeed, there should intuitively be no problem with information over-squashing within a $3$-cycle, which is the simplest form of a clique.
For each pair of edges that creates a triangle with $e$, the curvature (7) increases by $1$: the triangle contributes $+3$, while the two edges contribute $-2$ through the degrees of the endpoints of $e$. For the 1D Forman curvature, the same pair would simply decrease the curvature by $2$. If an edge adjacent to $e$ is not a member of a triangle with $e$, it contributes negatively to the augmented Forman curvature by decreasing it by $1$, just as in the 1D version. Hence, the augmented version maintains a balance between the growth of the degrees of the endpoints and the creation of $3$-cycles.
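A minimal sketch of the combinatorial augmented Forman curvature (7), again on the two-cliques-and-a-bridge toy example (our own construction, using networkx):

```python
import networkx as nx

def forman_augmented(G, u, v):
    """Combinatorial augmented (2D) Forman curvature restricted to
    triangles: 4 - deg(u) - deg(v) + 3 * (number of triangles over (u, v))."""
    triangles = len(set(G[u]) & set(G[v]))  # common neighbours of u and v
    return 4 - G.degree(u) - G.degree(v) + 3 * triangles

G = nx.disjoint_union(nx.complete_graph(4), nx.complete_graph(4))
G.add_edge(3, 4)  # bridge, contained in no triangle

print(forman_augmented(G, 3, 4))  # 4 - 4 - 4 + 0   = -4
print(forman_augmented(G, 0, 1))  # 4 - 3 - 3 + 3*2 =  4
```

Unlike the 1D version, the clique edges now receive positive curvature while the bridge stays strongly negative.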
Haantjes curvature.
The Haantjes curvature (haantjes2) is less common than the Forman curvatures, even though its definition is by far the simplest and most intuitive of all the discrete network curvatures; see haantjes for its even simpler network adaptations.
Consider a graph where all weights are equal to $1$ (i.e., the combinatorial case). For two nodes $v_1, v_2$ in a graph and an edge $e$ between them, the Haantjes curvature is given by
$$\kappa_H(e) = m(e),$$
where $m(e)$ is as in (7), i.e., the number of triangles containing the edge $e$.
The original Haantjes curvature is a metric curvature; thus, in the network case, it takes into account solely edge weights. However, in practice, one can devise new edge weights that incorporate both the original edge weights and the given vertex weights (see haantjes). Definition 5 is commonly used in graphics settings and simply counts the triangles adjacent to a given edge, i.e., the number of $3$-cycles containing the edge under consideration, $m(e)$.
As a consequence, the Haantjes curvature is indeed higher for clique-like components of a graph than for tree-like components (each edge of a $k$-clique is a part of $k-2$ triangles, but there are no triangles in a tree). Haantjes curvature is trivially nonnegative, which is also in contrast with 1D Forman curvature, where the majority of edges usually have negative curvature. The augmented Forman curvature can now be thought of as a balance between the 1D Forman and Haantjes curvatures. (Note that an “augmented” Haantjes curvature, which takes face weights into account as well, has been introduced in haantjes.)
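The combinatorial Haantjes curvature reduces to a triangle count, as the following sketch (our own toy example using networkx) illustrates:

```python
import networkx as nx

def haantjes(G, u, v):
    """Combinatorial Haantjes curvature: the number of triangles over (u, v)."""
    return len(set(G[u]) & set(G[v]))

G = nx.disjoint_union(nx.complete_graph(5), nx.complete_graph(5))
G.add_edge(4, 5)  # bridge

print(haantjes(G, 0, 1))  # each edge of a 5-clique lies in 5 - 2 = 3 triangles
print(haantjes(G, 4, 5))  # the bridge lies in none: 0
```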
Balanced Forman curvature.
The most recent proposal for discrete curvature that will also be studied in this paper is the balanced Forman curvature (BFC). It was introduced specifically in the context of over-squashing on graphs (bottleneck-bronstein).
Definition 6 (Balanced Forman curvature, (bottleneck-bronstein)).
Consider an edge $(i,j)$. Let $d_k$ denote the degree of the node $k$ for $k \in \{i, j\}$; $\#_\Delta(i,j)$, the number of triangles containing $(i,j)$; $\#^k_\square(i,j)$ for $k \in \{i, j\}$, the number of neighbors of $k$ that create a $4$-cycle (square) that contains $(i,j)$ and does not contain any diagonals (see Figure 6); and $\gamma_{\max}(i,j)$, the maximal number of such $4$-cycles that contain $(i,j)$ traversing a common node. Then the balanced Forman curvature is defined as $\mathrm{BFc}(i,j) = 0$ if $\min(d_i, d_j) = 1$ and otherwise
$$\mathrm{BFc}(i,j) = \frac{2}{d_i} + \frac{2}{d_j} - 2 + 2\,\frac{\#_\Delta(i,j)}{\max(d_i, d_j)} + \frac{\#_\Delta(i,j)}{\min(d_i, d_j)} + \frac{(\gamma_{\max}(i,j))^{-1}}{\max(d_i, d_j)} \left( \#^i_\square(i,j) + \#^j_\square(i,j) \right).$$
The idea behind the BFC is to preserve a balance between the complexity of computation (in the spirit of the simple formulation of the classical Forman curvature) and the richness of structural information associated with neighboring edges. In particular, the BFC formulation takes into account $3$- and $4$-cycles, as well as “loose” neighboring edges, i.e., those that do not create $3$- or $4$-cycles. Here, loose edges make a negative contribution to the BFC, while $3$-cycles make a positive contribution and $4$-cycles a weaker positive one. These components are normalized by node degrees.
Note that the BFC is similar to the Forman and Haantjes curvatures in that $3$-cycles are explicitly taken into consideration, while $4$-cycles are considered only via loose edges. In contrast to the BFC, the loose edges that form $4$-cycles make a negative contribution to the Forman curvatures (and none to the Haantjes curvature).
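The following is a sketch of the BFC computation following Definition 6; it is our own reading of the formula (with our own function name `bfc`), not the reference implementation of bottleneck-bronstein, and the handling of $4$-cycles in particular should be checked against their code:

```python
import networkx as nx

def bfc(G, i, j):
    """Balanced Forman curvature of the edge (i, j): a sketch following
    Definition 6, written from the formula rather than the reference code."""
    di, dj = G.degree(i), G.degree(j)
    if min(di, dj) == 1:
        return 0.0
    Ni, Nj = set(G[i]) - {j}, set(G[j]) - {i}
    tri = Ni & Nj  # triangles over (i, j)
    # Neighbours of i (resp. j) lying on a diagonal-free 4-cycle over (i, j).
    sq_i = {k for k in Ni - tri if (set(G[k]) & Nj) - tri}
    sq_j = {k for k in Nj - tri if (set(G[k]) & Ni) - tri}
    ric = (2 / di + 2 / dj - 2
           + 2 * len(tri) / max(di, dj) + len(tri) / min(di, dj))
    if sq_i or sq_j:
        # gamma_max: the largest number of such 4-cycles through one node.
        gamma = max([len((set(G[k]) & Nj) - tri) for k in sq_i]
                    + [len((set(G[k]) & Ni) - tri) for k in sq_j])
        ric += (len(sq_i) + len(sq_j)) / (gamma * max(di, dj))
    return ric

print(bfc(nx.complete_graph(4), 0, 1))  # positive on a clique
print(bfc(nx.cycle_graph(5), 0, 1))     # zero (flat) on a long cycle
```

On these sanity checks the sketch behaves as the definition suggests: clique edges are positively curved, long cycles and grid-like edges are flat, and edges with a degree-1 endpoint are assigned zero.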
3.3 Discrete Ricci Flow: The Stochastic Discrete Ricci Flow
We now outline the discretization of Ricci flow that will be implemented in our experimental work, namely, the stochastic discrete Ricci flow (SDRF) algorithm (bottleneck-bronstein). Specifically, it is a graph rewiring algorithm introduced with the aim of addressing the problem of over-squashing in GNN training; it is designed to support edges with low curvature, which are identified as bottlenecks, by adding new edges that increase curvature and thus the efficiency of message passing. It operates very much in the spirit of Ricci flow, where regions of negative or low curvature are identified and compensated by an opposite effect, depending on the degree of negativity, in order to smooth the manifold. Additionally, it incorporates a mechanism to prevent a blow-up of the size of the graph. The algorithm thus takes as input a graph and produces another graph in which the regions around the most negatively curved edges of the input graph are augmented with additional edges to increase the curvature in those regions.
At each iteration, the algorithm selects the edge with the smallest curvature, forms candidate edges whose addition would support the edge under consideration, and adds one candidate with softmax probability (regulated by a temperature parameter) proportional to the resulting curvature improvement, calculated as the difference between the curvature of the edge under consideration before and after adding the support edge. The algorithm then selects the edge with the highest curvature and, if this curvature value surpasses a certain threshold, removes the edge from the graph, thus ensuring a bound on the size of the graph. The process repeats until either convergence is reached, in the sense that there are no additional candidates to add and no edges to remove, or the maximum number of iterations is reached.
Notice here that the curvature computation is incorporated into the SDRF algorithm when the softmax probability is computed for each candidate selection.
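The iteration described above can be sketched as follows. This is a simplified, hedged rendition of one SDRF step, not the reference implementation: the candidate set, the parameter names (`temperature`, `upper_bound`), and the pluggable `curvature` callback are our own illustrative choices.

```python
import math
import random

import networkx as nx

def sdrf_step(G, curvature, temperature=5.0, upper_bound=2.0):
    """One iteration of an SDRF-style rewiring loop (simplified sketch).

    `curvature` is any edge-curvature function (G, u, v) -> float; the
    candidate set and parameter names are illustrative choices.
    """
    # 1. The edge with the smallest curvature is the bottleneck candidate.
    u, v = min(G.edges, key=lambda e: curvature(G, *e))
    base = curvature(G, u, v)

    # 2. Candidate support edges close a short path around (u, v).
    candidates = [(k, l) for k in list(G[u]) + [u] for l in list(G[v]) + [v]
                  if k != l and not G.has_edge(k, l)]
    if candidates:
        gains = []
        for k, l in candidates:
            G.add_edge(k, l)
            gains.append(curvature(G, u, v) - base)
            G.remove_edge(k, l)
        # Softmax over curvature improvements, regulated by the temperature.
        weights = [math.exp(temperature * g) for g in gains]
        G.add_edge(*random.choices(candidates, weights=weights)[0])

    # 3. Prune the most positively curved edge if it exceeds the threshold.
    w, x = max(G.edges, key=lambda e: curvature(G, *e))
    if curvature(G, w, x) > upper_bound:
        G.remove_edge(w, x)
    return G
```

For example, running one step on two cliques joined by a bridge, with the combinatorial 1D Forman curvature plugged in, adds a single support edge around the bridge while leaving the node set untouched.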
4 Data and Experimental Setup
In this section, we describe the datasets used and give details on the setup of our experimental study. The aim is to test the performance in terms of accuracy and computational runtime of various discrete curvatures in the SDRF algorithm that is designed to reduce information over-squashing in training GNNs. With this aim in mind, the experiments were set up to closely align with the setup of bottleneck-bronstein in order to better relate the findings. Furthermore, to ensure fairness of method evaluation, the performance on additional datasets that were not studied by bottleneck-bronstein was also evaluated, resulting in a wider variety of dataset applications and an independent implementation of their proposed BFC. Recall, however, that an important difference between our work and theirs is that only one curvature—the BFC—was used in their curvature-based rewiring.
4.1 Datasets
We used the following 12 benchmarking datasets in our experimental study:
Pubmed
Large citation dataset of scientific publications related to diabetes, classified into one of three classes
Cornell, Texas, and Wisconsin (ctw)
Small datasets containing web pages collected from the computer science departments of the corresponding universities
Chameleon, Squirrel (chameleon-squirrel), and Actor (actor)
Large datasets based on the Wikipedia networks
Computers and Photo (computers-photo)
Large e-commerce (Amazon) datasets
Coauthor CS (cgnn)
Large citation dataset with papers in computer science
The datasets Computers, Photo and Coauthor CS were not evaluated in bottleneck-bronstein. The details of the datasets are summarized in Table 1.
            Chameleon  Squirrel  Actor  Computers  Photo   Coauthor CS
Nodes       832        2186      4388   13381      7487    18333
Edges       12355      65224     21907  245778     119043  81894
Features    2323       2089      931    767        745     6805
Classes     5          5         5      10         8       15
Undirected  No         No        No     Yes        Yes     Yes
4.2 Experimental Design
We tested the performances of no curvature (i.e., no rewiring), 1D Forman curvature, augmented Forman curvature, Haantjes curvature, and balanced Forman curvature in the SDRF algorithm for graph rewiring. The implementation of the SDRF algorithm was taken from the repository associated with bottleneck-bronstein, available at https://github.com/jctops/understanding-oversquashing. Other design choices and setup parameters, such as data loading, selection of the largest connected component, network type, hyperparameters, and seeds, have been set following bottleneck-bronstein; Table 2 presents the hyperparameters used for training the GNN models.
Software and Data Availability.
The full implementation of the SDRF algorithm with all curvatures studied incorporated and datasets are freely and publicly available at https://github.com/jakubbober/discrete-curvature-rewiring.
5 Results: Supervised Learning with Graph Rewiring
We now present the results of the supervised learning task of SDRF-based graph rewiring on each of the 12 datasets discussed in the previous section. We report results on accuracy and computational runtime.
Each experiment was run for 100 seeds, and we report 95% confidence intervals of the mean accuracies using a $z$-score of 1.96. For reference and performance comparison, the 95% confidence intervals for SDRF rewiring using the BFC reported by bottleneck-bronstein are also given for the relevant datasets.
The best two results are highlighted for each dataset in each accuracy table: the best in red bold, the second best in black bold (excluding the BFC results from bottleneck-bronstein reported for reference). The None curvature row represents results without any rewiring. OOM indicates that an out-of-memory error occurred. N/A in the reference BFC row for the Computers, Photo, and Coauthor CS datasets indicates that there are no reference results, as these datasets were not studied by bottleneck-bronstein.
The results reported in Table 3 indicate that SDRF rewiring generally increases the training performance. The reference BFC results reported in bottleneck-bronstein are also generally comparable to the BFC results of the performed experiment, although we note a tendency for our computation of BFC-based SDRF rewiring to be on the lower side, though still in the general range where we are able to claim reproducibility.
In particular, we note that the performance for the classical curvatures is generally better than the performance without any rewiring, and often better than the performance of the BFC. For some results in Table 3, the simplest form of curvature—the 1D Forman curvature—tends to give the best results. This suggests that the edges with large sums of endpoint degrees are the graph bottlenecks that suffer from over-squashing. The results for the Haantjes curvature are the best for some of the other datasets, which suggests that membership in many $3$-cycles helps an edge reduce over-squashing. Although less frequently, the augmented Forman curvature also yields the best results for certain experiments, which could mean that maintaining the balance between the two quantities can reduce over-squashing most effectively.
Note, however, that rerunning the experiments yielded significantly different results, especially for the small datasets (Cornell, Texas, Wisconsin). For example, Table 3 shows that the Haantjes curvature generally gives the best results in the first run, while the augmented Forman curvature performs best in the second. More importantly, the corresponding results (dataset–curvature pairs) for the two runs often fall outside each other's confidence intervals, indicating a lack of robustness. One explanation is overfitting of the average accuracy to a single instance of the SDRF rewiring, which can have a significant impact on average performance, especially for small datasets, where rewiring multiple edges affects the graph structure more than it does for larger datasets. The results for these datasets also differ significantly between curvature types and relative to performance without rewiring. Moreover, the BFC results for these datasets deviate more from the reference BFC results than they do for the others.
To further investigate the intuition that adding or deleting edges on smaller graphs impacts the overall graph structure more significantly (bearing in mind that the hyperparameters in Table 2 are very high relative to graph size), we re-ran the experiments for the Cora, Citeseer, Cornell, Texas, and Wisconsin datasets with a fresh rewiring for each seed. These datasets were selected because their rewiring was the fastest (as discussed further below under computational runtime). The test results of these experiments are shown in Table 4.
Table 4 presents results from two runs with rewiring for every seed, which are significantly more robust. The sizes of the confidence intervals are comparable to those in Table 3, but only two pairs of corresponding runs are not contained in each other's confidence intervals (namely, Cornell–Haantjes and Texas–BFC). As there are 25 dataset–curvature pairs for which the experiments were run, the mean results are indeed robust and may reasonably be treated as close to independent and identically distributed (i.i.d.): the probability that two or more out of 25 means of i.i.d. random variables fall outside the corresponding confidence intervals is high.
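This probability can be sanity-checked with a back-of-the-envelope binomial model (an assumption, not the paper's exact setup): if each of the 25 pair means independently misses its 95% interval with probability 0.05, the chance of two or more misses is the binomial upper tail:

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Two or more of 25 means falling outside their 95% intervals.
p_two_or_more = prob_at_least(2, 25, 0.05)  # roughly 0.36
```

Under this simplified model the probability is roughly 36%, so observing only two misses out of 25 pairs is entirely consistent with robust means.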
Furthermore, the results of these additional experiments are significantly worse than the reference BFC results. This is likely because the accuracies for differently rewired graphs are averaged out, rather than benchmarking only the rewiring with the best validation accuracy. In contrast, the results in Table 3 are slightly better than the reference BFC results for some dataset–curvature pairs and slightly worse for others. When using the framework in practice, training can be performed for several seeds, and the model with the best validation accuracy, together with its rewired graph structure, can be selected.
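The per-seed selection described above can be sketched as follows; `rewire`, `train`, and `evaluate` are hypothetical stand-ins for the SDRF rewiring and GNN training routines, not functions from the paper's codebase:

```python
def select_best_model(graph, seeds, rewire, train, evaluate):
    """Run rewiring + training once per seed; keep the model (and its
    rewired graph) with the best validation accuracy."""
    best = None
    for seed in seeds:
        g = rewire(graph, seed=seed)      # one SDRF rewiring per seed
        model = train(g, seed=seed)
        val_acc = evaluate(model, g, split="val")
        if best is None or val_acc > best[0]:
            best = (val_acc, model, g)
    return best  # (val_acc, model, rewired_graph)
```

Only the selected rewired graph is then used at test time, mirroring the benchmarking protocol discussed above.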
Table 5 summarizes the test results for the pairs of rewiring instances and model parameters that achieved the best validation accuracy in the second run of the experiments from Table 4. Considering only the second run does not significantly harm the robustness of the results since, as justified above, the results in Table 4 are robust.
The main conclusion we draw from these experiments is that no single curvature type has the best mean performance across all datasets, but it is reasonable to conclude that using classical curvatures for SDRF-based rewiring can lead to significant performance improvements, often achieving better results than BFC. For every dataset, SDRF-based rewiring almost always yields the best test accuracy when using one of the three classical curvature types rather than BFC (although no rewiring occasionally yields the best results). Often, the two best test accuracies are both achieved using classical curvatures.
Determining Bottleneckness: Jacobian Bounds.
We now assess the bottleneckness of each of our datasets as further validation of our accuracy conclusions. As described in Section 2, bottleneckness is heuristically more severe the faster the minimum nonzero values of the powers of the normalized augmented adjacency matrix (2) decrease.
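This quantity can be computed directly. A minimal NumPy sketch, assuming the usual GCN-style normalization $\hat{A} = \hat{D}^{-1/2}(A + I)\hat{D}^{-1/2}$ and a dense adjacency matrix (the path-graph example is illustrative):

```python
import numpy as np

def min_nonzero_entries(A, max_power=10):
    """Minimum nonzero entry of each power of the normalized
    augmented adjacency matrix A_hat = D^-1/2 (A + I) D^-1/2."""
    A_aug = A + np.eye(A.shape[0])
    d = A_aug.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_aug @ D_inv_sqrt
    mins, P = [], np.eye(A.shape[0])
    for _ in range(max_power):
        P = P @ A_hat
        nz = P[P > 1e-12]          # ignore structural/numerical zeros
        mins.append(nz.min())
    return mins

# Path graph on 5 nodes: the minima decay quickly, signaling a bottleneck.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
mins = min_nonzero_entries(A, max_power=6)
```

Plotting `mins` on log-log axes reproduces the kind of decay curves shown in Figure 7 for small graphs; for large datasets a sparse-matrix implementation would be required.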
Figure 7 presents the log-log plot of the minimum nonzero entries of the first powers of the normalized augmented adjacency matrix for most of the studied datasets. Pubmed, Computers, Photo, and Coauthor CS are absent from this plot because their adjacency matrices were too large, causing the processes computing the powers to be killed.
From this figure, we see that the decay of the values is slowest for the Cora and Citeseer datasets. This confirms our previous conclusions: according to the previously reported results, the difference in accuracy between no rewiring and rewiring with the various curvatures was minimal for these datasets, suggesting that they are not significantly affected by over-squashing. Similar conclusions hold for the Squirrel dataset, whose plot in Figure 7 also decreases more slowly (especially between and ), and which is also reported to have the best accuracy without rewiring in Table 3.
To capture the reduction of over-squashing achieved by rewiring a graph, we ran experiments computing the minimum nonzero entries of the matrix powers as above, both before and after the rewiring. The results for several datasets, curvature types, and matrix powers are presented in Table 6.
The rewiring instances chosen for this experiment are those with the best validation accuracy used for Table 5. For the datasets not evaluated there (Chameleon, Squirrel, Actor), rewiring instances from the second run of the experiments in Table 3 were selected. The data in Table 6 are rounded to decimal places; the matrix powers reported for each curvature type are chosen to be , , and .
From Table 6 and the comparison of Figure 7 with Figure 8, we see that rewiring successfully slows the decay of the Jacobian bounds. The minimum nonzero values of the powers of the normalized augmented adjacency matrices are lower by several orders of magnitude without rewiring than with SDRF-based rewiring using any of the discrete curvatures. The only two exceptions are the Cora and Citeseer datasets, for which the values for the respective powers are similar with and without rewiring; see Table 6. This again confirms that these two datasets have low bottleneckness and are not particularly prone to over-squashing.
5.2 Computational Runtime
We now assess the computational runtime of the SDRF algorithm for graph rewiring based on each curvature type. We measure the runtime of one rewiring process per curvature type and per dataset; the measurements are given in Table 7.
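A single-run wall-clock measurement of this kind can be sketched as follows; the `sum` call is a placeholder workload, since the actual rewiring routines are specific to the experiment code:

```python
import time

def time_once(fn, *args, **kwargs):
    """Wall-clock one call of fn; a single run suffices for the
    order-of-magnitude comparisons between curvature types made here."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0

# Placeholder workload standing in for one rewiring pass.
res, elapsed = time_once(sum, range(1_000_000))
```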
The runtimes are reported for only one instance per dataset and curvature type, to avoid the influence of spurious computational issues such as CPU and GPU occupancy by other processes, which would accumulate over repeated iterations. The interest here lies in comparing the longer computation times, which reveal the differences in computational complexity at scale.
From these results, we see that all of the classical discrete curvatures studied have significantly shorter computation times than BFC. The slowest of the three classical curvatures at scale is the augmented Forman curvature. This is expected, as it essentially requires the same calculations as the 1D Forman and Haantjes curvatures combined (computing the degrees of the endpoints and the adjacent triangles for each edge).
For the Computers and Photo datasets, however, computing the augmented Forman curvature took longer than computing BFC. This suggests that for some types of graphs, possibly larger or denser ones (note from Table 1 that the edge-to-node ratio is very high for these two datasets), the BFC computation can outperform the augmented Forman curvature computation in runtime. Nevertheless, the 1D Forman and Haantjes curvatures remain quicker to compute.
In this paper, we systematically and comprehensively studied the role of various classical and novel discrete curvatures in mitigating the over-squashing problem in training GNNs. Specifically, following the work of bottleneck-bronstein, we adapted discretizations of Ricci curvature and Ricci flow, whose smooth, manifold-valued counterparts correspond to the network characteristics relevant to the over-squashing problem: information flow on a network and bottleneckness of a network, respectively. In bottleneck-bronstein (considered the current state-of-the-art in mitigating the over-squashing problem in GNN training, and ranked among the top 1.5% of submissions at the 2022 International Conference on Learning Representations (ICLR) with an Honorable Mention), the BFC was proposed as a discrete notion of Ricci curvature and the SDRF algorithm as a discrete notion of Ricci flow. In our work, we tested a wide range of classical discrete curvatures against the BFC within the SDRF algorithm. We found that the more classical curvatures achieved training accuracy of the same order as the BFC and, at times, outperformed it, while far outperforming it in computational runtime. From this systematic study, we conclude that the impact of the contribution by bottleneck-bronstein lies in the SDRF algorithm rather than the BFC, and that almost any of the more classical discrete curvatures may be used in place of the BFC with the SDRF algorithm in favor of more efficient computational runtimes, an important consideration when studying very large networks.
Directions for future study include exploring the performance of classical discrete curvatures that take into account the directedness of graphs, within SDRF and other rewiring methods. Alternative discrete geometric approaches to mitigating over-squashing that do not involve rewiring may also be explored, in the spirit of the CGNN (cgnn). Other computational notions of geometry for networks may likewise be investigated, such as those arising from topological data analysis, where persistent homology concurrently captures the topology of data as well as its integral geometry. Such an approach would be particularly interesting when the goal is to preserve the topology of a graph, as the CGNN does.