1. Introduction
The application of deep learning on graphs, or Graph Neural Networks (GNNs), has recently gained considerable attention. Among the significant open challenges in this area of research is the question of scalability. Cornerstone techniques such as Graph Convolutional Networks (GCNs)
(Kipf and Welling, 2017) make the training dependent on the neighbourhood of any given node. Since in many real-world graphs the number of neighbours grows exponentially with the number of hops taken, the scalability of such methods is a significant challenge. In recent years, several techniques have been proposed to make GCNs more scalable, including layer-wise sampling (Hamilton et al., 2018) and subgraph sampling (Chiang et al., 2019) approaches (see section 2). We contribute to this line of work by proposing a decentralised divide-and-conquer approach to improve the scalability of network embedding techniques. Our “local2global” approach proceeds by first dividing the network into overlapping subgraphs (or “patches”) and training separate local node embeddings for each patch (local in the sense that each patch is embedded into its own local coordinate system). The resulting local patch node embeddings are then transformed into a global node embedding (i.e., all nodes embedded into a single global coordinate system) by estimating a rigid motion applied to each patch using the As-Synchronized-As-Possible (ASAP) algorithm (Cucuringu et al., 2012b, a). A key distinguishing feature of this “decentralised” approach is that we can train the different patch embeddings separately, without the need to keep parameters synchronised. The benefit of local2global is threefold: (1) it is highly parallelisable, as each patch is trained independently; (2) it can be used in privacy-preserving applications and federated learning setups, where frequent communication between devices is often a limiting factor (Kairouz and McMahan, 2021), or in “decentralised” organisations, where one needs to simultaneously consider data sets from different departments; (3) it can reflect varying structure across a graph through asynchronous parameter learning.
Another important advantage of our local2global approach is that it can be directly applied to improve the scalability of a large variety of network embedding techniques (Goyal and Ferrara, 2018), unlike most of the existing approaches reviewed in section 2 which are restricted to GCNs.
2. Related work
The key scalability problems for GCNs only concern deep architectures where we have L ≥ 2 nested GCN layers. In particular, a single-layer GCN is easy to train in a scalable manner using mini-batch stochastic gradient descent (SGD). For simplicity, assume that we have a fixed feature dimension F, i.e., F_l = F for all layers l. The original GCN paper (Kipf and Welling, 2017) uses full-batch gradient descent to train the model, which entails computing the gradient for all nodes before updating the model parameters. This is efficient in terms of time complexity per epoch (O(LmF + LnF^2), where n is the number of nodes and m is the number of edges). However, it requires storing all the intermediate embeddings and thus has memory complexity O(LnF). Further, as there is only a single parameter update per epoch, convergence tends to be slow.

The problem with applying vanilla mini-batch SGD (where we only compute the gradient for a sample of nodes, i.e., the batch) to a deep GCN model is that the embedding of the nodes in the final layer depends on the embedding of all the neighbours of the nodes in the previous layer, and so on iteratively. Therefore the time complexity for a single mini-batch update approaches that of a full-batch update as the number of layers increases, unless the network has disconnected components. There are mainly three families of methods (Chen et al., 2020; Chiang et al., 2019) that have been proposed to make mini-batch SGD training more efficient for GCNs.
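The neighbourhood-explosion effect described above is easy to see empirically. The following sketch (our illustration, not from the paper; the graph construction and all parameter values are arbitrary choices for the example) counts how many input-node features a single node's embedding depends on as the number of nested layers grows.

```python
import random

random.seed(0)
n = 2000
adj = {i: set() for i in range(n)}
for i in range(n - 1):          # path edges keep the graph connected
    adj[i].add(i + 1)
    adj[i + 1].add(i)
for _ in range(2 * n):          # extra random edges, mean degree ~6
    u, v = random.randrange(n), random.randrange(n)
    if u != v:
        adj[u].add(v)
        adj[v].add(u)

def receptive_field_size(adj, node, num_layers):
    """Number of nodes whose input features influence `node` after num_layers hops."""
    seen, frontier = {node}, {node}
    for _ in range(num_layers):
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
    return len(seen)

sizes = [receptive_field_size(adj, 0, L) for L in range(6)]
print(sizes)  # grows quickly with depth, approaching n
```

The rapid growth of these counts is why a "mini-batch" update for a deep GCN ends up touching most of the graph.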

Layer-wise sampling:

The idea behind layer-wise sampling is to sample a set of nodes for each layer of the nested GCN model and compute the embedding for sampled nodes in a given layer based only on embeddings of sampled nodes in the previous layer, rather than considering all the neighbours as would be the case for vanilla SGD. This seems to have first been used by GraphSAGE (Hamilton et al., 2018), where a fixed number of neighbours is sampled for each node at each layer. However, this results in a computational complexity that is exponential in the number of layers, and also in redundant computations, as the same intermediate nodes may be sampled starting from different nodes in the batch. Later methods avoid the exponential complexity by first sampling a fixed number of nodes for each layer, either independently (FastGCN (Chen et al., 2018a)) or conditional on being connected to sampled nodes in the previous layer (LADIES (Zou et al., 2019)), and reusing embeddings. Both methods use importance sampling to correct for bias introduced by non-uniform node-sampling distributions. Also notable is (Chen et al., 2018b), which uses variance reduction techniques to effectively train a GCN model using neighbourhood sampling as in GraphSAGE with only 2 neighbours per node. However, this is achieved by storing hidden embeddings for all nodes in all layers and thus has the same memory complexity as full-batch training.
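The fixed-fanout sampling idea can be sketched as follows (our simplification, not GraphSAGE's actual implementation; `adj` and the fanout values are illustrative):

```python
import random

def sample_layers(adj, batch, fanouts, rng=random.Random(0)):
    """Return the sampled node set for each layer, output layer first.

    For each node in the current layer, at most `fanout` neighbours are
    sampled for the next layer, instead of the full neighbourhood.
    """
    layers = [set(batch)]
    for fanout in fanouts:
        prev = layers[-1]
        nxt = set(prev)  # keep the nodes themselves (self-connections)
        for u in prev:
            neigh = list(adj[u])
            nxt.update(rng.sample(neigh, min(fanout, len(neigh))))
        layers.append(nxt)
    return layers

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
layers = sample_layers(adj, batch=[0], fanouts=[2, 2])
print([sorted(l) for l in layers])
```

Each layer adds at most `fanout` nodes per node of the previous layer, so the sampled computation graph grows geometrically in the fanout rather than with the full neighbourhood sizes.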
Linear model:

Linear models remove the non-linearities between the different GCN layers, which means that the model can be expressed as a single-layer GCN with a more complicated convolution operator and hence trained efficiently using mini-batch SGD. Common choices for the convolution operator are powers of the normalised adjacency matrix (Wu et al., 2019) and variants of personalised PageRank (PPR) matrices (Busch et al., 2020; Chen et al., 2020; Bojchevski et al., 2020; Klicpera et al., 2019). Another variant of this approach is (Frasca et al., 2020), which proposes combining different convolution operators in a wide rather than deep architecture. There are different variants of the linear model architecture, depending on whether the non-linear feature transformation is applied before or after the propagation (see (Busch et al., 2020) for a discussion), leading to predict-propagate and propagate-predict architectures respectively. The advantage of the propagate-predict architecture is that one can precompute the propagated node features (e.g., using an efficient push-based algorithm (Chen et al., 2020)), which can make training highly scalable. The disadvantage is that this will densify sparse features, which can make training harder (Bojchevski et al., 2020). However, the results from (Busch et al., 2020) suggest that there is usually not much difference in prediction performance between these options (or the combined architecture, where trainable transformations are applied both before and after propagation).
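The propagate-predict variant can be sketched as follows: the propagation with the normalised adjacency matrix is precomputed once, after which any scalable classifier can be trained on the propagated features. This is our minimal numpy illustration in the spirit of SGC (Wu et al., 2019), not any library's API.

```python
import numpy as np

def normalised_adjacency(A):
    """Symmetrically normalised adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def precompute_features(A, X, K=2):
    """Apply K propagation steps offline; no learnable parameters involved."""
    S = normalised_adjacency(A)
    for _ in range(K):
        X = S @ X
    return X

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)  # toy one-hot features
Z = precompute_features(A, X, K=2)
print(Z.shape)  # (3, 3); Z can now be fed to any mini-batch linear classifier
```

Because the propagation is done once up front, mini-batches over `Z` behave like mini-batches for an ordinary (graph-free) linear model.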
Subgraph sampling:

Subgraph sampling techniques (Zeng et al., 2019; Chiang et al., 2019; Zeng et al., 2020) construct batches by sampling an induced subgraph of the full graph. In particular, for subgraph sampling methods, the sampled nodes in each layer of the model in a batch are the same. In practice, subgraph sampling seems to outperform layer-wise sampling (Chen et al., 2020). GraphSAINT (Zeng et al., 2020), which uses a random-walk sampler with an importance sampling correction similar to (Chen et al., 2018a; Zou et al., 2019), seems to have the best performance so far. Our local2global approach shares similarities with subgraph sampling, most notably Cluster-GCN (Chiang et al., 2019), which uses graph clustering techniques to sample the batches. The key distinguishing feature of our approach is that we train independent models for each patch, whereas for Cluster-GCN, model parameters have to be kept in sync for different batches, which hinders fully distributed training and its associated key benefits (see section 1).
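A minimal sketch of the subgraph-sampling idea (our illustration): each batch is the subgraph induced by one cluster of nodes, so all layers of the model see the same node set. Here clusters are formed by random striding for brevity; Cluster-GCN instead uses a graph clustering algorithm.

```python
import random

def induced_subgraph(adj, nodes):
    """Restrict the adjacency structure to `nodes` and edges among them."""
    nodes = set(nodes)
    return {u: adj[u] & nodes for u in nodes}

def cluster_batches(adj, num_clusters, rng=random.Random(0)):
    """Yield one induced-subgraph batch per (randomly strided) cluster."""
    nodes = list(adj)
    rng.shuffle(nodes)
    for k in range(num_clusters):
        cluster = nodes[k::num_clusters]
        yield induced_subgraph(adj, cluster)

adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}  # 6-cycle
batches = list(cluster_batches(adj, num_clusters=2))
print([sorted(b) for b in batches])
```

Edges leaving a batch's node set are dropped, which is exactly the approximation that importance-sampling corrections (as in GraphSAINT) are designed to compensate for.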
3. LOCAL2GLOBAL algorithm
The key idea behind the local2global approach to graph embedding is to embed different parts of a graph independently by splitting the graph into overlapping “patches” and then stitching the patch node embeddings together to obtain a global node embedding. The stitching of the patch node embeddings proceeds by estimating the rotations/reflections and translations for the embedding patches that best align them based on the overlapping nodes.
Consider a graph G(V, E) with node set V and edge set E. The input for the local2global algorithm is a patch graph G_p(P, E_p), where each node P_k ∈ P (i.e., a “patch”) of the patch graph is a subset of V, and each patch P_k is associated with an embedding X^(k) ∈ R^(|P_k| × d). We require that the set of patches is a cover of the node set (i.e., ∪_k P_k = V), and that the patch embeddings all have the same dimension d. We further assume that the patch graph is connected and that the patch edges satisfy the minimum overlap condition |P_i ∩ P_j| ≥ d + 1 for all {P_i, P_j} ∈ E_p. Note that a pair of patches that satisfies the minimum overlap condition is not necessarily connected in the patch graph.
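The input conditions can be stated as a small check (our illustration; the patches, patch edges, and embedding dimension `d` below are arbitrary examples): the patches must cover the node set, and every patch edge must share at least d + 1 nodes.

```python
def check_patch_graph(patches, patch_edges, nodes, d):
    """Verify the cover and minimum-overlap conditions for a patch graph."""
    cover = set().union(*patches.values()) == set(nodes)
    overlaps_ok = all(
        len(patches[i] & patches[j]) >= d + 1 for i, j in patch_edges
    )
    return cover, overlaps_ok

nodes = range(8)
patches = {0: {0, 1, 2, 3, 4}, 1: {3, 4, 5, 6, 7}}
patch_edges = [(0, 1)]  # |P_0 ∩ P_1| = 2, enough for d = 1
print(check_patch_graph(patches, patch_edges, nodes, d=1))  # (True, True)
```

The d + 1 overlapping nodes are what make the relative rigid motion between two patches identifiable in d dimensions.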
The local2global algorithm for aligning the patch embeddings proceeds in two stages and is an evolution of the approach in (Cucuringu et al., 2012b, a). We assume that each patch embedding is a perturbed part of an underlying global node embedding, where the perturbation is composed of reflection (an element of O(d)), rotation (an element of SO(d)), translation (a vector in R^d), and noise. The goal is to estimate the transformation applied to each patch using only pairwise noisy measurements of the relative transformations for pairs of connected patches. In the first stage, we estimate the orthogonal transformation to apply to each patch embedding, using a variant of the eigenvector synchronisation method (Singer, 2011; Cucuringu et al., 2012b, a). In the second stage, we estimate the patch translations by solving a least-squares problem. Note that unlike (Cucuringu et al., 2012b, a), we solve for translations at the patch level rather than solving a least-squares problem for the node coordinates. This means that the computational cost for computing the patch alignment is independent of the size of the original network and depends only on the amount of patch overlap, the number of patches, and the embedding dimension.

3.1. Eigenvector synchronisation over orthogonal transformations
We assume that to each patch P_j there corresponds an unknown group element S_j ∈ O(d) (represented by a d × d orthogonal matrix), and that for each pair of connected patches {P_j, P_k} ∈ E_p we have a noisy proxy for S_j S_k^(-1), which is precisely the setup of the group synchronisation problem.

For a pair of connected patches {P_j, P_k} ∈ E_p, which by assumption satisfies |P_j ∩ P_k| ≥ d + 1, we can estimate the relative rotation/reflection R̃_jk by applying the method from (Horn et al., 1988) to the embeddings of their overlap P_j ∩ P_k. (Note that the rotation/reflection can be estimated without knowing the relative translation.) Thus, we can construct a sparse block matrix R̃ whose d × d block R̃_jk is the orthogonal matrix representing the estimated relative transformation from patch P_k to patch P_j if {P_j, P_k} ∈ E_p and 0 otherwise, such that R̃_jk ≈ S_j S_k^T for connected patches.
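The estimation of a single relative transformation can be sketched as an orthogonal Procrustes problem solved via SVD, in the spirit of (Horn et al., 1988); this is our minimal version, and the centring step is what makes the estimate independent of the unknown relative translation.

```python
import numpy as np

def relative_transform(X_j, X_k):
    """Orthogonal R minimising ||centred(X_j) - centred(X_k) R^T|| on the overlap."""
    Xj = X_j - X_j.mean(axis=0)  # centring removes the relative translation
    Xk = X_k - X_k.mean(axis=0)
    U, _, Vt = np.linalg.svd(Xj.T @ Xk)
    return U @ Vt  # nearest orthogonal matrix to the cross-covariance

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 2))                   # coordinates of 5 overlap nodes, d = 2
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
X_j, X_k = Z @ R_true.T + 1.5, Z              # patch j is rotated and shifted
R_est = relative_transform(X_j, X_k)
print(np.allclose(R_est, R_true))  # True (noise-free overlap)
```

With noisy embeddings the same computation returns the best-fitting orthogonal transformation in the least-squares sense.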
In the noise-free case, we have the consistency equations S_j = R̃_jk S_k for all j, k such that {P_j, P_k} ∈ E_p. We can combine the consistency equations for all neighbours of a patch to get

(1)   S_j = (1 / w_j) Σ_{k : {P_j, P_k} ∈ E_p} w_jk R̃_jk S_k,   w_jk = |P_j ∩ P_k|,   w_j = Σ_k w_jk,

where the overlap sizes w_jk weight the contributions, as we expect a larger overlap to give a more robust estimate of the relative transformation. We can write eq. 1 as S = M S, where S is the dp × d block matrix obtained by stacking the S_j (with p = |P| the number of patches) and M is the dp × dp block matrix with blocks M_jk = (w_jk / w_j) R̃_jk. Thus, in the noise-free case, the columns of S are eigenvectors of M with eigenvalue 1. Thus, following (Cucuringu et al., 2012b, a), we can use the d leading eigenvectors of M as the basis for estimating the transformations. (While M is not symmetric, it is similar to a symmetric matrix and thus admits a basis of real eigenvectors.) Let U be the dp × d matrix whose columns are the d leading eigenvectors of M, and let U_j be the d × d block of U corresponding to patch P_j. We obtain the estimate Ŝ_j of S_j by finding the nearest orthogonal transformation to U_j using an SVD (Horn et al., 1988), and hence the estimated rotation-synchronised embedding of patch P_j is X̂^(j) = X^(j) Ŝ_j^T.

3.2. Synchronisation over translations
After synchronising the rotations of the patches, we can estimate the translations by solving a least-squares problem. Let X̂_i^(j) ∈ R^d be the (rotation-synchronised) embedding of node i in patch P_j (X̂_i^(j) is only defined if i ∈ P_j). Let T_j ∈ R^d be the translation of patch P_j; then, in the noise-free case, we have the consistency equations

(2)   X̂_i^(j) + T_j = X̂_i^(k) + T_k   for all i ∈ P_j ∩ P_k such that {P_j, P_k} ∈ E_p.

We can combine the conditions in eq. 2 for each edge in the patch graph to obtain

(3)   B T = C,   where C_{(P_j, P_k)} = (1 / |P_j ∩ P_k|) Σ_{i ∈ P_j ∩ P_k} (X̂_i^(k) − X̂_i^(j)),

where T is the p × d matrix such that the jth row of T is the translation T_j of patch P_j, and B ∈ {−1, 0, 1}^(|E_p| × p) is the incidence matrix of the patch graph with entries B_{(P_j, P_k), l} = δ_jl − δ_kl, where δ denotes the Kronecker delta. Equation 3 defines an overdetermined linear system that has the true patch translations as a solution in the noise-free case. In the practical case of noisy patch embeddings, we can instead solve eq. 3 in the least-squares sense:

(4)   T̂ = argmin_T ‖B T − C‖_F^2.

We estimate the aligned node embedding in a final step using the centroid of the aligned patch embeddings of a node, i.e., x̄_i = (1 / |{j : i ∈ P_j}|) Σ_{j : i ∈ P_j} (X̂_i^(j) + T̂_j).
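The two synchronisation stages can be sketched as follows (our minimal implementation of the idea, not the authors' code; the toy patches, transformations, and weights are synthetic):

```python
import numpy as np

def synchronise_rotations(p, d, rel, weights):
    """rel[(j, k)]: estimated R_jk ~ S_j S_k^T; weights[(j, k)] = |P_j ∩ P_k|."""
    M = np.zeros((p * d, p * d))
    w = np.zeros(p)
    for (j, k), R in rel.items():
        M[j * d:(j + 1) * d, k * d:(k + 1) * d] = weights[(j, k)] * R
        w[j] += weights[(j, k)]
    M /= np.repeat(w, d)[:, None]                 # divide block-row j by w_j
    vals, vecs = np.linalg.eig(M)
    U = vecs[:, np.argsort(-vals.real)[:d]].real  # d leading eigenvectors
    S_hat = []
    for j in range(p):
        u, _, vt = np.linalg.svd(U[j * d:(j + 1) * d])
        S_hat.append(u @ vt)                      # nearest orthogonal transformation
    return S_hat

def synchronise_translations(p, d, patch_edges, offsets):
    """offsets[(j, k)]: mean of aligned X_i^(k) - X_i^(j) over the overlap."""
    B = np.zeros((len(patch_edges), p))
    C = np.zeros((len(patch_edges), d))
    for row, (j, k) in enumerate(patch_edges):
        B[row, j], B[row, k] = 1.0, -1.0          # T_j - T_k should equal the offset
        C[row] = offsets[(j, k)]
    T_hat, *_ = np.linalg.lstsq(B, C, rcond=None)
    return T_hat

# Noise-free toy example with 3 patches in d = 2.
rng = np.random.default_rng(1)
S_true = [np.linalg.qr(rng.normal(size=(2, 2)))[0] for _ in range(3)]
T_true = rng.normal(size=(3, 2))
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]
rel = {(j, k): S_true[j] @ S_true[k].T for j, k in edges}
S_hat = synchronise_rotations(3, 2, rel, {e: 3 for e in edges})
offsets = {(j, k): T_true[j] - T_true[k] for j, k in edges}
T_hat = synchronise_translations(3, 2, edges, offsets)
# Recovery is only up to one global rigid motion; relative quantities match:
print(np.allclose(S_hat[0] @ S_hat[1].T, S_true[0] @ S_true[1].T),
      np.allclose(T_hat[0] - T_hat[1], T_true[0] - T_true[1]))
```

Note that the recovered transformations are defined only up to a single global orthogonal transformation and shift, which is irrelevant for the embedding: only the relative alignment of the patches matters.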
3.3. Scalability of the local2global algorithm
The patch alignment step of local2global is highly scalable and does not directly depend on the size of the input data. The cost of computing the matrix R̃ is O(|E_p|(od^2 + d^3)), where o is the average overlap between connected patches (typically o = O(d)), and the cost of computing the matrix C is O(|E_p|od). Both operations are trivially parallelisable over patch edges. The translation problem can be solved with an iterative least-squares solver with a per-iteration complexity of O(|E_p|d). The limiting step for local2global is usually the synchronisation over orthogonal transformations, which requires finding d eigenvectors of a sparse matrix with O(|E_p|d^2) non-zero entries, for a per-iteration complexity of O(|E_p|d^3). This means that in the typical scenario where we want to keep the patch size constant, the patch alignment scales almost linearly with the number of nodes in the dataset, as we can ensure that the patch graph remains sparse, such that |E_p| scales almost linearly with the number of patches. The scaling in d puts some limitations on the embedding dimension attainable with the local2global approach, though, as we can see from the experiments in section 4.4, it remains feasible for reasonably high embedding dimensions.

The preprocessing to divide the network into patches scales as O(m), where m is the number of edges. The speed-up attainable due to training patches in parallel depends on the oversampling ratio (i.e., the total number of edges in all patches divided by the number of edges in the original graph). As seen in section 4.4, we achieve good results with moderate oversampling ratios.
4. Experiments
4.1. Data sets
We consider two data sets to test the viability of the local2global approach to graph embeddings: the Cora citation data set from (Yang et al., 2016) and the Amazon photo data set from (Shchur et al., 2019). We consider only nodes and edges in the largest connected component (LCC). We show some statistics of the data sets in table 1.

[Table 1: number of nodes in the LCC, number of edges in the LCC, and number of features for Cora and Amazon photo; the numeric entries are not recoverable from this extraction.]

4.2. Patch graph construction
The first step in the local2global embedding pipeline is to divide the network into overlapping patches. In some federated-learning applications, the network may already be partitioned, and some or all of the following steps may be skipped, provided the resulting patch graph is connected and satisfies the minimum overlap condition for the desired embedding dimension. Otherwise, we proceed by first partitioning the network into non-overlapping clusters and then enlarging the clusters to create overlapping patches. This two-step process makes it easier to ensure that the patch overlaps satisfy the conditions for the local2global algorithm without introducing excessive overlap, compared with using a clustering algorithm that produces overlapping clusters directly. We use the following pipeline to create the patches:

Partition the network into p non-overlapping clusters C_1, …, C_p, each large enough to support the required patch overlaps. We use METIS (Karypis and Kumar, 1998) to cluster the networks for the experiments in section 4.4. However, for very large networks, more scalable clustering algorithms such as FENNEL (Tsourakakis et al., 2014) could be used.

Initialize the patches to P_k = C_k and define the patch graph G_p(P, E_p), where {P_i, P_j} ∈ E_p if and only if there exist nodes u ∈ C_i and v ∈ C_j such that {u, v} ∈ E. (Note that if G is connected, then G_p is also connected.)

Sparsify the patch graph to a target mean degree using algorithm 1, adapted from the effective-resistance sampling algorithm of (Spielman and Srivastava, 2011).

Expand the patches to create the desired patch overlaps. We define a lower bound l ≥ d + 1 and an upper bound u for the desired patch overlaps and use algorithm 2 to expand the patches such that |P_i ∩ P_j| ≥ l for all {P_i, P_j} ∈ E_p.
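A much-simplified sketch of steps 2 and 4 (our illustration; algorithms 1 and 2 from the paper are not reproduced here): clusters sharing an edge become connected patches, and each patch grows by a one-hop neighbourhood to create overlap.

```python
def build_patches(adj, clusters):
    """Connect clusters that share an edge, then grow each by one hop."""
    patch_edges = {
        (i, j)
        for i, Ci in enumerate(clusters)
        for j, Cj in enumerate(clusters)
        if i < j and any(v in Cj for u in Ci for v in adj[u])
    }
    # one-hop expansion; the paper instead expands to hit target overlap bounds
    patches = [set(C) | {v for u in C for v in adj[u]} for C in clusters]
    return patches, patch_edges

adj = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}  # 8-cycle
clusters = [{0, 1, 2, 3}, {4, 5, 6, 7}]
patches, patch_edges = build_patches(adj, clusters)
print(sorted(patches[0] & patches[1]))  # overlap created by the expansion
```

In the full pipeline the expansion is targeted, adding nodes only until the overlap bounds are met, so that the oversampling ratio stays moderate.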
For Cora, we split the network into 10 patches, and for Amazon photo into 20 patches. In both cases, we sparsify the patch graph to a fixed target mean degree and set fixed lower and upper bounds for the patch overlap, chosen separately for each data set.
4.3. Embedding model
As the embedding method, we consider the variational graph auto-encoder (VGAE) architecture of (Kipf and Welling, 2016). We use the Adam optimizer (Kingma and Ba, 2015) for training, with the learning rate set to 0.01 for Cora and 0.001 for Amazon photo, and train all models for 200 epochs. We set the hidden dimension of the models to a fixed multiple of the embedding dimension d, chosen separately for Cora and for Amazon photo.
4.4. Results
As a first test case for the viability of the local2global approach, we consider a network reconstruction task. We train the models using all edges in the largest connected component and compare three training scenarios:

full:

Model trained on the full data.
l2g:

Separate models trained on the subgraph induced by each patch and stitched using the local2global algorithm.
notrans:

Same training as l2g but node embeddings are obtained by taking the centroid over patch embeddings that contain the node without applying the alignment transformations.
We evaluate the network reconstruction error using AUC scores, with all edges in the largest connected component as positive examples and the same number of randomly sampled non-edges as negative examples. We train the models for 200 epochs using full-batch gradient descent. We show the results in fig. 1. For ‘full’, we report the best result out of 10 training runs. For ‘l2g’ and ‘notrans’, we first identify the best model out of 10 training runs on each patch and report the results for stitching the best models.
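The evaluation can be sketched as follows (our illustration; the inner-product decoder matches the VGAE reconstruction model, while the embeddings and node pairs here are random placeholders):

```python
import numpy as np

def reconstruction_auc(X, edges, nonedges):
    """AUC of inner-product scores: edges are positives, non-edges negatives."""
    pos = np.array([X[u] @ X[v] for u, v in edges])
    neg = np.array([X[u] @ X[v] for u, v in nonedges])
    # AUC = probability that a random positive outscores a random negative
    # (ties count against; fine for continuous scores)
    return (pos[:, None] > neg[None, :]).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))                 # placeholder node embeddings
edges = [(0, 1), (2, 3)]                    # placeholder positive pairs
nonedges = [(0, 4), (1, 5)]                 # placeholder sampled non-edges
auc = reconstruction_auc(X, edges, nonedges)
print(auc)  # value in [0, 1]
```

The same score function is applied to the ‘full’, ‘l2g’, and ‘notrans’ embeddings, so the comparison isolates the effect of the patch alignment.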
Overall, the gap between the results for ‘l2g’ and ‘full’ is small and essentially vanishes for higher embedding dimensions. The aligned ‘l2g’ embeddings consistently outperform the unaligned ‘notrans’ baseline.
5. Conclusion
In this work, we introduced a framework that can significantly improve the computational scalability of generic graph embedding methods, rendering them scalable to real-world applications that involve massive graphs, potentially with millions or even billions of nodes. At the heart of our pipeline is the local2global algorithm, a divide-and-conquer approach that first decomposes the input graph into overlapping clusters (using one’s method of choice), then computes entirely local embeddings for each resulting cluster via the preferred embedding method (exclusively using information available at the nodes within the cluster), and finally stitches the resulting local embeddings into a globally consistent embedding, using established machinery from the group synchronisation literature.
Our preliminary results on medium-scale data sets are promising and achieve accuracy on graph reconstruction comparable to globally trained VGAE embeddings. Our ongoing work consists of two key steps. The first is to further demonstrate the scalability benefits of local2global on large-scale data sets using a variety of embedding techniques and downstream tasks, by comparing with state-of-the-art synchronised subgraph sampling methods, as well as exploring the trade-off between parallelisability and embedding quality as a function of patch size and overlap. The second is to demonstrate the particular benefits of locality and asynchronous parameter learning. These have clear advantages for privacy-preserving and federated learning setups. It would also be particularly interesting to assess the extent to which this local2global approach can outperform global methods. The intuition and hope in this direction stems from the fact that asynchronous locality can be construed as a regulariser (much like subsampling, and similar to dropout) and could potentially lead to better generalisation and alleviate the over-smoothing issues of deep GCNs, as observed in (Chiang et al., 2019).
References
 Bojchevski et al. (2020) Aleksandar Bojchevski, Johannes Klicpera, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. 2020. Scaling Graph Neural Networks with Approximate PageRank. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’20). ACM, USA, 2464–2473.

Busch et al. (2020) Julian Busch, Jiaxing Pi, and Thomas Seidl. 2020. PushNet: Efficient and Adaptive Neural Message Passing. In Proceedings of the 24th European Conference on Artificial Intelligence (ECAI 2020). IOS Press, The Netherlands, 1039–1046.
 Chen et al. (2018a) Jie Chen, Tengfei Ma, and Cao Xiao. 2018a. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018).

Chen et al. (2018b) Jianfei Chen, Jun Zhu, and Le Song. 2018b. Stochastic Training of Graph Convolutional Networks with Variance Reduction. In Proceedings of the 35th International Conference on Machine Learning (PMLR, Vol. 80). PMLR, 942–950.
 Chen et al. (2020) Ming Chen, Zhewei Wei, Bolin Ding, Yaliang Li, Ye Yuan, Xiaoyong Du, and Ji-Rong Wen. 2020. Scalable Graph Neural Networks via Bidirectional Propagation. In Advances in Neural Information Processing Systems (NeurIPS 2020, Vol. 33). Curran Associates, Inc., USA, 14556–14566.
 Chiang et al. (2019) Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. 2019. Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks. In Proceedings of the 25th ACM SIGKDD Intl. Conference on Knowledge Discovery & Data Mining (KDD ’19). ACM, USA, 257–266.
 Cucuringu et al. (2012a) Mihai Cucuringu, Yaron Lipman, and Amit Singer. 2012a. Sensor network localization by eigenvector synchronization over the Euclidean group. ACM Transactions on Sensor Networks 8, 3 (2012), 1–42.
 Cucuringu et al. (2012b) Mihai Cucuringu, Amit Singer, and David Cowburn. 2012b. Eigenvector synchronization, graph rigidity and the molecule problem. Information and Inference 1, 1 (2012), 21–67.
 Frasca et al. (2020) Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Ben Chamberlain, Michael Bronstein, and Federico Monti. 2020. SIGN: Scalable Inception Graph Neural Networks. arXiv:2004.11198 [cs.LG]
 Goyal and Ferrara (2018) Palash Goyal and Emilio Ferrara. 2018. Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems 151 (2018), 78–94.
 Hamilton et al. (2018) William L. Hamilton, Rex Ying, and Jure Leskovec. 2018. Inductive Representation Learning on Large Graphs. In Advances in Neural Information Processing Systems (NIPS ’17, Vol. 31). Curran Associates, Inc., USA, 1025–1035.
 Horn et al. (1988) Berthold K. P. Horn, Hugh M. Hilden, and Shahriar Negahdaripour. 1988. Closed-form solution of absolute orientation using orthonormal matrices. Journal of the Optical Society of America A 5, 7 (1988), 1127–1135.
 Kairouz and McMahan (2021) Peter Kairouz and H. Brendan McMahan (Eds.). 2021. Advances and Open Problems in Federated Learning. Foundations and Trends in Machine Learning 14, 1 (2021).
 Karypis and Kumar (1998) George Karypis and Vipin Kumar. 1998. A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs. SIAM Journal on Scientific Computing 20, 1 (1998), 359–392.
 Kingma and Ba (2015) Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015). arXiv:1412.6980 [cs.LG]
 Kipf and Welling (2016) Thomas N. Kipf and Max Welling. 2016. Variational Graph AutoEncoders. Bayesian Deep Learning Workshop (NIPS 2016). arXiv:1611.07308 [stat.ML]
 Kipf and Welling (2017) Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017). arXiv:1609.02907 [cs.LG]
 Klicpera et al. (2019) Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. 2019. Predict then Propagate: Graph Neural Networks meet Personalized PageRank. In Proceedings of the 7th International Conference on Learning Representations (ICLR 2019). arXiv:1810.05997 [cs.LG]
 Shchur et al. (2019) Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. 2019. Pitfalls of Graph Neural Network Evaluation. arXiv:1811.05868 [cs.LG]
 Singer (2011) Amit Singer. 2011. Angular synchronization by eigenvectors and semidefinite programming. Applied and Computational Harmonic Analysis 30, 1 (2011), 20–36.
 Spielman and Srivastava (2011) Daniel A Spielman and Nikhil Srivastava. 2011. Graph sparsification by effective resistances. SIAM J. Comput. 40, 6 (2011), 1913–1926.
 Tsourakakis et al. (2014) Charalampos Tsourakakis, Christos Gkantsidis, Bozidar Radunovic, and Milan Vojnovic. 2014. FENNEL: Streaming Graph Partitioning for Massive Scale Graphs. In Proceedings of the 7th ACM international conference on Web search and data mining (WSDM ’14). ACM, USA, 333–342.
 Wu et al. (2019) Felix Wu, Tianyi Zhang, Amauri Holanda de Souza Jr., Christopher Fifty, Tao Yu, and Kilian Q. Weinberger. 2019. Simplifying Graph Convolutional Networks. In Proceedings of the 36th International Conference on Machine Learning (PMLR, Vol. 97). PMLR, 6861–6871.

Yang et al. (2016) Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. 2016. Revisiting Semi-Supervised Learning with Graph Embeddings. In Proceedings of the 33rd International Conference on Machine Learning (PMLR, Vol. 48). PMLR, 40–48.
 Zeng et al. (2019) Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. 2019. Accurate, Efficient and Scalable Graph Embedding. In 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE, USA, 462–471.
 Zeng et al. (2020) Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. 2020. GraphSAINT: Graph Sampling Based Inductive Learning Method. In Proceedings of the 8th Intl. Conference on Learning Representations (ICLR 2020).
 Zou et al. (2019) Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, and Quanquan Gu. 2019. Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks. In Advances in Neural Information Processing Systems (NeurIPS 2019, Vol. 32). Curran Associates, Inc., USA.