Semi-supervised classification on graphs using explicit diffusion dynamics

09/24/2019 · Robert L. Peach et al. · Imperial College London

Classification tasks based on feature vectors can be significantly improved by including within deep learning a graph that summarises pairwise relationships between the samples. Intuitively, the graph acts as a conduit to channel and bias the inference of class labels. Here, we study classification methods that consider the graph as the originator of an explicit graph diffusion. We show that appending graph diffusion to feature-based learning as an a posteriori refinement achieves state-of-the-art classification accuracy. This method, which we call Graph Diffusion Reclassification (GDR), uses overshooting events of a diffusive graph dynamics to reclassify individual nodes. The method uses intrinsic measures of node influence, which are distinct for each node, and allows the evaluation of the relationship and importance of features and graph for classification. We also present diff-GCN, a simple extension of Graph Convolutional Neural Network (GCN) architectures that leverages explicit diffusion dynamics, and allows the natural use of directed graphs. To showcase our methods, we use benchmark datasets of documents with associated citation data.







1 Introduction

In recent years, supervised learning has become a standard tool in data science, and a variety of algorithms have been developed and applied to a range of problems [22, 17]. One such task is classification, whereby the known labels of a training dataset are used to infer the free parameters of a model (a 'classifier') in order to predict the unknown labels (or classes) of other samples. Classic examples of such methods include the multi-layer perceptron (MLP), support vector machine (SVM) and random forest (RF), among many others [5]. In many situations, there is only a small number of known labels compared to the number of labels to be learnt. To improve algorithmic performance, one can leverage additional information that may be available from the unlabeled samples using, e.g., generative models [19], low density separation [8], or heuristic approaches.
In some cases, the additional information about the dataset is in the form of pairwise relationships between samples. Such relationships can be formalised as a weighted graph, where the nodes are samples and the edges represent similarities. The ensuing graph encodes global information about the full dataset (including the unlabeled samples), and can be used to guide the training of the classifier, e.g., by including the Laplacian or adjacency matrix of the graph within the composition of features in the deep learning layers [6]. In this way, the graph can guide the learning of the inferred parameters to reflect the additional information, thus leading to improved classification. Algorithms that take advantage of an associated graph to improve supervised learning come under the banner of graph semi-supervised learning, and include graph Laplacian regularisation and graph embedding approaches, such as label propagation (LP) [38] and semi-supervised embedding (SemiEmb) [34], DeepWalk [29] or Planetoid [35], among others. Recently, state-of-the-art performance was achieved through the introduction of graph convolutional neural network (GCN) architectures based on spectral representations of graph convolutions [18, 13, 7, 20]. Further improvements in performance have been achieved through tuning of the convolutional operators and the deep learning architectures of GCNs [33, 23, 9, 39, 36, 16, 26].

Although graph semi-supervised learning sits at the interface of machine learning and graph theory, algorithmic developments have focussed predominantly on the deep learning architecture, with less consideration devoted to exploiting relevant graph-theoretical properties and the relationship between feature-based and graph-based learning. For instance, for GCNs to improve classification, it is necessary that the graph is aligned with the similarities between sample feature vectors [30]. As discussed above, using a graph within GCNs is posited as a means to bias the propagation of label information between similar samples. However, standard GCN algorithms neither consider explicitly the dynamic diffusion of information on the graph, nor take advantage of the intrinsic inhomogeneities in the scale of sample neighbourhoods that naturally emerge from the diffusion dynamics.

Here, we present semi-supervised graph classification methods that view the graph as the generator of an explicit diffusive dynamics through its Laplacian. We start by exploiting this concept in its simplest form by using a continuous-time graph diffusion as a means to induce an a posteriori re-classification of any previously obtained assignment, be it based on features alone, or on graph and features combined. We take this prior probabilistic assignment as the initial condition of a diffusion dynamics, and we search the time evolution of the node dynamics for large overshoots. These are the basis to relabel nodes to a different class. Hence the probabilistic output of a classifier is filtered a posteriori through a graph diffusion to obtain a re-labelling consistent with the graph structure. The unfolding provided by the graph diffusion has been used to capture features at multiple scales to detect communities in an unsupervised manner [14], and to obtain graph embeddings at different scales [31]. More recently, overshooting dynamics has been proposed as a basis to generalise the notion of graph centrality across scales [1]. Here, we use overshooting in the graph diffusion [2, 1] to capture the influence of nodes across all scales in the graph as a means to refine class labels. Note that this method affords distinct scales for each node, since overshooting events occur at different times for different nodes. We refer to this method as Graph Diffusion Reclassification (GDR). Tests on benchmark datasets of text documents for which a citation graph exists show that GDR achieves state-of-the-art performance.

We also show that a graph diffusion kernel with a time hyperparameter can be included within the GCN inference. This simple extension of GCN, which we denote diffusive GCN (diff-GCN), affords additional flexibility through the inferred diffusion timescale and outperforms GCN on the benchmark datasets. Furthermore, the use of explicit graph diffusion lends itself naturally to the case of directed graphs [2, 4], thus allowing the use of the asymmetry of the relationships between samples for classification purposes. We exemplify the use of graph directionality on a directed citation network.

Structure of the paper.

In Section 2 we give definitions and set up the problem; we review some classic methods of supervised and semi-supervised classification with and without a graph; and we describe the benchmark datasets used throughout. Section 3 describes the GDR algorithm, which leverages explicit diffusion on the graph to reclassify nodes given a prior class assignment, and we show the results of its application to the benchmarks. In Section 4, we propose an extension to GCN where the continuous diffusion operator is embedded within the GCN inference algorithm, and we illustrate its performance. Section 5 presents how the ideas behind graph diffusion extend naturally to the case where the graph is directed, and we show how this additional information can be used to improve classification. We finish in Section 6 with a discussion and some concluding remarks.

2 The use of graphs in semi-supervised classification

Notation and statement of the problem

Let us consider a set of $N$ samples $\{s_i\}_{i=1}^{N}$, each described by an $F$-dimensional feature vector $\mathbf{x}_i \in \mathbb{R}^F$. Each sample belongs to one of $C$ classes, and its membership is represented through an indicator vector $\mathbf{h}_i \in \{0,1\}^C$ with $\mathbf{h}_i^T \mathbf{1} = 1$, where $\mathbf{1}$ is the vector of ones. Within the sample set, we have a training subset of $\ell$ samples, for which the class labels are known. The class labels are unknown for the remaining $u = N - \ell$ samples.

Given the feature vectors for all samples and the class labels for the samples in the training set, the task of supervised classification is to obtain a class assignment for the unknown labels.

Let us assume that we have access to pairwise similarities between all samples. Then we can define a graph $G$ with $N$ vertices and $E$ edges, where each node $i$ corresponds to a sample and the edge between nodes $i$ and $j$ has a weight $A_{ij} \geq 0$ reflecting the similarity between the samples $i$ and $j$. The similarities thus form the weighted adjacency matrix $A$ of the graph. We first consider the case of symmetric similarities, $A = A^T$, i.e., when the graph is undirected. The case of directed graphs is presented in Section 5. Another important matrix associated with the graph is the Laplacian: $L = D - A$, where $D = \mathrm{diag}(A\mathbf{1})$ is the diagonal matrix of weighted degrees.

The problem of semi-supervised graph classification uses the graph together with the features of the full dataset and the known classes of the training set to predict the classes of the remaining nodes in the graph.

For notational compactness, we arrange our samples into two data matrices with rows given by the feature vectors $\mathbf{x}_i^T$: $X_\ell \in \mathbb{R}^{\ell \times F}$ corresponds to the training subset and $X_u \in \mathbb{R}^{u \times F}$ corresponds to the samples with unknown class labels. Similarly, we compile the known class labels into a $0$-$1$ membership matrix $H_\ell \in \{0,1\}^{\ell \times C}$, with rows given by the membership vectors $\mathbf{h}_i^T$ of the training set. Given $\{X_\ell, H_\ell, X_u\}$ (and potentially the graph adjacency matrix $A$), our classification task is therefore to infer a row-stochastic membership assignment matrix $Y_u \in [0,1]^{u \times C}$ for the samples with unknown class labels.

The assignment matrix can be hard ($0$-$1$ valued) or probabilistic. A given probabilistic assignment $Y_u$ can be turned into a hard assignment by taking the highest probability class, collected in a node assignment vector

$\hat{\mathbf{y}}_u = \operatorname{argmax}(Y_u),$  (1)

where the $\operatorname{argmax}$ operator is applied row-wise.

Some additional matrix operations

We use standard normalisations, defined here for completeness. Given a matrix $B$, we define its row-normalised version as

$\mathcal{N}(B) := \mathrm{diag}(B\mathbf{1})^{-1} B.$  (2)

The row-wise softmax, widely used in machine learning, can then be written as

$\mathrm{softmax}(B) := \mathcal{N}\!\left(e^{B}\right),$  (3)

where the matrix $e^{B}$ represents element-wise exponentiation, i.e., $(e^{B})_{ij} = e^{B_{ij}}$. By construction, both $\mathcal{N}(B)$ and $\mathrm{softmax}(B)$ are row-stochastic matrices, i.e., $\mathcal{N}(B)\,\mathbf{1} = \mathrm{softmax}(B)\,\mathbf{1} = \mathbf{1}$.

Some deep learning methods mentioned below use nonlinear processing units. In particular, the rectified linear unit (ReLU) function for a matrix $B$ is defined as

$\mathrm{ReLU}(B) := \frac{1}{2}\left(B + |B|\right),$  (4)

where $|B|$ and the matrix sum are element-wise operators, i.e., $|B|_{ij} = |B_{ij}|$, so that $\mathrm{ReLU}(B)_{ij} = \max(0, B_{ij})$.
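The three matrix operations above can be sketched in a few lines of NumPy (a minimal illustration of the definitions, not part of the original methods):

```python
import numpy as np

def row_normalise(B):
    """Row normalisation N(B) = diag(B 1)^{-1} B, for matrices with nonzero row sums."""
    return B / B.sum(axis=1, keepdims=True)

def softmax_rows(B):
    """Row-wise softmax, i.e. the row-normalised element-wise exponential of B.
    The max-shift is a standard numerical-stability tweak; it leaves the result unchanged."""
    return row_normalise(np.exp(B - B.max(axis=1, keepdims=True)))

def relu(B):
    """Rectified linear unit (B + |B|)/2, i.e. max(0, B_ij) element-wise."""
    return (B + np.abs(B)) / 2
```

Both `row_normalise(B)` and `softmax_rows(B)` return row-stochastic matrices, as noted in the text.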

We now provide brief descriptions of supervised and semi-supervised classification methods, without and with the use of a graph, which will serve as comparisons and prior classifiers for our diffusion-based re-classification algorithm (GDR) and the diffusive extension of GCN (diff-GCN).

2.1 Supervised classification without a graph.

Projection (no learning)

Perhaps the simplest approach to supervised classification (without a graph) is to classify the samples with unknown labels according to their projection on the centroids of the known classes of the training set. The projection operator is easily obtained from the data and membership matrices without any parametric inference or ‘learning’.

The centroids of the classes of the training data are given by the rows of the matrix

$V = H_\ell^{+}\, X_\ell,$  (5)

where $H_\ell^{+}$ denotes the pseudo-inverse of $H_\ell$. Therefore, the projection of the remaining samples onto the centroids of the classes derived from the training set is simply $X_u V^T$, and a probabilistic membership matrix for $X_u$ is obtained by the row normalisation

$Y_u = \mathcal{N}\!\left(X_u V^T\right),$  (6)

where the element $(Y_u)_{ic}$ gives the probability that the sample $i$ belongs to the class $c$.
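As a concrete sketch of the projection classifier above (function and variable names are ours; the clipping of negative projections before normalisation is our own guard, not part of the text):

```python
import numpy as np

def projection_classifier(X_train, H_train, X_unknown):
    """Classify unlabelled samples by projecting onto the class centroids
    of the training set, then row-normalising into probabilities.

    X_train   : (l, F) training features
    H_train   : (l, C) 0/1 class membership matrix of the training set
    X_unknown : (u, F) features of the samples with unknown labels
    """
    V = np.linalg.pinv(H_train) @ X_train   # (C, F): rows are class centroids
    P = X_unknown @ V.T                     # projections onto the centroids
    P = np.maximum(P, 0)                    # guard against negative projections
                                            # before row normalisation (our addition)
    return P / P.sum(axis=1, keepdims=True)
```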

Supervised classifiers (with learning)

Beyond a naive projection, many alternatives have been proposed for the problem of supervised classification through learning. These methods entail the definition of a classifier, i.e., a model with a particular structure ('architecture') and parameters to be inferred ('learnt') from the training set. Here, we exemplify our work with three standard, widely used algorithms: the multi-layer perceptron (MLP), support vector machine (SVM), and random forest (RF) algorithms [17]. Similarly to (6), each classifier outputs a probabilistic assignment matrix for the unknown class labels. For instance, an MLP classifier based on a two-layer perceptron with $h$ hidden units gives

$Y_u = \mathrm{softmax}\!\left(\mathrm{ReLU}\!\left(X_u W^{(0)}\right) W^{(1)}\right),$  (7)

where $W^{(0)} \in \mathbb{R}^{F \times h}$ and $W^{(1)} \in \mathbb{R}^{h \times C}$ are inter-layer connectivity matrices containing the learnt parameters of the model, inferred through the optimisation of a loss function measuring the error of the prediction on the training data (e.g., the cross-entropy between the predicted assignments and the known labels $H_\ell$).

The other two classifiers considered here (SVM, RF) use different heuristics to infer the corresponding assignment matrices $Y_u^{\mathrm{SVM}}$ and $Y_u^{\mathrm{RF}}$, respectively. Here, we choose these classic supervised classifiers to illustrate our work, but any other method could be used [17].

2.2 Semi-supervised classification with a graph

Given a weighted graph with adjacency matrix encoding pairwise similarities between all samples, several methods have been proposed to use this additional information during the inference of the model to enhance performance [38, 34, 29]. These methods, which are usually classed as semi-supervised graph classification algorithms, are transductive, i.e., they use information from the full dataset during the training phase, since the graph includes relationships between all samples including those with unknown class labels. Here we focus on the graph convolutional neural network (GCN) architecture [20], which has been shown to outperform other semi-supervised graph classifiers. As a comparison, we also show results from Planetoid [35], a method that learns a graph embedding from features.

Graph convolutional neural networks (GCNs)

GCNs originate from work in spectral graph convolutions and convolutional neural networks [13, 18]. The GCN architecture [20] is akin to other deep learning classifiers such as the MLP, i.e., layers of nonlinear units interconnected through matrices to be inferred through gradient learning, yet also including a graph convolution operation for every layer.

Consider an undirected graph with adjacency matrix $A$, and let us define the matrix $X \in \mathbb{R}^{N \times F}$ containing the feature vectors of all samples. In a GCN with two convolutional layers and $h$ hidden units, the classifier for the samples with unknown class labels has the form [20]:

$Y = \mathrm{softmax}\!\left(\hat{A}\, \mathrm{ReLU}\!\left(\hat{A}\, X\, W^{(0)}\right) W^{(1)}\right),$  (8)

where $W^{(0)}$ and $W^{(1)}$ are inter-layer connectivity matrices to be inferred by optimising a loss function that only includes the training nodes. The topology of the graph is included through a symmetrised transition matrix of an associated graph with self-loops:

$\hat{A} = \tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2},$  (9)

where the adjacency matrix of the graph with self-loops is $\tilde{A} = A + I_N$, with $I_N$ the identity matrix of size $N$, and $\tilde{D} = \mathrm{diag}(\tilde{A}\mathbf{1})$. See [20] for further details.
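For concreteness, the symmetrised propagation matrix of Eq. (9) can be computed as follows (a short NumPy sketch of the standard construction):

```python
import numpy as np

def gcn_propagation_matrix(A):
    """Symmetrised transition matrix of the graph with self-loops,
    A_hat = D~^{-1/2} (A + I) D~^{-1/2}, used in every GCN layer."""
    A_tilde = A + np.eye(A.shape[0])               # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```

For an undirected graph the result is symmetric, so it can be applied repeatedly without amplifying any direction of the spectrum.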

Note that, from a dynamical perspective, the convolution in each layer of (8)–(9) conveys a discrete diffusion process, since $\hat{A}$ is a symmetrised version of the transition matrix $\tilde{D}^{-1}\tilde{A}$ of a discrete-time Markov chain on the graph with self-loops. Hence, each layer of the GCN applies a one-step random walk to the output of the nonlinear units. The training phase can thus be thought of as propagating the label information on the graph so as to bias the inference of the parameters $W^{(0)}$ and $W^{(1)}$ through the graph structure. Below, we exploit this dynamical viewpoint in more detail through an explicit formulation of the diffusive dynamics on the graph.

3 Graph Diffusion Reclassification (GDR)

Let $Y = \begin{pmatrix} H_\ell \\ Y_u \end{pmatrix}$ be a row-stochastic class assignment matrix for all the samples, where $H_\ell$ contains the known labels of the training set and $Y_u$ is the probabilistic assignment for the remaining samples obtained using any of the methods described above (or any other). The method of graph diffusion reclassification (GDR) uses $Y$ as the initial condition of a diffusion dynamics, and uses a particular feature of the ensuing time evolution (i.e., the presence of overshoots) to implement sample re-classification.

3.1 Reclassification based on overshooting of diffusive dynamics

Let us consider a diffusive process on the graph [21, 27, 31] governed by the (combinatorial) graph Laplacian:

$\dot{\mathbf{p}}(t) = -L\, \mathbf{p}(t),$  (10)

where the vector $\mathbf{p}(t) \in \mathbb{R}^N$ is defined on the nodes of the graph. This linear equation is solved by the matrix exponential

$\mathbf{p}(t) = e^{-tL}\, \mathbf{p}(0),$  (11)

where $\mathbf{p}(0)$ is the initial condition. If the graph is undirected, $e^{-tL}$ is a symmetric and doubly stochastic matrix, and the dynamics (11) preserves the $\ell_1$-norm of a non-negative initial condition. Hence the stationary state is the constant vector $\mathbf{p}(\infty) = \frac{\mathbf{1}^T \mathbf{p}(0)}{N}\, \mathbf{1}$.
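A quick numerical check of these properties (mass conservation and relaxation to the uniform state) on a small example graph, using SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Small undirected graph: a triangle (nodes 0-2) with a pendant node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian L = D - A

p0 = np.array([1.0, 0.0, 0.0, 0.0])   # unit mass injected at node 0
p_t = expm(-5.0 * L) @ p0             # p(t) = exp(-tL) p(0), here at t = 5

# Total mass is conserved, and p(t) relaxes towards the uniform vector 1/N.
print(p_t.sum(), p_t)
```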


The exponential operator in (11) has been used as a means to reveal information about a graph across different scales [11, 15, 21, 31]. Here, in contrast, we use a multiscale notion also emanating from the diffusive dynamics, which captures the influence of each node on any other node of the graph [2, 1], i.e., the presence of overshooting in the approach of the dynamics (11) to stationarity.

To illustrate this phenomenon, consider as initial condition a delta impulse of unit mass at node $j$: $\mathbf{p}(0) = \mathbf{e}_j$, where $\mathbf{e}_j$ is the indicator vector of node $j$. The $i$-th coordinate of the solution (11) gives the time evolution at node $i$: $p_i(t) = \left(e^{-tL}\right)_{ij}$. For the source node, $p_j(t)$ decays towards the stationary value $1/N$. For all other nodes, the value of $p_i(t)$ increases from zero towards $1/N$ in two qualitative ways: if node $i$ is closely influenced by the source, $p_i(t)$ undergoes an overshoot (i.e., it crosses the stationary value $1/N$); if node $i$ does not feel strongly the influence of the source, then $p_i(t)$ approaches $1/N$ monotonically from below. Note that, depending on the relative connectivity of the graph, an overshoot can happen at different times for different nodes, i.e., it is possible to observe a 'late' overshoot. The presence or absence of an overshoot (with respect to a node) over the whole time scale thus establishes a measure of influence of the node based on the diffusion on the graph. Given an initial class assignment $Y$ on the graph, the node overshootings are obtained from the condition

$\max\limits_{t > t_b} \left[ \left(e^{-tL}\, Y\right)_{ic} - \frac{(\mathbf{1}^T Y)_c}{N} \right] > 0,$  (12)

where $t_b$ is a burn-in time to allow the decay of the dominance of the initial class assignment.

We refer to [1] for a more extended discussion of this notion and its use to define a multiscale node centrality measure in graphs.

The GDR reclassifier

Starting with a prior assignment matrix $Y$, a node is reclassified according to the largest overshoot induced by any of the classes. The values of all the node overshoots are captured compactly in matrix form as

$\mathcal{O} = \max\limits_{t > t_b} \left( e^{-tL}\, Y - \frac{1}{N}\, \mathbf{1}\mathbf{1}^T\, Y \right),$  (13)

where the maximum over time is taken element-wise, and the reclassification of the nodes is given from the maximum overshoot across classes

$\mathbf{r} = \operatorname{argmax}(\mathcal{O}),$  (14)

where the $\operatorname{argmax}$ is a row-wise operator finding the maximum across classes, and we define $r_i = 0$ for the rows of $\mathcal{O}$ with no positive entry, so that the indicator vector $\mathbb{1}[\mathbf{r} = 0]$ marks the set of non-overshooting nodes.

This reclassification vector is then used to update the prior (hard) assignment $\hat{\mathbf{y}}$ to give the GDR assignment

$\hat{y}_i^{\mathrm{GDR}} = \begin{cases} r_i, & r_i \neq 0 \\ \hat{y}_i, & r_i = 0. \end{cases}$  (15)

Clearly, the training set is never reclassified. The burn-in time $t_b$ is a hyperparameter which is tuned for each dataset using a validation subset.
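The full GDR step can be sketched as follows. This is a minimal illustration, assuming a dense Laplacian and a finite grid of diffusion times over which the overshoots are monitored (the function name and its arguments are ours):

```python
import numpy as np
from scipy.linalg import expm

def gdr_reclassify(L, Y, t_grid, train_mask):
    """Graph Diffusion Reclassification (sketch).

    L          : (N, N) combinatorial graph Laplacian
    Y          : (N, C) row-stochastic prior class assignment
    t_grid     : increasing diffusion times t > t_b (t_grid[0] acts as the burn-in t_b)
    train_mask : boolean (N,); training nodes are never relabelled
    """
    N = L.shape[0]
    stationary = np.ones((N, 1)) @ Y.sum(axis=0, keepdims=True) / N
    O = np.full(Y.shape, -np.inf)
    for t in t_grid:                        # element-wise max over time of the
        O = np.maximum(O, expm(-t * L) @ Y - stationary)  # diffused assignment
    r = O.argmax(axis=1)                    # class with the largest overshoot
    y = Y.argmax(axis=1).copy()             # hard prior assignment
    relabel = (O.max(axis=1) > 0) & ~train_mask
    y[relabel] = r[relabel]                 # relabel overshooting nodes only
    return y
```

On a toy graph of two triangles joined by an edge, with one labelled node per community, the unlabelled nodes are pulled towards the label of their own community.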

3.2 Application of GDR to benchmark datasets


To test our models, we closely follow the experimental setups in [35, 20, 30]. We use three benchmark datasets [32] consisting of scientific articles: Citeseer, Cora and Pubmed. Each document is described by a feature vector summarising its text, and belongs to a scientific topic (class). For each dataset, we also have access to the associated citation network (undirected). In addition, we use a Wikipedia dataset collected in [30] consisting of Wikipedia articles from several subcategories, where the feature vectors are bag-of-words representations of the text, and the graph represents hyperlink citations. The Wikipedia dataset is an example where, contrary to the other three datasets, the classification task is not aided by the combination of graph and features [30]. Details of these datasets are summarised in Table 1.

Datasets Nodes Edges Classes Features
Table 1: Statistics of datasets as reported in [35] and [30].

Numerical experiments

We have used these datasets to test the performance of the reclassified GDR assignment as compared to the hard prior assignment from supervised classifiers without a graph (projection, RF, SVM, MLP) and from semi-supervised graph classifiers (Planetoid and GCN) presented in Sections 2.1–2.2. In order to test the improvement due to the graph information, we have also considered a uniform prior across samples, which ignores the information from the features of the samples. Each dataset was split into training, validation and test sets at different ratios: the training set comprised 3.6%, 5.2%, 0.3% and 3.5% of the total samples for Citeseer, Cora, Pubmed and Wikipedia, respectively.

Method Citeseer Cora Pubmed Wikipedia
Uniform 7.7 13.0 18.0 28.7
GDR (Uniform) 50.6 (+42.9) 71.8 (+58.8) 73.2 (+55.2) 31.4 (+2.7)
Projection 61.8 59.0 72.0 32.5
RF 60.3 58.9 68.8 50.8
SVM 61.1 58.0 49.9 31.0
MLP 57.0 56.0 70.7 43.0
GDR (Projection) 70.4 (+8.7) 79.7 (+20.7) 75.8 (+3.8) 36.9 (+4.4)
GDR (RF) 70.5 (+10.2) 78.7 (+19.8) 72.2 (+3.2) 50.8 (+0.0)
GDR (SVM) 70.3 (+9.2) 81.2 (+23.2) 52.4 (+2.5) 41.9 (+10.8)
GDR (MLP) 69.7 (+12.7) 78.5 (+22.5) 75.5 (+4.8) 40.5 (-2.5)
Planetoid 64.7 75.7 72.2 -
GCN 70.3 81.1 79.0 39.2
GDR (GCN) 70.8 (+0.5) 82.2 (+1.1) 79.4 (+0.4) 39.5 (+0.3)
Table 2: Percentage classification accuracy before and after application of relabelling by GDR for various classifiers. We present the improvement of GDR on the uniform prediction (which ignores features). We also consider four supervised classifiers (which learn from features without the graph): projection, RF, SVM and MLP. For RF, we cap the maximum tree depth; for SVM, we fix the regularisation parameter; for MLP, we implement the same architecture as GCN (a single hidden layer with the same number of units, dropout, number of epochs, learning rate and loss function). Finally, we compare with two semi-supervised graph classifiers: GCN [20] and Planetoid [35]. The numbers in brackets record the change in accuracy accomplished by applying GDR on the corresponding prior classifier.

Classification performance

Table 2 summarises the percentage classification accuracy before and after the application of GDR for various prior classifiers. Our main observation is that, for the Citeseer, Cora and Pubmed datasets, a posteriori relabelling by GDR significantly improves the classification accuracy of all prior classifiers, achieving accuracy comparable to GCN without the need for semi-supervised learning through the graph: GDR(RF) improves on GCN by 0.2% on Citeseer; GDR(SVM) improves by 0.1% on Cora; GDR(Projection) falls short by 3.2% on Pubmed. Note that GDR consistently outperforms Planetoid, another top-ranking semi-supervised graph classification method, on these datasets.

Our results also provide insight into the relative importance and alignment [30] of the features and the graph for the purpose of classification. The comparison of the Uniform assignment (which has no information from the features) with the relabelling induced by GDR(Uniform) reveals large improvements in performance for all three citation datasets, from 43% in the case of Citeseer to 59% in the case of Cora. This observation underlines the fact that the graph contains useful information for classification even in the absence of feature information. On the other hand, using information from the features alone (without the graph) through supervised classifiers also induces large increases in performance in these three datasets (from 46% in Cora to 54% in Citeseer and Pubmed). When applying GDR to these feature-based classifiers, we observe that the maximum synergistic improvement is obtained for Cora, whereas the improvement of GDR is smaller for Pubmed, signalling a lower alignment of the graph with the features and the ground truth, as discussed previously in [30].

The Wikipedia dataset constructed in [30] has low alignment between features, graph and ground truth. The heuristic behind this difference is simple: the hyperlinks in Wikipedia articles (citations) are not necessarily aligned with the content of the categories of the articles; e.g., an article about a mathematician will be linked to their country of birth. Hence, in this case, the graph contains information which is incongruous with the features and ground truth, so that GCN performs worse than feature-only classifiers, such as MLP and especially RF. Interestingly, the random forest classifier for the Wikipedia dataset is the only case where GDR is not able to reclassify any nodes, suggesting that no information from the graph is able to improve the RF classification based purely on features. Similarly, the application of GDR to the output of GCN only induces a marginal improvement, underscoring the fact that GCN has already made use of the information in the graph. In general, however, the application of the re-classification step (GDR) always increases the original accuracy (except for MLP on Wikipedia), yet by different amounts, depending on the structure of the prior.

Our results above thus show that GDR achieves state-of-the-art classification accuracy, just by diffusing the class label information explicitly on the graph without the need for graph-based inference.

4 Diffusive GCN (diff-GCN)

A natural extension to the GCN architecture is to include an explicit diffusion within the learning phase, with the aim of increasing the classification accuracy. A straightforward approach is to replace the one-step transition matrix $\hat{A}$ in each layer of (8) by the transition matrix of the diffusion, $e^{-\tau \hat{L}}$, where $\hat{L} = I_N - \hat{A}$. The two-layer GCN model in (8) then becomes

$Y = \mathrm{softmax}\!\left(e^{-\tau \hat{L}}\, \mathrm{ReLU}\!\left(e^{-\tau \hat{L}}\, X\, W^{(0)}\right) W^{(1)}\right),$  (16)

where the matrices $W^{(0)}$ and $W^{(1)}$, and the diffusion time parameter $\tau$, are inferred during the learning phase from the training set. Since $\tau$ is the same for all nodes in the graph, it can be thought of as a global scale for the convolution, which allows for additional flexibility in using the scales in the graph (beyond the one-step transitions in standard GCN). We call this construction diffusive GCN (diff-GCN).
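A sketch of the diffusion operator that replaces the one-step convolution (assuming the normalised Laplacian is built from the self-loop graph, as in the GCN construction above; in the full model τ is a learnable parameter of the network rather than a fixed constant):

```python
import numpy as np
from scipy.linalg import expm

def diffusion_operator(A, tau):
    """Continuous diffusion operator exp(-tau * L_hat) used by diff-GCN in
    place of the one-step matrix A_hat, with L_hat = I - A_hat."""
    N = A.shape[0]
    A_tilde = A + np.eye(N)                        # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return expm(-tau * (np.eye(N) - A_hat))
```

At τ = 0 the operator reduces to the identity (no mixing), and growing τ widens the effective neighbourhood of the convolution.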

The results of diff-GCN show an improvement on all the benchmark datasets, as shown in Table 3. The largest improvement in accuracy is seen for the Wikipedia dataset, where the features are not aligned with the graph topology [30]. This suggests that using an adaptable, optimised continuous-time diffusion operator offers greater flexibility when exploring more subtle relationships between features and graph.

Model Citeseer Cora Pubmed Wikipedia
GCN 70.3 81.1 79.0 34.1
diff-GCN 71.9 82.3 79.3 45.9
Table 3: Percentage classification accuracy of GCN and its extension diff-GCN, which has an explicit diffusion operator (16).

5 Extensions to directed graphs

In many cases of interest, the pairwise relationships between samples are asymmetric, e.g., following vs. being followed on a social network [3], or the highly directed synaptic connectivity between neurons [2]. Such asymmetry can be highly informative of the structure of the dataset. Clearly, in such cases, the ensuing graph is directed, with a non-symmetric adjacency matrix, $A \neq A^T$. We now present extensions of the GDR and diff-GCN frameworks to carry out classification tasks with directed graphs, exploiting the natural connection of our methods with diffusive dynamics.

Extending GDR and diff-GCN to directed graphs follows closely from above, yet one needs to consider specifically the diffusive process on such graphs. Note that, unless the directed graph is strongly connected, the stationary state of the diffusive dynamics is concentrated on the 'dangling nodes' (i.e., the nodes without outgoing edges). To avoid such trivial asymptotic behaviours, it is customary to consider an associated diffusive process that contains a 'reinjection' (also known as Google teleportation) that guarantees ergodicity [21]. The transition matrix of this associated process is:

$P = \alpha \left( D_{\mathrm{out}}^{+}\, A + \frac{1}{N}\, \mathbf{a}\, \mathbf{1}^T \right) + \frac{1 - \alpha}{N}\, \mathbf{1}\mathbf{1}^T,$  (17)

where $D_{\mathrm{out}}^{+}$ denotes the pseudo-inverse of the out-degree matrix $D_{\mathrm{out}} = \mathrm{diag}(A\mathbf{1})$, and $\mathbf{a}$ is the indicator vector for the dangling nodes (i.e., nodes with vanishing out-degree). This process evolves on the directed edges of the graph with probability $\alpha$ (customarily chosen to be $\alpha = 0.85$), and transitions uniformly to any node in the graph with probability $1 - \alpha$ [28, 10, 3]. The probability at the dangling nodes is reinjected with probability 1. Clearly, the transition matrix $P$ is row-stochastic (i.e., $P\mathbf{1} = \mathbf{1}$). The Perron left eigenvector $\boldsymbol{\pi}$ of this matrix (i.e., $\boldsymbol{\pi}^T P = \boldsymbol{\pi}^T$, with $\boldsymbol{\pi}^T \mathbf{1} = 1$) is the well-known PageRank [28]. See [10] for more details.
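A sketch of this teleportation construction, together with a power-iteration check that its stationary distribution (PageRank) is reached (function names are ours):

```python
import numpy as np

def teleport_transition(A, alpha=0.85):
    """Row-stochastic transition matrix for a directed graph: follow the
    out-edges with probability alpha, teleport uniformly with probability
    1 - alpha; dangling nodes (zero out-degree) jump uniformly with probability 1."""
    N = A.shape[0]
    d_out = A.sum(axis=1)
    dangling = d_out == 0
    P = np.full((N, N), 1.0 / N)            # uniform reinjection rows
    P[~dangling] = alpha * A[~dangling] / d_out[~dangling, None] + (1 - alpha) / N
    return P

def pagerank(P, n_iter=200):
    """Perron left eigenvector of P (pi^T P = pi^T) by power iteration."""
    pi = np.ones(P.shape[0]) / P.shape[0]
    for _ in range(n_iter):
        pi = pi @ P
    return pi
```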

To remain consistent with the undirected cases presented above, we use here the symmetric part of the combinatorial directed Laplacian [10]

$\mathcal{L} = \Pi - \frac{1}{2} \left( \Pi P + P^T \Pi \right),$  (18)

where $\Pi = \mathrm{diag}(\boldsymbol{\pi})$. Clearly, $\mathcal{L}$ is symmetric by construction. (If $\alpha = 1$ and $A = A^T$, we recover the combinatorial Laplacian $L$ of the undirected case, up to an overall scaling.) With these definitions, the extension of our methods to directed graphs is straightforward. Given a directed graph of binary relationships between the samples with adjacency matrix $A$, we have the following:

GDR on directed graphs

The GDR algorithm remains as described in Section 3, except that we use the directed Laplacian $\mathcal{L}$ from (18) instead of $L$ to compute the overshootings in Eq. (13).

GCN and diff-GCN on directed graphs

The original formulation of GCN did not pursue the use of directed graphs [20]. Here, we have implemented a directed version of GCN, which applies the same equations (8)–(9) to an asymmetric adjacency matrix $A$.

Similarly, the only change for the application of diff-GCN to a directed graph is to substitute $\mathcal{L}$ for $\hat{L}$ in Eq. (16).

Augmented (bi-directional) GCN and diff-GCN on directed graphs

In a directed graph, the adjacency matrix is associated with a transition matrix for forward propagation, whereas its transpose is associated with backward propagation of the process. In some cases, important features about the dataset can be extracted from both forward and backward information propagation [12, 4].

The GCN architecture can naturally accommodate learning through convolutions operating in parallel on different graphs [20]. We have applied this principle to create an augmented GCN model where the forward and backward propagation are unfolded to enable the inference of separate parameters for each channel in parallel. This strategy allows the flexibility to learn the most relevant convolution operators for the directed graph and its reverse, as if they were two different graphs defined on the same set of nodes.

To write the augmented GCN model for a directed graph with adjacency matrix $A$, we first define the forward and backward symmetrised transition matrices, which have the same form as (9):

$\hat{A}_{\mathrm{fw}} = \tilde{D}_{\mathrm{fw}}^{-1/2}\, \tilde{A}\, \tilde{D}_{\mathrm{fw}}^{-1/2}, \qquad \hat{A}_{\mathrm{bw}} = \tilde{D}_{\mathrm{bw}}^{-1/2}\, \tilde{A}^T\, \tilde{D}_{\mathrm{bw}}^{-1/2},$  (19)

where $\tilde{A} = A + I_N$, $\tilde{D}_{\mathrm{fw}} = \mathrm{diag}(\tilde{A}\mathbf{1})$, and $\tilde{D}_{\mathrm{bw}} = \mathrm{diag}(\tilde{A}^T\mathbf{1})$. We compile these two matrices in the block-diagonal augmented matrix

$\mathcal{A} = \begin{pmatrix} \hat{A}_{\mathrm{fw}} & 0 \\ 0 & \hat{A}_{\mathrm{bw}} \end{pmatrix}.$  (20)

The augmented two-layer model can be compactly represented in matrix form using the Kronecker product:

$Y = \mathrm{softmax}\!\left( \left( \mathbf{1}_2^T \otimes I_N \right) \mathcal{A}\, \mathrm{ReLU}\!\left( \mathcal{A} \left( I_2 \otimes X \right) \mathcal{W}^{(0)} \right) \mathcal{W}^{(1)} \right),$  (21)

where $I_2$ is the identity matrix of dimension 2, $\mathbf{1}_2$ is the vector of ones of dimension 2, and the parameters to be inferred for each of the models (forward and backward) are compiled in the augmented matrices $\mathcal{W}^{(0)} = \begin{pmatrix} W_{\mathrm{fw}}^{(0)} & 0 \\ 0 & W_{\mathrm{bw}}^{(0)} \end{pmatrix}$ and $\mathcal{W}^{(1)} = \begin{pmatrix} W_{\mathrm{fw}}^{(1)} \\ W_{\mathrm{bw}}^{(1)} \end{pmatrix}$.

Similarly, the augmented diff-GCN model has the same form as (21) with the substitution of the operator $\mathcal{A}$ by an augmented matrix containing the explicit forward and backward diffusion operators based on $A$ and $A^T$:

$\mathcal{T}(\tau) = \begin{pmatrix} e^{-\tau \mathcal{L}(A)} & 0 \\ 0 & e^{-\tau \mathcal{L}(A^T)} \end{pmatrix},$  (22)

where the directed Laplacian $\mathcal{L}(\cdot)$ is defined in (18) and we fix the teleportation parameter $\alpha$ at its customary value.
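The block-diagonal augmented operator acting on the two stacked channels can be sketched as follows (a schematic of the parallel forward/backward convolution; in the full model the per-channel weight matrices are learnt separately):

```python
import numpy as np
from scipy.linalg import block_diag

def augmented_operator(Op_fw, Op_bw):
    """Block-diagonal operator applying the forward and backward
    propagation matrices to their respective channels in parallel."""
    return block_diag(Op_fw, Op_bw)

# Applying the augmented operator to the twice-stacked features convolves
# each channel with its own propagation matrix.
```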

5.1 Application to the directed Cora dataset

Citations are not reciprocal. By their own nature, citation networks like the ones considered in this paper are therefore highly directional. It is worth noting that the publicly available citation graphs for the Cora, Citeseer and Pubmed datasets are all undirected. In our computations above (Tables 2 and 3), we have used the available undirected graphs to facilitate comparisons of our results with published work. However, the question remains as to the impact of the directionality of citations on the classification task.

To examine this point, we have returned to the original Cora dataset and constructed the directed graph of citations from the raw data (the directed graph from Cora is publicly available). We then use this directed graph (and its transpose) within the GDR and diff-GCN frameworks.

Undirected Directed (fw) Directed (bw)
GDR (Projection) 79.7 62.1 64.6
GDR (RF) 78.7 58.0 57.6
GDR (SVM) 81.2 63.6 62.1
GDR (MLP) 78.5 57.3 56.4
Table 4: Accuracy of GDR using the undirected, directed, and reverse directed graphs of the Cora dataset.

The results of GDR for the Cora dataset using the directed graph, its transpose, and the undirected version are compared in Table 4. In all cases, we see a reduction of accuracy for the directed graphs compared with the undirected case. This indicates that the classification benefits from considering both directions (who cites and who is cited) when inferring the scientific topic.

The results of GCN and diff-GCN using the undirected, directed, reverse directed, and bidirectional (i.e., augmented) versions of the graph are presented in Table 5. Again, we observe that using both directions for label diffusion improves the classification. The best performance (83.0% accuracy) is achieved with the augmented diff-GCN, due to the flexible use of both directions of the diffusion together with the optimised scale provided by the diffusion time $\tau$.

|          | Undirected | Directed (fw) | Directed (bw) | Augmented (fw, bw) |
|----------|-----------:|--------------:|--------------:|-------------------:|
| GCN      | 81.1       | 67.4          | 79.8          | 79.9               |
| diff-GCN | 82.3       | 80.3          | 81.7          | 83.0               |

Table 5: Accuracy (%) of GCN and diff-GCN using the undirected, directed, reverse directed, and bidirectional (augmented) graphs of the Cora dataset.

6 Discussion

In this paper, we have introduced two methods for semi-supervised classification that make use of a graph capturing the relationships between samples. First, we presented GDR, a reclassification algorithm that leverages diffusion on the graph to relabel any prior probabilistic classification of the samples. Our numerical experiments on three benchmark datasets with several prior classifiers show that GDR consistently improves the accuracy of the original classifier, without the need for a high-quality prior classification. Hence GDR can be used as a post-processing tool to refine class assignments by taking into account a graph that formalises additional information about relationships between samples.

In addition, our results show that GDR provides comparable results to GCN, the state-of-the-art semi-supervised graph classification method, yet without the need for graph-biased inference. Deep learning methods (such as GCN) infer the classifier from the features and the graph simultaneously, but this assumes that the feature or label mixing on the graph is homogeneous. Instead, GDR relabelling allows us to take a node-centric view of classification: the amount of information gathered from the graph to reclassify a node is different for each node. GDR side-steps the use of graph-based deep learning architectures by carrying out the classification in two steps: a feature-based classification followed by a reclassification incorporating the graph information. This approach also allows us to establish the relative importance (and the alignment) of the information contained in the features and the graph (as shown by the Wikipedia example in Table 2) [30].

As a second method based on diffusion, we introduced diff-GCN, a simple extension of the GCN algorithm that embeds the explicit diffusion operator, with time as a hyper-parameter to be inferred. Our results show that this additional flexibility slightly improves on the original GCN method in the benchmark cases. The dynamics-based viewpoint also allows us to introduce extensions of diff-GCN for directed graphs based on ergodicised diffusions on such graphs. We showed that allowing both the forward and backward diffusions to take place on the graph improved the accuracy to 83.0% on the directed graph for one of our datasets (Cora). This is one of the highest accuracies reported on the Cora dataset, falling just short of the accuracy recently set by Dual GCN [39], which uses an additional long-range convolution.

The examples in this paper show that using graph diffusion in conjunction with classification algorithms can provide natural extensions and interpretations to deep learning architectures. Although we have concentrated here on classification problems, similar ideas could be used for dimensionality reduction and multiscale unsupervised clustering, where graph-based methods also provide interesting links with spectral methods in clustering and machine learning [21, 1, 24, 25]. These directions will be the object of further study.

Code and data availability

The Python code to compute GDR is available at


Acknowledgements

We thank Dominik Klein, Hossein Abbas, Paul Expert, Yifan Qian, Asher Mullokandov and Sophia Yaliraki for valuable discussions. We acknowledge EPSRC funding through award EP/N014529/1 via the EPSRC Centre for Mathematics of Precision Healthcare.


References
  • [1] A. Arnaudon, R. L. Peach, and M. Barahona. Graph centrality is a question of scale. Submitted, arXiv:1907.08624, 2019.
  • [2] K. A. Bacik, M. T. Schaub, M. Beguerisse-Díaz, Y. N. Billeh, and M. Barahona. Flow-Based Network Analysis of the Caenorhabditis elegans Connectome. PLoS Computational Biology, 12(8), 2016.
  • [3] M. Beguerisse-Diaz, G. Garduno-Hernandez, B. Vangelov, S. N. Yaliraki, and M. Barahona. Interest communities and flow roles in directed networks: the twitter network of the uk riots. Journal of The Royal Society Interface, 11(101):20140940, 2014.
  • [4] M. Beguerisse-Diaz, B. Vangelov, and M. Barahona. Finding role communities in directed networks using role-based similarity, markov stability and the relaxed minimum spanning tree. In 2013 IEEE Global Conference on Signal and Information Processing, pages 937–940, Dec 2013.
  • [5] C. M. Bishop. Pattern recognition and machine learning. springer, 2006.
  • [6] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
  • [7] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral Networks and Locally Connected Networks on Graphs. pages 1–14, 2013.
  • [8] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In AISTATS, volume 2005, pages 57–64. Citeseer, 2005.
  • [9] J. Chen, J. Zhu, and L. Song. Stochastic Training of Graph Convolutional Networks with Variance Reduction. In Proceedings of the 35th International Conference on Machine Learning, 2018.
  • [10] F. Chung. Laplacians and the Cheeger inequality for directed graphs. Annals of Combinatorics, 2005.
  • [11] R. R. Coifman and S. Lafon. Diffusion maps. Applied and computational harmonic analysis, 21(1):5–30, 2006.
  • [12] K. Cooper and M. Barahona. Role-based similarity in directed networks. arXiv e-prints, page arXiv:1012.2726, Dec 2010.
  • [13] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852, 2016.
  • [14] J.-C. Delvenne, S. N. Yaliraki, and M. Barahona. Stability of graph communities across time scales. Proceedings of the National Academy of Sciences of the United States of America, 107(29):12755–12760, 2010.
  • [15] F. Fouss, A. Pirotte, J. Renders, and M. Saerens. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge and Data Engineering, 19(3):355–369, March 2007.
  • [16] H. Gao, Z. Wang, and S. Ji. Large-Scale Learnable Graph Convolutional Networks. 2018.
  • [17] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT press, 2016.
  • [18] D. K. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
  • [19] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in neural information processing systems, pages 3581–3589, 2014.
  • [20] T. N. Kipf and M. Welling. Semi-Supervised Classification with Graph Convolutional Networks. arXiv:1609.02907v4, pages 1–14, 2016.
  • [21] R. Lambiotte, J.-C. Delvenne, and M. Barahona. Random walks, markov processes and the multiscale modular organization of complex networks. IEEE Transactions on Network Science and Engineering, 1(2):76–90, 2014.
  • [22] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. nature, 521(7553):436, 2015.
  • [23] R. Levie, F. Monti, X. Bresson, and M. M. Bronstein. CayleyNets: Graph Convolutional Neural Networks with Complex Rational Spectral Filters. IEEE Transactions on Signal Processing, pages 1–20, 2018.
  • [24] Z. Liu and M. Barahona. Geometric multiscale community detection: Markov stability and vector partitioning. Journal of Complex Networks, 6(2):157–172, 07 2017.
  • [25] Z. Liu and M. Barahona. Graph-based data clustering via multiscale community detection. arXiv e-prints, page arXiv:1909.04491, Sep 2019.
  • [26] Z. Liu, C. Chen, L. Li, J. Zhou, X. Li, L. Song, and Y. Qi. GeniePath: Graph Neural Networks with Adaptive Receptive Paths. 2018.
  • [27] N. Masuda, M. A. Porter, and R. Lambiotte. Random walks and diffusion on networks. Physics reports, 716:1–58, 2017.
  • [28] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank Citation Ranking: Bringing Order to the Web, 1999.
  • [29] B. Perozzi, R. Al-Rfou, and S. Skiena. DeepWalk: Online Learning of Social Representations Bryan. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’14, 2014.
  • [30] Y. Qian, P. Expert, T. Rieu, P. Panzarasa, and M. Barahona. Quantifying the alignment of graph and features in deep learning. arXiv preprint arXiv:1905.12921, 2019.
  • [31] M. T. Schaub, J.-C. Delvenne, R. Lambiotte, and M. Barahona. Multiscale dynamical embeddings of complex networks. Phys. Rev. E, 99:062308, Jun 2019.
  • [32] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective Classification in Network Data. AI Magazine, 2008.
  • [33] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph Attention Networks. pages 1–12, 2017.
  • [34] J. Weston, F. Ratle, H. Mobahi, and R. Collobert. Deep learning via semi-supervised embedding. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2012.
  • [35] Z. Yang, W. W. Cohen, and R. Salakhutdinov. Revisiting Semi-Supervised Learning with Graph Embeddings. arXiv:1603.08861v2, 48, 2016.
  • [36] J. Zhang, X. Shi, J. Xie, H. Ma, I. King, and D.-Y. Yeung. GaAN: Gated Attention Networks for Learning on Large and Spatiotemporal Graphs. 2018.
  • [37] X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International conference on Machine learning (ICML-03), pages 912–919, 2003.
  • [38] X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. In Proceedings of the 20th International Conference on Machine Learning, 2003.
  • [39] C. Zhuang and Q. Ma. Dual Graph Convolutional Networks for Graph-Based Semi-Supervised Classification. In Proceedings of the 2018 World Wide Web Conference, pages 499–508, 2018.