Understanding Graph Neural Networks from Graph Signal Denoising Perspectives

06/08/2020 · by Guoji Fu, et al.

Graph neural networks (GNNs) have attracted much attention because of their excellent performance on tasks such as node classification. However, there is inadequate understanding on how and why GNNs work, especially for node representation learning. This paper aims to provide a theoretical framework to understand GNNs, specifically, spectral graph convolutional networks and graph attention networks, from graph signal denoising perspectives. Our framework shows that GNNs are implicitly solving graph signal denoising problems: spectral graph convolutions work as denoising node features, while graph attentions work as denoising edge weights. We also show that a linear self-attention mechanism is able to compete with the state-of-the-art graph attention methods. Our theoretical results further lead to two new models, GSDN-F and GSDN-EF, which work effectively for graphs with noisy node features and/or noisy edges. We validate our theoretical findings and also the effectiveness of our new models by experiments on benchmark datasets. The source code is available at <https://github.com/fuguoji/GSDN>.


1 Introduction

Graph data are ubiquitous today; examples include social networks, semantic web graphs, knowledge graphs, and telecom networks. However, graph data do not lie in a Euclidean space, which renders learning on graph data with traditional machine learning algorithms less effective (Battaglia et al., 2018; Wu et al., 2019b). To this end, graph neural networks (GNNs) have been proposed as specialized neural network models for learning on graph data. GNNs achieve excellent performance on many important tasks, especially semi-supervised node classification (Kipf and Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2018).

GNNs are often categorized into spectral approaches and non-spectral approaches (Wu et al., 2019b). Spectral approaches, i.e., spectral graph convolutional networks (SGCNs) (Defferrard et al., 2016; Kipf and Welling, 2017; Wu et al., 2019a), are mainly inspired by graph signal processing (GSP) (Sandryhaila and Moura, 2013). SGCNs define graph convolution operators as the multiplication of a graph signal with a graph filter in the Fourier domain. Non-spectral approaches, which include spatial graph neural networks (SGNNs) (Hamilton et al., 2017; Fey and Lenssen, 2019) and graph attention networks (GANNs) (Velickovic et al., 2018; Thekumparampil et al., 2018), are mainly motivated by convolution operators (LeCun and Bengio, 1995; LeCun et al., 2015) or attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017)

used in computer vision. Similar to convolutions that are defined based on nearby pixels within a window in an image, SGNNs define graph convolutions based on nodes’ spatial relations and aggregate the node features of the neighbors of a node within a window. GANNs utilize graph attentions to learn the connection strength of nodes that are connected by an edge, which work similarly to attentions that are used to learn the aggregation weights of pixels.

In spite of GNNs’ remarkable performance for representation learning, there has been inadequate work to explain how and why GNNs work so effectively. Recent attempts to explain the working mechanisms of GNNs are either too coarse-grained on the whole graph scale or too restricted for specific GNN models only, e.g., GCN (Kipf and Welling, 2017). Besides, spectral approaches and non-spectral approaches are studied and explained in different, separate theoretical frameworks, but a unified framework to understand them has been lacking. For example, Xu et al. (2019) demonstrate that GNNs are at most as powerful as the Weisfeiler-Lehman test in distinguishing the whole graph structures, Li et al. (2018) reveal that GCN is conducting Laplacian smoothing on node features, Wu et al. (2019a) and NT and Maehara (2019) show that the graph convolution of GCN is a low-pass filter, and Ying et al. (2019) use two reasoning tasks to empirically show that the strength of graph attentions comes from their ability to generalize to more complex or noisy graphs at test time. However, existing work does not focus on understanding the reasons behind the performance of GNNs (including spectral approaches and non-spectral approaches) for node representation learning.

In this paper, we aim to provide a unified theoretical framework to understand how and why spectral and non-spectral approaches, specifically SGCNs and GANNs, work for node representation learning from graph signal denoising (GSD) perspectives. Our framework reveals that GNNs are implicitly solving GSD problems. Specifically, we found that: (1) the graph convolutions of SGCNs, e.g., ChebyNet (Defferrard et al., 2016), GCN (Kipf and Welling, 2017), and SGC (Wu et al., 2019a), work as denoising and smoothing node features; and (2) the graph attentions of GANNs, e.g., GAT (Velickovic et al., 2018) and AGNN (Thekumparampil et al., 2018), work as denoising edge weights. Based on the theoretical findings, we further design two new GNN models, GSDN-F and GSDN-EF, which conduct effective node representation learning on graphs with noisy node features and/or noisy edges by working through a tradeoff between node feature denoising and smoothing.

Our empirical results on benchmark graphs (with/without noise) validate that the performance of SGCNs on node classification indeed benefits from their ability to denoise and smooth node features. When there is little noise in the node features, the node classification performance of SGCNs mainly comes from their ability of node feature smoothing. However, when there is more noise, denoising contributes more to their performance. The results of node classification on graphs with noisy edges show that the performance of GANNs benefits from their ability to denoise edge weights. Moreover, the results also demonstrate that our proposed models, GSDN-F and GSDN-EF, achieve performance comparable to the state-of-the-art GNNs on graphs with little noise and become more effective on graphs with more noise. The superior performance of GSDN-EF also verifies that its linear edge denoising mechanism can compete with the state-of-the-art graph attentions.

2 Preliminary and Background

We first define the notations used in this paper and introduce the background of graph signal denoising, spectral graph convolutions, and graph attentions.

As in previous works (Kipf and Welling, 2017; Wu et al., 2019a), we introduce GNNs in the context of node classification. In this paper, the input of a GNN is a graph with node features and some node labels; the output is the predicted labels for the unlabeled nodes. Let $\mathcal{G} = (\mathcal{V}, A)$ be a graph, where $\mathcal{V} = \{v_1, \dots, v_n\}$ is a set of nodes and $A \in \mathbb{R}^{n \times n}$ is a symmetric adjacency matrix, in which $a_{ij}$ denotes the edge weight between nodes $v_i$ and $v_j$ such that $a_{ij} > 0$ if $v_i, v_j$ are connected and $a_{ij} = 0$ otherwise. The degree of node $v_i$ is defined as $d_i = \sum_j a_{ij}$ and $D = \mathrm{diag}(d_1, \dots, d_n)$ denotes the degree matrix. The normalized adjacency matrix is defined as $A_{\mathrm{sym}} = D^{-1/2} A D^{-1/2}$. The Laplacian matrix is defined as $L = D - A$ and the normalized Laplacian matrix is $L_{\mathrm{sym}} = I - D^{-1/2} A D^{-1/2}$. We define $L_{\mathrm{sym}} = U \Lambda U^\top$ as the eigen-decomposition of $L_{\mathrm{sym}}$, where $U$ is the matrix of eigenvectors ordered by eigenvalues and $\Lambda$ is the diagonal matrix of eigenvalues.

Let $X \in \mathbb{R}^{n \times d}$ be the input node feature matrix (or graph signals), $x_i \in \mathbb{R}^{d}$ be the features of node $v_i$, and $x^{(j)} \in \mathbb{R}^{n}$ be the $j$-th signal of all nodes. We measure the smoothness of node features $X$ by their total variation w.r.t. a graph $\mathcal{G}$ (Sandryhaila and Moura, 2014). The total variation of node features $X$ is defined as

$S_{\mathcal{G}}(X) = \mathrm{tr}(X^\top L_{\mathrm{sym}} X) = \frac{1}{2} \sum_{i,j} a_{ij} \left\| \frac{x_i}{\sqrt{d_i}} - \frac{x_j}{\sqrt{d_j}} \right\|_2^2$,    (1)

where $\mathrm{tr}(\cdot)$ indicates the trace of a matrix. The smaller $S_{\mathcal{G}}(X)$, the smoother the node features $X$ are, and we say that $X$ are smooth if $S_{\mathcal{G}}(X)$ is small.
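As a concrete check of Eq. 1, the sketch below (our own illustration; the function names `sym_laplacian` and `total_variation` are ours) compares the total variation of a near-constant signal and an alternating signal on a small path graph:

```python
import numpy as np

def sym_laplacian(A):
    """Normalized Laplacian L_sym = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def total_variation(X, A):
    """S_G(X) = tr(X^T L_sym X); smaller means smoother (Eq. 1)."""
    return float(np.trace(X.T @ sym_laplacian(A) @ X))

# 4-node path graph: 0-1-2-3
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

X_smooth = np.ones((4, 1))                          # near-constant signal
X_rough = np.array([[1.0], [-1.0], [1.0], [-1.0]])  # alternating signal
# The alternating signal has a much larger total variation than the constant one.
```

The alternating signal changes sign across every edge, so each edge term in Eq. 1 is large, while the constant signal only incurs small terms from the degree normalization at the path's endpoints.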

2.1 Graph Signal Denoising

Given a graph $\mathcal{G}$ with noisy node features $X$, and assuming that the ground-truth node features, denoted as $\bar{X}$, are smooth, graph signal denoising (GSD) (Berger et al., 2017; Chen et al., 2015) aims to recover the smooth graph signals $\bar{X}$ from the noisy input node features $X$. The GSD problem of a noisy graph can be formulated as an optimization problem that aims to obtain the smoothest node features subject to the effect of noise in the graph.

2.2 Spectral Graph Convolutions

SGCNs were mainly inspired by graph signal processing (GSP). In the field of GSP, the graph Fourier transform (Hammond et al., 2011; Sandryhaila and Moura, 2013) of a signal $x \in \mathbb{R}^n$ is defined as $\hat{x} = U^\top x$ and the inverse graph Fourier transform is defined as $x = U \hat{x}$. Then, the spectral graph convolution of the input signal $x$ with a filter $g_\theta = \mathrm{diag}(\theta)$ parameterized by $\theta \in \mathbb{R}^n$ in the Fourier domain (Henaff et al., 2015) is defined as

$g_\theta \star x = U g_\theta U^\top x$,    (2)

where $\star$ denotes the convolution operation. ChebyNet (Defferrard et al., 2016) uses $K$-order Chebyshev polynomials to approximate the filter and obtains a new convolution:

$g_\theta \star x \approx \sum_{k=0}^{K} \theta_k T_k(\hat{L}) x$,    (3)

where $\hat{L} = \frac{2}{\lambda_{\max}} L_{\mathrm{sym}} - I$, $T_0(\hat{L}) = I$, $T_1(\hat{L}) = \hat{L}$, $T_k(\hat{L}) = 2\hat{L} T_{k-1}(\hat{L}) - T_{k-2}(\hat{L})$, and $\theta \in \mathbb{R}^{K+1}$ is a vector of Chebyshev coefficients. GCN (Kipf and Welling, 2017) introduces a first-order approximation of ChebyNet using $\lambda_{\max} \approx 2$ and $\theta = \theta_0 = -\theta_1$, together with the renormalization trick $I + D^{-1/2} A D^{-1/2} \rightarrow \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$. The resulting graph convolution becomes

$g_\theta \star x = \theta \tilde{A}_{\mathrm{sym}} x$,    (4)

where $\tilde{A} = A + I$, $\tilde{D}$ is the degree matrix of $\tilde{A}$, and $\tilde{A}_{\mathrm{sym}} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$. SGC (Wu et al., 2019a) successively removes nonlinearities and collapses the weight matrices between consecutive layers of GCN, further simplifying the convolution as

$g_\theta \star x = \tilde{A}_{\mathrm{sym}}^{K} x$.    (5)
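A minimal dense sketch of this propagation pipeline (our illustrative code, not the authors' implementation): build the renormalized adjacency of Eq. 4 and apply it $K$ times as in SGC (Eq. 5):

```python
import numpy as np

def renormalized_adj(A):
    """A_hat = D_tilde^{-1/2} (A + I) D_tilde^{-1/2} (renormalization trick, Eq. 4)."""
    A_tilde = A + np.eye(len(A))
    d_inv_sqrt = A_tilde.sum(axis=1) ** -0.5
    return d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

def sgc_propagate(A, X, K=2):
    """SGC feature propagation A_hat^K X (Eq. 5): no nonlinearities between hops."""
    A_hat = renormalized_adj(A)
    H = X
    for _ in range(K):
        H = A_hat @ H
    return H

# Toy graph: a 4-cycle
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[i, j] = A[j, i] = 1.0

H = sgc_propagate(A, np.eye(4), K=2)
# The spectrum of A_hat lies in (-1, 1], so repeated propagation never blows up.
```

Because the renormalized adjacency is a contraction on all non-trivial spectral components, stacking hops smooths features rather than amplifying them.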

2.3 Graph Attentions

Unlike SGCNs, GANNs were mainly motivated by attention mechanisms in computer vision. GANNs suggest that the contributions of neighboring nodes to the central node are different during the aggregation. Here, we discuss two representative GANN models, GAT (Velickovic et al., 2018) and AGNN (Thekumparampil et al., 2018). GAT utilizes a graph attention mechanism to learn the aggregation strengths between two connected nodes in each layer. Let $h_i^{(l)}$ be the $l$-th layer output of a neural network for node $v_i$; the graph attention mechanism of GAT is defined as

$\alpha_{ij} = \mathrm{softmax}_j\left( \mathrm{LeakyReLU}\left( a^\top [ W h_i^{(l)} \,\|\, W h_j^{(l)} ] \right) \right)$,    (6)

where $\mathrm{LeakyReLU}(\cdot)$ is the LeakyReLU function, and the matrix $W$ and the vector $a$ are learnable parameters. Then, GAT aggregates neighborhood information in terms of the learned attention coefficients:

$h_i^{(l+1)} = \sigma\left( \sum_{j \in \mathcal{N}_i \cup \{i\}} \alpha_{ij} W h_j^{(l)} \right)$,    (7)

where $\mathcal{N}_i$ denotes the neighbors of node $v_i$ and $\sigma(\cdot)$ is the activation function. AGNN defines the graph attention mechanism in terms of the cosine similarity of node embeddings between connected nodes:

$\alpha_{ij} = \mathrm{softmax}_j\left( \beta \cdot \cos( h_i^{(l)}, h_j^{(l)} ) \right)$,    (8)

where $\cos(h_i, h_j) = h_i^\top h_j / (\|h_i\| \|h_j\|)$ and $\beta$ is a learnable parameter. Similarly, AGNN aggregates neighborhood information by

$h_i^{(l+1)} = \sum_{j \in \mathcal{N}_i \cup \{i\}} \alpha_{ij} h_j^{(l)}$.    (9)
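To make Eqs. 6–9 concrete, here is a dense NumPy sketch of AGNN-style attention (cosine similarity with a temperature $\beta$, and a masked softmax over the neighborhood plus a self-loop). This is our own illustrative code: the real models run sparsely over edges with learnable parameters.

```python
import numpy as np

def masked_softmax(scores, mask):
    """Row-wise softmax restricted to entries where mask is True."""
    scores = np.where(mask, scores, -np.inf)
    scores = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(scores)                     # exp(-inf) = 0 masks out non-edges
    return e / e.sum(axis=1, keepdims=True)

def agnn_attention(H, A, beta=1.0):
    """Eq. 8: alpha_ij = softmax_j(beta * cos(h_i, h_j)) over N_i plus self-loop."""
    norms = np.linalg.norm(H, axis=1, keepdims=True) + 1e-12
    cos = (H / norms) @ (H / norms).T
    mask = (A + np.eye(len(A))) > 0        # attend only to neighbors and self
    return masked_softmax(beta * cos, mask)

def agnn_aggregate(H, A, beta=1.0):
    """Eq. 9: h_i' = sum_j alpha_ij h_j."""
    return agnn_attention(H, A, beta) @ H
```

Each row of the attention matrix sums to one, and entries for non-adjacent node pairs are exactly zero, matching the neighborhood-restricted softmax in Eqs. 6 and 8.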

3 Graph Convolutions Work as Denoising and Smoothing Graph Signals

In this section, we study the relation between graph signal denoising and SGCNs/GANNs. We show that SGCNs and GANNs are implicitly solving graph signal denoising problems.

3.1 Spectral Graph Convolutions Work as Denoising and Smoothing Node Features

We first study the graph signal denoising problem for the case with noisy node features.

Problem 1:

Graph signal denoising for node features.

We assume that the input node features $X$ are slightly disrupted with noise and that the real (ground-truth) node features, denoted as $\bar{X}$, are smooth. Formally, we make the following two assumptions.

Assumption 1.

The ground-truth node features $\bar{X}$ are smooth w.r.t. the graph $\mathcal{G}$.

Assumption 2.

The magnitude of the node feature noise, $\|X - \bar{X}\|_F$, is small.

Assumption 1 requires the total variation $S_{\mathcal{G}}(\bar{X})$ to be small and Assumption 2 implies that $\|X - \bar{X}\|_F^2$ can be upper-bounded. By Assumptions 1 and 2, we model Problem 1 as:

$\min_{\bar{X}} \ \mathrm{tr}(\bar{X}^\top L_{\mathrm{sym}} \bar{X}) \quad \mathrm{s.t.} \quad \|X - \bar{X}\|_F^2 \le \epsilon$,    (10)

where $\epsilon$ controls the noise level. The Lagrangian form of the above problem is $\mathcal{L}(\bar{X}, \lambda) = \mathrm{tr}(\bar{X}^\top L_{\mathrm{sym}} \bar{X}) + \lambda (\|X - \bar{X}\|_F^2 - \epsilon)$, where $\lambda \ge 0$ is the Lagrangian multiplier. Then, we have the following solution by the KKT conditions (Gordon and Tibshirani, 2012):

$\bar{X}^{*} = \left( I + \tfrac{1}{\lambda} L_{\mathrm{sym}} \right)^{-1} X$.    (11)

Let $c = 1/\lambda$; then $\bar{X}^{*} = (I + c L_{\mathrm{sym}})^{-1} X$. Since $A_{\mathrm{sym}} = I - L_{\mathrm{sym}}$, the polynomial expansion of $(I + c L_{\mathrm{sym}})^{-1}$ is given by

$(I + c L_{\mathrm{sym}})^{-1} = \frac{1}{1+c} \sum_{k=0}^{\infty} \left( \frac{c}{1+c} \right)^{k} A_{\mathrm{sym}}^{k}$.    (12)
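The expansion follows from the identity $I + c L_{\mathrm{sym}} = (1+c)\big(I - \frac{c}{1+c} A_{\mathrm{sym}}\big)$ and a Neumann series, which converges because the spectrum of $A_{\mathrm{sym}}$ lies in $[-1, 1]$ and $\frac{c}{1+c} < 1$. It can also be checked numerically; the sketch below is our own illustration (function names are ours) comparing the closed-form solution of Eq. 11 with a truncation of the series on a small graph:

```python
import numpy as np

def normalized_adj(A):
    d_inv_sqrt = A.sum(axis=1) ** -0.5
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def denoise_exact(A_sym, X, c):
    """Eq. 11: (I + c L_sym)^{-1} X, using L_sym = I - A_sym."""
    n = len(A_sym)
    return np.linalg.solve((1 + c) * np.eye(n) - c * A_sym, X)

def denoise_series(A_sym, X, c, K):
    """Series of Eq. 12 truncated at order K: 1/(1+c) sum_k (c/(1+c))^k A_sym^k X."""
    out, term, r = np.zeros_like(X), X.copy(), c / (1 + c)
    for _ in range(K + 1):
        out += term
        term = r * (A_sym @ term)
    return out / (1 + c)

rng = np.random.default_rng(0)
# 6-node cycle with one chord; all degrees >= 2.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]:
    A[i, j] = A[j, i] = 1.0
A_sym = normalized_adj(A)
X = rng.standard_normal((6, 2))
# With c = 0.6, the series converges geometrically at rate c/(1+c) = 0.375.
```

Truncating at a modest order therefore already yields a close approximation, which is why a finite polynomial filter can stand in for the exact inverse.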

By Eq.12, the following proposition holds.

Proposition 1.

The results of the single-layer convolutions of ChebyNet and SGC applied on node features $X$ are $K$-order polynomial approximations to the solution of Problem 1, while that of GCN is a first-order polynomial approximation to the solution of Problem 1.

The proof of Proposition 1 is given in Appendix A.1. Proposition 1 shows that the graph convolutions of ChebyNet, GCN, and SGC are implicitly solving Problem 1. Instead of extracting high-level features, spectral graph convolution operators are simply denoising and smoothing the input node features. Figure 1 gives an example illustrating this point: SGCNs such as GCN and SGC are not extracting high-level features, but denoising and smoothing the noisy node features.

Figure 1: An example of node feature denoising and smoothing by spectral graph convolutions (GSDN-F is to be introduced in Section 4.1). The original node features are disrupted by Gaussian noise. Best viewed in color.

Figure 2: The effect of using the renormalization trick on node feature denoising, on Cora and CiteSeer with synthetic Gaussian noise on node features at two noise levels (the details of the Cora and CiteSeer datasets are to be introduced in Section 5.1).

Why Renormalization Trick Works.

We further discuss the effect of the renormalization trick used in GCN. Without the renormalization trick, the result of the graph convolution of GCN on a signal $x$, defined as $(I + A_{\mathrm{sym}})x$, is a first-order polynomial approximation to the solution $\bar{X}^{*}$ of Problem 1 with $c \to \infty$: truncating Eq. 12 at $k = 1$ gives $\frac{1}{1+c}\big(I + \frac{c}{1+c} A_{\mathrm{sym}}\big)x$, which is proportional to $(I + A_{\mathrm{sym}})x$ only in the limit $\frac{c}{1+c} \to 1$. As a result of $c \to \infty$, it only intends to smooth node features and overlooks node feature denoising. Therefore, it is easily disrupted by noise. The renormalization trick shrinks $I + A_{\mathrm{sym}}$ into $\tilde{A}_{\mathrm{sym}}$ and allows GCN to consider both node feature smoothing and denoising by

$\tilde{A}_{\mathrm{sym}} x \approx \frac{1}{1+c} \left( I + \frac{c}{1+c} A_{\mathrm{sym}} \right) x$,    (13)

where $c$ is finite. Thus, it is more robust to the noise. We empirically validated the effect of the renormalization trick on node feature denoising. The results of node feature denoising on Cora and CiteSeer with Gaussian noise are shown in Figure 2.

As we can see in Figure 2, the noise in the denoised node features obtained by $(I + A_{\mathrm{sym}})x$ is not reduced, indicating that $I + A_{\mathrm{sym}}$ does not denoise node features. It validates that $I + A_{\mathrm{sym}}$ overlooks node feature denoising by implicitly setting $c \to \infty$. However, Figure 2 shows that the renormalized convolutions significantly reduce the noise. It demonstrates that graph convolutions with the renormalization trick are capable of denoising node features, and validates our conclusion that the renormalization trick works because it shrinks $I + A_{\mathrm{sym}}$ into $\tilde{A}_{\mathrm{sym}}$ and allows GCN to work on node feature denoising.

3.2 Graph Attentions Work as Denoising Edge Weights

In addition to node features, edge weights may be noisy too in real-world applications. We study the graph signal denoising problem for the case with both noisy node features and noisy edge weights.

Problem 2:

Graph signal denoising for node features and edge weights.

In addition to Assumptions 1 and 2, we make Assumptions 3 and 4, which assume that the ground-truth edge weights exactly indicate the smoothness between source nodes and target nodes, and that the edge weights are slightly disrupted with noise.

Assumption 3.

The ground-truth edge weight $\bar{a}_{ij}$ is inversely proportional to the feature variation between the source node $v_i$ and the target node $v_j$.

Assumption 4.

The magnitude of the edge weight noise, $\|A - \bar{A}\|_F$, is small.

Assumptions 1 and 3 require that the total variation of $\bar{X}$ w.r.t. $\bar{A}$ should be small, and Assumption 4 implies that $\|A - \bar{A}\|_F^2$ can be upper-bounded. Then, we model Problem 2 below in terms of Assumptions 1, 2, 3, and 4:

$\min_{\bar{X}, \bar{A}} \ \mathrm{tr}(\bar{X}^\top \bar{L}_{\mathrm{sym}} \bar{X}) \quad \mathrm{s.t.} \quad \|X - \bar{X}\|_F^2 \le \epsilon_1, \ \ \|A - \bar{A}\|_F^2 \le \epsilon_2$,    (14)

where $\epsilon_1$ and $\epsilon_2$ control the noise level of node features and edge weights, respectively, and $\bar{L}_{\mathrm{sym}}$ denotes the normalized Laplacian of $\bar{A}$. The Lagrangian form of Problem 2 is $\mathcal{L}(\bar{X}, \bar{A}, \lambda_1, \lambda_2) = \mathrm{tr}(\bar{X}^\top \bar{L}_{\mathrm{sym}} \bar{X}) + \lambda_1(\|X - \bar{X}\|_F^2 - \epsilon_1) + \lambda_2(\|A - \bar{A}\|_F^2 - \epsilon_2)$, where $\lambda_1$ and $\lambda_2$ are Lagrangian multipliers. Similar to Problem 1, we have the following solution for Problem 2:

$\bar{X}^{*} = \left( I + \tfrac{1}{\lambda_1} \bar{L}_{\mathrm{sym}} \right)^{-1} X$,    (15)

$\bar{a}_{ij}^{*} = a_{ij} + \frac{1}{2\lambda_2} \cdot \frac{\bar{x}_i^\top \bar{x}_j}{\sqrt{\bar{d}_i}\sqrt{\bar{d}_j}}$.    (16)

The graph attentions of GAT and AGNN (as illustrated in Section 2.3) and Eq. 16 share the same form: both calculate the similarity between paired node features. The connection strength of a pair, which is the ground-truth weight $\bar{a}_{ij}$ in Eq. 16 or the attention coefficient $\alpha_{ij}$ in GAT and AGNN, depends on the similarity between the features of nodes $v_i$ and $v_j$.

The differences between them are: (1) the graph attentions of GAT and AGNN only focus on learning the optimal connection strength of the connected nodes in a graph, whereas Eq. 16 considers not only the connected nodes but any pair of nodes; as a result, it is able to optimize the weights of the existing edges and also to create new edges. (2) There are non-linearities in the graph attention mechanisms, while Eq. 16 suggests that a linear form is sufficient to learn the optimal connection strength. Thus, we have Proposition 2.

Proposition 2.

The attention coefficients of GAT and AGNN can be regarded as denoised weights of the existing edges in a graph.

Proposition 2 indicates that the graph attentions of GAT and AGNN are implicitly denoising the weights of the existing edges in a graph.

4 Graph Signal Denoising Neural Networks

Based on the results of Section 3, we develop two new GNN models, called GSDN-F and GSDN-EF, as the solutions of Problems 1 and 2, respectively.

4.1 GSDN-F

For the case where node features are noisy, Section 3.1 shows that the graph convolutions of SGCNs work as denoising and smoothing node features $X$: their results on $X$ are polynomial approximations to the solution of Problem 1. Moreover, they are not able to adjust the balance between denoising and smoothing node features according to the noise magnitude in the node features. To this end, based on Eq. 12, we design a new graph convolution as

$H = \frac{1}{1+c} \sum_{k=0}^{K} \left( \frac{c}{1+c} \right)^{k} A_{\mathrm{sym}}^{k} X$,    (17)

where $c \ge 0$ controls the balance between smoothing and denoising the node features: a larger $c$ puts more weight on smoothing, while a smaller $c$ keeps the result closer to the input features. We further study the effect of $c$ on the performance of the graph convolution of GSDN-F as follows.

The bias-variance decomposition of the mean square error between the estimator $H$ and the ground-truth $\bar{X}$ is given as:

$\mathrm{MSE}(H, \bar{X}) = \mathrm{Var}(H) + \mathrm{Bias}(H)^2$,    (18)

where $\mathrm{MSE}$, $\mathrm{Var}$, and $\mathrm{Bias}$ represent the mean square error, variance, and bias, respectively. Suppose that the node features are slightly disrupted, i.e., Assumption 2 holds and the noise $X - \bar{X}$ follows a normal distribution; we have the following proposition:

Proposition 3.

For $c \ge 0$, $c$ provides a variance and bias trade-off of the mean square error between $H$ and $\bar{X}$. Increasing $c$ decreases the variance and increases the bias.

The proof of Proposition 3 is given in Appendix A.2. If the input node features are severely disrupted, then Assumption 2 no longer holds, implying that the noise can be lower-bounded, i.e., $\|X - \bar{X}\|_F^2 \ge \epsilon$. Then we can obtain a larger $c$ in terms of the method of Lagrange multipliers and the KKT conditions. Thus, we suggest choosing a larger $c$ in this case. We also conduct experiments on node classification to study the parameter sensitivity of $c$. The results are given in Appendix B.2.
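Proposition 3 can be illustrated numerically with the exact filter $F_c = (I + c L_{\mathrm{sym}})^{-1}$, of which GSDN-F's convolution is a truncated expansion. In the sketch below (ours; the graph and names are illustrative), the total variance under i.i.d. Gaussian noise is proportional to $\|F_c\|_F^2$, and the bias is $\|F_c \bar{X} - \bar{X}\|_F$; increasing $c$ shrinks the former and grows the latter:

```python
import numpy as np

def sym_laplacian(A):
    d_inv_sqrt = A.sum(axis=1) ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def gsd_filter(A, c):
    """Exact denoising filter F_c = (I + c L_sym)^{-1} (Eq. 11)."""
    return np.linalg.inv(np.eye(len(A)) + c * sym_laplacian(A))

def variance_term(F, sigma=1.0):
    # For i.i.d. N(0, sigma^2) noise E, the total variance of F E is sigma^2 ||F||_F^2.
    return sigma ** 2 * float(np.sum(F ** 2))

def bias_term(F, X_bar):
    # E[F (X_bar + E)] - X_bar = F X_bar - X_bar.
    return float(np.linalg.norm(F @ X_bar - X_bar))

rng = np.random.default_rng(0)
# 8-node cycle with one chord; all degrees >= 2.
A = np.zeros((8, 8))
for i in range(8):
    A[i, (i + 1) % 8] = A[(i + 1) % 8, i] = 1.0
A[0, 4] = A[4, 0] = 1.0
X_bar = rng.standard_normal((8, 3))

F_small, F_large = gsd_filter(A, 0.5), gsd_filter(A, 2.0)
# Larger c: smaller variance, larger bias, as stated in Proposition 3.
```

Spectrally, $F_c$ scales the component at Laplacian eigenvalue $\lambda$ by $1/(1 + c\lambda)$, so a larger $c$ suppresses noisy high-frequency components more aggressively (lower variance) while distorting the signal's non-constant components more (higher bias).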

4.2 GSDN-EF

For the case when both node features and edges are noisy, we design the graph convolution of GSDN-EF in terms of the solutions $\bar{X}^{*}$ and $\bar{a}_{ij}^{*}$ of Problem 2. The new graph convolution scheme first denoises the adjacency matrix $A$ to obtain $\bar{A}$ following Eq. 19, and then aggregates the input signal $H$ by Eq. 20 based on $\bar{A}$:

$\bar{A} = A + \gamma \, D^{-1/2} H H^\top D^{-1/2}$,    (19)

$H' = \frac{1}{1+c} \sum_{k=0}^{K} \left( \frac{c}{1+c} \right)^{k} \bar{A}_{\mathrm{sym}}^{k} H$,    (20)

where $\gamma$ is a learnable parameter, $\bar{D}$ is the degree matrix of $\bar{A}$, and $\bar{A}_{\mathrm{sym}} = \bar{D}^{-1/2} \bar{A} \bar{D}^{-1/2}$. While GAT and AGNN only focus on optimizing the connection strength between connected nodes (i.e., edges), GSDN-EF considers any pair of nodes: it does not only optimize the weights of the existing edges but is also able to create new edges. GSDN-EF can be effective for the cases where some edges are missed during data collection or graph construction.
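A rough illustration of this two-step scheme follows. This is a hedged sketch of ours, not the authors' implementation: the plain dot-product similarity, the clipping to non-negative weights, and the parameter names `gamma` and `c` are our simplifying assumptions.

```python
import numpy as np

def denoise_edges(A, H, gamma=0.05):
    # Step 1 (cf. Eq. 19): shift edge weights toward a linear feature similarity.
    # Positive similarity between unconnected nodes can create new edges.
    A_bar = A + gamma * (H @ H.T)
    np.fill_diagonal(A_bar, 0.0)
    return np.clip(A_bar, 0.0, None)   # our simplification: keep weights non-negative

def gsdn_ef_layer(A, H, gamma=0.05, c=0.6, K=4):
    # Step 2 (cf. Eq. 20): run the GSDN-F polynomial filter on the denoised graph.
    A_bar = denoise_edges(A, H, gamma)
    d_inv_sqrt = np.clip(A_bar.sum(axis=1), 1e-12, None) ** -0.5
    A_sym = d_inv_sqrt[:, None] * A_bar * d_inv_sqrt[None, :]
    out, term, r = np.zeros_like(H), H.copy(), c / (1 + c)
    for _ in range(K + 1):
        out += term
        term = r * (A_sym @ term)
    return out / (1 + c)
```

Because the similarity score is computed for every node pair, a dense implementation like this costs $O(n^2)$ memory, which is consistent with the sparse variant used for large graphs in Section 5.3.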

5 Experiments and Discussion

We validate our theoretical findings by experiments on benchmark graph datasets with/without noise. We conducted experiments for node feature denoising and smoothing, as well as for semi-supervised node classification.

5.1 Datasets and Experimental Setup

Here we present the details of the datasets and the experimental setup.

Datasets.

We used the Cora, CiteSeer, and Pubmed citation networks (Sen et al., 2008) in the transductive learning task and the PPI protein-protein interaction network (Zitnik and Leskovec, 2017) in the inductive learning task. We also conducted experiments by adding noise to node features or edges in the Cora and CiteSeer datasets to evaluate the performance of GNNs on graphs with noise. Some information about the datasets is listed in Table 1.

Dataset #Nodes #Edges Train/Dev/Test
Cora 2,708 5,429 140/500/1,000
CiteSeer 3,327 4,723 120/500/1,000
Pubmed 19,717 44,338 60/500/1,000
PPI 56,944 818,716 44,906/6,514/5,524
Table 1: Datasets

Baselines.

For the baselines of spectral approaches of GNNs, we compared our models with ChebyNet (Defferrard et al., 2016), GCN (Kipf and Welling, 2017), and SGC (Wu et al., 2019a). For non-spectral approaches, we chose GraphSage (Hamilton et al., 2017), GAT (Velickovic et al., 2018), and AGNN (Thekumparampil et al., 2018). We used the released implementations of these baselines in PyTorch Geometric (Fey and Lenssen, 2019).

Parameter Setting.

We give the parameter settings for our models and baselines in our experiments here. For GSDN-F and GSDN-EF, we set the learning rate as , the regularization weight as , and the polynomial degree $K$ as 4. In transductive learning tasks, we set the number of hidden units as , and the number of layers as . We also use two settings for $c$, 0.6 and 1.2, for both GSDN-F and GSDN-EF, and denote the models with different $c$ as GSDN-F-0.6, GSDN-F-1.2, GSDN-EF-0.6, and GSDN-EF-1.2, respectively. For the inductive learning task, we set the number of hidden units as 256, the number of layers as 3, and $c$ as 0.6. For ChebyNet and SGC, we used two settings for $K$, 2 and 4, denoted as ChebyNet-2, ChebyNet-4, SGC-2, and SGC-4. Other hyperparameters for ChebyNet, SGC, and the other baselines were set by following the settings in (Kipf and Welling, 2017) and (Velickovic et al., 2018).

5.2 Node Feature Denoising and Smoothing

We first normalized the node features and then added Gaussian noise with mean $\mu$ and standard deviation $\sigma$ to the original node features (denoted as $\bar{X}$) in the Cora and CiteSeer graphs. We studied the performance of spectral graph convolutions on node feature denoising and smoothing in this experiment.
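The evaluation protocol above can be sketched as follows. This is our own illustrative code: we use the renormalized adjacency $\tilde{A}_{\mathrm{sym}}$ as the denoising convolution and pick a ground-truth signal that is exactly smooth under it (its eigenvalue-1 eigenvector), so the residual noise provably shrinks after one convolution.

```python
import numpy as np

def renormalized_adj(A):
    A_tilde = A + np.eye(len(A))
    d_inv_sqrt = A_tilde.sum(axis=1) ** -0.5
    return d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
# Toy graph: a 6-cycle
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
A_hat = renormalized_adj(A)

# Ground-truth smooth signal: the eigenvalue-1 eigenvector of A_hat, D_tilde^{1/2} 1.
x_true = np.sqrt((A + np.eye(6)).sum(axis=1)).reshape(-1, 1)
x_noisy = x_true + 0.5 * rng.standard_normal(x_true.shape)  # add Gaussian noise
x_denoised = A_hat @ x_noisy                                # one graph convolution

noise_before = np.linalg.norm(x_noisy - x_true)
noise_after = np.linalg.norm(x_denoised - x_true)
# Since A_hat x_true = x_true and ||A_hat||_2 <= 1, noise_after <= noise_before.
```

In the experiments below, the same quantity is measured per node rather than in aggregate, and across several convolutions and noise levels.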

Node feature denoising.

We used a spectral graph convolution to denoise the node features, obtaining denoised features $\hat{X}$. The noise magnitude of each node, measured by $\|\hat{x}_i - \bar{x}_i\|_2$, is reported in Figure 3. The results show that when there is little noise in the node features (the smallest $\sigma$), none of the spectral graph convolutions reduces the noise. At the larger noise levels, the graph convolutions of SGC, GCN, and GSDN-F denoise the node features. In all the cases, GSDN-F obtains the best overall performance, i.e., the lowest noise magnitude, while ChebyNet does not denoise the node features.

Figure 3: Results of node feature denoising by graph convolutions on Cora (panels a–c) and CiteSeer (panels d–f) under three noise levels (best viewed in color)

Node feature smoothing.

We measured the smoothness of the denoised node features by their total variation as defined in Eq. 1. Figure 4 shows that the results of all spectral graph convolutions are smoother than the original noisy features, indicating that they all smooth the input node features. When there is less noise in the node features, the results of GCN, SGC, and GSDN-F are close to each other. However, when there is more noise, the results of GCN and SGC are smoother than those of GSDN-F. In all cases, the results of ChebyNet are much less smooth than those of the others.

Figure 4: Total variation (the smaller, the smoother) of node features on (a) Cora and (b) CiteSeer (best viewed in color)

Analysis.

The results of Figures 3 and 4 show that the graph convolutions of GCN, SGC, and GSDN-F are denoising and smoothing node features. When there is little noise in the node features, all of the methods are simply smoothing the node features. However, when there is more noise, GSDN-F conducts more denoising, while GCN and SGC conduct more smoothing.

5.3 Semi-Supervised Node Classification

To explore how GNNs benefit from node feature and/or edge weight denoising and node feature smoothing, we conducted experiments for semi-supervised node classification on graphs with/without noise in node features and edges.

Results on graphs without noise.

Table 2 reports the classification accuracy of transductive learning on Cora, CiteSeer, and Pubmed. The results show that the performance of GCN, SGC, and GSDN-F is close to each other. Their similar node classification accuracy can be explained by their similar performance on node feature denoising and smoothing on graphs with little noise, as reported in Figures 3 and 4. Table 3 reports the Micro-F1 score of inductive learning on the PPI dataset. As GSDN-EF ran out of memory on PPI, we used a sparse version, GSDN-EF(Sparse), which only denoises the original edges in a graph (but does not create new edges). The results in Table 3, and also Table 2, show that the performance of GSDN-EF is comparable with that of GAT and AGNN, while it significantly outperforms the spectral approaches (i.e., GCN, SGC, and ChebyNet). We remark that our paper focuses on theoretical understandings of GNN models; the experiments serve more as a validation of our theoretical findings, i.e., that the performance of SGCNs on node classification benefits from their ability of node feature denoising and smoothing, while that of GANNs benefits from their ability of edge weight denoising. Thus, on graphs without noise, our models only have comparable performance with existing models because of their similar performance on denoising and smoothing.

Transductive

Method        Cora         CiteSeer     Pubmed
GCN           81.3 ± 0.7   70.8 ± 0.9   78.6 ± 0.7
ChebyNet-2    79.2 ± 0.8   70.1 ± 0.8   78.0 ± 0.6
ChebyNet-4    80.1 ± 0.9   70.0 ± 1.2   73.3 ± 2.1
SGC-2         79.6 ± 0.6   72.0 ± 0.8   77.5 ± 0.2
SGC-4         81.0 ± 0.6   72.8 ± 0.4   75.1 ± 0.2
GraphSage     81.5 ± 0.8   70.3 ± 0.7   78.6 ± 0.4
GAT           82.3 ± 0.7   71.3 ± 0.8   78.1 ± 0.6
AGNN          82.0 ± 0.6   70.9 ± 0.9   79.2 ± 0.5
GSDN-F-0.6    81.5 ± 0.8   70.6 ± 1.2   79.0 ± 0.8
GSDN-F-1.2    81.0 ± 0.7   70.0 ± 0.6   78.7 ± 0.5
GSDN-EF-0.6   82.6 ± 0.7   71.1 ± 1.0   78.9 ± 0.4
GSDN-EF-1.2   81.9 ± 0.7   70.0 ± 0.9   78.1 ± 1.7

Table 2: Node classification accuracy (%) averaged over 20 runs on citation networks

Inductive

Method            PPI
GCN               68.7 ± 1.7
AGNN              84.8 ± 1.3
GAT               97.1 ± 0.7
GSDN-F            77.2 ± 2.6
GSDN-EF(sparse)   95.7 ± 0.8

Table 3: Micro-F1 score (%) averaged over 10 runs on the PPI dataset

Results on graphs with noise.

We used Cora and CiteSeer with noise added to node features and/or edges. The results on more datasets (i.e., Pubmed and Coauthor CS) are given in Appendix B.1. We first normalized the node features and then added Gaussian noise with mean $\mu$ and varying standard deviation $\sigma$ to the node features, and report the results in Figures 5(a) and 5(d). We randomly added or removed edges with varying noise ratios, and report the results in Figures 5(b) and 5(e). We also added Gaussian noise to the node features and randomly added or removed edges at the same time, and report the results in Figures 5(c) and 5(f).

(a) Cora w/ node feature noise
(b) Cora w/ edge noise
(c) Cora w/ node feature noise and edge noise
(d) CiteSeer w/ node feature noise
(e) CiteSeer w/ edge noise
(f) CiteSeer w/ node feature noise and edge noise
Figure 5: Results of semi-supervised node classification

Figures 5(a) and 5(d) show that the performance of GCN, SGC, and GSDN-F is close to each other and that they significantly outperform ChebyNet in the noisy node feature cases. On the one hand, the results indicate that the performance of GCN, SGC, and GSDN-F on node classification benefits from their superior ability in node feature denoising and smoothing, while ChebyNet performs the worst as it works well for neither node feature denoising nor smoothing. On the other hand, the results in Section 5.2 show that GSDN-F is better at node feature denoising, while GCN and SGC are better at node feature smoothing. As a result, GSDN-F outperforms GCN and SGC on node classification for graphs with noisy node features.

Figures 5(b) and 5(e) show that the performance of GAT, AGNN, and GSDN-EF is superior to that of the other methods. The performance of GSDN-EF is close to that of GAT and AGNN, indicating that the linear edge denoising mechanism used in GSDN-EF is sufficient to compete with the graph attentions of GAT and AGNN. The results also show that GSDN-F and GSDN-EF with a large $c$ do not work well in the noisy edge cases. The reason is that a large $c$ is suggested for graphs whose node features contain much noise, as discussed in Section 4.1, and is thus not suitable for the cases here with only noisy edges.

For graphs with both noisy node features and noisy edges, Figures 5(c) and 5(f) show that when there is little noise, GAT, AGNN, and GSDN-EF outperform the other methods on Cora, while GCN, GSDN-F, and GSDN-EF are superior on CiteSeer. However, when there is more noise, GSDN-EF and GSDN-F outperform the other methods on both Cora and CiteSeer. The results show that when there is little noise, the GNNs benefit from their ability to denoise both noisy node features and noisy edges. But when there is more noise, the ability to denoise noisy node features becomes more important, and thus GSDN-EF and GSDN-F become more effective than the other methods.

Analysis.

The performance of GSDN-F and SGCNs (i.e., ChebyNet, GCN, and SGC) benefits from their ability of node feature denoising and smoothing. It requires different tradeoffs between node feature denoising and smoothing to tackle the cases with different noise levels. For the case when there is no noise, GSDN-F and SGCNs are mainly conducting node feature smoothing. When there is little noise, their good performance comes from both node feature denoising and smoothing. However, when there is more noise, denoising starts to have greater contribution to the performance. The results also show the superior performance of GAT, AGNN and GSDN-EF in the noisy edge cases, which demonstrates their effectiveness in denoising edge weights. Their similar performance on denoising edges indicates the linear edge denoising mechanism used in GSDN-EF is comparable with the graph attentions of GAT and AGNN. In addition, we also observe that GSDN-F and GSDN-EF are more effective on graphs with a lot of noise in the node features. Therefore, our results validate our theoretical findings in Section 3 as well as the effectiveness of our models (i.e., GSDN-F and GSDN-EF) designed based on the theoretical results.

6 Conclusions

To better understand the mechanisms of SGCNs and GANNs for node classification, we presented a theoretical framework to explain how and why they work from graph signal denoising (GSD) perspectives. Our framework shows that the results of the graph convolutions of ChebyNet, GCN, and SGC on node features are polynomial approximations to the solution of a GSD problem on graphs with noisy node features. This indicates that spectral graph convolutions work as denoising and smoothing node features. Similarly, GAT and AGNN are implicitly solving a GSD problem on graphs with noisy node features and edge weights. Based on the theoretical results, we designed two new models, GSDN-F and GSDN-EF, which work effectively for graphs with noisy node features and/or noisy edges. We validated our results with experiments on node feature denoising and smoothing and semi-supervised node classification.

References

  • D. Bahdanau, K. Cho, and Y. Bengio (2015) Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Cited by: §1.
  • P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. F. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, Ç. Gülçehre, H. F. Song, A. J. Ballard, J. Gilmer, G. E. Dahl, A. Vaswani, K. R. Allen, C. Nash, V. Langston, C. Dyer, N. Heess, D. Wierstra, P. Kohli, M. Botvinick, O. Vinyals, Y. Li, and R. Pascanu (2018) Relational inductive biases, deep learning, and graph networks. CoRR abs/1806.01261. External Links: 1806.01261 Cited by: §1.
  • P. Berger, G. Hannak, and G. Matz (2017) Graph signal recovery via primal-dual algorithms for total variation minimization. J. Sel. Topics Signal Processing 11 (6), pp. 842–855. Cited by: §2.1.
  • S. Chen, A. Sandryhaila, J. M. F. Moura, and J. Kovacevic (2015) Signal recovery on graphs: variation minimization. IEEE Trans. Signal Processing 63 (17), pp. 4609–4624. Cited by: §2.1.
  • M. Defferrard, X. Bresson, and P. Vandergheynst (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems, December 5-10, 2016, Barcelona, Spain, pp. 3837–3845. Cited by: §1, §1, §2.2, §5.1.
  • M. Fey and J. E. Lenssen (2019) Fast graph representation learning with PyTorch Geometric. In 7th International Conference on Learning Representations, New Orleans, LA, USA, May 6-9, 2019, Workshop on Representation Learning on Graphs and Manifolds, Cited by: §1, §5.1.
  • G. Gordon and R. Tibshirani (2012) Karush-Kuhn-Tucker conditions. Optimization 10 (725/36), pp. 725. Cited by: §3.1.
  • W. L. Hamilton, Z. Ying, and J. Leskovec (2017) Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, 4-9 December 2017, Long Beach, CA, USA, pp. 1024–1034. Cited by: §1, §1, §5.1.
  • D. K. Hammond, P. Vandergheynst, and R. Gribonval (2011) Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis 30 (2), pp. 129–150. Cited by: §2.2.
  • M. Henaff, J. Bruna, and Y. LeCun (2015) Deep convolutional networks on graph-structured data. CoRR abs/1506.05163. External Links: 1506.05163 Cited by: §2.2.
  • T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, Toulon, France, April 24-26, 2017, Conference Track Proceedings, Cited by: §1, §1, §1, §1, §2.2, §2, §5.1, §5.1.
  • Y. LeCun, Y. Bengio, and G. E. Hinton (2015) Deep learning. Nature 521 (7553), pp. 436–444. Cited by: §1.
  • Y. LeCun and Y. Bengio (1995) Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361 (10), pp. 1995. Cited by: §1.
  • Q. Li, Z. Han, and X. Wu (2018) Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the 32nd Conference on Artificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018, pp. 3538–3545. Cited by: §1.
  • H. NT and T. Maehara (2019) Revisiting graph neural networks: all we have is low-pass filters. CoRR abs/1905.09550. External Links: 1905.09550 Cited by: §1.
  • A. Sandryhaila and J. M. F. Moura (2013) Discrete signal processing on graphs. IEEE Trans. Signal Processing 61 (7), pp. 1644–1656. Cited by: §1, §2.2.
  • A. Sandryhaila and J. M. F. Moura (2014) Discrete signal processing on graphs: frequency analysis. IEEE Trans. Signal Processing 62 (12), pp. 3042–3054. Cited by: §2.
  • P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. Eliassi-Rad (2008) Collective classification in network data. AI Magazine 29 (3), pp. 93–106. Cited by: §5.1.
  • K. K. Thekumparampil, C. Wang, S. Oh, and L. Li (2018) Attention-based graph neural network for semi-supervised learning. CoRR abs/1803.03735. External Links: 1803.03735 Cited by: §1, §1, §2.3, §5.1.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, 4-9 December 2017, Long Beach, CA, USA, pp. 5998–6008. Cited by: §1.
  • P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio (2018) Graph attention networks. In 6th International Conference on Learning Representations, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, Cited by: §1, §1, §1, §2.3, §5.1, §5.1.
  • F. Wu, A. H. S. Jr., T. Zhang, C. Fifty, T. Yu, and K. Q. Weinberger (2019a) Simplifying graph convolutional networks. In Proceedings of the 36th International Conference on Machine Learning, 9-15 June 2019, Long Beach, California, USA, pp. 6861–6871. Cited by: §1, §1, §1, §2.2, §2, §5.1.
  • Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu (2019b) A comprehensive survey on graph neural networks. CoRR abs/1901.00596. External Links: 1901.00596 Cited by: §1, §1.
  • K. Xu, W. Hu, J. Leskovec, and S. Jegelka (2019) How powerful are graph neural networks?. In 7th International Conference on Learning Representations, New Orleans, LA, USA, May 6-9, 2019, Conference Track Proceedings, Cited by: §1.
  • Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec (2019) GNNExplainer: generating explanations for graph neural networks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems, 8-14 December 2019, Vancouver, BC, Canada, pp. 9240–9251. Cited by: §1.
  • M. Zitnik and J. Leskovec (2017) Predicting multicellular function through multi-layer tissue networks. Bioinformatics 33 (14), pp. 190–198. Cited by: §5.1.

Appendix A Proofs

A.1 Proof of Proposition 1

Proof.

For ChebyNet, by Eq.3 we have

(21)

where the coefficients have been re-parametrized. Eq.12 and Eq.21 show that the result of the ChebyNet graph convolution on node features is a $K$-order polynomial approximation of the optimal solution of Problem 1.

For SGC, by Eq.5 we have:

(22)

where the constants are absorbed into the re-parametrized coefficients. Therefore, by Eq.12 and Eq.22, the result of SGC on node features is a $K$-order polynomial approximation of the optimal solution of Problem 1. Note that GSDN-F and SGC are similar in the form in which they approximate this optimal solution, but the approximation of GSDN-F is better: SGC does not accurately approximate the coefficients of the Taylor expansion of the optimal solution, whereas GSDN-F directly uses the Taylor expansion as its convolution kernel. The gap between the results of GSDN-F and the optimal solution should thus be smaller than the gap for SGC, and SGC can be viewed as a degenerate form of GSDN-F.
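The approximation argument above can be checked numerically. A minimal sketch, assuming (per our reading of Eq.12) that the optimal solution has the form $(I + cL)^{-1}X$ and that the $K$-order kernel is the truncated Taylor series $\sum_{k=0}^{K}(-c)^k L^k X$; the graph and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10

# Random undirected graph and its symmetric normalized Laplacian.
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T
d = np.maximum(A.sum(axis=1), 1e-12)
L = np.eye(n) - A / np.sqrt(np.outer(d, d))

X = rng.standard_normal((n, 3))
c = 0.3  # eigenvalues of L lie in [0, 2], so c * lambda_max <= 0.6 < 1

# Exact optimal solution of the denoising problem (our assumed form).
exact = np.linalg.solve(np.eye(n) + c * L, X)

def taylor_filter(L, X, c, K):
    """K-order truncation of (I + cL)^{-1} X = sum_{k>=0} (-c L)^k X."""
    out, term = X.copy(), X.copy()
    for _ in range(K):
        term = -c * (L @ term)
        out += term
    return out

errs = [np.linalg.norm(taylor_filter(L, X, c, K) - exact) for K in (1, 3, 6, 20)]
```

The truncation error shrinks geometrically with $K$ as long as $c\,\lambda_{\max} < 1$, which is why a moderate $K$ already approximates the optimal solution closely.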

For GCN, a similar derivation applies with the first-order filter. Therefore, the result of GCN on node features is a first-order polynomial approximation of the optimal solution of Problem 1. ∎

A.2 Proof of Proposition 3

Proof.

When the polynomial approximation order $K$ is large, we obtain:

(23)

where $L = U \Lambda U^{\top}$ is the eigen-decomposition of $L$, $U$ is the matrix of eigenvectors of $L$, $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ is its diagonal matrix of eigenvalues, and $\lambda_1, \dots, \lambda_n$ are the eigenvalues. Then we have:

(24)

Since the eigenvalues of $L$ satisfy $\lambda_i \geq 0$, the frequency response $1/(1 + c\lambda_i)$ is non-increasing in $c$ for every $i$. Then, increasing $c$ will lead to lower variance. Similarly,

(25)

Therefore, increasing $c$ will lead to higher bias. ∎
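The two monotonicity claims in this proof can be illustrated empirically: passing white noise through the filter $(I + cL)^{-1}$ attenuates it more as $c$ grows (lower variance), while the same filter pulls a fixed clean signal further away from its original value (higher bias). A numpy sketch under these assumptions, with an arbitrary random graph and our own choice of constants:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20

# Random undirected graph and its symmetric normalized Laplacian.
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T
d = np.maximum(A.sum(axis=1), 1e-12)
L = np.eye(n) - A / np.sqrt(np.outer(d, d))

def filt(c, v):
    """Apply the closed-form denoising filter (I + cL)^{-1}."""
    return np.linalg.solve(np.eye(n) + c * L, v)

x_clean = rng.standard_normal(n)        # a fixed "true" signal
noise = rng.standard_normal((n, 200))   # 200 white-noise draws

cs = [0.1, 0.5, 1.0, 2.0]
# Variance proxy: mean squared norm of the filtered noise.
variances = [np.mean(np.sum(filt(c, noise) ** 2, axis=0)) for c in cs]
# Bias proxy: distance of the filtered clean signal from the clean signal.
biases = [np.linalg.norm(filt(c, x_clean) - x_clean) for c in cs]
```

In this sketch `variances` decreases and `biases` increases along `cs`, matching the proposition: $c$ trades variance reduction against bias.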

Appendix B Additional Experiments

B.1 Experiments on Other Graphs with Noise

We conducted experiments on two more datasets, Pubmed and Coauthor CS, using the same noisy-graph settings as in Section 5.3. We first normalized the node features and then added Gaussian noise (with the same parameters as in Section 5.3) into the node features, and report the results in Figures 6(a) and 6(d). We randomly added or removed edges at the same noise ratios as in Section 5.3, and report the results in Figures 6(b) and 6(e). We also added Gaussian noise into the node features and randomly added or removed edges simultaneously, and report the results in Figures 6(c) and 6(f).

(a) Pubmed w/ node feature noise
(b) Pubmed w/ edge noise
(c) Pubmed w/ node feature noise and edge noise
(d) Coauthor CS w/ node feature noise
(e) Coauthor CS w/ edge noise
(f) Coauthor CS w/ node feature noise and edge noise
Figure 6: Results of semi-supervised node classification on the Pubmed and Coauthor CS datasets
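The perturbations described above (Gaussian noise on normalized node features; randomly adding and removing edges at a given noise ratio) can be sketched as follows. This is one plausible implementation, not the paper's exact scheme: the function names and the flip policy (remove a fraction of existing edges and add an equal number of absent ones) are our own assumptions:

```python
import numpy as np

def add_feature_noise(X, sigma, rng):
    """Add i.i.d. zero-mean Gaussian noise to (already normalized) features."""
    return X + rng.normal(0.0, sigma, size=X.shape)

def perturb_edges(A, ratio, rng):
    """Remove a `ratio` fraction of the existing edges and add the same
    number of previously absent edges (undirected, no self-loops)."""
    A = A.copy()
    iu, ju = np.triu_indices(A.shape[0], k=1)
    present = np.flatnonzero(A[iu, ju] == 1)
    absent = np.flatnonzero(A[iu, ju] == 0)
    k = min(int(ratio * len(present)), len(absent))
    drop = rng.choice(present, size=k, replace=False)
    add = rng.choice(absent, size=k, replace=False)
    for idx, val in ((drop, 0.0), (add, 1.0)):
        A[iu[idx], ju[idx]] = val
        A[ju[idx], iu[idx]] = val
    return A
```

Because every removal is paired with an addition, the perturbed graph keeps the original number of edges, which makes accuracy curves across noise ratios comparable.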

Figures 6(a) and 6(d) show that GSDN-F performs better than the other models in the noisy node feature cases. The results on Pubmed and Coauthor CS are similar to those on Cora and CiteSeer reported in Figures 5(a) and 5(d): GSDN-F consistently outperforms GCN and SGC on node classification for graphs with noisy node features.

Figures 6(b) and 6(e) show that GSDN-F also obtains superior performance in the noisy edge cases on Pubmed and Coauthor CS, validating that it works well when edges are noisy.

For graphs with both noisy node features and noisy edges, Figures 6(c) and 6(f) show that GSDN-F outperforms the other models in most cases, achieving the best performance under combined feature and edge noise on Pubmed and Coauthor CS.

Thus, the results in Figure 6 demonstrate the effectiveness of GSDN-F on graphs with noisy features and/or noisy edges, similar to the results obtained on Cora and CiteSeer in Section 5.3.

B.2 Parameter Sensitivity Tests

In this experiment, we studied the sensitivity of the hyperparameters $c$ and $K$. The results are reported in Figure 7.

(a) Sensitivity of $c$
(b) Sensitivity of $K$
Figure 7: Results of parameter sensitivity
(a) Original graphs
(b) Graphs with Gaussian noise in node features
Figure 8: Results of parameter sensitivity of $c$

Variance-bias trade-off by $c$.

Figure 7(a) shows that as $c$ increases, the performance of GSDN-F first improves and then degrades dramatically once $c$ is larger than 0.8. The result illustrates that the choice of $c$ can seriously affect node classification performance and validates the variance-bias trade-off by $c$ stated in Proposition 3.

The effect of the polynomial approximation order $K$.

Figure 7(b) shows that the performance of GSDN-F improves as $K$ increases and then becomes stable once $K$ is large enough. This is because the larger the value of $K$, the more accurate the polynomial approximation, and hence the better the performance of GSDN-F.

Parameter sensitivity of $c$.

Here we further study the sensitivity of $c$. We conducted semi-supervised node classification experiments on Cora, CiteSeer, and Pubmed, with and without Gaussian noise in the node features. The results are reported in Figure 8.

Figure 8 shows that as $c$ increases, the performance of GSDN-F first improves dramatically and then degrades slightly once $c$ is larger than 1.2, on both the original graphs and the graphs with Gaussian noise in the node features. The results suggest that setting $c$ to around 1.2 yields good performance for graphs with a lot of noise.