New Insights into Graph Convolutional Networks using Neural Tangent Kernels

10/08/2021
by Mahalakshmi Sabanayagam et al.

Graph Convolutional Networks (GCNs) have emerged as powerful tools for learning on network-structured data. Although empirically successful, GCNs exhibit certain behaviour that lacks a rigorous explanation: for instance, their performance degrades significantly with increasing network depth, whereas it improves only marginally with depth when skip connections are used. This paper focuses on semi-supervised learning on graphs and explains the above observations through the lens of Neural Tangent Kernels (NTKs). We derive the NTKs corresponding to infinitely wide GCNs, with and without skip connections. Subsequently, we use the derived NTKs to show that, with suitable normalisation, network depth does not always drastically reduce the performance of GCNs, a fact that we also validate through extensive simulation. Furthermore, we propose the NTK as an efficient 'surrogate model' for GCNs: being a hyper-parameter-free deterministic kernel, it does not suffer from performance fluctuations due to hyper-parameter tuning. The efficacy of this idea is demonstrated through a comparison of different skip connections for GCNs using the surrogate NTKs.
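As a rough illustration of the kind of kernel the paper derives, the sketch below implements the generic infinite-width NTK recursion adapted to graph convolutions in NumPy. It is a minimal sketch under stated assumptions: a ReLU GCN without skip connections, the variance-preserving c_sigma = 2 rescaling common in NTK work, and the graph shift S applied at every layer. The helper names (`relu_kernels`, `gcn_ntk`) are hypothetical; the paper's exact derivation and normalisation may differ.

```python
import numpy as np

def relu_kernels(cov):
    """Closed-form Gaussian expectations for ReLU (arc-cosine kernels,
    Cho & Saul 2009), applied entrywise to a covariance matrix `cov`,
    with the c_sigma = 2 rescaling folded in."""
    std = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    outer = np.outer(std, std)
    rho = np.clip(cov / outer, -1.0, 1.0)
    theta = np.arccos(rho)
    # E[relu(u) relu(v)] and E[relu'(u) relu'(v)], rescaled by c_sigma = 2
    k1 = outer * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi
    k0 = (np.pi - theta) / np.pi
    return k1, k0

def gcn_ntk(S, X, depth):
    """NTK of an infinitely wide ReLU GCN (hypothetical sketch).

    S     : normalised adjacency / graph shift, shape (n, n)
    X     : node features, shape (n, d)
    depth : number of graph-convolution layers
    """
    # Covariance of the first layer's pre-activations: S X X^T S^T
    sigma = S @ (X @ X.T) @ S.T
    ntk = sigma.copy()  # contribution of the first layer's weights
    for _ in range(depth - 1):
        k1, k0 = relu_kernels(sigma)
        sigma = S @ k1 @ S.T                # propagate the covariance
        ntk = S @ (ntk * k0) @ S.T + sigma  # propagate the NTK
    return ntk
```

Used as a surrogate model, the resulting kernel can be plugged directly into kernel regression on the labelled nodes, e.g. `ntk[test][:, train] @ np.linalg.solve(ntk[train][:, train], y_train)`, with no hyper-parameters to tune beyond the architecture choices themselves.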


Related research

- On Filter Size in Graph Convolutional Networks (11/23/2018)
  Recently, many researchers have been focusing on the definition of neura...

- Graph Fairing Convolutional Networks for Anomaly Detection (10/20/2020)
  Graph convolution is a fundamental building block for many deep neural n...

- A Kernel Perspective of Skip Connections in Convolutional Networks (11/27/2022)
  Over-parameterized residual networks (ResNets) are amongst the most succ...

- Determinate Node Selection for Semi-supervised Classification Oriented Graph Convolutional Networks (01/11/2023)
  Graph Convolutional Networks (GCNs) have been proved successful in the f...

- Skip Vectors for RDF Data: Extraction Based on the Complexity of Feature Patterns (01/06/2022)
  The Resource Description Framework (RDF) is a framework for describing m...

- SIRe-Networks: Skip Connections over Interlaced Multi-Task Learning and Residual Connections for Structure Preserving Object Classification (10/06/2021)
  Improving existing neural network architectures can involve several desi...

- Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again (10/14/2022)
  Despite the enormous success of Graph Convolutional Networks (GCNs) in m...
