
Improving the Long-Range Performance of Gated Graph Neural Networks
Many popular variants of graph neural networks (GNNs) that are capable o...

Graph Neural Networks Inspired by Classical Iterative Algorithms
Despite the recent success of graph neural networks (GNN), common archit...

Graph Neural Networks for Improved El Niño Forecasting
Deep learning-based models have recently outperformed state-of-the-art s...

What graph neural networks cannot learn: depth vs width
This paper studies the capacity limits of graph neural networks (GNN). R...

Stochastic Graph Neural Networks
Graph neural networks (GNNs) model nonlinear representations in graph da...

GRAND: Graph Neural Diffusion
We present Graph Neural Diffusion (GRAND) that approaches deep learning ...

Deep Lagrangian Constraint-based Propagation in Graph Neural Networks
Several real-world applications are characterized by data that exhibit a...

Implicit Graph Neural Networks
Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data. Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in underlying graphs. To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicitly defined "state" vectors. We use the Perron-Frobenius theory to derive sufficient conditions that ensure well-posedness of the framework. Leveraging implicit differentiation, we derive a tractable projected gradient descent method to train the framework. Experiments on a comprehensive range of tasks show that IGNNs consistently capture long-range dependencies and outperform the state-of-the-art GNN models.
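The equilibrium idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a single implicit layer of the form X = ReLU(A X W + U B) and solves for the state X by plain fixed-point iteration. The matrix names (A for the normalized adjacency, U for node features, W and B for weights) and the contraction condition on the weight norms are illustrative assumptions standing in for the paper's Perron-Frobenius well-posedness condition.

```python
import numpy as np

def ignn_fixed_point(A, U, W, B, tol=1e-6, max_iter=200):
    """Solve X = ReLU(A @ X @ W + U @ B) by fixed-point iteration.

    A: (n, n) normalized adjacency matrix (hypothetical normalization)
    U: (n, d) input node features
    W: (h, h) state-to-state weights, B: (d, h) input weights
    Iteration converges when the map is a contraction, e.g. when
    ||A|| * ||W|| < 1 (a stand-in for the paper's sufficient condition).
    """
    n, h = A.shape[0], W.shape[0]
    X = np.zeros((n, h))  # start from the zero state
    for _ in range(max_iter):
        X_new = np.maximum(A @ X @ W + U @ B, 0.0)  # ReLU update
        if np.linalg.norm(X_new - X) < tol:         # equilibrium reached
            return X_new
        X = X_new
    return X

# Toy example: a 3-node path graph with scaled weights so the
# update map contracts and the iteration reaches an equilibrium.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float) / 2.0
rng = np.random.default_rng(0)
U = rng.standard_normal((3, 4))
W = 0.2 * rng.standard_normal((5, 5))  # small norm => contraction
B = rng.standard_normal((4, 5))
X = ignn_fixed_point(A, U, W, B)
```

At the returned equilibrium, X satisfies the defining equation up to the tolerance, which is the "implicitly defined state" the abstract refers to; training would then differentiate through this equilibrium via implicit differentiation rather than unrolling the loop.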