TIDE: Time Derivative Diffusion for Deep Learning on Graphs

12/05/2022
by   Maximilian Krahn, et al.

A prominent paradigm for graph neural networks is the message passing framework, in which information is communicated only between neighboring nodes. The challenge for approaches in this paradigm is to ensure efficient and accurate long-distance communication between nodes, as deep convolutional networks are prone to over-smoothing. In this paper, we present a novel method based on time-derivative graph diffusion (TIDE) with a learnable time parameter. Our approach allows the spatial extent of diffusion to be adapted across different tasks and network channels, thus enabling both medium- and long-distance communication efficiently. Furthermore, we show that our architecture directly enables local message passing and thus inherits the expressive power of local message passing approaches. On widely used graph benchmarks we achieve comparable performance, and on a synthetic mesh dataset we outperform state-of-the-art methods such as GCN and GRAND by a significant margin.
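To make the core idea concrete, here is a minimal sketch of heat-kernel diffusion on a graph with a time parameter `t` controlling the spatial extent of information flow. This is only an illustration of the mechanism the abstract describes, not the TIDE architecture itself: the paper makes `t` learnable (per channel), whereas here `t` is fixed by hand, and the toy graph and function names are our own.

```python
import numpy as np
from scipy.linalg import expm

# Toy graph: a path on 4 nodes, given by its adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A  # combinatorial graph Laplacian

def diffuse(x, t):
    """Heat diffusion on the graph: x(t) = exp(-t * L) @ x(0).

    Small t keeps communication local (like one message passing step);
    large t lets information travel across the whole graph.
    """
    return expm(-t * L) @ x

x0 = np.array([1.0, 0.0, 0.0, 0.0])  # unit impulse at node 0
x_small = diffuse(x0, 0.1)  # small t: signal stays near node 0
x_large = diffuse(x0, 5.0)  # large t: signal spreads graph-wide
```

Because the all-ones vector is in the null space of `L`, diffusion preserves total signal mass while smoothing it out; making `t` learnable lets each channel pick its own trade-off between local detail and long-distance communication.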

