Deep Lagrangian Constraint-based Propagation in Graph Neural Networks
Several real-world applications are characterized by data that exhibit a complex structure that can be represented using graphs. The popularity of deep learning techniques has renewed interest in neural architectures able to process these patterns, inspired by the Graph Neural Network (GNN) model. GNNs encode the state of the graph nodes by means of an iterative diffusion procedure that, during the learning stage, must be computed at every epoch until the fixed point of a learnable state transition function is reached, propagating information among neighbouring nodes. We propose a novel approach to learning in GNNs based on constrained optimization in the Lagrangian framework. Learning both the transition function and the node states is the outcome of a joint process, in which the state convergence procedure is implicitly expressed by a constraint satisfaction mechanism, avoiding iterative epoch-wise procedures and network unfolding. Our computational structure searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables, and Lagrange multipliers. This process is further enhanced by multiple layers of constraints that accelerate the diffusion process. An experimental analysis shows that the proposed approach compares favourably with popular models on several benchmarks.
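The saddle-point search the abstract describes can be sketched in a few lines: node states, transition weights, and Lagrange multipliers are all treated as free variables, with gradient descent on the primal variables (states, weights) and gradient ascent on the multipliers enforcing the fixed-point constraint G(X, W) = X − f(X, W) = 0. The sketch below is a minimal toy illustration, not the paper's implementation: the transition f(X) = tanh(A X W), the linear readout, the ring graph, all hyperparameters, and the added quadratic penalty on the constraint (an augmented-Lagrangian-style stabilization, which is our implementation choice and not part of the paper's formulation) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny ring graph (6 nodes), row-normalized adjacency (assumed toy setup).
n, d, out = 6, 4, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0
A /= A.sum(1, keepdims=True)

W = rng.normal(0, 0.5, (d, d))    # transition weights (primal)
B = rng.normal(0, 0.5, (d, out))  # readout weights (primal)
X = rng.normal(0, 1.0, (n, d))    # free node-state variables (primal)
Lam = np.zeros((n, d))            # Lagrange multipliers (dual)
Y = rng.normal(0, 1.0, (n, out))  # toy supervised targets

def residual(X, W):
    # Constraint G(X, W) = X - tanh(A X W): zero exactly at the GNN fixed point.
    return X - np.tanh(A @ X @ W)

lr, rho, steps = 0.05, 1.0, 2000
res0 = np.linalg.norm(residual(X, W))
for _ in range(steps):
    T = np.tanh(A @ X @ W)
    S = 1.0 - T**2            # derivative of tanh at the pre-activation
    G = X - T                 # constraint violation per node / state dim
    E = X @ B - Y             # readout error
    M = Lam + rho * G         # multiplier + quadratic-penalty term (stabilizer)
    # Gradients of L = 0.5*||E||^2 + <Lam, G> + 0.5*rho*||G||^2
    gX = E @ B.T + M - A.T @ (M * S) @ W.T
    gW = -(A @ X).T @ (M * S)
    gB = X.T @ E
    # Descent on the primal variables, ascent on the multipliers:
    # a first-order search for a saddle point of the Lagrangian.
    X -= lr * gX
    W -= lr * gW
    B -= lr * gB
    Lam += lr * G
res1 = np.linalg.norm(residual(X, W))
print(res0, res1)
```

Note how the fixed-point iteration of classic GNN training never appears: the constraint violation shrinks as a side effect of the joint descent/ascent dynamics, so state convergence and weight learning happen in a single process rather than in nested loops.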