A Gaze into the Internal Logic of Graph Neural Networks, with Logic

08/05/2022
by   Paul Tarau, et al.

Graph Neural Networks share several key relational inference mechanisms with Logic Programming. The datasets on which they are trained and evaluated can be seen as database facts containing ground terms. This makes it possible to model their inference mechanisms with equivalent logic programs, both to better understand how they propagate information between the entities involved in the machine learning process and to infer limits on what can be learned from a given dataset and how well it might generalize to unseen test data.

This leads us to the key idea of this paper: using a logic program to model the information flows involved in learning to infer, from the link structure of a graph and the information content of its nodes, properties of new nodes, given their known connections to nodes with possibly similar properties. The problem is known as graph node property prediction, and our approach consists in emulating, with a Prolog program, the key information propagation steps of a Graph Neural Network's training and inference stages.

We test our approach on the ogbn-arxiv node property inference benchmark. To infer class labels for nodes representing papers in a citation network, we distill the dependency trees of the text associated with each node into directed acyclic graphs that we encode as ground Prolog terms. Together with the set of their references to other papers, these become facts in a database on which we reason with a Prolog program that mimics the information propagation of graph neural networks predicting node properties. In the process, we invent ground term similarity relations that help infer labels in the test set by propagating node properties from similar nodes in the training set, and we evaluate their effectiveness against that of the graph's link structure. Finally, we implement explanation generators that unveil performance upper bounds inherent to the dataset.
As a practical outcome, we obtain a logic program that, when seen as a machine learning algorithm, performs close to the state of the art on the node property prediction benchmark.
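The core propagation step the abstract describes, inferring a test node's label from the labels of the training nodes it is linked to in the citation graph, can be illustrated with a minimal sketch. The paper itself does this with a Prolog program over ground-term facts; the Python analogue below uses a toy graph with a simple majority vote over labeled neighbors, and all node names, labels, and the voting rule are illustrative assumptions, not the paper's actual relations:

```python
from collections import Counter

# Hypothetical toy citation graph: paper -> set of linked papers.
# Names and labels are made up for illustration (not ogbn-arxiv data).
edges = {
    "p1": {"p2", "p3"},
    "p2": {"p1", "p3"},
    "p3": {"p1", "p2", "p4"},
    "p4": {"p3"},
}
# Labels known for the training nodes only; p4 plays the test node.
train_labels = {"p1": "cs.LG", "p2": "cs.LG", "p3": "cs.CL"}

def predict(node, edges, train_labels):
    """Predict a node's label by majority vote over its labeled
    neighbors, mimicking one step of propagating node properties
    along the graph's link structure."""
    votes = Counter(
        train_labels[nb] for nb in edges.get(node, ()) if nb in train_labels
    )
    return votes.most_common(1)[0][0] if votes else None

print(predict("p4", edges, train_labels))  # p4's only neighbor p3 is cs.CL
```

The paper's similarity relations over ground terms would add further "edges" between textually similar nodes; in this sketch that would amount to extending `edges` with similarity links before voting.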


