On the power of graph neural networks and the role of the activation function

07/10/2023
by Sammy Khalife et al.

In this article we present new results about the expressivity of Graph Neural Networks (GNNs). We prove that for any GNN with piecewise polynomial activations whose architecture size does not grow with the size of the input graphs, there exists a pair of non-isomorphic rooted trees of depth two such that the GNN cannot distinguish their root vertices, regardless of the number of iterations performed. The proof relies on tools from the algebra of symmetric polynomials. In contrast, it was already known that unbounded GNNs (those whose size is allowed to grow with the graph sizes) with piecewise polynomial activations can distinguish these vertices in only two iterations. Our results therefore imply a strict separation between bounded- and unbounded-size GNNs, answering an open question formulated by [Grohe, 2021]. We next prove that if one allows activations that are not piecewise polynomial, then in two iterations a single-neuron perceptron can distinguish the root vertices of any pair of non-isomorphic trees of depth two (the result holds for activations such as the sigmoid, the hyperbolic tangent, and others). This shows how drastically the power of graph neural networks can change when the activation function is changed. The proof of this result utilizes the Lindemann-Weierstrass theorem from transcendental number theory.
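
To give some intuition for why the activation function matters, here is a minimal illustrative sketch (not the paper's construction). Loosely, after two iterations the root embedding of a depth-two tree with constant initial features is determined by the multiset of its children's degrees, aggregated as a sum of activations. The helper name aggregate, the particular activations poly and sigmoid, and the degree multisets A and B below are hypothetical choices made only for illustration: a quadratic activation "sees" only the first two power sums of the multiset, so two distinct multisets sharing those power sums collide, while a sigmoid readout separates them.

import math

# Two distinct degree multisets with equal first and second power sums:
# 1+5+6 = 2+3+7 = 12 and 1+25+36 = 4+9+49 = 62.
A = [1, 5, 6]
B = [2, 3, 7]

def aggregate(degrees, act):
    # Sum-aggregation of an activation applied to each child degree.
    return sum(act(d) for d in degrees)

poly = lambda x: 0.3 * x**2 + 0.7 * x + 1.0      # a quadratic (piecewise polynomial) activation
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))   # a non-polynomial activation

print(aggregate(A, poly), aggregate(B, poly))        # 30.0 30.0   -> indistinguishable
print(aggregate(A, sigmoid), aggregate(B, sigmoid))  # ~2.72 ~2.83 -> distinguishable

Of course, the paper's impossibility result is far stronger: it covers every bounded-size architecture with piecewise polynomial activations and any number of iterations, whereas this snippet only exhibits a collision for one fixed quadratic readout.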
