Stability of Graph Neural Networks to Relative Perturbations
Graph neural networks (GNNs), consisting of a cascade of layers applying a graph convolution followed by a pointwise nonlinearity, have become a powerful architecture for processing signals supported on graphs. Graph convolutions (and thus GNNs) rely heavily on knowledge of the graph for operation. However, in many practical cases the graph shift operator (GSO) is not known and needs to be estimated, or may change from training time to testing time. In this paper, we set out to study the effect that a change in the underlying graph topology supporting the signal has on the output of a GNN. We prove that graph convolutions with integral Lipschitz filters lead to GNNs whose output change is bounded by the size of the relative change in the topology. Furthermore, we leverage this result to show that a main reason for the success of GNNs is that they are stable architectures capable of discriminating features at high eigenvalues, a feat that cannot be achieved by linear graph filters (which can be either stable or discriminative, but not both). Finally, we comment on the use of this result to train GNNs with increased stability and run experiments on movie recommendation systems.
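The architecture described in the abstract can be sketched numerically: a graph convolution is a polynomial in the GSO applied to the signal, and a GNN layer follows it with a pointwise nonlinearity. The sketch below (with hypothetical filter taps and a small symmetric perturbation matrix `E`) illustrates the relative-perturbation model S_hat = S + ES + SE and shows that a small relative change in the topology yields a correspondingly small change in the layer output; it is an illustration, not the paper's implementation.

```python
import numpy as np

def graph_convolution(S, x, h):
    """Polynomial graph filter: y = sum_k h[k] * S^k @ x."""
    y = np.zeros_like(x)
    Sk_x = x.copy()          # holds S^k @ x, starting at k = 0
    for hk in h:
        y += hk * Sk_x
        Sk_x = S @ Sk_x
    return y

def gnn_layer(S, x, h):
    """One GNN layer: graph convolution followed by pointwise ReLU."""
    return np.maximum(graph_convolution(S, x, h), 0.0)

# Toy example: adjacency matrix of a 4-node cycle as the GSO.
S = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 2.0, 0.5])   # graph signal
h = [0.5, 0.3, 0.1]                   # filter taps (hypothetical values)

y = gnn_layer(S, x, h)

# Relative perturbation model: S_hat = S + (E @ S + S @ E), E small and symmetric.
rng = np.random.default_rng(0)
E = 0.01 * rng.standard_normal((4, 4))
E = 0.5 * (E + E.T)
S_hat = S + E @ S + S @ E

y_hat = gnn_layer(S_hat, x, h)
print(np.linalg.norm(y_hat - y))  # small, on the order of the perturbation size
```

The output difference scales with the size of `E`, which is the qualitative behavior the paper's stability bound formalizes for integral Lipschitz filters.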