Invariance-Preserving Localized Activation Functions for Graph Neural Networks

03/29/2019
by Luana Ruiz, et al.

Graph signals are signals with an irregular structure that can be described by a graph. Graph neural networks (GNNs) are information processing architectures tailored to graph signals, built from stacked layers that compose graph convolutional filters with nonlinear activation functions. Graph convolutions endow GNNs with invariance to permutations of the graph nodes' labels. In this paper, we consider the design of trainable nonlinear activation functions that account for the structure of the graph. This is accomplished with graph median filters and graph max filters, which mimic linear graph convolutions and are shown to retain the permutation invariance of GNNs. We also discuss the modifications to the backpropagation algorithm needed to train these localized activation functions. The advantages of localized activation function architectures are demonstrated in three numerical experiments: source localization on synthetic graphs, authorship attribution of 19th-century novels, and prediction of movie ratings. In all cases, localized activation functions are shown to improve model capacity.
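To make the idea concrete, below is a minimal NumPy sketch (not the authors' implementation) of one-hop graph max and median activations. The function names, the dense shift matrix S, and the inclusion of each node in its own neighborhood are illustrative assumptions; the filters proposed in the paper are trainable and can reach multi-hop neighborhoods.

import numpy as np

def graph_max_activation(S, x):
    # Localized max activation: each node outputs the maximum of the
    # signal x over its one-hop neighborhood (self included).
    # S is an n x n graph shift (e.g., adjacency) matrix, x an n-vector.
    n = S.shape[0]
    mask = (S != 0) | np.eye(n, dtype=bool)
    return np.array([x[mask[i]].max() for i in range(n)])

def graph_median_activation(S, x):
    # Same neighborhood operation, with the median replacing the max.
    n = S.shape[0]
    mask = (S != 0) | np.eye(n, dtype=bool)
    return np.array([np.median(x[mask[i]]) for i in range(n)])

# Permutation invariance in action: relabeling the nodes permutes both
# S and x consistently, so the output permutes the same way.
S = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x = np.array([1., -2., 3.])
print(graph_max_activation(S, x))     # [1. 3. 3.]
print(graph_median_activation(S, x))  # [-0.5  1.   0.5]

Because max and median operate on the unordered set of neighborhood values, the output at each node is independent of how the nodes are labeled, which is the property the paper exploits.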
