Graphs, Convolutions, and Neural Networks

by Fernando Gama et al.

Network data can be conveniently modeled as a graph signal, where data values are assigned to the nodes of a graph that describes the underlying network topology. Successful learning from network data builds on methods that effectively exploit this graph structure. In this work, we review graph convolutional filters, which are linear, local, and distributed operations that adequately leverage the graph structure. We then discuss graph neural networks (GNNs), nonlinear learning architectures built upon graph convolutional filters that have been shown to be powerful. We show that GNNs are permutation equivariant and stable to changes in the underlying graph topology, properties that allow them to scale and transfer. We also introduce GNN extensions based on edge-varying and autoregressive moving average graph filters and discuss their properties. Finally, we study the use of GNNs for learning decentralized controllers for robot swarms and for addressing the recommender system problem.
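As a concrete illustration of the ideas above, the following is a minimal sketch (not the paper's implementation) of a graph convolutional filter, written as a polynomial in a graph shift operator S applied to a graph signal x, followed by a pointwise nonlinearity to form a single-layer GNN. The graph, signal, and filter taps are illustrative assumptions.

```python
import numpy as np

def graph_convolution(S, x, h):
    """Graph convolutional filter: y = sum_k h[k] * S^k @ x.

    Each multiplication by S is a local, distributed operation:
    node i only aggregates values from its neighbors.
    """
    y = np.zeros_like(x, dtype=float)
    z = x.astype(float)  # starts as S^0 x = x
    for hk in h:
        y += hk * z
        z = S @ z  # shift the signal one more hop over the graph
    return y

# Toy example: a 4-node cycle graph, with the adjacency matrix
# used as the graph shift operator S (an assumption for illustration).
S = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
x = np.array([1.0, 0.0, 0.0, 0.0])  # impulse at node 0
h = [0.5, 0.25, 0.125]              # filter taps (hypothetical values)

y = graph_convolution(S, x, h)

# A single-layer GNN: graph convolution followed by a ReLU nonlinearity.
gnn_out = np.maximum(graph_convolution(S, x, h), 0.0)
```

Because the filter is a polynomial in S, relabeling the nodes with a permutation P (replacing S by P S Pᵀ and x by P x) simply permutes the output to P y, which is the permutation equivariance property discussed in the abstract.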
