Graph2Graph Learning with Conditional Autoregressive Models

by Guan Wang et al.

We present a graph neural network model for solving graph-to-graph learning problems. Most deep learning on graphs addresses "simple" tasks such as graph classification or regressing real-valued graph properties. For such tasks, the intermediate representations of the data need only preserve the structure required for the output, i.e., keep classes separated or maintain the ordering indicated by the regressor. However, a number of learning tasks, such as regressing graph-valued output, generative models, or graph autoencoders, aim to predict a graph-structured output. To succeed at these, the learned representations must preserve far more structure. We present a conditional autoregressive model for graph-to-graph learning and illustrate its representational capabilities via experiments on challenging subgraph prediction tasks from graph algorithmics; as a graph autoencoder for reconstruction and visualization; and on pretraining representations that allow graph classification with limited labeled data.
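To make the conditional autoregressive idea concrete, here is a minimal toy sketch: an input graph is encoded into a summary vector, and an output graph is then generated edge by edge, with each edge decision conditioned on both the input encoding and the partially generated output. The `encode_graph` and `edge_logit` functions below are hypothetical stand-ins (simple hand-written statistics and a linear score), not the paper's actual GNN encoder or decoder.

```python
import math
import random

def encode_graph(adj):
    # Toy "encoder": summary statistics of the input graph
    # (assumption; the paper uses a learned GNN encoder instead).
    n = len(adj)
    degrees = [sum(row) for row in adj]
    return [n, sum(degrees) / 2, max(degrees)]

def edge_logit(h, generated, i, j):
    # Toy conditional scorer: depends on the input encoding h and on
    # the degrees in the partially generated output graph, so each
    # edge decision is conditioned on earlier decisions.
    deg_i = sum(generated[i])
    deg_j = sum(generated[j])
    return 0.1 * h[1] - 0.5 * (deg_i + deg_j)

def generate(adj_in, n_out, rng):
    # Autoregressively sample an undirected output graph with n_out
    # nodes, one edge slot at a time, conditioned on the input graph.
    h = encode_graph(adj_in)
    out = [[0] * n_out for _ in range(n_out)]
    for i in range(n_out):
        for j in range(i):
            p = 1.0 / (1.0 + math.exp(-edge_logit(h, out, i, j)))
            if rng.random() < p:
                out[i][j] = out[j][i] = 1
    return out

# Example: condition on a triangle, sample a 4-node output graph.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
sampled = generate(triangle, 4, random.Random(0))
```

In a trained model the edge probabilities would come from a neural network, but the control flow, encode once, then loop over edge slots feeding back the partial output, is the essence of conditional autoregressive graph generation.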



