Auto-decoding Graphs

06/04/2020
by Sohil Atul Shah et al.

We present an approach to synthesizing new graph structures from empirically specified distributions. The generative model is an auto-decoder that learns to synthesize graphs from latent codes. The graph synthesis model is learned jointly with an empirical distribution over the latent codes. Graphs are synthesized using self-attention modules that are trained to identify likely connectivity patterns. Graph-based normalizing flows are used to sample latent codes from the distribution learned by the auto-decoder. The resulting model combines accuracy and scalability. On benchmark datasets of large graphs, the presented model outperforms the state of the art by a factor of 1.5 in mean accuracy and average rank across at least three different graph statistics, with a 2x speedup during inference.
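To make the decoding step concrete, here is a minimal NumPy sketch of the auto-decoder idea described above: each training graph owns a free latent code, and a shared self-attention decoder maps a code to pairwise edge probabilities. This is an illustrative toy, not the authors' implementation; the class and parameter names (`GraphAutoDecoder`, `Wq`, `Wk`, `Wv`) are hypothetical, and the joint training of the latent distribution and the normalizing-flow sampler is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class GraphAutoDecoder:
    """Toy auto-decoder: a shared self-attention module turns a
    per-graph latent code into an edge-probability matrix."""

    def __init__(self, n_graphs, n_nodes, d=16):
        # Per-graph latent codes, which would be optimized jointly
        # with the decoder weights during training.
        self.codes = rng.normal(size=(n_graphs, d))
        # Shared node embeddings and attention projections.
        self.node_emb = rng.normal(size=(n_nodes, d))
        self.Wq = rng.normal(size=(d, d)) / np.sqrt(d)
        self.Wk = rng.normal(size=(d, d)) / np.sqrt(d)
        self.Wv = rng.normal(size=(d, d)) / np.sqrt(d)

    def decode(self, z):
        h = self.node_emb + z                    # condition every node on the latent code
        q, k, v = h @ self.Wq, h @ self.Wk, h @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(k.shape[1]))  # self-attention over node pairs
        h = attn @ v
        logits = h @ h.T                         # pairwise connectivity scores
        logits = (logits + logits.T) / 2         # enforce symmetry (undirected graph)
        probs = 1 / (1 + np.exp(-logits))
        np.fill_diagonal(probs, 0.0)             # no self-loops
        return probs

    def sample(self, z, thresh=0.5):
        return (self.decode(z) > thresh).astype(int)

dec = GraphAutoDecoder(n_graphs=10, n_nodes=6)
A = dec.sample(dec.codes[0])
print(A.shape)  # (6, 6) binary adjacency matrix
```

In the full model, new latent codes would be drawn from the learned empirical distribution via graph-based normalizing flows rather than taken from the training set as done here.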

Related research

05/30/2019: Graph Normalizing Flows
We introduce graph normalizing flows: a new, reversible graph neural net...

06/09/2020: Graph-Aware Transformer: Is Attention All Graphs Need?
Graphs are the natural data structure to represent relational and struct...

04/13/2022: LDPC codes: comparing cluster graphs to factor graphs
We present a comparison study between a cluster and factor graph represe...

06/13/2023: Vector-Quantized Graph Auto-Encoder
In this work, we address the problem of modeling distributions of grap...

07/18/2021: GraphGen-Redux: a Fast and Lightweight Recurrent Model for Labeled Graph Generation
The problem of labeled graph generation is gaining attention in the Deep...

05/29/2021: Learning Graphon Autoencoders for Generative Graph Modeling
Graphon is a nonparametric model that generates graphs with arbitrary si...

06/29/2021: Few-Shot Electronic Health Record Coding through Graph Contrastive Learning
Electronic health record (EHR) coding is the task of assigning ICD codes...
