Graph autoencoder with constant dimensional latent space

01/28/2022
by Adam Małkowski, et al.

Invertible transformation of large graphs into constant-dimensional vectors (embeddings) remains a challenge. In this paper we address it with two recursive neural networks: an encoder and a decoder. The encoder network transforms embeddings of subgraphs into embeddings of larger subgraphs, and eventually into the embedding of the input graph; the decoder does the opposite. The dimension of the embeddings is constant regardless of the size of the (sub)graphs. Simulation experiments presented in this paper confirm that our proposed graph autoencoder can handle graphs with thousands of vertices.
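To make the idea concrete, below is a minimal sketch of a recursive encoder/decoder pair with a constant latent dimension, assuming PyTorch and a pairwise (binary) merge order over subgraph embeddings. The names (RecursiveEncoder, RecursiveDecoder, EMB_DIM) are hypothetical illustrations, not the authors' actual architecture.

```python
# Minimal sketch (not the paper's code): a fixed-size vector represents a
# (sub)graph of any size; the encoder merges two subgraph embeddings into
# one of the same dimension, the decoder splits one back into two.
import torch
import torch.nn as nn

EMB_DIM = 128  # constant latent dimension, independent of graph size


class RecursiveEncoder(nn.Module):
    """Merge the embeddings of two subgraphs into one embedding
    of the same, constant dimension."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.merge = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, left, right):
        return self.merge(torch.cat([left, right], dim=-1))


class RecursiveDecoder(nn.Module):
    """Invert the merge: split one embedding back into two
    subgraph embeddings of the same dimension."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.split = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2 * dim))

    def forward(self, parent):
        out = self.split(parent)
        return out[..., :EMB_DIM], out[..., EMB_DIM:]


# Encoding: start from per-vertex embeddings and fold them pairwise
# until a single EMB_DIM vector remains.
encoder, decoder = RecursiveEncoder(), RecursiveDecoder()
leaves = [torch.randn(EMB_DIM) for _ in range(8)]  # toy vertex embeddings
while len(leaves) > 1:
    leaves = [encoder(leaves[i], leaves[i + 1])
              for i in range(0, len(leaves), 2)]
graph_embedding = leaves[0]  # same size regardless of vertex count
left, right = decoder(graph_embedding)  # first step of the inverse pass
```

In this sketch the decoder would be applied recursively until per-vertex embeddings are recovered, and training would minimize a reconstruction loss between the input graph and its decoding.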
