MaskGAE: Masked Graph Modeling Meets Graph Autoencoders

05/20/2022
by   Jintang Li, et al.
We present Masked Graph Autoencoder (MaskGAE), a self-supervised learning framework for graph-structured data. Unlike previous graph autoencoders (GAEs), MaskGAE adopts masked graph modeling (MGM) as a principled pretext task: masking a portion of edges and attempting to reconstruct the missing part from the partially visible, unmasked graph structure. To understand whether MGM helps GAEs learn better representations, we provide both theoretical and empirical evidence justifying the benefits of this pretext task. Theoretically, we establish connections between GAEs and contrastive learning, showing that MGM significantly improves the self-supervised learning scheme of GAEs. Empirically, we conduct extensive experiments on a number of benchmark datasets, demonstrating the superiority of MaskGAE over several state-of-the-art methods on both link prediction and node classification tasks. Our code is publicly available at <https://github.com/EdisonLeeeee/MaskGAE>.
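The edge-masking step at the core of the MGM pretext task can be sketched as follows. This is a minimal illustration, not code from the MaskGAE repository: it assumes a simple edge-list graph representation, and the function name and parameters are illustrative.

```python
import random

def mask_edges(edges, mask_ratio=0.3, seed=0):
    """Split an edge list into visible and masked subsets.

    In masked graph modeling, the encoder only sees the visible edges,
    and the model is trained to reconstruct the masked edges from the
    representations learned on the partially observed structure.
    Note: illustrative sketch, not the MaskGAE implementation.
    """
    rng = random.Random(seed)
    shuffled = edges[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_masked = int(len(shuffled) * mask_ratio)
    masked = shuffled[:n_masked]     # reconstruction targets
    visible = shuffled[n_masked:]    # input to the encoder
    return visible, masked

# Toy graph: 5 edges; with mask_ratio=0.4, two edges are held out.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
visible, masked = mask_edges(edges, mask_ratio=0.4)
```

A full training loop would then encode the graph restricted to `visible`, score candidate edges (e.g., by an inner product of node embeddings), and minimize a reconstruction loss over `masked` against negative samples.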

Related research:

- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs (01/07/2022). We introduce a novel masked graph autoencoder (MGAE) framework to perfor...
- GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction (08/18/2023). Self-supervised learning with masked autoencoders has recently gained po...
- How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders (10/15/2022). Masked Autoencoders (MAE) based on a reconstruction task have risen to b...
- Multi-Task Graph Autoencoders (11/07/2018). We examine two fundamental tasks associated with graph representation le...
- ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings (05/23/2023). We propose ConGraT (Contrastive Graph-Text pretraining), a general, self-...
- GraphMAE: Self-Supervised Masked Graph Autoencoders (05/22/2022). Self-supervised learning (SSL) has been extensively explored in recent y...
- Towards Better Dynamic Graph Learning: New Architecture and Unified Library (03/23/2023). We propose DyGFormer, a new Transformer-based architecture for dynamic g...
