Graph Masked Autoencoder

02/17/2022
by Hongxu Chen, et al.

Transformers have achieved state-of-the-art performance in learning graph representations. However, applying transformers to real-world scenarios remains challenging because deep transformers are hard to train from scratch and their memory consumption is large. To address these two challenges, we propose Graph Masked Autoencoders (GMAE), a self-supervised model for learning graph representations, in which vanilla graph transformers serve as both the encoder and the decoder. GMAE takes partially masked graphs as input and reconstructs the features of the masked nodes. We adopt an asymmetric encoder-decoder design, where the encoder is a deep graph transformer and the decoder is a shallow graph transformer. The masking mechanism and the asymmetric design make GMAE more memory-efficient than conventional graph transformers. We show that, compared with training from scratch, a graph transformer pre-trained with GMAE achieves much better performance after fine-tuning. We also show that, when GMAE serves as a conventional self-supervised graph representation model with an SVM as the downstream graph classifier, it achieves state-of-the-art performance on 5 of the 7 benchmark datasets.
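To make the described design concrete, below is a minimal PyTorch sketch of a GMAE-style masked graph autoencoder, not the authors' implementation. The class name `GMAESketch`, the mask ratio of 0.5, and the layer counts are illustrative assumptions, and plain `nn.TransformerEncoder` blocks stand in for the paper's graph transformers (whose graph-structure attention encodings are omitted here for brevity). It shows the three ingredients the abstract names: masking node features, encoding only the visible nodes, and a deep encoder paired with a shallow decoder that reconstructs the masked features.

```python
# Hypothetical sketch of a GMAE-style masked graph autoencoder.
# Plain TransformerEncoder layers stand in for graph transformer blocks.
import torch
import torch.nn as nn


class GMAESketch(nn.Module):
    def __init__(self, feat_dim=32, hidden=64, enc_layers=6, dec_layers=1,
                 nhead=4, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(feat_dim, hidden)
        enc_layer = nn.TransformerEncoderLayer(hidden, nhead, batch_first=True)
        dec_layer = nn.TransformerEncoderLayer(hidden, nhead, batch_first=True)
        # Asymmetric design: deep encoder, shallow decoder.
        self.encoder = nn.TransformerEncoder(enc_layer, enc_layers)
        self.decoder = nn.TransformerEncoder(dec_layer, dec_layers)
        # Learnable placeholder inserted at masked positions for the decoder.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, hidden))
        self.head = nn.Linear(hidden, feat_dim)  # reconstruct node features

    def forward(self, x):
        # x: (batch, num_nodes, feat_dim) node feature matrix.
        b, n, _ = x.shape
        num_mask = int(n * self.mask_ratio)
        perm = torch.rand(b, n, device=x.device).argsort(dim=1)
        masked_idx, visible_idx = perm[:, :num_mask], perm[:, num_mask:]

        # The encoder sees only the visible nodes; this is where the
        # memory saving over a full-graph transformer comes from.
        visible = torch.gather(
            self.embed(x), 1,
            visible_idx.unsqueeze(-1).expand(-1, -1, self.embed.out_features))
        z = self.encoder(visible)

        # Decoder input: encoded visible nodes scattered back into place,
        # with mask tokens filling the masked positions.
        full = self.mask_token.expand(b, n, -1).clone()
        full.scatter_(1, visible_idx.unsqueeze(-1).expand_as(z), z)
        recon = self.head(self.decoder(full))

        # Reconstruction loss is computed only on the masked nodes.
        target = torch.gather(
            x, 1, masked_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        pred = torch.gather(
            recon, 1, masked_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        return nn.functional.mse_loss(pred, target)


# Usage on random data with fixed-size graphs (real graphs would need
# padding masks and structural encodings):
model = GMAESketch()
x = torch.randn(8, 20, 32)  # a batch of 8 graphs with 20 nodes each
loss = model(x)
loss.backward()
```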


Related research

09/15/2022 · Stateful Memory-Augmented Transformers for Dialogue Modeling
Transformer encoder-decoder models have shown impressive performance in ...

09/13/2022 · SeRP: Self-Supervised Representation Learning Using Perturbed Point Clouds
We present SeRP, a framework for Self-Supervised Learning of 3D point cl...

11/01/2021 · Transformers for prompt-level EMA non-response prediction
Ecological Momentary Assessments (EMAs) are an important psychological d...

06/15/2023 · Fast Training of Diffusion Models with Masked Transformers
We propose an efficient approach to train large diffusion models with ma...

11/25/2022 · BatmanNet: Bi-branch Masked Graph Transformer Autoencoder for Molecular Representation
Although substantial efforts have been made using graph neural networks ...

03/06/2023 · ST-KeyS: Self-Supervised Transformer for Keyword Spotting in Historical Handwritten Documents
Keyword spotting (KWS) in historical documents is an important tool for ...

04/13/2023 · Remote Sensing Change Detection With Transformers Trained from Scratch
Current transformer-based change detection (CD) approaches either employ...
