Latent Augmentation For Better Graph Self-Supervised Learning

06/26/2022
by Jiashun Cheng, et al.

Graph self-supervised learning has been widely employed to learn representations from unlabeled graphs. Existing methods can be roughly divided into predictive learning and contrastive learning, with the latter attracting more research attention thanks to its better empirical performance. We argue, however, that predictive models equipped with latent augmentations and a powerful decoder can achieve representation power comparable to, or even better than, that of contrastive models. In this work, we introduce data augmentations into the latent space for superior generalization and better efficiency. A novel graph decoder, named Wiener Graph Deconvolutional Network, is correspondingly designed to reconstruct information from the augmented latent representations. Theoretical analysis proves the superior reconstruction ability of the graph Wiener filter. Extensive experimental results on various datasets demonstrate the effectiveness of our approach.
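The paper's own implementation is behind the full-text link; as a rough illustration of the two ideas named in the abstract, the sketch below perturbs node embeddings in latent space (rather than perturbing the input graph) and then reconstructs features with the classical Wiener deconvolution response h(λ) / (h(λ)² + 1/SNR), evaluated per graph frequency. Every name here (encode, augment_latent, wiener_deconvolve), the one-step propagation encoder, and all parameter values are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

# Hypothetical sketch: latent-space augmentation plus Wiener-style
# spectral reconstruction. Not the authors' implementation.

def normalized_laplacian(adj):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def encode(adj, features):
    """One propagation step (I - L) X as a stand-in graph encoder."""
    return (np.eye(len(adj)) - normalized_laplacian(adj)) @ features

def augment_latent(latent, mask_rate=0.3, noise_std=0.1, rng=None):
    """Augment in latent space: mask random node embeddings, add noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    keep = rng.random(latent.shape[0]) > mask_rate
    noisy = latent + noise_std * rng.standard_normal(latent.shape)
    return noisy * keep[:, None]

def wiener_deconvolve(adj, observed, snr=10.0):
    """Wiener deconvolution in the graph spectral domain: eigen-components
    with a weak encoder response are attenuated instead of amplified."""
    eigvals, eigvecs = np.linalg.eigh(normalized_laplacian(adj))
    h = 1.0 - eigvals                 # spectral response of encode()
    h_w = h / (h ** 2 + 1.0 / snr)    # Wiener filter per graph frequency
    return eigvecs @ (h_w[:, None] * (eigvecs.T @ observed))

# Toy usage on a 4-node path graph.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
x = np.random.default_rng(1).standard_normal((4, 8))
x_hat = wiener_deconvolve(adj, augment_latent(encode(adj, x)))
print("reconstruction error:", np.linalg.norm(x - x_hat))
```

The point of the Wiener response, compared with naive inverse filtering 1/h(λ), is that it stays bounded where h(λ) ≈ 0, so noise injected by the latent augmentation is suppressed rather than amplified during reconstruction.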
