Graph Barlow Twins: A self-supervised representation learning framework for graphs

06/04/2021
by Piotr Bielak, et al.

The self-supervised learning (SSL) paradigm is an essential area of research that aims to eliminate the need for expensive data labeling. Despite the great success of SSL methods in computer vision and natural language processing, most of them employ contrastive learning objectives that require negative samples, which are hard to define. This becomes even more challenging in the case of graphs and is a bottleneck for learning robust representations. To overcome these limitations, we propose Graph Barlow Twins, a framework for self-supervised graph representation learning that utilizes a cross-correlation-based loss function instead of negative samples. Moreover, unlike BGRL, the state-of-the-art self-supervised graph representation learning method, it does not rely on non-symmetric neural network architectures. We show that our method achieves results as competitive as BGRL, the best self-supervised methods, and fully supervised approaches, while requiring substantially fewer hyperparameters and converging in an order of magnitude fewer training steps.
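To make the objective concrete, below is a minimal PyTorch sketch of a Barlow Twins-style cross-correlation loss applied to node embeddings from two augmented views of a graph. The function name, the lambda weighting, and the normalization constants are illustrative assumptions rather than the paper's exact formulation; note that no negative samples are involved.

```python
import torch


def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor, lambda_: float = 5e-3) -> torch.Tensor:
    """Cross-correlation loss between embeddings of the same nodes under two graph augmentations.

    z_a, z_b: (N, D) node embeddings produced by the same encoder on two augmented views.
    """
    n, _ = z_a.shape

    # Standardize each embedding dimension across the batch of nodes.
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + 1e-12)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + 1e-12)

    # Empirical cross-correlation matrix between the two views (D x D).
    c = (z_a.T @ z_b) / n

    # Invariance term: diagonal entries should be 1 (the views agree per dimension).
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: off-diagonal entries should be 0.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()

    return on_diag + lambda_ * off_diag


if __name__ == "__main__":
    # Toy usage: embeddings of 256 nodes with dimension 128 from two augmentations.
    z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
    print(barlow_twins_loss(z1, z2))
```

Because the loss is symmetric in the two views, the same encoder can process both augmentations without the asymmetric online/target networks that BGRL relies on.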


