RRLFSOR: An Efficient Self-Supervised Learning Strategy of Graph Convolutional Networks

08/17/2021
by   Feng Sun, et al.

To further improve the performance and the self-learning ability of GCNs, in this paper we propose an efficient self-supervised learning strategy for GCNs, named randomly removing links with a fixed step at one region (RRLFSOR). We also propose a second self-supervised learning strategy, named randomly removing links with a fixed step at some blocks (RRLFSSB), to address the problem that adjacent nodes have no selected step. Experiments on transductive link-prediction tasks show that our strategies consistently outperform the baseline models by up to 21.34% on benchmark datasets.
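The abstract does not spell out the masking procedure, but the idea of removing links at a fixed step within one region of the graph and using the removed links as self-supervision targets can be sketched as follows. This is a minimal, hypothetical reading of the strategy: the function name `rrlfsor_mask`, the choice of a node-index set as the "region", and the `edges[::step]` selection are all assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def rrlfsor_mask(adj, region, step, seed=0):
    """Sketch of RRLFSOR-style masking (assumed reading):
    within one region of nodes, shuffle the region's links and
    remove every `step`-th one; the removed links can then serve
    as self-supervised link-prediction targets."""
    rng = np.random.default_rng(seed)
    adj = adj.copy()
    # Enumerate undirected edges once via the upper triangle.
    rows, cols = np.nonzero(np.triu(adj, k=1))
    in_region = np.isin(rows, region) & np.isin(cols, region)
    edges = np.stack([rows[in_region], cols[in_region]], axis=1)
    rng.shuffle(edges)            # random ordering of region links
    removed = edges[::step]       # fixed-step selection
    for i, j in removed:
        adj[i, j] = adj[j, i] = 0  # drop the link symmetrically
    return adj, removed

# Example: a 4-node clique (6 links) with step 2 drops 3 links.
adj = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)
masked, removed = rrlfsor_mask(adj, region=[0, 1, 2, 3], step=2)
```

A GCN encoder would then be trained to reconstruct the `removed` links from `masked`, which is the usual setup for transductive link prediction.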

Related research

- 02/28/2019: Multi-Stage Self-Supervised Learning for Graph Convolutional Networks. Graph Convolutional Networks (GCNs) play a crucial role in graph learning...
- 06/03/2020: Self-supervised Training of Graph Convolutional Networks. Graph Convolutional Networks (GCNs) have been successfully applied to an...
- 10/25/2022: MOFormer: Self-Supervised Transformer model for Metal-Organic Framework Property Prediction. Metal-Organic Frameworks (MOFs) are materials with a high degree of poro...
- 06/06/2021: Self-supervised Rubik's Cube Solver. This work demonstrates that deep neural networks (DNNs) can solve a comb...
- 05/25/2021: Graph Self Supervised Learning: the BT, the HSIC, and the VICReg. Self-supervised learning and pre-training strategies have developed over...
- 11/01/2022: RGMIM: Region-Guided Masked Image Modeling for COVID-19 Detection. Self-supervised learning has developed rapidly and also advances compute...
- 09/02/2022: BinImg2Vec: Augmenting Malware Binary Image Classification with Data2Vec. Rapid digitalisation spurred by the Covid-19 pandemic has resulted in mo...
