Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks

11/14/2020
by Franco Manessi, et al.

Self-supervised learning is currently gaining a lot of attention, as it allows neural networks to learn robust representations from large quantities of unlabeled data. Additionally, multi-task learning can further improve representation learning by training networks simultaneously on related tasks, leading to significant performance improvements. In this paper, we propose a general framework to improve graph-based neural network models by combining self-supervised auxiliary learning tasks in a multi-task fashion. Since Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points, we use them as a building block to achieve competitive results on standard semi-supervised graph classification tasks.
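To make the idea concrete, below is a minimal sketch (not the authors' code) of the general pattern the abstract describes: a shared graph-convolutional encoder trained jointly on a semi-supervised node-classification objective and self-supervised auxiliary objectives, combined as a weighted sum of losses. The particular auxiliary heads shown here (feature reconstruction and edge prediction), the loss weights, and the use of a dense normalized adjacency matrix are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """Single graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        # a_hat: (N, N) normalized adjacency; h: (N, in_dim) node features.
        return F.relu(self.lin(a_hat @ h))


class MultiTaskGCN(nn.Module):
    """Shared GCN encoder feeding a supervised head and auxiliary heads."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.enc1 = GCNLayer(in_dim, hid_dim)
        self.enc2 = GCNLayer(hid_dim, hid_dim)
        self.cls_head = nn.Linear(hid_dim, num_classes)  # node classification
        self.rec_head = nn.Linear(hid_dim, in_dim)       # auxiliary: feature reconstruction

    def forward(self, a_hat, x):
        z = self.enc2(a_hat, self.enc1(a_hat, x))
        return self.cls_head(z), self.rec_head(z), z


def multitask_loss(model, a_hat, x, y, labeled_mask, pos_edges, neg_edges,
                   w_rec=0.5, w_edge=0.5):
    """Supervised cross-entropy on labeled nodes plus weighted auxiliary terms.

    pos_edges / neg_edges: (2, E) index tensors of existing / sampled non-edges
    (hypothetical self-supervised edge-prediction task).
    """
    logits, x_rec, z = model(a_hat, x)
    loss_cls = F.cross_entropy(logits[labeled_mask], y[labeled_mask])
    loss_rec = F.mse_loss(x_rec, x)  # reconstruct input node features
    # Edge prediction from dot products of node embeddings.
    pos = (z[pos_edges[0]] * z[pos_edges[1]]).sum(-1)
    neg = (z[neg_edges[0]] * z[neg_edges[1]]).sum(-1)
    edge_logits = torch.cat([pos, neg])
    edge_labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    loss_edge = F.binary_cross_entropy_with_logits(edge_logits, edge_labels)
    return loss_cls + w_rec * loss_rec + w_edge * loss_edge

The paper's specific auxiliary tasks may differ; the sketch is only meant to show the shared-encoder, multi-task structure in which self-supervised losses regularize the supervised graph objective.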


Related research:
- Self-supervised Auxiliary Learning for Graph Neural Networks via Meta-Learning (03/01/2021)
- When Does Self-Supervision Help Graph Convolutional Networks? (06/16/2020)
- Multi-Stage Self-Supervised Learning for Graph Convolutional Networks (02/28/2019)
- Contextual Classification Using Self-Supervised Auxiliary Models for Deep Neural Networks (01/07/2021)
- UV-Net: Learning from Curve-Networks and Solids (06/18/2020)
- Multi-task Joint Strategies of Self-supervised Representation Learning on Biomedical Networks for Drug Discovery (01/12/2022)
- Self-Supervised Implicit Attention: Guided Attention by The Model Itself (06/15/2022)
