Rethinking the Setting of Semi-supervised Learning on Graphs

05/28/2022
by Ziang Li, et al.

We argue that the present setting of semi-supervised learning on graphs may lead to unfair comparisons, because it carries the risk of over-tuning hyper-parameters for models. In this paper, we highlight the significant influence of hyper-parameter tuning, which leverages the label information in the validation set to improve performance. To explore the limit of over-tuning hyper-parameters, we propose ValidUtil, an approach that fully utilizes the label information in the validation set through an extra group of hyper-parameters. With ValidUtil, even GCN can easily reach a high accuracy of 85.8%. To avoid over-tuning, we merge the training set and the validation set and construct an i.i.d. graph benchmark (IGB) consisting of 4 datasets. Each dataset contains 100 i.i.d. graphs sampled from a large graph to reduce evaluation variance. Our experiments suggest that IGB is a more stable benchmark than previous datasets for semi-supervised learning on graphs.
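The abstract only sketches how ValidUtil leaks validation labels, so a minimal toy illustration follows. It is a sketch under stated assumptions, not the authors' code: the node indices, the uniform-random stand-in classifier, and the helper names below are all illustrative. The idea shown is that one extra "hyper-parameter" is attached to each validation node and overrides the model's prediction there, so exhaustively tuning these hyper-parameters against validation accuracy recovers the validation labels exactly.

```python
import numpy as np

# Toy ValidUtil sketch (illustrative assumptions, not the paper's implementation):
# each validation node gets one extra hyper-parameter that, if set, overrides the
# model's prediction for that node. Tuning these hyper-parameters on validation
# accuracy amounts to searching over candidate labels, so the tuned values end up
# equal to the hidden validation labels.

rng = np.random.default_rng(0)
num_classes = 3
val_idx = np.array([0, 1, 2, 3])                                  # hypothetical validation nodes
val_labels = rng.integers(0, num_classes, size=val_idx.size)      # hidden ground truth

def base_predict(node):
    """Stand-in for a trained GCN's prediction on one node."""
    return int(rng.integers(0, num_classes))

def predict_with_validutil(node, extra_hparams):
    """Use the per-node hyper-parameter if it is set, otherwise the base model."""
    if node in extra_hparams:
        return extra_hparams[node]
    return base_predict(node)

# "Tuning" the extra hyper-parameters on validation accuracy: for each validation
# node, keep the class value that makes the validation prediction correct.
extra_hparams = {}
for node, label in zip(val_idx, val_labels):
    best_c, best_acc = 0, -1.0
    for c in range(num_classes):
        acc = float(predict_with_validutil(node, {node: c}) == label)
        if acc > best_acc:
            best_c, best_acc = c, acc
    extra_hparams[node] = best_c

# After tuning, the extra hyper-parameters coincide with the validation labels,
# which can then be fed back to the model as additional supervision.
assert all(extra_hparams[n] == l for n, l in zip(val_idx, val_labels))
print("recovered validation labels:", extra_hparams)
```

This is why merging the training and validation sets, as in IGB, removes the incentive to exploit validation labels through hyper-parameter search.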


