Bandit Samplers for Training Graph Neural Networks

06/10/2020
by Ziqi Liu, et al.

Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolutional Networks (GCNs). However, because the optimal sampling distribution is intractable to compute, these sampling algorithms are suboptimal for GCNs and are not applicable to more general graph neural networks (GNNs) whose message aggregator uses learned weights rather than fixed weights, such as Graph Attention Networks (GAT). The fundamental reason is that the neighbor embeddings or learned weights that determine the optimal sampling distribution change during training and are not known a priori; they are only partially observed when sampled, which makes deriving an optimal variance-reduced sampler non-trivial. In this paper, we formulate the optimization of the sampling variance as an adversarial bandit problem, in which the rewards are related to the node embeddings and learned weights and can vary constantly. A good sampler therefore needs to acquire variance information about more neighbors (exploration) while simultaneously minimizing the immediate sampling variance (exploitation). We theoretically show that our algorithm asymptotically approaches the optimal variance within a factor of 3, and we demonstrate the efficiency and effectiveness of our approach on multiple datasets.
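The kind of sampler described above can be sketched with a standard adversarial-bandit routine. The snippet below is a minimal illustration, not the paper's exact algorithm or reward definition: an EXP3-style sampler keeps one exponential-weights distribution over a node's neighbors, samples a neighbor, and updates that neighbor's weight with an importance-weighted reward. In the paper's setting the reward would be derived from the sampled neighbor's current embedding and the learned aggregation weight, both of which drift as training proceeds; the class name, the `eta`/`gamma` hyperparameters, and the toy reward used here are illustrative assumptions.

```python
# Minimal sketch of an EXP3-style adversarial-bandit neighbor sampler.
# NOT the paper's exact algorithm: the reward definition, hyperparameters,
# and class/variable names below are illustrative assumptions.
import numpy as np

class BanditNeighborSampler:
    """One bandit per target node; the arms are that node's neighbors."""

    def __init__(self, num_neighbors, eta=0.1, gamma=0.1, seed=0):
        self.weights = np.ones(num_neighbors)  # exponential weights, one per neighbor
        self.eta = eta                         # learning rate for the weight update
        self.gamma = gamma                     # uniform-exploration mixing coefficient
        self.rng = np.random.default_rng(seed)

    def probabilities(self):
        # Mix the exponential-weights distribution with uniform exploration,
        # so every neighbor keeps a nonzero probability of being observed.
        p = self.weights / self.weights.sum()
        return (1.0 - self.gamma) * p + self.gamma / len(self.weights)

    def sample(self):
        p = self.probabilities()
        arm = self.rng.choice(len(p), p=p)
        return arm, p[arm]

    def update(self, arm, reward, prob):
        # Importance-weighted estimate: only the sampled neighbor's reward is
        # observed, so it is scaled by 1 / (its sampling probability).
        self.weights[arm] *= np.exp(self.eta * reward / prob)
        self.weights /= self.weights.max()     # rescale for numerical stability

# Toy usage: rewards in [0, 1] stand in for the (unknown, drifting) variance
# contribution of each neighbor; in a real GNN they would be computed from
# the sampled neighbor's embedding and the learned edge weight.
rng = np.random.default_rng(1)
true_scores = rng.random(8)                    # hidden per-neighbor "importance"
sampler = BanditNeighborSampler(num_neighbors=8)
for _ in range(500):
    arm, prob = sampler.sample()
    sampler.update(arm, reward=true_scores[arm], prob=prob)
print("learned sampling distribution:", np.round(sampler.probabilities(), 3))
```

In this toy run the learned distribution concentrates on the neighbors with the largest hidden scores while the exploration term keeps every neighbor's probability bounded away from zero, which is the exploration/exploitation trade-off the abstract refers to.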


