Not Half Bad: Exploring Half-Precision in Graph Convolutional Neural Networks

10/23/2020
by John Brennan, et al.

With the growing significance of graphs as an effective representation of data in numerous applications, efficient graph analysis using modern machine learning is receiving increasing attention. Deep learning approaches often operate over the entire adjacency matrix – since the input and intermediate network layers are all sized in proportion to it – leading to intensive computation and large memory requirements as the graph grows. It is therefore desirable to identify efficient measures that reduce both run-time and memory requirements, allowing for the analysis of the largest graphs possible. The use of reduced-precision operations within the forward and backward passes of a deep neural network, together with the novel specialised hardware in modern GPUs, offers promising avenues towards efficiency. In this paper, we provide an in-depth exploration of reduced-precision operations, easily integrable into the highly popular PyTorch framework, and an analysis of the effects of Tensor Cores on graph convolutional neural networks. We perform an extensive experimental evaluation across three GPU architectures and two widely used graph analysis tasks (vertex classification and link prediction), using well-known benchmark and synthetically generated datasets. This allows us to make important observations on the effects of reduced-precision operations and Tensor Cores on the computational and memory usage of graph convolutional neural networks – effects often neglected in the literature.
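To make the memory argument concrete, the following is a minimal sketch (not the paper's implementation) of a single graph-convolution layer run in full and half precision. It uses NumPy's `float16` dtype as a stand-in for the FP16 arithmetic discussed in the abstract; the graph, feature dimensions, and `normalise`/`gcn_layer` helpers are illustrative assumptions, not code from the paper.

```python
import numpy as np

def normalise(a):
    # Standard GCN propagation matrix: D^{-1/2} (A + I) D^{-1/2}.
    a = a + np.eye(a.shape[0], dtype=a.dtype)
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(a_hat, h, w):
    # One graph-convolution layer: H' = ReLU(A_hat H W).
    return np.maximum(a_hat @ h @ w, 0)

rng = np.random.default_rng(0)
# Toy 4-vertex path graph (illustrative only).
a = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=np.float32)
a_hat = normalise(a)
h = rng.standard_normal((4, 8), dtype=np.float32)   # vertex features
w = rng.standard_normal((8, 8), dtype=np.float32)   # layer weights

out_fp32 = gcn_layer(a_hat, h, w)
# Casting every operand to float16 halves the storage of
# activations and weights, at the cost of reduced precision.
out_fp16 = gcn_layer(a_hat.astype(np.float16),
                     h.astype(np.float16),
                     w.astype(np.float16))

print(out_fp16.nbytes, out_fp32.nbytes)  # fp16 output uses half the bytes
```

On a GPU, the same cast is what lets Tensor Cores accelerate the dense matrix products, which is the effect the paper evaluates; in PyTorch this is typically done through its automatic mixed-precision machinery rather than manual casts.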

