Deep Graph Library Optimizations for Intel(R) x86 Architecture

07/13/2020
by Sasikanth Avancha, et al.

The Deep Graph Library (DGL) was designed as a tool to enable structure learning from graphs, by supporting a core abstraction for graphs, including the popular Graph Neural Networks (GNN). DGL contains implementations of all core graph operations for both the CPU and GPU. In this paper, we focus specifically on CPU implementations and present performance analysis, optimizations and results across a set of GNN applications using the latest version of DGL (0.4.3). Across 7 applications, we achieve speed-ups ranging from 1.5x to 13x over the baseline CPU implementations.
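
For context, the sketch below runs a single message-passing step in DGL, the kind of core graph operation whose CPU implementation the paper targets. It is an illustrative example, not the authors' benchmark code, and it assumes DGL 0.4.x with the PyTorch backend; the graph and feature sizes are made up.

    # Hypothetical example, not from the paper: one sparse aggregation
    # step in DGL 0.4.x (PyTorch backend).
    import dgl
    import dgl.function as fn
    import torch

    # Build a small directed graph: 4 nodes connected in a cycle.
    g = dgl.DGLGraph()
    g.add_nodes(4)
    g.add_edges([0, 1, 2, 3], [1, 2, 3, 0])

    # Attach an 8-dimensional feature vector to every node.
    g.ndata['h'] = torch.randn(4, 8)

    # SpMM-like kernel: copy each source node's feature along its
    # out-edges, then sum the incoming messages at each destination.
    g.update_all(fn.copy_src('h', 'm'), fn.sum('m', 'h_agg'))
    print(g.ndata['h_agg'].shape)  # torch.Size([4, 8])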
