dMath: A Scalable Linear Algebra and Math Library for Heterogeneous GP-GPU Architectures

04/05/2016
by Steven Eliuk, et al.

This paper presents dMath, a new scalable parallel math library that demonstrates leading scaling for deep learning when using intra-node or inter-node hybrid parallelism. dMath provides easy-to-use distributed base primitives and a variety of domain-specific algorithms, including matrix multiplication and convolutions, allowing rapid development of highly scalable applications such as Deep Neural Networks (DNNs). Previously, developers were restricted to libraries that provided effective primitives for only a single GPU, such as NVIDIA cuBLAS and cuDNN, or the DNN primitives of Nervana's Neon framework. Development of HPC software is difficult, labor-intensive work requiring a unique skill set; dMath allows a wide range of developers to exploit parallel and distributed hardware easily. One contribution of this approach is that data is stored persistently on the GPU hardware, avoiding costly transfers between host and device. Advanced memory-management techniques are employed, including caching of transferred data and memory reuse through pooling. A key contribution of dMath is that it delivers performance, portability, and productivity within its domain of support, enabling algorithm and application programmers to quickly solve problems without managing the significant complexity associated with multi-level parallelism.
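The abstract mentions memory reuse through pooling but does not show how dMath realizes it. Below is a minimal, hypothetical C++/CUDA sketch of the general technique only; the DevicePool class and its acquire/release methods are illustrative names and are not dMath's actual API. The idea is that freed device buffers are cached by size so that subsequent requests of a compatible size can bypass cudaMalloc/cudaFree.

    // Hypothetical sketch of GPU memory pooling (not dMath's implementation):
    // freed device buffers are cached by capacity and reused, avoiding the
    // latency of repeated cudaMalloc/cudaFree calls.
    #include <cuda_runtime.h>
    #include <cstddef>
    #include <map>
    #include <vector>

    class DevicePool {
    public:
        // Return a device buffer of at least `bytes`, reusing a cached one if possible.
        void* acquire(size_t bytes) {
            for (auto it = free_.lower_bound(bytes); it != free_.end(); ++it) {
                if (!it->second.empty()) {
                    void* p = it->second.back();
                    it->second.pop_back();
                    return p;
                }
            }
            void* p = nullptr;
            if (cudaMalloc(&p, bytes) != cudaSuccess) return nullptr;  // fall back to a real allocation
            size_of_[p] = bytes;
            return p;
        }

        // Return a buffer to the pool instead of freeing it immediately.
        void release(void* p) { free_[size_of_[p]].push_back(p); }

        ~DevicePool() {
            for (auto& bucket : free_)
                for (void* p : bucket.second) cudaFree(p);
        }

    private:
        std::map<size_t, std::vector<void*>> free_;   // cached buffers keyed by capacity
        std::map<void*, size_t> size_of_;             // capacity of each live allocation
    };

A production pool would likely also round requests up to size bins and coordinate with CUDA streams before recycling a buffer; the sketch above omits those details and is meant only to illustrate the caching-and-reuse idea named in the abstract.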


