At-Scale Sparse Deep Neural Network Inference with Efficient GPU Implementation

07/28/2020
by Mert Hidayetoglu et al.

This paper presents GPU performance optimization and scaling results for inference models of the Sparse Deep Neural Network Challenge 2020. Demands for network quality have increased rapidly, pushing the size, and thus the memory requirements, of many neural networks beyond the capacity of available accelerators. Sparse deep neural networks (SpDNN) have shown promise for reining in the memory footprint of large neural networks. However, there is room for improvement in implementing SpDNN operations on GPUs. This work presents optimized sparse matrix multiplication kernels fused with the ReLU function. The optimized kernels reuse input feature maps from shared memory and sparse weights from registers. For multi-GPU parallelism, our SpDNN implementation duplicates the weights and statically partitions the feature maps across GPUs. Results on the challenge benchmarks show that the proposed kernel design and multi-GPU parallelization achieve up to 180 tera-edges per second of inference throughput. These results are up to 4.3× faster for a single GPU, and an order of magnitude faster at full scale, than those of the champion of the 2019 Sparse Deep Neural Network Graph Challenge on the same generation of NVIDIA V100 GPUs. With the same implementation, single-GPU throughput on the NVIDIA A100 is 2.37× that on the V100.
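The two ideas summarized above, a sparse-times-dense product fused with ReLU and a communication-free multi-GPU split, can be made concrete. Below is a minimal CUDA sketch of such a fused kernel, not the authors' implementation: it assumes CSR-format weights, row-major feature maps of shape [inputs x batch], and at most 32 nonzeros per neuron (the figure used by the challenge networks). Where the paper keeps sparse weights in registers and stages feature-map tiles in shared memory, this simplified sketch stages each neuron's weights in shared memory and reuses them across the batch.

#include <cstdio>
#include <cuda_runtime.h>

// Cap on nonzeros per neuron; the challenge networks use 32 per row.
#define MAX_NNZ 32

__global__ void spmm_relu(const int *rowptr, const int *colidx,
                          const float *val, const float *x, float *y,
                          int batch, float bias) {
    int row   = blockIdx.x;                      // one block per output neuron
    int begin = rowptr[row];
    int nnz   = rowptr[row + 1] - begin;

    // Stage this neuron's nonzero weights once; every thread in the
    // block then reuses them for a different batch column.
    __shared__ int   s_idx[MAX_NNZ];
    __shared__ float s_val[MAX_NNZ];
    for (int i = threadIdx.x; i < nnz; i += blockDim.x) {
        s_idx[i] = colidx[begin + i];
        s_val[i] = val[begin + i];
    }
    __syncthreads();

    for (int col = threadIdx.x; col < batch; col += blockDim.x) {
        float acc = bias;
        for (int i = 0; i < nnz; ++i)
            acc += s_val[i] * x[s_idx[i] * batch + col];
        y[row * batch + col] = fmaxf(acc, 0.0f); // fused ReLU, no extra pass
    }
}

int main() {
    // Toy layer: 2 neurons over 3 inputs, batch of 4, CSR weights.
    const int neurons = 2, inputs = 3, batch = 4;
    int   h_rowptr[] = {0, 2, 3};
    int   h_colidx[] = {0, 2, 1};
    float h_val[]    = {1.f, -1.f, 0.5f};
    float h_x[inputs * batch], h_y[neurons * batch];
    for (int i = 0; i < inputs * batch; ++i) h_x[i] = (float)i;

    int *d_rowptr, *d_colidx; float *d_val, *d_x, *d_y;
    cudaMalloc(&d_rowptr, sizeof h_rowptr);
    cudaMalloc(&d_colidx, sizeof h_colidx);
    cudaMalloc(&d_val,    sizeof h_val);
    cudaMalloc(&d_x,      sizeof h_x);
    cudaMalloc(&d_y,      sizeof h_y);
    cudaMemcpy(d_rowptr, h_rowptr, sizeof h_rowptr, cudaMemcpyHostToDevice);
    cudaMemcpy(d_colidx, h_colidx, sizeof h_colidx, cudaMemcpyHostToDevice);
    cudaMemcpy(d_val,    h_val,    sizeof h_val,    cudaMemcpyHostToDevice);
    cudaMemcpy(d_x,      h_x,      sizeof h_x,      cudaMemcpyHostToDevice);

    spmm_relu<<<neurons, 128>>>(d_rowptr, d_colidx, d_val, d_x, d_y,
                                batch, /*bias=*/0.f);
    cudaMemcpy(h_y, d_y, sizeof h_y, cudaMemcpyDeviceToHost);
    for (int r = 0; r < neurons; ++r)
        for (int c = 0; c < batch; ++c)
            printf("y[%d][%d] = %g\n", r, c, h_y[r * batch + c]);
    return 0;
}

Fusing ReLU into the multiply avoids a second pass over the output in global memory. For multi-GPU execution, the abstract describes replicating the weights on every GPU and statically partitioning the feature maps, so each device works on its own batch slice with no inter-GPU communication during inference. A hedged host-side sketch of that split, reusing the spmm_relu kernel above (infer_partitioned and its per-GPU pointer arrays are hypothetical names, not the paper's API; allocation and weight replication are elided):

// Every GPU holds a full copy of the sparse weights (d_rowptr[g],
// d_colidx[g], d_val[g]); d_x[g] and d_y[g] hold only GPU g's slice of
// the feature maps, stored densely with leading dimension equal to the
// slice width.
void infer_partitioned(int num_gpus, int neurons, int batch,
                       int **d_rowptr, int **d_colidx, float **d_val,
                       float **d_x, float **d_y, float bias) {
    int chunk = (batch + num_gpus - 1) / num_gpus;  // static split
    for (int g = 0; g < num_gpus; ++g) {
        int cols = batch - g * chunk;               // last slice may be short
        if (cols > chunk) cols = chunk;
        if (cols <= 0) break;
        cudaSetDevice(g);
        spmm_relu<<<neurons, 128>>>(d_rowptr[g], d_colidx[g], d_val[g],
                                    d_x[g], d_y[g], cols, bias);
    }
    // Launches are asynchronous, so the per-GPU slices run concurrently.
    for (int g = 0; g < num_gpus; ++g) {
        cudaSetDevice(g);
        cudaDeviceSynchronize();
    }
}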
