TBD: Benchmarking and Analyzing Deep Neural Network Training

03/16/2018
by Hongyu Zhu, et al.

The recent popularity of deep neural networks (DNNs) has generated considerable research interest in performing DNN-related computation efficiently. However, the primary focus is usually narrow, limited to (i) inference, i.e., how to efficiently execute already-trained models, and (ii) image classification networks as the primary benchmark for evaluation. Our goal in this work is to break this myopic view by (i) proposing a new benchmark for DNN training, called TBD (short for Training Benchmark for DNNs), that uses a representative set of DNN models covering a wide range of machine learning applications: image classification, machine translation, speech recognition, object detection, adversarial networks, and reinforcement learning; and (ii) performing an extensive performance analysis of training these different applications on three major deep learning frameworks (TensorFlow, MXNet, CNTK) across different hardware configurations (single-GPU, multi-GPU, and multi-machine). TBD currently covers six major application domains and eight different state-of-the-art models.

We present a new toolchain for performance analysis of these models that combines targeted use of existing performance analysis tools, careful selection of new and existing metrics and methodologies for analyzing the results, and exploitation of domain-specific characteristics of DNN training. We also build a new set of tools for memory profiling in all three major frameworks: much-needed tools that can finally shed light on precisely how much memory is consumed by the different data structures (weights, activations, gradients, workspace) in DNN training. Using our tools and methodologies, we make several important observations and recommendations about where future research and optimization of DNN training should be focused.
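To make the memory-category breakdown above concrete, the following is a minimal back-of-the-envelope sketch of how training memory splits across weights, gradients, activations, and workspace for a small convolutional stack. The layer shapes, batch size, and workspace figure are hypothetical illustrations, not outputs of the paper's profiling tools.

```python
# Illustrative estimate of DNN training memory by data-structure category
# (weights, gradients, activations, workspace). Gradients mirror the weight
# count, and activations are saved per batch element for the backward pass,
# which is why they typically dominate at realistic batch sizes.

def conv_out(hw, k, s, p):
    """Spatial output size of a convolution with kernel k, stride s, pad p."""
    return (hw + 2 * p - k) // s + 1

def estimate_training_memory(batch, layers, bytes_per_elem=4, workspace_mb=64):
    weights = activations = 0
    h = w = 224  # hypothetical input resolution
    c = 3        # input channels
    for out_c, k, s, p in layers:
        weights += out_c * c * k * k + out_c   # filters + biases
        h = w = conv_out(h, k, s, p)
        activations += batch * out_c * h * w   # feature maps kept for backward
        c = out_c
    gradients = weights                        # one gradient per parameter
    to_mb = lambda n: round(n * bytes_per_elem / 2**20, 1)
    return {
        "weights_MB": to_mb(weights),
        "gradients_MB": to_mb(gradients),
        "activations_MB": to_mb(activations),
        "workspace_MB": workspace_mb,          # cuDNN scratch, assumed constant
    }

# Hypothetical 3-layer conv stack: (out_channels, kernel, stride, padding)
layers = [(64, 7, 2, 3), (128, 3, 2, 1), (256, 3, 2, 1)]
print(estimate_training_memory(batch=32, layers=layers))
```

Even in this toy setting, activations dwarf the parameter and gradient buffers, which is consistent with the abstract's emphasis on profiling memory per data structure rather than in aggregate.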


