Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training

05/21/2018
by Liang Luo, et al.

Distributed deep neural network (DDNN) training constitutes an increasingly important workload that frequently runs in the cloud. Larger DNN models and faster compute engines are shifting DDNN training bottlenecks from computation to communication. This paper characterizes DDNN training to precisely pinpoint these bottlenecks. We found that timely training requires high-performance parameter servers (PSs) with optimized network stacks and gradient processing pipelines, as well as server and network hardware with balanced computation and communication resources. We therefore propose PHub, a high-performance, multi-tenant, rack-scale PS design. PHub co-designs the PS software and hardware to accelerate rack-level and hierarchical cross-rack parameter exchange, with an API compatible with many DDNN training frameworks. PHub provides a performance improvement of up to 2.7x compared to state-of-the-art distributed training techniques for cloud-based ImageNet workloads, with 25% better throughput per dollar.
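For context, the sketch below illustrates the synchronous parameter-server push/aggregate/pull cycle whose software and network bottlenecks PHub targets. It is a minimal, single-process simulation, not PHub's actual API; the class name ParameterServer and the key "fc1.weight" are illustrative assumptions.

# Minimal sketch of the parameter-server push/aggregate/pull pattern.
# Names and structure are illustrative assumptions, not PHub's real interface.
import numpy as np

class ParameterServer:
    """Holds model parameters and aggregates gradients pushed by workers."""
    def __init__(self, shapes, lr=0.1):
        self.params = {k: np.zeros(s, dtype=np.float32) for k, s in shapes.items()}
        self.lr = lr
        self._pending = {k: [] for k in shapes}

    def push(self, key, grad):
        # Workers push per-key gradients; the PS buffers them until every
        # worker has reported for this key (synchronous SGD).
        self._pending[key].append(grad)

    def aggregate(self, key, num_workers):
        # Average the buffered gradients and apply one SGD step.
        assert len(self._pending[key]) == num_workers
        avg = sum(self._pending[key]) / num_workers
        self.params[key] -= self.lr * avg
        self._pending[key].clear()

    def pull(self, key):
        # Workers pull the updated parameters for the next iteration.
        return self.params[key].copy()

# Toy usage: two workers exchanging one parameter tensor through the PS.
shapes = {"fc1.weight": (4, 4)}
ps = ParameterServer(shapes)
for step in range(3):
    grads = [np.random.randn(4, 4).astype(np.float32) for _ in range(2)]
    for g in grads:
        ps.push("fc1.weight", g)
    ps.aggregate("fc1.weight", num_workers=2)
    weights = ps.pull("fc1.weight")  # all workers proceed with identical weights

In a real deployment this exchange crosses the network every iteration, which is why PHub focuses on the PS network stack, the gradient processing pipeline, and the placement of aggregation at rack scale.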

Related research

01/30/2018: Parameter Hub: High Performance Parameter Servers for Efficient Distributed Deep Neural Network Training
Most work in the deep learning systems community has focused on faster i...

05/28/2021: Cloud Collectives: Towards Cloud-aware Collectives for ML Workloads with Rank Reordering
ML workloads are becoming increasingly popular in the cloud. Good cloud ...

09/24/2021: Exploring Multi-dimensional Hierarchical Network Topologies for Efficient Distributed Training of Trillion Parameter DL Models
Deep Neural Networks have gained significant attraction due to their wid...

11/18/2016: GaDei: On Scale-up Training As A Service For Deep Learning
Deep learning (DL) training-as-a-service (TaaS) is an important emerging...

06/25/2018: Pushing the boundaries of parallel Deep Learning -- A practical approach
This work aims to assess the state of the art of data parallel deep neur...

04/29/2020: Caramel: Accelerating Decentralized Distributed Deep Learning with Computation Scheduling
The method of choice for parameter aggregation in Deep Neural Network (D...

02/13/2023: SWIFT: Expedited Failure Recovery for Large-scale DNN Training
As the size of deep learning models gets larger and larger, training tak...
