Elastic Model Aggregation with Parameter Service

04/07/2022
by Juncheng Gu, et al.

Model aggregation, the process of updating model parameters, is an important step for model convergence in distributed deep learning (DDL). However, the parameter server (PS), a popular paradigm for performing model aggregation, causes CPU underutilization in deep learning (DL) clusters due to the bursty nature of aggregation and static resource allocation. To remedy this problem, we propose Parameter Service, an elastic model aggregation framework for DDL training that decouples model aggregation from individual training jobs and provides a shared model aggregation service to all jobs in the cluster. In Parameter Service, model aggregations are efficiently packed and dynamically migrated to fit into the available CPUs with negligible time overhead. Furthermore, Parameter Service elastically manages its CPU resources based on its load to enhance resource efficiency. We have implemented Parameter Service in a prototype system called AutoPS and evaluated it via testbed experiments and trace-driven simulations. AutoPS reduces CPU consumption by up to 75% with little or no performance impact on the training jobs. The design of Parameter Service is transparent to users and can be incorporated into popular DL frameworks.
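The abstract describes model aggregation (averaging gradients pushed by workers and applying them to shared parameters) as the core, bursty workload a parameter server performs. A minimal sketch of that step, with hypothetical names that are not the paper's actual AutoPS API:

```python
# Illustrative sketch of parameter-server model aggregation.
# All class and method names here are invented for illustration,
# not AutoPS's actual interface.

class ToyParameterServer:
    """Holds shared model parameters and aggregates worker gradients."""

    def __init__(self, params, lr=0.1):
        self.params = list(params)  # shared model parameters
        self.lr = lr                # learning rate for the update step

    def aggregate_and_update(self, worker_grads):
        """Average the gradients pushed by all workers, then apply an
        SGD-style update.

        This bursty CPU work is what Parameter Service decouples from
        individual training jobs and packs onto shared, elastically
        managed CPUs.
        """
        n = len(worker_grads)
        for i in range(len(self.params)):
            avg = sum(grads[i] for grads in worker_grads) / n
            self.params[i] -= self.lr * avg
        return self.params


# Two workers push gradients for a two-parameter model.
ps = ToyParameterServer(params=[1.0, 2.0], lr=0.5)
ps.aggregate_and_update([[0.2, 0.4], [0.6, 0.8]])
print(ps.params)
```

In a real PS deployment this aggregation runs on dedicated server processes that workers push gradients to over the network; the burstiness arises because the servers sit idle between gradient arrivals, which is the underutilization Parameter Service targets.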

Related research

04/04/2023
DLRover: An Elastic Deep Training Extension with Auto Job Resource Recommendation
The cloud is still a popular platform for distributed deep learning (DL)...

11/18/2016
GaDei: On Scale-up Training As A Service For Deep Learning
Deep learning (DL) training-as-a-service (TaaS) is an important emerging...

05/17/2018
Dependability in a Multi-tenant Multi-framework Deep Learning as-a-Service Platform
Deep learning (DL), a form of machine learning, is becoming increasingly...

06/24/2020
Effective Elastic Scaling of Deep Learning Workloads
The increased use of deep learning (DL) in academia, government and indu...

11/04/2020
InferBench: Understanding Deep Learning Inference Serving with an Automatic Benchmarking System
Deep learning (DL) models have become core modules for many applications...

04/03/2019
Model Slicing for Supporting Complex Analytics with Elastic Inference Cost and Resource Constraints
Deep learning models have been used to support analytics beyond simple a...

10/10/2020
A Predictive Autoscaler for Elastic Batch Jobs
Large batch jobs such as Deep Learning, HPC and Spark require far more c...
