Cloudless-Training: A Framework to Improve Efficiency of Geo-Distributed ML Training

03/09/2023
by   Wenting Tan, et al.

Geo-distributed ML training can benefit many emerging ML scenarios (e.g., large model training, federated learning) with multi-regional cloud resources and wide-area networks (WANs). However, its efficiency is limited by two challenges. First, efficient elastic scheduling of multi-regional cloud resources is usually missing, which hurts resource utilization and training performance. Second, training communication over the WAN remains the main overhead and is easily subject to the WAN's low bandwidth and high fluctuations. In this paper, we propose a framework, Cloudless-Training, to realize efficient PS-based geo-distributed ML training in three ways. First, it uses a two-layer architecture with control and physical training planes to support elastic scheduling and communication for multi-regional clouds in a serverless manner. Second, it provides an elastic scheduling strategy that deploys training workflows adaptively according to the heterogeneity of available cloud resources and the distribution of pre-existing training datasets. Third, it provides two new synchronization strategies for training partitions among clouds: asynchronous SGD with gradient accumulation (ASGD-GA) and inter-PS model averaging (MA). Cloudless-Training is implemented with OpenFaaS and evaluated on Tencent Cloud. Experiments show that it supports general ML training in a geo-distributed way and greatly improves resource utilization (e.g., up to a 9.2x training speedup over the baseline) with model correctness guarantees.
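To make the two synchronization strategies concrete, here is a minimal NumPy sketch of the ideas the abstract names: gradient accumulation, where a worker sums several local gradients before a single push over the slow WAN (ASGD-GA), and inter-PS model averaging, where regional parameter servers periodically average their model replicas (MA). All function names, shapes, and hyperparameters below are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def accumulate_and_push(local_grads, lr, params):
    """ASGD-GA style step (illustrative): apply the sum of k locally
    computed gradients in one update, so k local steps cost only one
    WAN round trip to the parameter server."""
    g = np.sum(local_grads, axis=0)   # accumulated gradient
    return params - lr * g            # single update at the PS

def inter_ps_average(replicas):
    """MA style sync (illustrative): regional parameter servers average
    their model replicas to bound divergence between clouds."""
    return np.mean(replicas, axis=0)

# Toy usage: two regions training a 3-parameter model from the same init.
params = np.zeros(3)
region_a = accumulate_and_push([np.ones(3), np.ones(3)], lr=0.1, params=params)
region_b = accumulate_and_push([2 * np.ones(3)], lr=0.1, params=params)
merged = inter_ps_average([region_a, region_b])  # models reconciled across PSs
```

The design intuition is that both strategies trade a little staleness for far fewer WAN rounds, which matters when inter-region bandwidth is low and fluctuating.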
