
Collaborative Deep Learning Across Multiple Data Centers

by   Kele Xu, et al.

Valuable training data is often owned by independent organizations and located in multiple data centers. Most deep learning approaches require centralizing the multi-datacenter data for performance reasons. In practice, however, it is often infeasible to transfer all data to a centralized data center, due not only to bandwidth limitations but also to the constraints of privacy regulations. Model averaging is a conventional choice for data-parallel training, but previous studies have claimed it is ineffective because deep neural networks are often non-convex. In this paper, we argue that model averaging can be effective in the decentralized environment by using two strategies: a cyclical learning rate and an increased number of epochs for local model training. With these two strategies, we show that model averaging can provide performance in the decentralized mode competitive with the data-centralized one. In a practical environment with multiple data centers, we conduct extensive experiments using state-of-the-art deep network architectures on different types of data. Results demonstrate the effectiveness and robustness of the proposed method.
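The training scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: each "data center" runs several local epochs of SGD on its own shard under a triangular cyclical learning rate, and the coordinator averages the resulting weights once per round. The model (linear regression), the hyperparameters, and the helper names (`cyclical_lr`, `local_train`, `federated_average`) are all hypothetical choices made for the sketch.

```python
import numpy as np

def cyclical_lr(step, base_lr=0.01, max_lr=0.1, cycle_len=20):
    """Triangular cyclical learning rate (illustrative hyperparameters)."""
    pos = (step % cycle_len) / cycle_len      # position within the cycle, in [0, 1)
    tri = 1.0 - abs(2.0 * pos - 1.0)          # ramps 0 -> 1 -> 0 over one cycle
    return base_lr + (max_lr - base_lr) * tri

def local_train(w, X, y, epochs, step0):
    """Several local epochs of SGD on one data center's shard (squared loss)."""
    step = step0
    for _ in range(epochs):
        for i in range(len(X)):
            grad = (X[i] @ w - y[i]) * X[i]   # per-sample least-squares gradient
            w = w - cyclical_lr(step) * grad
            step += 1
    return w, step

def federated_average(shards, rounds=10, local_epochs=5, dim=3):
    """Model averaging: independent local training, then average the weights."""
    w = np.zeros(dim)
    step = 0
    for _ in range(rounds):
        local_models = []
        for X, y in shards:
            w_k, step = local_train(w.copy(), X, y, local_epochs, step)
            local_models.append(w_k)
        w = np.mean(local_models, axis=0)     # the averaging step
    return w

# Toy setup: three "data centers" holding noiseless shards of the same task.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(3):
    X = rng.normal(size=(40, 3))
    shards.append((X, X @ true_w))

w = federated_average(shards)
```

On this convex toy problem the averaged model recovers `true_w`; the paper's point is that, with the cyclical schedule and enough local epochs, the same averaging remains competitive even for non-convex deep networks.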



