A Family of Hybrid Federated and Centralized Learning Architectures in Machine Learning

05/07/2021 ∙ by Ahmet M. Elbir, et al.

Many machine learning tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS), entailing huge communication overhead. To overcome this, federated learning (FL) has emerged as a promising alternative, wherein the clients send only their model updates to the PS instead of their whole datasets. However, FL demands powerful computational resources from the clients, so clients that lack sufficient resources cannot participate in training. To address this issue, we introduce a more practical approach called hybrid federated and centralized learning (HFCL), wherein only the clients with sufficient resources employ FL, while the remaining ones send their datasets to the PS, which computes the model on their behalf. The model parameters corresponding to all clients are then aggregated at the PS. To improve the efficiency of dataset transmission, we propose two techniques: increased computation-per-client and sequential data transmission. The HFCL frameworks outperform FL with up to 20% improvement in learning accuracy when only half of the clients perform FL, while incurring 50% less communication overhead than CL, since all the clients collaborate on the learning process with their datasets.
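The HFCL training loop described above can be illustrated with a minimal sketch: resource-rich ("active") clients take local gradient steps and send model parameters, while the PS trains on the uploaded datasets of resource-limited ("passive") clients, then averages all parameters. The toy linear model, client counts, learning rate, and plain parameter averaging are illustrative assumptions, not the paper's exact setup.

```python
# Minimal HFCL sketch (illustrative assumptions, not the paper's exact setup):
# a linear regression model trained jointly by FL clients and the PS.
import numpy as np

rng = np.random.default_rng(0)
d, n_per_client = 5, 40
w_true = rng.normal(size=d)  # hypothetical ground-truth model

def local_data():
    # synthetic local dataset for one client
    X = rng.normal(size=(n_per_client, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_per_client)
    return X, y

def grad_step(w, X, y, lr=0.1):
    # one gradient-descent step on the local MSE loss
    g = (2.0 / len(y)) * X.T @ (X @ w - y)
    return w - lr * g

clients = [local_data() for _ in range(8)]
active = clients[:4]   # FL clients: train locally, upload model parameters
passive = clients[4:]  # resource-limited clients: upload datasets to the PS

w = np.zeros(d)
for _ in range(200):
    # active clients each take a local step on their own data
    active_models = [grad_step(w, X, y) for X, y in active]
    # the PS computes updates on behalf of passive clients from their data
    passive_models = [grad_step(w, X, y) for X, y in passive]
    # aggregation at the PS: average all model parameters
    w = np.mean(active_models + passive_models, axis=0)
```

After training, `w` should approach `w_true`, showing that passive clients' data still contributes to the shared model even though those clients perform no local computation.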


