Fast Federated Learning by Balancing Communication Trade-Offs

by Milad Khademi Nori, et al.

Federated Learning (FL) has recently received a lot of attention for large-scale privacy-preserving machine learning. However, high communication overheads due to frequent gradient transmissions decelerate FL. To mitigate these overheads, two main techniques have been studied: (i) local update of weights, characterizing the trade-off between communication and computation, and (ii) gradient compression, characterizing the trade-off between communication and precision. To the best of our knowledge, jointly and dynamically studying and balancing these two trade-offs while accounting for their impact on convergence has remained unresolved, even though doing so promises significantly faster FL. In this paper, we first formulate our problem as minimizing the learning error with respect to two variables: the local update coefficients and the sparsity budgets of gradient compression, which characterize the communication-computation and communication-precision trade-offs, respectively. We then derive an upper bound on the learning error within a given wall-clock time, considering the interdependency between the two variables. Based on this theoretical analysis, we propose an enhanced FL scheme, namely Fast FL (FFL), that jointly and dynamically adjusts the two variables to minimize the learning error. We demonstrate that FFL consistently achieves higher accuracies faster than similar existing schemes.
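The two control variables described in the abstract can be illustrated with a toy simulation: each client runs a number of local gradient steps (the local update coefficient, here called `tau`) and transmits only the largest-magnitude entries of its update (the sparsity budget, here called `k`). This is a minimal sketch on a synthetic least-squares problem, not the paper's FFL algorithm; the fixed two-phase schedule for `tau` and `k` below is a hypothetical stand-in for the paper's dynamic adjustment.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_sparsify(vec, k):
    # Keep the k largest-magnitude entries, zero the rest:
    # the communication/precision trade-off (sparsity budget).
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def local_delta(w, X, y, tau, lr):
    # tau local SGD steps on a least-squares loss:
    # the communication/computation trade-off (local update coefficient).
    w_local = w.copy()
    for _ in range(tau):
        grad = X.T @ (X @ w_local - y) / len(y)
        w_local -= lr * grad
    return w_local - w  # cumulative update to transmit

def federated_round(w, clients, tau, k, lr=0.1):
    # Each client computes and sparsifies its update;
    # the server averages the sparse updates and applies them.
    deltas = [top_k_sparsify(local_delta(w, X, y, tau, lr), k)
              for X, y in clients]
    return w + np.mean(deltas, axis=0)

# Toy setup: 4 clients whose data share one true weight vector.
d = 20
w_true = rng.normal(size=d)
clients = [(X, X @ w_true)
           for X in (rng.normal(size=(50, d)) for _ in range(4))]

w = np.zeros(d)
for r in range(30):
    # Hypothetical schedule: more local computation and coarser
    # (sparser) updates early, finer updates later.
    tau, k = (5, 5) if r < 15 else (1, 15)
    w = federated_round(w, clients, tau, k)
```

Even with this naive static schedule, the averaged sparse updates drive `w` toward `w_true`; the paper's contribution is choosing `tau` and `k` jointly and dynamically from a convergence bound rather than by a hand-tuned phase switch.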




