Adaptive Federated Learning via New Entropy Approach

03/27/2023
by Shensheng Zheng, et al.

Federated Learning (FL) has recently emerged as a popular framework that allows resource-constrained, distributed clients to cooperatively learn a global model under the orchestration of a central server while keeping privacy-sensitive data local. However, due to differences in equipment and data divergence across heterogeneous clients, the parameters of the local models deviate from one another, resulting in slow convergence and reduced accuracy of the global model. Current FL algorithms pervasively use a static client learning strategy and cannot adapt to the dynamic training parameters of different clients. In this paper, by accounting for the deviation between different local model parameters, we propose an adaptive learning rate scheme for each client based on entropy theory, which alleviates the deviation between heterogeneous clients and achieves fast convergence of the global model. Designing the optimal dynamic learning rate for each client is difficult because the local information of other clients is unknown, especially during local training epochs when there is no communication between clients and the central server. To enable a decentralized learning rate design for each client, we first introduce mean-field schemes to estimate the terms related to other clients' local model parameters. The decentralized adaptive learning rate for each client is then obtained in closed form by constructing the Hamilton equation. Moreover, we prove that fixed-point solutions exist for the mean-field estimators and propose an algorithm to obtain them. Finally, extensive experimental results on real datasets show that our algorithm effectively eliminates the deviation between local model parameters compared to other recent FL algorithms.
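
To make the high-level recipe in the abstract more concrete, below is a minimal, illustrative Python sketch of the general idea, not the authors' actual algorithm: each client scales its local gradient step by an entropy-style weight computed from the deviation between its parameters and a mean-field estimate of the other clients' parameters, and the estimate itself is refined by a simple fixed-point loop. The function names (entropy_weight, federated_round), the exponential weighting, and all hyperparameters are hypothetical stand-ins, since the abstract does not give the closed-form expressions.

```python
# Illustrative sketch only; the specific formulas below are assumptions,
# not the closed-form learning rates derived in the paper.
import numpy as np

def entropy_weight(local_params, mean_field, base_lr=0.1, temperature=1.0):
    """Hypothetical entropy-inspired learning-rate scaling.

    Clients whose parameters deviate strongly from the mean-field estimate
    of the other clients get a damped learning rate, mimicking the goal of
    reducing parameter deviation between heterogeneous clients.
    """
    deviation = np.linalg.norm(local_params - mean_field)
    return base_lr * np.exp(-deviation / temperature)

def local_update(params, grad_fn, mean_field, epochs=5):
    """Local epochs with the decentralized, adaptive learning rate."""
    for _ in range(epochs):
        lr = entropy_weight(params, mean_field)
        params = params - lr * grad_fn(params)
    return params

def federated_round(client_params, grad_fns, n_fixed_point_iters=10):
    """One communication round with a fixed-point style mean-field estimator."""
    # Start the mean-field estimate from the average of the current client
    # parameters and iterate until it is consistent with the local updates.
    mean_field = np.mean(client_params, axis=0)
    for _ in range(n_fixed_point_iters):
        updated = np.array([
            local_update(p.copy(), g, mean_field)
            for p, g in zip(client_params, grad_fns)
        ])
        new_mean_field = np.mean(updated, axis=0)
        if np.linalg.norm(new_mean_field - mean_field) < 1e-6:
            break
        mean_field = new_mean_field
    # FedAvg-style server aggregation of the adapted local models.
    return np.mean(updated, axis=0)

if __name__ == "__main__":
    # Two toy clients with shifted quadratic objectives (heterogeneous data).
    rng = np.random.default_rng(0)
    client_params = rng.normal(size=(2, 3))
    targets = [np.zeros(3), np.ones(3)]
    grad_fns = [lambda w, t=t: w - t for t in targets]
    print("global model after one round:", federated_round(client_params, grad_fns))
```

In this toy setup the damping keeps the more divergent client from pulling the aggregate away from the consensus, which is the qualitative behavior the abstract attributes to the entropy-based adaptive learning rate.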

Related research

03/28/2023  Fast Convergent Federated Learning with Aggregated Gradients
09/27/2022  A Snapshot of the Frontiers of Client Selection in Federated Learning
01/11/2023  Federated Learning under Heterogeneous and Correlated Client Availability
12/11/2021  Server-Side Local Gradient Averaging and Learning Rate Acceleration for Scalable Split Learning
08/07/2022  Federated Adversarial Learning: A Framework with Convergence Analysis
09/18/2023  FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data
01/25/2023  When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning
