Server Averaging for Federated Learning

03/22/2021
by George Pu, et al.

Federated learning allows distributed devices to collectively train a model without sharing or disclosing their local datasets with a central server. The global model is optimized by training and averaging the model parameters of all local participants. However, the improved privacy of federated learning also introduces challenges, including higher computation and communication costs. In particular, federated learning converges more slowly than centralized training. We propose the server averaging algorithm to accelerate convergence. Server averaging constructs the shared global model by periodically averaging a set of previous global models. Our experiments indicate that server averaging not only converges faster to a target accuracy than federated averaging (FedAvg), but also reduces computation costs at the client level through epoch decay.
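As a rough illustration of the idea described above, the sketch below has the server keep a small buffer of recent global models and periodically replace the current global model with their average, while epoch decay shrinks the number of local epochs each client runs as training progresses. This is a minimal sketch under assumptions, not the paper's implementation: it assumes PyTorch-style state dicts with floating-point parameters and uniform averaging weights, and the buffer size, averaging period, epoch-decay schedule, and the client.local_update interface are all hypothetical.

```python
# Illustrative sketch of server averaging (not the authors' reference code).
# Assumes PyTorch-style models whose state_dicts hold floating-point tensors;
# buffer size, averaging period, uniform weights, the epoch-decay schedule,
# and the client.local_update interface are assumptions for this example.
import copy
from collections import deque


def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts (standard FedAvg step)."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total)
                       for s, n in zip(client_states, client_sizes))
    return avg


def server_average(history):
    """Uniformly average a buffer of previous global models."""
    avg = copy.deepcopy(history[0])
    for key in avg:
        avg[key] = sum(s[key] for s in history) / len(history)
    return avg


def train(global_model, clients, rounds=100, window=5, period=5,
          initial_epochs=5):
    history = deque(maxlen=window)  # buffer of the most recent global models
    for t in range(rounds):
        # Hypothetical epoch decay: clients run fewer local epochs over time,
        # which lowers per-client computation in later rounds.
        local_epochs = max(1, initial_epochs - t // 20)

        states, sizes = [], []
        for client in clients:
            state, n = client.local_update(
                copy.deepcopy(global_model.state_dict()), epochs=local_epochs)
            states.append(state)
            sizes.append(n)

        # Standard FedAvg aggregation of this round's client updates.
        global_state = fedavg(states, sizes)
        history.append(copy.deepcopy(global_state))

        # Periodically replace the global model with the average of the
        # buffered previous global models (the server-averaging step).
        if (t + 1) % period == 0 and len(history) > 1:
            global_state = server_average(list(history))

        global_model.load_state_dict(global_state)
    return global_model
```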


Related research

Local Averaging Helps: Hierarchical Federated Learning and Convergence Analysis (10/24/2020)
Federated learning is an effective approach to realize collaborative lea...

Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms (10/11/2020)
Federated learning is typically approached as an optimization problem, w...

Federated Learning Versus Classical Machine Learning: A Convergence Comparison (07/22/2021)
In the past few decades, machine learning has revolutionized data proces...

FedFMC: Sequential Efficient Federated Learning on Non-iid Data (06/19/2020)
As a mechanism for devices to update a global model without sharing data...

Ternary Compression for Communication-Efficient Federated Learning (03/07/2020)
Learning over massive data stored in different locations is essential in...

Federated Learning with Matched Averaging (02/15/2020)
Federated learning allows edge devices to collaboratively learn a shared...

Federated Learning of a Mixture of Global and Local Models (02/10/2020)
We propose a new optimization formulation for training federated learnin...
