
Server Averaging for Federated Learning

by George Pu et al.

Federated learning allows distributed devices to collectively train a model without sharing or disclosing their local datasets with a central server. The global model is optimized by training and averaging the model parameters of all local participants. However, the improved privacy of federated learning also introduces challenges, including higher computation and communication costs. In particular, federated learning converges more slowly than centralized training. We propose the server averaging algorithm to accelerate convergence. Server averaging constructs the shared global model by periodically averaging a set of previous global models. Our experiments indicate that server averaging not only reaches a target accuracy faster than federated averaging (FedAvg), but also reduces client-side computation costs through epoch decay.
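The two mechanisms the abstract names, averaging a window of recent global models on the server and decaying the number of local training epochs over rounds, can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the window size, the uniform weighting of past global models, and the geometric decay schedule are all assumptions made here for concreteness.

```python
import numpy as np

def fedavg_round(client_models, client_sizes):
    """Standard FedAvg step: dataset-size-weighted average of client parameters."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, client_models))

def server_averaging(global_history, window=3):
    """Server averaging (sketch): uniformly average the last `window` global models."""
    recent = global_history[-window:]
    return sum(recent) / len(recent)

def epoch_decay(initial_epochs, round_t, decay=0.99):
    """Epoch decay (assumed schedule): shrink local epochs geometrically per round."""
    return max(1, int(round(initial_epochs * decay ** round_t)))
```

In this sketch, each communication round would run `fedavg_round` on the clients' updates, append the result to `global_history`, and periodically broadcast `server_averaging(global_history)` instead of the latest model alone, while `epoch_decay` shortens local training as rounds progress.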
