A Convergence Theory for Federated Average: Beyond Smoothness

11/03/2022
by Xiaoxiao Li, et al.

Federated learning enables a large number of edge computing devices to jointly learn a model without sharing data. As a leading algorithm in this setting, Federated Averaging (FedAvg), which runs Stochastic Gradient Descent (SGD) in parallel on local devices and averages the local iterates only once in a while, has been widely used due to its simplicity and low communication cost. However, despite recent research efforts, FedAvg lacks a theoretical analysis under assumptions beyond smoothness. In this paper, we analyze the convergence of FedAvg. Unlike existing work, we relax the strong-smoothness assumption: specifically, we assume semi-smoothness and semi-Lipschitz properties for the loss function, whose definitions include an additional first-order term. We also assume a relaxed bound on the gradient, which is weaker than the bounded-gradient assumption commonly used in convergence analyses. Together, these results provide a theoretical convergence study of Federated Learning under weaker assumptions.
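The FedAvg scheme the abstract describes (parallel local SGD punctuated by periodic server-side averaging) can be sketched as follows on a toy least-squares problem. This is a minimal illustration, not the paper's method; the function name, client data, and hyperparameters are all assumptions made for the example.

```python
import numpy as np

def fedavg(client_data, w0, rounds=20, local_steps=5, lr=0.1):
    """Minimal FedAvg sketch: each round, every client runs a few local
    (full-batch) gradient steps from the current global model, and the
    server averages the resulting local models."""
    w = w0.copy()
    for _ in range(rounds):
        local_models = []
        for X, y in client_data:
            w_k = w.copy()  # client starts from the global model
            for _ in range(local_steps):
                # gradient of the mean-squared-error loss (1/n)||Xw - y||^2
                grad = 2.0 * X.T @ (X @ w_k - y) / len(y)
                w_k -= lr * grad
            local_models.append(w_k)
        w = np.mean(local_models, axis=0)  # communication round: average
    return w

# Toy setup: 4 clients, each holding noisy samples of the same linear model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w_hat = fedavg(clients, np.zeros(2))
```

Averaging only every `local_steps` iterations, rather than after every gradient step, is exactly what keeps the communication cost low; here the clients' data are IID, so the averaged iterate still tracks the global least-squares solution.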


