Federated Optimization of Smooth Loss Functions

01/06/2022
by Ali Jadbabaie et al.

In this work, we study empirical risk minimization (ERM) within a federated learning framework, where a central server minimizes an ERM objective function using training data that is stored across m clients. In this setting, the Federated Averaging (FedAve) algorithm is the staple for determining ϵ-approximate solutions to the ERM problem. As with standard optimization algorithms, the convergence analysis of FedAve relies only on smoothness of the loss function in the optimization parameter. However, loss functions are often very smooth in the training data as well. To exploit this additional smoothness, we propose the Federated Low Rank Gradient Descent (FedLRGD) algorithm. Since smoothness in the data induces an approximate low rank structure on the loss function, our method first performs a few rounds of communication between the server and clients to learn weights that the server can use to approximate clients' gradients. Then, our method solves the ERM problem at the server using inexact gradient descent. To show that FedLRGD can outperform FedAve, we present a notion of federated oracle complexity as a counterpart to canonical oracle complexity. Under some assumptions on the loss function, e.g., strong convexity in the parameter, η-Hölder smoothness in the data, etc., we prove that the federated oracle complexity of FedLRGD scales like ϕ m (p/ϵ)^{Θ(d/η)} and that of FedAve scales like ϕ m (p/ϵ)^{3/4} (neglecting sub-dominant factors), where ϕ ≫ 1 is a "communication-to-computation ratio," p is the parameter dimension, and d is the data dimension. We then show that when d is small and the loss function is sufficiently smooth in the data, FedLRGD beats FedAve in federated oracle complexity. Finally, in the course of analyzing FedLRGD, we also establish a result on the low rank approximation of latent variable models.
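
To make the two-phase structure described above concrete, the following minimal Python sketch illustrates the idea on a synthetic logistic-regression ERM problem. It is not the authors' implementation: the client data, the number of probe parameters and server-side "pivot" data points, and the plain least-squares fit used to learn the gradient-approximation weights are all illustrative assumptions. Phase 1 simulates the few communication rounds in which clients report local gradients at probe parameters so the server can learn weights; Phase 2 runs inexact gradient descent entirely at the server using only gradients evaluated at its pivot points.

```python
# Minimal sketch (not the paper's implementation) of the FedLRGD idea:
# learn weights so the server can approximate the federated gradient from a
# few "pivot" data points, then run inexact gradient descent server-side.
# All problem sizes and the logistic-regression loss are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic federated data: m clients, each holding local samples.
m, n_per_client, p = 10, 50, 5          # clients, samples per client, parameter dim
theta_star = rng.normal(size=p)
client_X = [rng.normal(size=(n_per_client, p)) for _ in range(m)]
client_y = [(X @ theta_star > 0).astype(float) for X in client_X]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def avg_grad(theta, X, y):
    """Average logistic-loss gradient over the samples (X, y)."""
    return X.T @ (sigmoid(X @ theta) - y) / len(y)

def federated_grad(theta):
    """Exact ERM gradient: one round of client gradient averaging."""
    return np.mean([avg_grad(theta, X, y) for X, y in zip(client_X, client_y)], axis=0)

# Phase 1: a few communication rounds to learn approximation weights w.
# The server holds s pivot data points (an assumption of this sketch) and
# queries the clients at r probe parameters; it then fits w by least squares
# so that  sum_j w_j * grad(theta; pivot_j) ~= federated_grad(theta).
s, r = 8, 12
pivot_X = rng.normal(size=(s, p))
pivot_y = (pivot_X @ theta_star > 0).astype(float)
probes = rng.normal(size=(r, p))

A = np.vstack([np.column_stack([avg_grad(t, pivot_X[j:j + 1], pivot_y[j:j + 1])
                                for j in range(s)]) for t in probes])    # (r*p, s)
b = np.concatenate([federated_grad(t) for t in probes])                  # (r*p,)
w, *_ = np.linalg.lstsq(A, b, rcond=None)

# Phase 2: inexact gradient descent at the server, no further communication.
def approx_grad(theta):
    B = np.column_stack([avg_grad(theta, pivot_X[j:j + 1], pivot_y[j:j + 1])
                         for j in range(s)])
    return B @ w

theta = np.zeros(p)
for _ in range(100):
    theta -= 0.3 * approx_grad(theta)

gap = np.linalg.norm(approx_grad(theta) - federated_grad(theta))
print(f"gradient-surrogate error at the final iterate: {gap:.3e}")
```

In this toy run, only Phase 1 touches client data; Phase 2 relies solely on the s pivot-point gradients. This mirrors why, per the abstract, the federated oracle complexity of FedLRGD, ϕ m (p/ϵ)^{Θ(d/η)}, can fall below FedAve's ϕ m (p/ϵ)^{3/4} once the data dimension d is small relative to the data smoothness η.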


Related research

Gradient-Based Empirical Risk Minimization using Local Polynomial Regression (11/04/2020)
Decentralized Federated Averaging (04/23/2021)
Communication-Efficient Federated Learning with Dual-Side Low-Rank Compression (04/26/2021)
Optimizing quantum optimization algorithms via faster quantum gradient computation (11/01/2017)
The Multiscale Structure of Neural Network Loss Functions: The Effect on Optimization and Origin (04/24/2022)
A Convergence Theory for Federated Average: Beyond Smoothness (11/03/2022)
Faster federated optimization under second-order similarity (09/06/2022)
