
Federated Optimization of Smooth Loss Functions

01/06/2022
by Ali Jadbabaie, et al.

In this work, we study empirical risk minimization (ERM) within a federated learning framework, where a central server minimizes an ERM objective function using training data that is stored across m clients. In this setting, the Federated Averaging (FedAve) algorithm is the staple for determining ϵ-approximate solutions to the ERM problem. Similar to standard optimization algorithms, the convergence analysis of FedAve only relies on smoothness of the loss function in the optimization parameter. However, loss functions are often very smooth in the training data too. To exploit this additional smoothness, we propose the Federated Low Rank Gradient Descent (FedLRGD) algorithm. Since smoothness in data induces an approximate low rank structure on the loss function, our method first performs a few rounds of communication between the server and clients to learn weights that the server can use to approximate clients' gradients. Then, our method solves the ERM problem at the server using inexact gradient descent. To show that FedLRGD can have superior performance to FedAve, we present a notion of federated oracle complexity as a counterpart to canonical oracle complexity. Under some assumptions on the loss function, e.g., strong convexity in parameter, η-Hölder smoothness in data, etc., we prove that the federated oracle complexity of FedLRGD scales like ϕ m(p/ϵ)^Θ(d/η) and that of FedAve scales like ϕ m(p/ϵ)^3/4 (neglecting sub-dominant factors), where ϕ≫ 1 is a "communication-to-computation ratio," p is the parameter dimension, and d is the data dimension. Then, we show that when d is small and the loss function is sufficiently smooth in the data, FedLRGD beats FedAve in federated oracle complexity. Finally, in the course of analyzing FedLRGD, we also establish a result on low rank approximation of latent variable models.
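To make the core idea concrete, the sketch below illustrates the two-phase structure the abstract describes: because the loss is smooth in the data, the gradient map is approximately low rank, so the server can learn a small set of weights that combine a few "pivot" gradients into an approximation of the full gradient, then run inexact gradient descent. This is only an illustrative toy (the loss, the pivot selection, and the weight-fitting procedure here are hypothetical simplifications, not the authors' actual FedLRGD algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a loss per data point that is smooth in both the parameter
# theta (dimension p) and the data x (dimension d).
p, d, n = 3, 2, 200
X = rng.normal(size=(n, d))   # training data (conceptually spread over clients)
A = rng.normal(size=(d, p))

def grad(theta, x):
    # Gradient of 0.5 * ||theta - A^T x||^2 with respect to theta.
    return theta - A.T @ x

def full_grad(theta):
    # The exact ERM gradient: average over all data points.
    return np.mean([grad(theta, x) for x in X], axis=0)

# Phase 1 ("few rounds of communication"): pick r pivot samples and fit
# weights w so that the weighted pivot gradients match the full gradient
# on a handful of probe parameters, via least squares.
r = 4
pivots = X[:r]
probes = [rng.normal(size=p) for _ in range(10)]
G = np.stack([np.concatenate([grad(t, x) for t in probes])
              for x in pivots]).T          # (10*p, r)
b = np.concatenate([full_grad(t) for t in probes])
w, *_ = np.linalg.lstsq(G, b, rcond=None)

def approx_grad(theta):
    # Server-side inexact gradient: uses only the r pivot gradients.
    return sum(wi * grad(theta, x) for wi, x in zip(w, pivots))

# Phase 2: inexact gradient descent at the server.
theta = np.zeros(p)
for _ in range(200):
    theta -= 0.5 * approx_grad(theta)
```

After convergence, `np.linalg.norm(full_grad(theta))` is near zero even though each descent step touched only the r pivot gradients rather than all n data points; this is the sense in which low rank structure in the gradient lets the server trade exactness for far fewer gradient evaluations.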

