
Fast Composite Optimization and Statistical Recovery in Federated Learning

by   Yajie Bao, et al.

As a prevalent distributed learning paradigm, Federated Learning (FL) trains a global model on a massive number of devices with infrequent communication. This paper investigates a class of composite optimization and statistical recovery problems in the FL setting, whose loss function consists of a data-dependent smooth loss and a non-smooth regularizer. Examples include sparse linear regression using Lasso, low-rank matrix recovery using nuclear norm regularization, etc. In the existing literature, federated composite optimization algorithms are designed only from an optimization perspective, without any statistical guarantees. In addition, they do not consider the (restricted) strong convexity commonly assumed in statistical recovery problems. We advance the frontiers of this problem from both optimization and statistical perspectives. On the optimization front, we propose a new algorithm named Fast Federated Dual Averaging for strongly convex and smooth losses and establish state-of-the-art iteration and communication complexity in the composite setting. In particular, we prove that it enjoys a fast rate, linear speedup, and reduced communication rounds. On the statistical front, for restricted strongly convex and smooth losses, we design another algorithm, namely Multi-stage Federated Dual Averaging, and prove a high-probability complexity bound with linear speedup up to optimal statistical precision. Experiments on both synthetic and real data demonstrate that our methods outperform other baselines. To the best of our knowledge, this is the first work providing fast optimization algorithms and statistical recovery guarantees for composite problems in FL.
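To make the problem class concrete, here is a minimal sketch (not the paper's algorithm) of the Lasso instance of a composite objective: a smooth least-squares loss plus a non-smooth l1 regularizer, minimized locally with proximal gradient steps via the standard soft-thresholding operator. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_gradient_step(w, X, y, lam, lr):
    # One proximal gradient step on the composite Lasso objective
    #   F(w) = (1/2n) ||Xw - y||^2  +  lam * ||w||_1
    # The smooth part is handled by a gradient step, the non-smooth
    # l1 part by its proximal operator.
    n = X.shape[0]
    grad = X.T @ (X @ w - y) / n  # gradient of the smooth part
    return soft_threshold(w - lr * grad, lr * lam)

# Toy usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true + 0.1 * rng.standard_normal(50)

w = np.zeros(10)
for _ in range(300):
    w = prox_gradient_step(w, X, y, lam=0.1, lr=0.3)
```

In a federated variant, each device would compute the gradient of its local smooth loss, and the server would aggregate these before applying the proximal step; the dual-averaging algorithms in the paper refine this template with averaged (sub)gradient sequences and restarts.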




Federated Composite Optimization

Federated Learning (FL) is a distributed learning paradigm which scales ...

Federated Learning's Blessing: FedAvg has Linear Speedup

Federated learning (FL) learns a model jointly from a set of participati...

FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data

Federated Learning (FL) has become a popular paradigm for learning from ...

Asynchronous Federated Optimization

Federated learning enables training on a massive number of edge devices....

Compositional Federated Learning: Applications in Distributionally Robust Averaging and Meta Learning

In the paper, we propose an effective and efficient Compositional Federa...

On Convergence of FedProx: Local Dissimilarity Invariant Bounds, Non-smoothness and Beyond

The FedProx algorithm is a simple yet powerful distributed proximal poin...

Adaptive Data Fusion for Multi-task Non-smooth Optimization

We study the problem of multi-task non-smooth optimization that arises u...