Global Convergence of Federated Learning for Mixed Regression

by Lili Su et al.

This paper studies the problem of model training under Federated Learning when clients exhibit cluster structure. We contextualize this problem in mixed regression, where each client has limited local data generated from one of k unknown regression models. We design an algorithm that achieves global convergence from any initialization and works even when local data volume is highly unbalanced: some clients may hold only O(1) data points. Our algorithm first runs moment descent on a few anchor clients (each with Ω̃(k) data points) to obtain coarse model estimates. Then each client alternately estimates its cluster label and refines the model estimates based on FedAvg or FedProx. A key innovation in our analysis is a uniform estimate on the clustering errors, which we prove by bounding the VC dimension of general polynomial concept classes based on the theory of algebraic geometry.
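The alternating phase described above can be sketched in a few lines: each client assigns itself to the cluster whose current model best fits its local data, then a FedAvg-style step averages one local gradient step over the clients assigned to each cluster. This is a minimal illustration on synthetic data, not the paper's implementation; the learning rate, data sizes, and single-gradient-step update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_clients, n_local = 5, 2, 40, 8

# Synthetic mixed-regression data: each client's labels come from one
# of k unknown linear models (hypothetical setup for illustration).
true_models = rng.normal(size=(k, d))
labels = rng.integers(k, size=n_clients)
clients = []
for c in range(n_clients):
    X = rng.normal(size=(n_local, d))
    y = X @ true_models[labels[c]] + 0.01 * rng.normal(size=n_local)
    clients.append((X, y))

# Coarse initial estimates, standing in for the moment-descent output.
theta = true_models + 0.5 * rng.normal(size=(k, d))
lr = 0.1  # assumed step size

for _ in range(50):
    # Step 1: each client estimates its cluster label by local squared loss.
    est = [np.argmin([np.mean((y - X @ theta[j]) ** 2) for j in range(k)])
           for X, y in clients]
    # Step 2: FedAvg-style refinement -- average one local gradient step
    # over the clients currently assigned to each cluster.
    for j in range(k):
        members = [i for i, e in enumerate(est) if e == j]
        if not members:
            continue
        grads = [clients[i][0].T @ (clients[i][0] @ theta[j] - clients[i][1])
                 / n_local for i in members]
        theta[j] -= lr * np.mean(grads, axis=0)

# Estimation error up to the inherent label permutation.
err = max(min(np.linalg.norm(theta[j] - true_models[p]) for p in range(k))
          for j in range(k))
```

With the coarse initialization close enough to the true models, the label estimates stabilize and each cluster's model converges to its ground truth, mirroring the two-phase structure (anchor-based warm start, then alternating estimation and refinement) in the abstract.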

