Federated Clustering via Matrix Factorization Models: From Model Averaging to Gradient Sharing

02/12/2020
by   Shuai Wang, et al.

Recently, federated learning (FL) has drawn significant attention due to its capability of training a model over the network without access to the clients' private raw data. In this paper, we study the unsupervised clustering problem under the FL setting. By adopting a generalized matrix factorization model for clustering, we propose two novel (first-order) federated clustering (FedC) algorithms based on the principles of model averaging and gradient sharing, respectively, and present their theoretical convergence conditions. We show that both algorithms have an O(1/T) convergence rate, where T is the total number of gradient evaluations per client, and that the communication cost can be effectively reduced by controlling the local epoch length and allowing partial client participation within each communication round. Numerical experiments show that the FedC algorithm based on gradient sharing outperforms the one based on model averaging, especially in scenarios with non-i.i.d. data, and can perform comparably to or even exceed centralized clustering algorithms.
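To make the model-averaging idea concrete, here is a minimal NumPy sketch of federated clustering on a matrix factorization objective: each client holds a block of rows X_i of the data matrix and factorizes it as X_i ≈ W_i H, where the local assignment factor W_i stays on the client and the shared factor H (the cluster representatives) is averaged by the server after each round of local gradient steps. This is only an illustration of the general principle, not the paper's exact FedC algorithm; the helper names (`local_update`, `total_loss`), the plain squared-Frobenius loss, and all hyperparameters are assumptions, and features such as the generalized factorization model, partial client participation, and the gradient-sharing variant are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(X_i, H, W_i, lr=0.01, epochs=5):
    """Run `epochs` local gradient steps on the factorization loss
    ||X_i - W_i H||_F^2.  The local factor W_i never leaves the client;
    only the updated copy of the shared factor H is sent back."""
    for _ in range(epochs):
        R = X_i - W_i @ H        # residual with current factors
        W_i += lr * R @ H.T      # gradient step on the local factor
        R = X_i - W_i @ H        # refresh residual after updating W_i
        H = H + lr * W_i.T @ R   # gradient step on the shared factor
    return H, W_i

def total_loss(clients, W_locals, H):
    """Overall objective: sum of per-client factorization losses."""
    return sum(np.linalg.norm(X - W @ H) ** 2
               for X, W in zip(clients, W_locals))

# Toy setup: 4 clients, each holding 20 rows of d-dimensional data,
# clustered with k shared representatives (all sizes are illustrative).
k, d = 3, 5
clients = [rng.normal(size=(20, d)) for _ in range(4)]
W_locals = [rng.random(size=(20, k)) for _ in range(4)]
H_global = rng.normal(size=(k, d))

loss_start = total_loss(clients, W_locals, H_global)

for _ in range(30):  # communication rounds
    updates = []
    for i, X_i in enumerate(clients):
        H_i, W_locals[i] = local_update(X_i, H_global.copy(), W_locals[i])
        updates.append(H_i)
    H_global = np.mean(updates, axis=0)  # server: model averaging

loss_end = total_loss(clients, W_locals, H_global)
```

Controlling `epochs` (the local epoch length) trades off computation against communication, which is exactly the knob the paper analyzes for reducing communication cost.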

