
FedPara: Low-rank Hadamard Product Parameterization for Efficient Federated Learning

08/13/2021
by Nam Hyeon-Woo, et al.

To overcome the burden of frequent model uploads and downloads during federated learning (FL), we propose a communication-efficient re-parameterization, FedPara. Our method re-parameterizes each layer's weights using low-rank matrices or tensors followed by the Hadamard product. Unlike conventional low-rank parameterization, our method is not restricted by a low-rank constraint, so FedPara has a larger capacity than a low-rank counterpart with the same number of parameters. FedPara achieves performance comparable to the original models while requiring 2.8 to 10.1 times lower communication costs, which is not achievable with traditional low-rank parameterization. Because our method is compatible with other efficient FL techniques, combining them improves efficiency further. We also extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters.
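The core idea, re-parameterizing a weight matrix as the Hadamard product of two low-rank factorizations, can be illustrated with a minimal PyTorch-style sketch. This is an illustrative sketch, not the authors' implementation: the class name, initialization scale, and layer interface are assumptions; only the parameterization W = (X1 Y1^T) ∘ (X2 Y2^T) follows the paper.

import torch
import torch.nn as nn

class LowRankHadamardLinear(nn.Module):
    # Hypothetical FedPara-style layer: W = (X1 @ Y1.T) * (X2 @ Y2.T),
    # where * is the element-wise (Hadamard) product. Each factor pair is
    # rank-`rank`, but their Hadamard product can reach rank up to rank**2,
    # so capacity exceeds a plain low-rank layer of equal parameter count.
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        scale = 0.02  # assumed initialization scale
        self.X1 = nn.Parameter(torch.randn(out_features, rank) * scale)
        self.Y1 = nn.Parameter(torch.randn(in_features, rank) * scale)
        self.X2 = nn.Parameter(torch.randn(out_features, rank) * scale)
        self.Y2 = nn.Parameter(torch.randn(in_features, rank) * scale)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Compose the full weight on the fly; only the small factors
        # (2 * rank * (in_features + out_features) values) would need to
        # be communicated in an FL round, not the dense W.
        W = (self.X1 @ self.Y1.T) * (self.X2 @ self.Y2.T)
        return x @ W.T + self.bias

# Example: a 512x512 layer with rank-8 factors stores ~16K factor values
# instead of 262K dense weights, yet the composite W can reach rank 64.
layer = LowRankHadamardLinear(in_features=512, out_features=512, rank=8)
y = layer(torch.randn(4, 512))

Because the Hadamard product of two rank-r matrices can have rank up to r^2, such a layer escapes the rank ceiling of a plain low-rank factorization at the same parameter count, which is the capacity argument made in the abstract. For pFedPara, one plausible realization is to communicate one factor pair globally while keeping the other on-device as the personal component; the paper defines the exact global/local split.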

