FedPara: Low-rank Hadamard Product Parameterization for Efficient Federated Learning

08/13/2021
by   Nam Hyeon-Woo, et al.

To overcome the burden of frequent model uploads and downloads during federated learning (FL), we propose a communication-efficient re-parameterization, FedPara. Our method re-parameterizes the model's layers using low-rank matrices or tensors combined via the Hadamard product. Unlike conventional low-rank parameterization, our method is not restricted to low-rank constraints; as a result, FedPara has a larger capacity than a plain low-rank parameterization with the same number of parameters. It achieves performance comparable to the original models while requiring 2.8 to 10.1 times lower communication costs, which is not achievable with traditional low-rank parameterization. Moreover, efficiency can be improved further by combining our method with other efficient FL techniques, since it is compatible with them. We also extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters.
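The capacity argument above can be illustrated numerically: the Hadamard product of two rank-r matrices can reach rank r*r, whereas a plain low-rank factorization with the same parameter budget (2r factor columns per side) is capped at rank 2r. The following is a minimal NumPy sketch of this idea, not the authors' implementation; all names and sizes are illustrative.

```python
import numpy as np

def fedpara_weight(X1, Y1, X2, Y2):
    """Low-rank Hadamard product parameterization (sketch).

    Each pair (Xi, Yi) forms a low-rank matrix Xi @ Yi.T; their
    element-wise (Hadamard) product can reach rank up to r1 * r2.
    """
    return (X1 @ Y1.T) * (X2 @ Y2.T)

rng = np.random.default_rng(0)
m, n, r = 64, 64, 4  # illustrative layer size and inner rank

# Two low-rank factor pairs; total parameters: 2 * r * (m + n),
# the same budget as a single rank-2r factorization.
X1, Y1 = rng.standard_normal((m, r)), rng.standard_normal((n, r))
X2, Y2 = rng.standard_normal((m, r)), rng.standard_normal((n, r))

W = fedpara_weight(X1, Y1, X2, Y2)

# A plain rank-2r factorization with this parameter count is capped
# at rank 8; with generic random factors the Hadamard construction
# reaches rank r * r = 16 here.
print(np.linalg.matrix_rank(W))
```

In an FL setting, only the small factors (X1, Y1, X2, Y2) would be communicated between clients and the server, rather than the full m-by-n weight matrix.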


Related research

- FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization (11/29/2021)
- Communication-Efficient Federated Learning with Dual-Side Low-Rank Compression (04/26/2021)
- Low-Parameter Federated Learning with Large Language Models (07/26/2023)
- Riemannian Low-Rank Model Compression for Federated Learning with Over-the-Air Aggregation (06/04/2023)
- Fusion of Global and Local Knowledge for Personalized Federated Learning (02/21/2023)
- Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? (02/01/2022)
- Rank-adaptive spectral pruning of convolutional layers during training (05/30/2023)
