
FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization

by   Dezhong Yao, et al.
Lehigh University
Huazhong University of Science & Technology

The underlying assumption of recent federated learning (FL) paradigms is that local models share the same network architecture as the global model, which becomes impractical for mobile and IoT devices with different hardware and infrastructure setups. A scalable federated learning framework should accommodate heterogeneous clients with different computation and communication capabilities. To this end, this paper proposes FedHM, a novel federated model compression framework that distributes heterogeneous low-rank models to clients and then aggregates them into a global full-rank model. This solution enables the training of heterogeneous local models with varying computational complexities while still aggregating a single global model. Furthermore, by using low-rank models FedHM reduces not only on-device computational complexity but also communication cost. Extensive experimental results demonstrate that the proposed method outperforms current pruning-based FL approaches in test Top-1 accuracy (4.6% accuracy gain on average), with smaller model size (1.5x smaller on average) under various heterogeneous FL settings.
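The core idea the abstract describes can be illustrated with a minimal NumPy sketch: a full-rank weight matrix is split into two low-rank factors (here via truncated SVD) so a constrained client trains and transmits far fewer parameters, and the server multiplies the factors back to recover a full-rank matrix for aggregation. The function names and the SVD choice below are illustrative assumptions, not the paper's actual API or algorithm.

```python
import numpy as np

def factorize(W, rank):
    """Split a weight matrix W into low-rank factors U, V with W ~= U @ V,
    using truncated SVD (an assumed stand-in for the paper's factorization)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    sqrt_s = np.sqrt(s[:rank])          # fold singular values evenly into both factors
    return U[:, :rank] * sqrt_s, sqrt_s[:, None] * Vt[:rank]

def reconstruct(U, V):
    """Server side: recover a full-rank matrix from a client's factors."""
    return U @ V

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))       # a toy 64x64 layer
U, V = factorize(W, rank=16)

# The client stores/sends 2 * 64 * 16 entries instead of 64 * 64,
# which is where the computation and communication savings come from.
print(W.size, U.size + V.size)          # prints: 4096 2048
```

Heterogeneity then amounts to assigning different ranks to different clients: a weaker device gets a smaller `rank`, a stronger one a larger `rank`, and the server reconstructs all factors to the same full-rank shape before averaging.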



HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients

Federated Learning (FL) is a method of training machine learning models ...

FedPara: Low-rank Hadamard Product Parameterization for Efficient Federated Learning

To overcome the burdens on frequent model uploads and downloads during f...

Communication-Efficient Federated Learning with Dual-Side Low-Rank Compression

Federated learning (FL) is a promising and powerful approach for trainin...

Memory-adaptive Depth-wise Heterogenous Federated Learning

Federated learning is a promising paradigm that allows multiple clients ...

Federated Learning with Additional Mechanisms on Clients to Reduce Communication Costs

Federated learning (FL) enables on-device training over distributed netw...

Distributed Learning on Heterogeneous Resource-Constrained Devices

We consider a distributed system, consisting of a heterogeneous set of d...

FedTiny: Pruned Federated Learning Towards Specialized Tiny Models

Neural network pruning has been a well-established compression technique...