ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity

08/04/2022
by Xinchi Qiu, et al.

When the available hardware cannot meet the memory and compute requirements to efficiently train high-performing machine learning models, a compromise in either the training quality or the model complexity is needed. In Federated Learning (FL), nodes are orders of magnitude more constrained than traditional server-grade hardware and are often battery-powered, severely limiting the sophistication of models that can be trained under this paradigm. While most research has focused on designing better aggregation strategies to improve convergence rates and on alleviating the communication costs of FL, fewer efforts have been devoted to accelerating on-device training. This stage, which repeats hundreds of times (i.e., once per round) and can involve thousands of devices, accounts for the majority of the time required to train federated models and for the totality of the energy consumed on the client side. In this work, we present the first study on the unique aspects that arise when introducing sparsity at training time in FL workloads. We then propose ZeroFL, a framework that relies on highly sparse operations to accelerate on-device training. Models trained with ZeroFL and 95% sparsity achieve up to 2.3% higher accuracy compared to competitive baselines obtained from adapting a state-of-the-art sparse training framework to the FL setting.
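To make the idea of sparse on-device training concrete, below is a minimal PyTorch sketch of the generic technique the abstract alludes to: magnitude-based top-k weight sparsification applied during a client's local training step. This is an illustration of the general approach, not ZeroFL's actual algorithm; the function names sparsify_ and local_train and the 95% sparsity level are assumptions for illustration only.

```python
# Illustrative sketch only: this is NOT the ZeroFL implementation,
# just the generic idea of highly sparse local training it builds on.
import torch
import torch.nn.functional as F

@torch.no_grad()
def sparsify_(model: torch.nn.Module, sparsity: float = 0.95) -> None:
    """Zero out all but the largest-magnitude weights in each layer (hypothetical helper)."""
    for p in model.parameters():
        if p.dim() < 2:  # skip biases / normalization parameters
            continue
        k = max(1, int(p.numel() * (1.0 - sparsity)))  # number of weights to keep
        # Threshold = k-th largest magnitude, i.e. (numel - k + 1)-th smallest
        threshold = p.abs().flatten().kthvalue(p.numel() - k + 1).values
        p.mul_((p.abs() >= threshold).to(p.dtype))

def local_train(model, loader, epochs=1, lr=0.01, sparsity=0.95):
    """One FL client's local update, with high weight sparsity enforced after each step."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
            sparsify_(model, sparsity)  # keep the on-device model highly sparse
    return model.state_dict()
```

In a full federated pipeline, the server would then aggregate these mostly-zero client updates each round, which is also where sparsity can translate into communication savings; ZeroFL's actual design is described in the paper.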

Related research

08/16/2023 · Towards Benchmarking Power-Performance Characteristics of Federated Learning Clients
Federated Learning (FL) is a decentralized machine learning approach whe...

12/18/2018 · Expanding the Reach of Federated Learning by Reducing Client Resource Requirements
Communication on heterogeneous edge networks is a fundamental bottleneck...

04/13/2023 · Decentralized federated learning methods for reducing communication cost and energy consumption in UAV networks
Unmanned aerial vehicles (UAV) or drones play many roles in a modern sma...

01/01/2023 · Efficient On-device Training via Gradient Filtering
Despite its importance for federated learning, continuous learning and m...

12/18/2021 · Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better
Federated learning (FL) enables distribution of machine learning workloa...

02/27/2023 · MoDeST: Bridging the Gap between Federated and Decentralized Learning with Decentralized Sampling
Federated and decentralized machine learning leverage end-user devices f...

02/05/2021 · DEAL: Decremental Energy-Aware Learning in a Federated System
Federated learning struggles with its heavy energy footprint on batter...
