An Optimization Framework for Federated Edge Learning

11/26/2021
by Yangchen Li, et al.

The optimal design of federated learning (FL) algorithms for solving general machine learning (ML) problems in practical edge computing systems with quantized message passing remains an open problem. This paper considers an edge computing system in which the server and workers have possibly different computing and communication capabilities and quantize messages before transmission. To explore the full potential of FL in such a system, we first present a general FL algorithm, GenQSGD, parameterized by the numbers of global and local iterations, the mini-batch size, and the step size sequence. We then analyze its convergence for an arbitrary step size sequence and specialize the convergence results to three commonly adopted step size rules: constant, exponential, and diminishing. Next, we optimize the algorithm parameters to minimize the energy cost under time and convergence-error constraints, with a focus on the overall implementation process of FL. Specifically, for any given step size sequence under each considered rule, we optimize the numbers of global and local iterations and the mini-batch size, which allows FL to be implemented optimally for applications with preset step size sequences. We also jointly optimize the step size sequence together with these algorithm parameters to explore the full potential of FL. The resulting optimization problems are challenging non-convex problems with non-differentiable constraint functions. We propose iterative algorithms that obtain KKT points using general inner approximation (GIA) and techniques for solving complementary geometric programming (CGP). Finally, we numerically demonstrate the notable gains of GenQSGD with optimized algorithm parameters over existing FL algorithms and highlight the importance of optimally designing general FL algorithms.
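To make the parameterization concrete, the following is a minimal sketch of a GenQSGD-style loop: K global rounds, E local SGD iterations per worker with mini-batch size B, a step size sequence chosen by one of the three rules named above, and stochastic quantization of model updates before transmission. The function names, defaults, and the specific uniform quantizer are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def stochastic_quantize(v, levels=16):
    # Stochastic uniform quantization, a common QSGD-style choice
    # (the paper's actual quantizer may differ).
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    scaled = np.abs(v) / norm * levels
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiased).
    q = lower + (np.random.rand(*v.shape) < scaled - lower)
    return np.sign(v) * q * norm / levels

def genqsgd_sketch(grad_fn, x0, data, K=50, B=8, E=5, workers=4,
                   step_rule="diminishing", gamma0=0.1, rho=0.95, rng=None):
    # One possible reading of the algorithm parameters described in the
    # abstract; all names and defaults here are hypothetical.
    rng = rng if rng is not None else np.random.default_rng(0)
    x = x0.copy()
    shards = np.array_split(data, workers)  # each worker's local data
    for k in range(K):
        # Step size sequence under the three rules considered in the paper.
        if step_rule == "constant":
            gamma = gamma0
        elif step_rule == "exponential":
            gamma = gamma0 * rho ** k
        else:  # diminishing
            gamma = gamma0 / (k + 1)
        updates = []
        for shard in shards:
            x_local = x.copy()
            for _ in range(E):  # E local mini-batch SGD iterations
                idx = rng.choice(len(shard), size=min(B, len(shard)),
                                 replace=False)
                x_local -= gamma * grad_fn(x_local, shard[idx])
            # Quantize the local model update before sending to the server.
            updates.append(stochastic_quantize(x_local - x))
        x += np.mean(updates, axis=0)  # server aggregates quantized updates
    return x
```

For instance, with `grad_fn` set to the gradient of a simple scalar least-squares objective, the iterate drifts toward the data mean; in the actual system the numbers of iterations K and E, the batch size B, and the step size sequence would instead come from the energy-cost optimization described above.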


