Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise

07/17/2020
by Marten van Dijk, et al.

The feasibility of federated learning is highly constrained by the server-client infrastructure in terms of network communication. Most newly launched smartphones and IoT devices are equipped with GPUs or sufficient computing hardware to run powerful AI models. However, in the original synchronous federated learning, client devices suffer waiting times, and regular communication between clients and the server is required. This makes the protocol sensitive to local model training times and to irregular or missed updates; as a result, scalability to large numbers of clients is limited and convergence rates measured in wall-clock time suffer. We propose a new algorithm for asynchronous federated learning that eliminates waiting times and reduces overall network communication; we provide a rigorous theoretical analysis for strongly convex objective functions and present simulation results. By adding Gaussian noise we show how our algorithm can be made differentially private, and new theorems show that the aggregated added Gaussian noise is significantly reduced.
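The abstract gives no pseudocode, but the pattern it describes is recognizable: clients run local SGD asynchronously, and each sends a clipped, Gaussian-noised model update that the server applies as soon as it arrives, without waiting for stragglers. Below is a minimal sketch of that pattern, assuming a toy least-squares (strongly convex) objective; all names and constants (local_sgd, private_update, clip_norm, sigma) are illustrative assumptions, not the authors' actual algorithm.

```python
# Hypothetical sketch of asynchronous, differentially private client updates
# in the style the abstract describes. NOT the paper's exact algorithm:
# names, clipping rule, and noise scale are assumptions.
import numpy as np

def local_sgd(w, data, lr=0.01, steps=50, rng=None):
    """Plain local SGD on a toy least-squares (strongly convex) objective."""
    rng = rng or np.random.default_rng()
    for _ in range(steps):
        x, y = data[rng.integers(len(data))]
        w = w - lr * (w @ x - y) * x      # gradient of 0.5 * (w.x - y)^2
    return w

def private_update(w_old, w_new, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip the model delta (bounding sensitivity), then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    delta = w_new - w_old
    delta /= max(1.0, np.linalg.norm(delta) / clip_norm)
    return delta + rng.normal(0.0, sigma * clip_norm, size=delta.shape)

def server_apply(w_global, noisy_delta):
    """Asynchronous server step: apply a delta as soon as it arrives."""
    return w_global + noisy_delta

# Toy usage: one client round against a synthetic dataset.
rng = np.random.default_rng(0)
d = 5
data = [(rng.normal(size=d), rng.normal()) for _ in range(100)]
w_global = np.zeros(d)
w_local = local_sgd(w_global.copy(), data, rng=rng)
w_global = server_apply(w_global, private_update(w_global, w_local, rng=rng))
```

The key design point the abstract emphasizes is that the server never blocks on slow clients, and that the privacy analysis allows less aggregated Gaussian noise than a naive per-round accounting would require.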


Related research

03/14/2022
Privatized Graph Federated Learning
Federated learning is a semi-distributed algorithm, where a server commu...

06/01/2023
CSMAAFL: Client Scheduling and Model Aggregation in Asynchronous Federated Learning
Asynchronous federated learning aims to solve the straggler problem in h...

10/03/2022
Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation
Synchronous updates may compromise the efficiency of cross-device federa...

10/30/2022
Evaluation and comparison of federated learning algorithms for Human Activity Recognition on smartphones
Pervasive computing promotes the integration of smart devices in our liv...

09/14/2023
Communication Efficient Private Federated Learning Using Dithering
The task of preserving privacy while ensuring efficient communication is...

08/01/2023
Asynchronous Federated Learning with Bidirectional Quantized Communications and Buffered Aggregation
Asynchronous Federated Learning with Buffered Aggregation (FedBuff) is a...

06/13/2021
DP-NormFedAvg: Normalizing Client Updates for Privacy-Preserving Federated Learning
In this paper, we focus on facilitating differentially private quantized...
