Federated learning with multichannel ALOHA

12/13/2019
by Jinho Choi, et al.

In this paper, we study federated learning in a cellular system with a base station (BS) and a large number of users that hold local data sets. We show that multichannel random access can provide better performance than sequential polling when some users are unable to compute local updates (because they are occupied by other tasks) or are in a dormant state. In addition, for better aggregation in federated learning, the users' access probabilities can be optimized for given local updates. To this end, we formulate an optimization problem and show that a distributed approach can be used within federated learning to adaptively decide the access probabilities.
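The access scheme described above can be pictured with a small simulation. The sketch below is an illustrative Python fragment, not the paper's algorithm: it assumes a set of users that each transmit their local update on one of several channels with some access probability, while the BS aggregates only collision-free uploads in a FedAvg-like way. All names and parameter values (num_users, num_channels, access_prob, and so on) are hypothetical.

```python
import numpy as np

# Illustrative parameters (not from the paper): K users, M channels,
# per-user access probabilities, and placeholder local updates.
rng = np.random.default_rng(0)
num_users, num_channels, dim = 20, 4, 5
access_prob = np.full(num_users, 0.3)               # could be optimized per round
local_updates = rng.normal(size=(num_users, dim))   # stand-in for local model updates
active = rng.random(num_users) < 0.8                # some users are busy or dormant

def aloha_round(active, access_prob, local_updates, num_channels, rng):
    """One multichannel slotted-ALOHA round: each active user that decides to
    transmit picks a channel uniformly; only collision-free updates are received."""
    transmit = active & (rng.random(len(active)) < access_prob)
    channels = rng.integers(num_channels, size=len(active))
    received = []
    for ch in range(num_channels):
        senders = np.flatnonzero(transmit & (channels == ch))
        if len(senders) == 1:                        # exactly one sender -> success
            received.append(local_updates[senders[0]])
    return received

received = aloha_round(active, access_prob, local_updates, num_channels, rng)
if received:
    global_update = np.mean(received, axis=0)        # FedAvg-style aggregation at the BS
```

In this toy setting, raising a user's access probability increases its chance of contributing an update but also the chance of collisions, which is the trade-off the paper's optimization of access probabilities addresses.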


