Towards Flexible Device Participation in Federated Learning for Non-IID Data

06/12/2020
by   Yichen Ruan, et al.

Traditional federated learning algorithms impose strict requirements on the participation rates of devices, which limit the potential reach of federated learning. In this paper, we extend the current learning paradigm to consider devices that may become inactive, compute incomplete updates, and leave or join in the middle of training. We derive analytical results to illustrate how flexible device participation affects convergence when data is not independently and identically distributed (non-IID) and when devices are heterogeneous. We then propose a new federated aggregation scheme that converges even when devices may be inactive or return incomplete updates. Finally, we discuss practical research questions an operator would encounter during training, and provide answers based on our convergence analysis.
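As a rough illustration of the kind of aggregation the abstract describes, the sketch below shows a FedAvg-style weighted average in which an inactive device falls back to the current global model and a device that finished only part of its local work has its update rescaled by the fraction it completed. The function name, the dictionary-based interface, and the rescale-by-fraction rule are assumptions made for illustration, not the paper's actual scheme.

```python
import numpy as np

def aggregate_flexible(global_weights, client_updates, client_data_sizes,
                       completed_epochs, expected_epochs):
    """Hypothetical FedAvg-style aggregation that tolerates inactive devices
    and incomplete local updates (illustrative sketch, not the paper's scheme)."""
    total_data = sum(client_data_sizes.values())
    aggregate = np.zeros_like(global_weights, dtype=float)

    for cid, n_k in client_data_sizes.items():
        p_k = n_k / total_data                # data-size weight of client cid
        update = client_updates.get(cid)      # None -> device was inactive this round
        if update is None:
            # Inactive device: keep its share anchored at the current global model.
            aggregate += p_k * global_weights
        else:
            # Incomplete update: rescale the local change by the fraction of
            # expected local epochs the device actually completed.
            frac = max(completed_epochs[cid] / expected_epochs, 1e-8)
            aggregate += p_k * (global_weights + (update - global_weights) / frac)
    return aggregate


# Toy usage: one client fully done, one inactive, one with half the local work.
w = np.zeros(4)
updates = {"a": np.ones(4), "b": None, "c": 0.5 * np.ones(4)}
sizes = {"a": 100, "b": 50, "c": 50}
done = {"a": 2, "c": 1}
print(aggregate_flexible(w, updates, sizes, done, expected_epochs=2))
```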
