Asynchronous Federated Optimization

03/10/2019
by Cong Xie, et al.

Federated learning enables training on a massive number of edge devices. To improve flexibility and scalability, we propose a new asynchronous federated optimization algorithm. We prove that the proposed approach has near-linear convergence to a global optimum, for both strongly and non-strongly convex problems, as well as a restricted family of non-convex problems. Empirical results show that the proposed algorithm converges fast and tolerates staleness.
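The abstract does not spell out the server-side update rule, but a common way for an asynchronous federated server to tolerate staleness is to mix each incoming client model into the global model with a weight that shrinks as the update grows stale. The sketch below is only an illustration of that general idea; the class and function names, the polynomial staleness weighting, and the parameters alpha and a are assumptions for the example, not taken from the paper.

    import numpy as np

    def staleness_weight(alpha, staleness, a=0.5):
        # Illustrative polynomial decay: the staler the update, the smaller its
        # mixing weight. (The exact weighting function is an assumption.)
        return alpha * (1.0 + staleness) ** (-a)

    class AsyncFedServer:
        # Minimal sketch of an asynchronous federated server. Clients train on a
        # snapshot of the global model and report back the version they started
        # from; the server applies each result immediately, down-weighting stale
        # contributions instead of waiting for a synchronous round.

        def __init__(self, init_model, alpha=0.6):
            self.model = np.asarray(init_model, dtype=float)
            self.version = 0      # number of updates applied so far
            self.alpha = alpha    # base mixing weight

        def get_snapshot(self):
            # Hand a client the current model together with its version.
            return self.model.copy(), self.version

        def apply_update(self, client_model, client_version):
            # Mix a (possibly stale) client model into the global model.
            staleness = self.version - client_version
            w = staleness_weight(self.alpha, staleness)
            self.model = (1.0 - w) * self.model + w * np.asarray(client_model, dtype=float)
            self.version += 1
            return self.model

    # Toy usage: two clients start from the same snapshot; the second is stale
    # by the time it reports back, so its contribution is weighted down.
    server = AsyncFedServer(init_model=np.zeros(3))
    snap_a, ver_a = server.get_snapshot()
    snap_b, ver_b = server.get_snapshot()
    server.apply_update(snap_a + 1.0, ver_a)   # fresh update, staleness 0
    server.apply_update(snap_b - 1.0, ver_b)   # stale update, staleness 1
    print(server.model)

Down-weighting stale contributions is what lets the server accept updates at any time without the global model being pulled back toward an outdated snapshot, which matches the staleness tolerance the abstract claims.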
