Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation
There is increasing interest in a fast-growing machine learning technique called Federated Learning (FL), in which model training is distributed over mobile user equipments (UEs), exploiting UEs' local computation and training data. Despite its advantages in preserving data privacy, FL still faces challenges from heterogeneity across users' data and UEs' characteristics. We first address the heterogeneous data challenge by proposing an FL algorithm that can bypass the independent and identically distributed (i.i.d.) assumption on UEs' data for strongly convex and smooth problems. We provide a convergence rate characterizing the trade-off between the number of local computation rounds each UE performs to update its local model and the number of global communication rounds needed to update the global model. We then formulate the deployment of the proposed FL algorithm over wireless networks as a resource allocation optimization problem that captures the trade-offs between computation and communication latencies, as well as between FL completion time and UE energy consumption. Even though this wireless resource allocation problem is non-convex, we exploit its structure to decompose it into three sub-problems, derive their closed-form solutions, and draw insights for problem design. Finally, we illustrate the theoretical analysis of the new algorithm with TensorFlow experiments and provide extensive numerical results for the wireless resource allocation sub-problems. The experimental results not only verify the theoretical convergence but also show that our proposed algorithm converges significantly faster than the existing baseline approach.
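To make the local/global round structure concrete, below is a minimal sketch of a standard federated-averaging scheme in Python/NumPy, not the paper's specific algorithm: each UE runs a fixed number of local gradient steps on its own strongly convex (and deliberately non-i.i.d.) objective, and the server then averages the local models once per global communication round. All names and hyperparameters (num_ues, local_rounds, etc.) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's method) of the local/global
# round structure: K local gradient steps per UE, then server averaging.
import numpy as np

rng = np.random.default_rng(0)

num_ues, dim = 5, 10
local_rounds, global_rounds, lr = 20, 30, 0.05

# Non-i.i.d. local least-squares problems: each UE i has its own (A_i, b_i),
# so local optima differ across UEs (heterogeneous data).
A = [rng.normal(loc=i, scale=1.0, size=(50, dim)) for i in range(num_ues)]
b = [rng.normal(loc=i, scale=1.0, size=50) for i in range(num_ues)]

def local_grad(i, w):
    # Gradient of the strongly convex local loss 0.5*||A_i w - b_i||^2 / n_i.
    return A[i].T @ (A[i] @ w - b[i]) / len(b[i])

w_global = np.zeros(dim)
for t in range(global_rounds):                # global communication rounds
    local_models = []
    for i in range(num_ues):
        w = w_global.copy()
        for _ in range(local_rounds):         # local computation rounds at UE i
            w -= lr * local_grad(i, w)
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)  # server aggregation

global_loss = sum(0.5 * np.mean((A[i] @ w_global - b[i]) ** 2)
                  for i in range(num_ues))
print("global loss:", global_loss)
```

Varying local_rounds against global_rounds in this sketch illustrates the computation-communication trade-off the abstract refers to: more local steps per round reduce the number of (expensive) communication rounds needed, but with heterogeneous data each UE drifts toward its own local optimum, which is precisely the tension the paper's convergence analysis characterizes.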