Edge Federated Learning Via Unit-Modulus Over-The-Air Computation (Extended Version)
Edge federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications. This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning, which simultaneously uploads local model parameters and updates global model parameters via analog beamforming. The proposed framework avoids sophisticated baseband signal processing, leading to low communication delays and implementation costs. A training loss bound of UM-AirComp is derived, and two low-complexity algorithms, termed penalty alternating minimization (PAM) and accelerated gradient projection (AGP), are proposed to minimize the nonconvex nonsmooth loss bound. Simulation results show that the proposed UM-AirComp framework with the PAM algorithm not only achieves a smaller mean square error in model parameter estimation, lower training loss, and lower testing error, but also requires a significantly shorter runtime than other benchmark schemes. Moreover, the proposed UM-AirComp framework with the AGP algorithm achieves satisfactory performance while reducing the computational complexity by orders of magnitude compared with existing optimization algorithms. Finally, we demonstrate the implementation of UM-AirComp on a vehicle-to-everything autonomous driving simulation platform. It is found that autonomous driving tasks are more sensitive to model parameter errors than other tasks, since their neural networks are more sophisticated and contain sparser model parameters.
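To make the unit-modulus constraint concrete, the sketch below illustrates a generic accelerated gradient projection loop in which each complex variable is projected back onto the set of unit-modulus entries (i.e., only its phase is retained) after every gradient step. This is a minimal illustration of the general technique named in the abstract, not the paper's actual PAM/AGP updates or loss bound; the objective, step size, momentum parameter, and function names here are illustrative assumptions.

```python
import numpy as np


def project_unit_modulus(x):
    """Project each complex entry onto the unit-modulus set |x_i| = 1
    by keeping only its phase (zero entries are mapped to 1)."""
    return np.where(np.abs(x) > 0, x / np.maximum(np.abs(x), 1e-12), 1.0 + 0j)


def accelerated_gradient_projection(grad_fn, x0, step=0.1, beta=0.9, iters=200):
    """Generic Nesterov-style momentum step followed by a unit-modulus
    projection (an illustrative sketch, not the paper's exact algorithm)."""
    x = project_unit_modulus(x0)
    x_prev = x.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)                       # momentum extrapolation
        x_prev = x
        x = project_unit_modulus(y - step * grad_fn(y))   # gradient step + projection
    return x


# Toy usage on an assumed surrogate objective ||A x - b||^2 with |x_i| = 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
b = rng.standard_normal(8) + 1j * rng.standard_normal(8)
grad = lambda x: A.conj().T @ (A @ x - b)                 # Wirtinger gradient of the toy objective
x_hat = accelerated_gradient_projection(grad, np.ones(4, dtype=complex))
```

The projection step is what keeps the analog beamforming weights implementable with phase shifters only, which is the motivation for the unit-modulus constraint; the specific surrogate objective above is only a placeholder for the nonconvex nonsmooth loss bound derived in the paper.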