Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation

10/03/2022
by Mohammad Taha Toghani, et al.

Synchronous updates may compromise the efficiency of cross-device federated learning as the number of active clients increases. The FedBuff algorithm (Nguyen et al., 2022) alleviates this problem by allowing asynchronous (stale) updates, which improves the scalability of training while preserving privacy via secure aggregation. We revisit FedBuff for asynchronous federated learning and extend the existing analysis by removing the bounded gradient norm assumption. This paper presents a theoretical analysis of the algorithm's convergence rate that accounts for data heterogeneity, batch size, and delay.
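To illustrate the buffered asynchronous aggregation pattern the abstract refers to, below is a minimal Python sketch, not the authors' implementation. The function names (`client_update`, `fedbuff_server`), the placeholder random gradients, and all hyperparameter values are assumptions made for illustration; the key idea shown is that client updates arrive one at a time (possibly computed on stale model copies) and the server only applies an aggregated step once a buffer of fixed size has filled.

```python
import random


def client_update(global_model, lr=0.01, local_steps=5):
    """Hypothetical client: runs a few local SGD steps on a (possibly stale)
    copy of the model and returns the resulting delta sent to the server."""
    model = list(global_model)
    for _ in range(local_steps):
        grad = [random.gauss(0.0, 1.0) for _ in model]  # placeholder stochastic gradient
        model = [w - lr * g for w, g in zip(model, grad)]
    return [m - w for m, w in zip(model, global_model)]


def fedbuff_server(dim=4, num_updates=50, buffer_size=10, server_lr=1.0):
    """Buffered asynchronous aggregation (FedBuff-style sketch):
    incoming client deltas are accumulated in a buffer; once the buffer holds
    `buffer_size` updates, the server averages them, takes one server step,
    and clears the buffer. In the real protocol this averaging step is where
    secure aggregation would be applied."""
    global_model = [0.0] * dim
    buffer = []
    for _ in range(num_updates):
        # In a real deployment this delta was computed asynchronously,
        # possibly against an older version of the global model (staleness).
        buffer.append(client_update(global_model))
        if len(buffer) == buffer_size:
            avg = [sum(d[i] for d in buffer) / buffer_size for i in range(dim)]
            global_model = [w + server_lr * a for w, a in zip(global_model, avg)]
            buffer = []  # start a new buffer for the next server round
    return global_model


if __name__ == "__main__":
    print(fedbuff_server())
```

The buffer size trades off staleness against update variance: a larger buffer averages more client deltas per server step (lower variance) but forces clients whose updates arrive early to wait longer before their contribution takes effect.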
