Online Data Selection for Federated Learning with Limited Storage

09/01/2022
by Chen Gong, et al.

Machine learning models have been deployed in mobile networks to process data from different layers, enabling automated network management and on-device intelligence. To overcome the high communication cost and severe privacy concerns of centralized machine learning, Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices. While computation and communication limitations have been widely studied in FL, the impact of on-device storage on FL performance remains unexplored. Without an efficient and effective data selection policy to filter the abundant streaming data on devices, classical FL can suffer from much longer model training time (more than 4×) and significantly reduced inference accuracy (more than 7%), as observed in our experiments. In this work, we take the first step toward online data selection for FL with limited on-device storage. We first define a new data valuation metric for data selection in FL: the projection of the local gradient over an on-device data sample onto the global gradient over the data from all devices. We further design ODE, a framework of Online Data sElection for FL, to coordinate networked devices to store valuable data samples collaboratively, with theoretical guarantees for simultaneously speeding up model convergence and enhancing final model accuracy. Experimental results on one industrial task (mobile network traffic classification) and three public tasks (a synthetic task, image classification, and human activity recognition) show the remarkable advantages of ODE over state-of-the-art approaches. In particular, on the industrial dataset, ODE achieves up to a 2.5× speedup in training time and a 6% increase in final inference accuracy, and is robust to various factors in the practical environment.
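To make the valuation metric concrete, below is a minimal sketch (not the authors' implementation) of the idea described in the abstract: each on-device sample is scored by the projection of its local gradient onto the global gradient aggregated over all devices, and a device keeps only the top-scoring samples that fit in its storage. All names here (`valuation`, `select_top_k`, the gradient arrays) are illustrative assumptions.

```python
import numpy as np

def valuation(sample_grad: np.ndarray, global_grad: np.ndarray) -> float:
    """Projection of a per-sample local gradient onto the global gradient direction."""
    norm = np.linalg.norm(global_grad)
    if norm == 0.0:
        return 0.0
    return float(np.dot(sample_grad, global_grad) / norm)

def select_top_k(sample_grads: list[np.ndarray],
                 global_grad: np.ndarray,
                 k: int) -> list[int]:
    """Return indices of the k samples whose gradients best align with the
    global gradient -- a simple stand-in for an on-device storage filter."""
    scores = [valuation(g, global_grad) for g in sample_grads]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Toy example: 5 streaming samples arrive, but the device can store only 2.
rng = np.random.default_rng(0)
sample_grads = [rng.normal(size=10) for _ in range(5)]
global_grad = rng.normal(size=10)
print(select_top_k(sample_grads, global_grad, k=2))
```

Intuitively, samples whose gradients point in the same direction as the global gradient contribute most to global model convergence, which is why such a projection can serve as a data selection criterion.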
