SplitFed: When Federated Learning Meets Split Learning

04/25/2020
by   Chandra Thapa, et al.
Federated learning (FL) and split learning (SL) are two recent distributed machine learning (ML) approaches that have gained attention due to their inherent privacy-preserving capabilities. Both approaches follow a model-to-data scenario, in which an ML model is sent to clients for training and testing. However, FL and SL show contrasting strengths and weaknesses. For example, while FL trains faster than SL due to its parallel client-side model update strategy, SL provides better privacy than FL because the ML model architecture is split between clients and the server. In contrast to FL, SL enables ML training with clients that have low computing resources, as each client trains only the first few layers of the split ML network. In this paper, we present a novel approach, named splitfed learning (SFL), that amalgamates the two approaches, eliminating their inherent drawbacks. SFL splits the network architecture between the clients and the server, as in SL, to provide a higher level of privacy than FL. Moreover, it offers better efficiency than SL by incorporating the parallel ML model update paradigm of FL. Our empirical results, considering uniformly distributed horizontally partitioned datasets and multiple clients, show that SFL provides communication efficiency and test accuracies similar to SL while reducing the computation time per global epoch by around five times. Furthermore, as in SL, its communication efficiency over FL improves as the number of clients increases.
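The sketch below illustrates the splitfed idea described above, assuming a PyTorch-style setup: each client holds only the first few layers (as in SL), sends the cut-layer activations to a server that completes the forward and backward pass, and the client-side models are averaged FedAvg-style after each round (as in FL). The names (ClientNet, ServerNet, fed_avg) are illustrative and not taken from the paper's code; the paper's SFL variants also handle server-side model aggregation, which this simplified sketch replaces with a single shared server-side model.

```python
# Minimal splitfed-style sketch (illustrative only, not the authors' implementation).
import copy
import torch
import torch.nn as nn

class ClientNet(nn.Module):          # first few layers, held by each client
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
    def forward(self, x):
        return self.layers(x)

class ServerNet(nn.Module):          # remaining layers, held by the main server
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(128, 10))
    def forward(self, a):
        return self.layers(a)

def fed_avg(models):
    """Average client-side weights (the FL-style parallel aggregation in SFL)."""
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key].float() for m in models]).mean(0)
    return avg

clients = [ClientNet() for _ in range(3)]
server = ServerNet()
loss_fn = nn.CrossEntropyLoss()

for _ in range(2):                                   # global epochs
    for c in clients:                                # clients would run in parallel in SFL
        x = torch.randn(16, 1, 28, 28)               # toy local batch per client
        y = torch.randint(0, 10, (16,))
        c_opt = torch.optim.SGD(c.parameters(), lr=0.01)
        s_opt = torch.optim.SGD(server.parameters(), lr=0.01)

        smashed = c(x)                               # client forward pass to the cut layer
        out = server(smashed)                        # server completes the forward pass
        loss = loss_fn(out, y)

        c_opt.zero_grad(); s_opt.zero_grad()
        loss.backward()                              # gradients flow back through the cut layer
        s_opt.step(); c_opt.step()

    avg_state = fed_avg(clients)                     # aggregate client-side models
    for c in clients:
        c.load_state_dict(avg_state)
```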
