Secure Aggregation of Semi-Honest Clients and Servers in Federated Learning with Secret-Shared Homomorphism

12/21/2022
by Dongfang Zhao, et al.

Privacy-preserving distributed machine learning has been recognized as one of the most promising paradigms for collaboration among multiple or many clients who cannot disclose their local data, neither the training data set nor the model parameters. One popular approach to preserving the confidentiality of local data is homomorphic encryption (HE): the clients send encrypted model parameters to the server, which aggregates them into a global model and broadcasts it back to the clients. Evidently, the security of this HE-based approach rests on the assumption that the clients are honest and cannot obtain the encrypted models of others (otherwise those models could be decrypted immediately, because all clients share the same private key); this assumption does hold on the network channel, since the encrypted models are transmitted over protocols such as Transport Layer Security (TLS). However, it is often overlooked that the leakage of encrypted models does not necessarily happen on the network channel: if, for example, a semi-honest client colludes with a semi-honest server (the former holding the private key and the latter holding the ciphertexts), possibly offline in another session, they can readily recover the model parameters in question. This paper presents a new protocol for secure aggregation under the milder and, in our opinion, more practical assumption that all parties are semi-honest. As a starting point, we construct a new security primitive, called asymmetric threshold secret sharing (ATSS), which works with homomorphic encryption to constitute an aggregation protocol achieving the desired security. We demonstrate that the ATSS-based protocol is provably secure and delivers reasonable performance in practice.
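
To make the baseline protocol concrete, the following is a minimal sketch of HE-based aggregation using the python-paillier library (phe), which provides additive homomorphism. It illustrates only the generic scheme the abstract describes, not the paper's ATSS-based construction; the parameter values and client count are invented for the example.

from phe import paillier

NUM_CLIENTS = 3

# A single key pair shared by every client -- the root of the collusion
# problem: any one client can decrypt any ciphertext the server holds.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its local model parameter (one scalar for brevity;
# a real model would be a vector of such values).
local_params = [0.12, -0.05, 0.31]
ciphertexts = [public_key.encrypt(p) for p in local_params]

# The server aggregates the ciphertexts homomorphically, never decrypting.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

# The aggregate is broadcast; each client decrypts and averages it.
global_param = private_key.decrypt(encrypted_sum) / NUM_CLIENTS
print(global_param)  # approximately (0.12 - 0.05 + 0.31) / 3

The vulnerability the abstract highlights is visible in this sketch: a semi-honest server holding the ciphertexts and a semi-honest client holding private_key can jointly decrypt any individual update. The paper's remedy, per the abstract, is to replace the single shared key with the ATSS primitive, so that no lone client-server pair possesses enough key material to decrypt.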
