Secure Aggregation of Semi-Honest Clients and Servers in Federated Learning with Secret-Shared Homomorphism

12/21/2022
by Dongfang Zhao, et al.

Privacy-preserving distributed machine learning has been recognized as one of the most promising paradigms for collaboration among many clients that cannot disclose their local data – neither the training data set nor the model parameters. One popular approach to preserving the confidentiality of local data is homomorphic encryption (HE): the clients send encrypted model parameters to the server, which aggregates them into a global model and broadcasts it back to the clients. Evidently, the security of this HE-based approach rests on the assumption that the clients are honest and cannot obtain the encrypted models of others (otherwise the models could be decrypted immediately, because all clients share the same private key); this assumption does hold on the wire, since the encrypted models are transmitted over protocols such as Transport Layer Security (TLS). However, it is often overlooked that the leakage of encrypted models need not happen on the network channel: for example, if a semi-honest client colludes with a semi-honest server (the former holding the private key and the latter holding the ciphertext), possibly offline in another session, they can surely recover the model parameters in question. This paper presents a new protocol for secure aggregation under the milder – and, in our opinion, more practical – assumption that all parties are semi-honest. As a starting point, we construct a new security primitive, called asymmetric threshold secret sharing (ATSS), which works with homomorphic encryption to constitute an aggregation protocol that achieves the desired security. We demonstrate that the ATSS-based protocol is provably secure and delivers reasonable performance in practice.
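The abstract does not spell out the ATSS construction, but the HE-based aggregation pattern it critiques can be sketched with a toy additively homomorphic (Paillier-style) scheme: each client encrypts its local parameter, and the server combines ciphertexts without ever decrypting. The primes below are insecure toy values chosen only for illustration, and the variable names are ours, not the paper's.

```python
import random
from math import gcd

# Toy Paillier parameters -- far too small to be secure, illustration only.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                    # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1) # lcm(p-1, q-1)
mu = pow(lam, -1, n)                         # valid because g = n + 1

def encrypt(m):
    """Paillier encryption: c = g^m * r^n mod n^2."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Each client encrypts its local "model parameter"; the server multiplies
# the ciphertexts, which corresponds to adding the plaintexts, so the
# server learns only the aggregate after decryption by a key holder.
client_updates = [17, 25, 8]
aggregate_ct = 1
for m in client_updates:
    aggregate_ct = aggregate_ct * encrypt(m) % n2

assert decrypt(aggregate_ct) == sum(client_updates)
```

Note that this sketch exhibits exactly the weakness the paper targets: any party holding the private key (here, every client) can decrypt any individual ciphertext it obtains, so a key-holding client colluding with the server recovers another client's update.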


Related research

- Distributed Additive Encryption and Quantization for Privacy Preserving Federated Deep Learning (11/25/2020): Homomorphic encryption is a very useful gradient protection technique us...
- Lightweight Techniques for Private Heavy Hitters (12/29/2020): This paper presents a new protocol for solving the private heavy-hitters...
- FALCON: A Fourier Transform Based Approach for Fast and Secure Convolutional Neural Network Predictions (11/20/2018): Machine learning as a service has been widely deployed to utilize deep n...
- STAR: Distributed Secret Sharing for Private Threshold Aggregation Reporting (09/21/2021): In practice and research, threshold aggregation systems – that attempt...
- Secure PAC Bayesian Regression via Real Shamir Secret Sharing (09/23/2021): Common approach of machine learning is to generate a model by using huge...
- Secure Aggregation in Federated Learning is not Private: Leaking User Data at Large Scale through Model Modification (03/21/2023): Security and privacy are important concerns in machine learning. End use...
- Secure Computation of the kth-Ranked Element in a Star Network (09/18/2019): We consider the problem of securely computing the kth-ranked element in ...
