Practical Secure Aggregation for Federated Learning on User-Held Data

by Keith Bonawitz, et al.

Secure Aggregation protocols allow a collection of mutually distrustful parties, each holding a private value, to collaboratively compute the sum of those values without revealing the values themselves. We consider training a deep neural network in the Federated Learning model, using distributed stochastic gradient descent across user-held training data on mobile devices, wherein Secure Aggregation protects each user's model gradient. We design a novel, communication-efficient Secure Aggregation protocol for high-dimensional data that tolerates up to 1/3 of users failing to complete the protocol. For 16-bit input values, our protocol offers 1.73x communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98x expansion for 2^14 users and 2^24-dimensional vectors.





1 Introduction

Secure Aggregation is a class of Secure Multi-Party Computation algorithms wherein a group of mutually distrustful parties each hold a private value and collaborate to compute an aggregate value, such as the sum, without revealing to one another any information about their private values except what is learnable from the aggregate value itself. In this work, we consider training a deep neural network in the Federated Learning model, using distributed gradient descent across user-held training data on mobile devices, using Secure Aggregation to protect the privacy of each user's model gradient. We identify a combination of efficiency and robustness requirements which, to the best of our knowledge, are unmet by existing algorithms in the literature. We proceed to design a novel, communication-efficient Secure Aggregation protocol for high-dimensional data that tolerates up to 1/3 of users failing to complete the protocol. For 16-bit input values, our protocol offers 1.73x communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98x expansion for 2^14 users and 2^24-dimensional vectors.

2 Secure Aggregation for Federated Learning

Consider training a deep neural network to predict the next word that a user will type as she composes a text message, in order to improve typing accuracy for a phone's on-screen keyboard (Goodman et al., 2002). A modeler may wish to train such a model on all text messages across a large population of users. However, text messages frequently contain sensitive information; users may be reluctant to upload a copy of them to the modeler's servers. Instead, we consider training such a model in a Federated Learning setting, wherein each user maintains a private database of her text messages securely on her own mobile device, and a shared global model is trained under the coordination of a central server based upon highly processed, minimally scoped, ephemeral updates from users (McMahan et al., 2016; Shokri and Shmatikov, 2015).

A neural network represents a function f(x, Θ) = y mapping an input x to an output y, where f is parameterized by a high-dimensional vector Θ. For modeling text message composition, x might encode the words entered so far and y a probability distribution over the next word. A training example is an observed pair ⟨x, y⟩ and a training set is a collection D = {⟨x_i, y_i⟩}. We define a loss on a training set L_f(D; Θ) = (1/|D|) Σ_{⟨x_i,y_i⟩∈D} ℓ_f(x_i, y_i; Θ), where ℓ_f(x, y; Θ) = ℓ(f(x; Θ), y) for a loss function ℓ, e.g., the squared error ℓ(ŷ, y) = (ŷ − y)². Training consists of finding parameters Θ that achieve small L_f(D; Θ), typically using a variant of minibatch stochastic gradient descent (Chen et al., 2016; Goodfellow et al., 2016).

In the Federated Learning setting, each user u holds a private set D_u of training examples, with D = ∪_u D_u. To run stochastic gradient descent, for each update we select data from a random subset U' of users and form a (virtual) minibatch D' = ∪_{u∈U'} D_u (in practice, |U'| may be far smaller than the total number of users, and we might consider only a subset of each user's local dataset). The minibatch loss gradient ∇L_f(D'; Θ) can be rewritten as a weighted average across users: ∇L_f(D'; Θ) = (1/|D'|) Σ_{u∈U'} δ_u, where δ_u = |D_u| · ∇L_f(D_u; Θ). A user can thus share just ⟨δ_u, |D_u|⟩ with the server, from which a gradient descent step may be taken.
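As a concrete sketch of the update rule above, the server recombines the per-user quantities ⟨δ_u, |D_u|⟩ into the minibatch gradient; the function and variable names below are illustrative, not from the paper.

```python
from typing import Dict, List

def aggregate_gradient(deltas: Dict[str, List[float]],
                       counts: Dict[str, int]) -> List[float]:
    """Recombine per-user sums delta_u = |D_u| * grad(D_u) into the
    minibatch gradient grad(D') = (sum_u delta_u) / (sum_u |D_u|)."""
    total_examples = sum(counts.values())
    dim = len(next(iter(deltas.values())))
    acc = [0.0] * dim
    for user, delta in deltas.items():
        for i in range(dim):
            acc[i] += delta[i]
    return [a / total_examples for a in acc]
```

Note that the server needs only the sums Σ_u δ_u and Σ_u |D_u|, which is exactly the shape of computation Secure Aggregation can provide.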

Although each update δ_u is ephemeral and contains less information than the raw D_u, a user might still wonder what information remains. There is evidence that a trained neural network's parameters sometimes allow reconstruction of training examples (Fredrikson et al., 2015; Shokri and Shmatikov, 2015; Abadi et al., 2016); might the parameter updates be subject to similar attacks? For example, if the input x is a one-hot vocabulary-length vector encoding the most recently typed word, common neural network architectures will contain at least one parameter θ_w in Θ for each word w such that the gradient with respect to θ_w is non-zero only when x encodes w. Thus, the set of recently typed words in D_u would be revealed by inspecting the non-zero entries of δ_u. The server does not need to inspect any individual user's update, however; it requires only the sums Σ_{u∈U'} δ_u and Σ_{u∈U'} |D_u|. Using a Secure Aggregation protocol to compute these sums would ensure that the server learns only that one or more users in U' wrote the word w, but not which users.
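The one-hot leakage argument can be made concrete with a toy linear layer: for y = W·x with a one-hot x, the gradient of the loss with respect to W is non-zero only in the column corresponding to the typed word. This is our illustration, not code from the paper.

```python
from typing import List

def embedding_grad(one_hot: List[int], upstream: List[float]) -> List[List[float]]:
    """Gradient of a loss w.r.t. W for y = W @ x: dL/dW is the outer product
    of the upstream gradient dL/dy with x. With a one-hot x, only one
    column of the gradient matrix is non-zero."""
    return [[g * xi for xi in one_hot] for g in upstream]

grad = embedding_grad([0, 1, 0], [0.5, -0.3])
# Only column 1 is non-zero, revealing which word the user typed.
nonzero_columns = {j for row in grad for j, v in enumerate(row) if v != 0.0}
```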

Federated Learning systems face several practical challenges. Mobile devices have only sporadic access to power and network connectivity, so the set U' participating in each update step is unpredictable and the system must be robust to users dropping out. Because Θ may contain millions of parameters, updates δ_u may be large, representing a direct cost to users on metered network plans. Mobile devices also generally cannot establish direct communication channels with other mobile devices (relying on a server or service provider to mediate such communication), nor can they natively authenticate other mobile devices. Thus, Federated Learning motivates a need for a Secure Aggregation protocol that: (1) operates on high-dimensional vectors; (2) is communication-efficient, even with a novel set of users on each instantiation; (3) is robust to users dropping out; and (4) provides the strongest possible security under the constraints of a server-mediated, unauthenticated network model.

3 A Practical Secure Aggregation Protocol

In our protocol, there are two kinds of parties: a single server S and a collection of n users U. Each user u holds a private vector x_u of dimension m. We assume that all elements of both x_u and Σ_u x_u are integers in the range [0, R) for some known R.[1] Correctness requires that if all parties are honest, S learns x̄ = Σ_{u∈U'} x_u for some subset of users U' ⊆ U of size at least some threshold t. Security requires that (1) S learns nothing other than what is inferable from x̄, and (2) each user learns nothing. We consider three different threat models. In all of them, all users follow the protocol honestly, but the server may attempt to learn extra information in different ways[2]:

[1] Federated Learning updates can be mapped to [0, R) through a combination of clipping/scaling, linear transform, and (stochastic) quantization.

[2] We do not analyze security against arbitrarily malicious servers and users that may collude. We defer this case and a more formal security analysis to the full version.


T1. The server is honest-but-curious: it follows the protocol honestly, but tries to learn as much as possible from the messages it receives from users.

T2. The server can lie to users about which other users have dropped out, including reporting dropouts inconsistently among different users.

T3. The server can lie about who dropped out (as in T2) and can also access the private memory of some limited number of users (who themselves follow the protocol honestly). In this model, the privacy requirement applies only to the inputs of the remaining users.
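Footnote 1 notes that real-valued updates can be mapped to integers in [0, R) via clipping/scaling and stochastic quantization. The sketch below shows one such mapping; the clip bound and rounding scheme are hypothetical choices of ours (the paper only names the ingredients).

```python
import random

def quantize(x: float, clip: float = 1.0, R: int = 2**16) -> int:
    """Clip x to [-clip, clip], map it affinely onto [0, R-1], then round
    stochastically so the quantized value is unbiased in expectation."""
    x = max(-clip, min(clip, x))                 # clipping
    scaled = (x + clip) / (2 * clip) * (R - 1)   # affine map to [0, R-1]
    low = int(scaled)
    # round up with probability equal to the fractional part
    return low + (1 if random.random() < scaled - low else 0)
```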

Protocol 0: Masking with One-Time Pads

We develop our protocol in a series of refinements. We begin by assuming that all parties complete the protocol and possess pairwise secure communication channels with ample bandwidth. Each pair of users first agrees on a matched pair of input perturbations. That is, user u samples a vector s_{u,v} uniformly from [0, R)^m for each other user v. Users u and v exchange s_{u,v} and s_{v,u} over their secure channel and compute perturbations p_{u,v} = s_{u,v} − s_{v,u} (mod R), noting that p_{u,v} = −p_{v,u} (mod R) and taking p_{u,u} = 0. Each user sends to the server: y_u = x_u + Σ_{v∈U} p_{u,v} (mod R). The server simply sums the perturbed values: x̄ = Σ_{u∈U} y_u (mod R). Correctness is guaranteed because the paired perturbations cancel:

x̄ = Σ_{u∈U} y_u = Σ_{u∈U} x_u + Σ_{u∈U} Σ_{v∈U} p_{u,v} = Σ_{u∈U} x_u (mod R).

Protocol 0 guarantees perfect privacy for the users: because the factors s_{u,v} that users add are uniformly sampled, the values y_u appear uniformly random to the server, subject only to the constraint that Σ_u y_u = Σ_u x_u (mod R). In fact, even if the server can access the memory of some users, privacy holds for those remaining.[3]

[3] A more complete and formal argument is deferred to the full version of this paper.
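A minimal simulation of Protocol 0 (the function names and toy sizes are ours): each ordered pair of users exchanges uniform vectors, every user submits a masked input, and the paired perturbations cancel exactly in the server's sum.

```python
import random

R, DIM = 2**16, 4  # toy modulus and vector dimension

def protocol0(inputs: dict) -> list:
    """Simulate Protocol 0: pairwise one-time-pad masking with exact cancellation."""
    users = sorted(inputs)
    # s[u][v]: the uniform vector user u samples for (and sends to) user v.
    s = {u: {v: [random.randrange(R) for _ in range(DIM)]
             for v in users if v != u} for u in users}
    masked = {}
    for u in users:
        y = list(inputs[u])
        for v in users:
            if v == u:
                continue
            for i in range(DIM):
                # perturbation p_{u,v} = s_{u,v} - s_{v,u} (mod R)
                y[i] = (y[i] + s[u][v][i] - s[v][u][i]) % R
        masked[u] = y
    # The server sums masked vectors; since p_{u,v} = -p_{v,u}, all masks cancel.
    return [sum(masked[u][i] for u in users) % R for i in range(DIM)]
```

Each y_u individually looks uniformly random mod R, yet the server recovers the exact sum.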

Protocol 1: Dropped User Recovery using Secret Sharing

Unfortunately, Protocol 0 fails several of our design criteria, including robustness: if any user fails to complete the protocol by sending her y_u to the server, the resulting sum will be masked by the perturbations p_{u,v} that would have cancelled. To achieve robustness, we first add an initial round to the protocol in which each user u generates a public/private keypair and broadcasts the public key over the pairwise channels. All future messages from u to v will be intermediated by the server but encrypted with v's public key, and signed by u, simulating a secure authenticated channel. This allows the server to maintain a consistent view of which users have successfully passed each round of the protocol. (We assume here, temporarily, that the server faithfully delivers all messages between users.)

We also add a secret-sharing round between users after the s_{u,v} values have been selected. In this round, each user computes n shares of each of her perturbations using a (t, n)-threshold scheme[4], such as Shamir's Secret Sharing (Shamir, 1979), for some threshold t. For each secret user u holds, she encrypts one share with each user v's public key, then delivers all of these shares to the server. The server gathers shares from a subset U1 of the users of size at least t (e.g. by waiting for a fixed period), then considers all other users dropped. The server delivers to each user the secret shares that were encrypted for that user; all the users in U1 now infer a consistent view of the surviving user set from the set of received shares. When a user computes y_u, she only includes those perturbations related to surviving users; that is, y_u = x_u + Σ_{v∈U1} p_{u,v} (mod R).

[4] A (t, n) secret-sharing scheme allows splitting a secret into n shares, such that any subset of t shares is sufficient to recover the secret, but given any subset of fewer than t shares the secret remains completely hidden.

After the server has received y_u from at least t users U2 ⊆ U1, it proceeds to a new unmasking round, considering all other users to be dropped. From the remaining users in U2, the server requests all shares of the secrets generated by the dropped users in U1 \ U2. As long as |U2| ≥ t, each user will respond with those shares. Once the server receives shares from at least t users, it reconstructs the perturbations of the dropped users and computes the aggregate value:

x̄ = Σ_{u∈U2} y_u − Σ_{u∈U2} Σ_{v∈U1\U2} p_{u,v} (mod R).

Correctness is guaranteed as long as at least t users complete the protocol. In this case, the sum includes the values of at least t users, and all perturbations cancel out:

x̄ = Σ_{u∈U2} (x_u + Σ_{v∈U1} p_{u,v}) − Σ_{u∈U2} Σ_{v∈U1\U2} p_{u,v} = Σ_{u∈U2} x_u (mod R).
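The (t, n)-threshold scheme used for dropout recovery can be sketched with textbook Shamir sharing over a prime field. This is our illustration with an arbitrary field choice; the paper cites Shamir (1979) but does not fix a field.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime field, chosen here for illustration

def make_shares(secret: int, t: int, n: int) -> list:
    """Split `secret` into n points on a random degree-(t-1) polynomial;
    any t shares determine the polynomial, hence the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    """Lagrange-interpolate the polynomial at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # den^(PRIME-2) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

With t = 3 and n = 5, any three shares suffice, so up to two users may drop out without losing the secret.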

However, security has been lost: if the server incorrectly omits u from U2, either inadvertently (e.g., y_u arrives slightly too late) or by malicious intent, the honest users in U2 will supply the server with all the secret shares needed to remove all the perturbations that masked x_u in y_u. This means we cannot guarantee security even against an honest-but-curious server (Threat Model T1).

Protocol 2: Double-Masking to Thwart a Malicious Server

To guarantee security, we introduce a double-masking structure that protects x_u even when the server can reconstruct u's perturbations. First, each user u samples an additional random value b_u uniformly from [0, R)^m during the same round as the generation of the s_{u,v} values. During the secret-sharing round, the user also generates and distributes shares of b_u to each of the other users. When generating y_u, users also add this secondary mask: y_u = x_u + b_u + Σ_{v∈U1} p_{u,v} (mod R). During the unmasking round, the server must make an explicit choice with respect to each user u: from each surviving member v ∈ U2, the server can request either a share of the perturbations associated with u or a share of b_u; an honest user v responds only once at least t users have been confirmed to survive, and will never reveal both kinds of shares for the same user. After gathering at least t shares of the perturbations for all u ∈ U1 \ U2 and t shares of b_u for all u ∈ U2, the server reconstructs the secrets and computes the aggregate value:

x̄ = Σ_{u∈U2} y_u − Σ_{u∈U2} b_u − Σ_{u∈U2} Σ_{v∈U1\U2} p_{u,v} (mod R).
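The double-masking computation can be simulated end to end. In this sketch (names and toy sizes are ours) secret-share reconstruction is abstracted away: the simulated server simply "recovers" b_u for survivors and the pairwise masks of dropped users, which is exactly what t shares would allow it to do.

```python
import random

R, DIM = 2**16, 3  # toy modulus and vector dimension

def protocol2(inputs: dict, dropped: set) -> list:
    """Simulate Protocol 2: y_u = x_u + b_u + sum_v p_{u,v}; the server removes
    b_u for survivors and p_{u,v} toward dropped users, never both for one user."""
    users = sorted(inputs)
    s = {u: {v: [random.randrange(R) for _ in range(DIM)]
             for v in users if v != u} for u in users}
    b = {u: [random.randrange(R) for _ in range(DIM)] for u in users}
    survivors = [u for u in users if u not in dropped]
    masked = {}
    for u in survivors:
        y = list(inputs[u])
        for i in range(DIM):
            pert = sum(s[u][v][i] - s[v][u][i] for v in users if v != u)
            y[i] = (y[i] + b[u][i] + pert) % R
        masked[u] = y
    out = []
    for i in range(DIM):
        acc = sum(masked[u][i] for u in survivors)
        acc -= sum(b[u][i] for u in survivors)          # unmask survivors' b_u
        for u in survivors:
            for v in dropped:
                acc -= s[u][v][i] - s[v][u][i]          # cancel masks toward dropouts
        out.append(acc % R)
    return out
```

The masks among survivors cancel pairwise on their own; only the b_u values and the masks pointing at dropped users require explicit reconstruction.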

We can now guarantee security in Threat Model T1 for an appropriately chosen threshold t, since each x_u always remains masked either by the p_{u,v} perturbations or by b_u. It can be shown that in Threat Models T2 and T3 the thresholds must be raised correspondingly. We defer the detailed analysis, as well as the case of arbitrarily malicious and colluding servers and users, to the full version.[5]

[5] The security argument involves bounding the number of shares the server can recover by forging dropouts.

Protocol 3: Exchanging Secrets Efficiently

While Protocol 2 is robust and secure with the right choice of t, it requires communication quadratic in the number of users (each user secret-shares n − 1 perturbation vectors of length m among all n users), which we address in this refinement of the protocol. Observe that a single secret value may be expanded to a vector of pseudorandom values by using it to seed a cryptographically secure pseudorandom generator (PRG) (Ács and Castelluccia, 2011; Golle and Juels, 2004). Thus we can generate just scalar seeds s_{u,v} and b_u and expand them to m-element vectors. Still, each user has n − 1 secrets with other users and must publish shares of all these secrets. We use key agreement to establish these secrets more efficiently. Each user u generates a Diffie–Hellman secret key s_u^SK and public key s_u^PK. Users send their public keys to the server (authenticated as per Protocol 1); the server then broadcasts all public keys to all users, retaining a copy for itself. Each pair of users u, v can now agree on a shared secret s_{u,v}. To construct perturbations, we assume a total ordering on users and take p_{u,v} = PRG(s_{u,v}) for u < v, p_{u,v} = −PRG(s_{u,v}) for u > v, and p_{u,u} = 0 (as before). The server now only needs to learn s_u^SK to reconstruct all of u's perturbations; therefore u need only distribute shares of s_u^SK and b_u during the secret-sharing round. The security of Protocol 3 can be shown to be essentially identical to that of Protocol 2 in each of the different threat models.
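A seed-expansion sketch: SHA-256 in counter mode stands in for the CSPRNG (the paper does not fix a particular PRG), and the sign convention follows the total ordering described above.

```python
import hashlib

R, DIM = 2**16, 4  # toy modulus and vector dimension

def prg(seed: int, length: int) -> list:
    """Expand a scalar seed into `length` pseudorandom values in [0, R)
    using SHA-256 in counter mode (a stand-in for a real CSPRNG)."""
    out, counter = [], 0
    while len(out) < length:
        block = hashlib.sha256(
            seed.to_bytes(16, "big") + counter.to_bytes(4, "big")).digest()
        for k in range(0, len(block) - 1, 2):
            if len(out) == length:
                break
            out.append(int.from_bytes(block[k:k + 2], "big") % R)
        counter += 1
    return out

def perturbation(u: int, v: int, shared_seed: int) -> list:
    """p_{u,v} = PRG(s_{u,v}) if u < v, -PRG(s_{u,v}) if u > v, 0 if u == v."""
    if u == v:
        return [0] * DIM
    vec = prg(shared_seed, DIM)
    return vec if u < v else [(-x) % R for x in vec]
```

Because u and v derive the same seed from key agreement, p_{u,v} + p_{v,u} ≡ 0 (mod R) holds without any vector ever being transmitted.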

Protocol 4: Minimizing Trust in Practice

Table 3: Protocol 4 cost summary — computation, communication, and storage costs for User and Server (derivations deferred to the full paper).[6]

Figure: Protocol 4 communication diagram.

[6] We reconstruct secrets from aligned (t, n) Shamir shares by caching the Lagrange coefficients.

Protocol 3 is not practically deployable for mobile devices because they lack pairwise secure communication and authentication channels. We propose to bootstrap the communication protocol by replacing the exchange of public/private keys described in Protocol 1 with a server-mediated key agreement, where each user u generates a Diffie–Hellman secret key c_u^SK and public key c_u^PK and advertises the latter together with her key-agreement public key from Protocol 3.[7] We note immediately that the server may now conduct man-in-the-middle attacks, but argue that this is tolerable for several reasons. First, it is essentially inevitable for users that lack an authentication mechanism or a pre-existing public-key infrastructure. Relying only on the non-maliciousness of the bootstrapping round also constitutes a minimization of trust: the code implementing this stage is small and could be publicly audited, outsourced to a trusted third party, or implemented via a trusted compute platform offering a remote attestation capability (Costan et al., 2015; Costan and Devadas, 2016; Suh et al., 2003). Moreover, the protocol meaningfully increases security (by protecting against anything less than an actively malicious attack by the server) and provides forward secrecy (compromising the server at any time after the key exchange provides no benefit to the attacker, even if all data and communications had been fully logged).

[7] This can be viewed as bootstrapping an SSL/TLS connection between each pair of users.

We summarize the protocol's performance in Table 3. Taking key-agreement public keys and encrypted secret shares to be 256 bits each, and users' inputs to all lie in the same range[8], each user transfers only a small constant factor more data than if she sent a raw vector (1.73x for 2^10 users and 2^20-dimensional vectors, and 1.98x for 2^14 users and 2^24-dimensional vectors, as noted in the abstract).

[8] The server's sum is computed over a correspondingly enlarged range to ensure no overflow.

4 Related work

The restricted case of secure aggregation in which all users but one have an input of 0 can be expressed as a dining cryptographers network (DC-net), which provides anonymity by pairwise blinding of inputs (Chaum, 1988; Golle and Juels, 2004), allowing users' inputs to be learned untraceably. Recent research has examined communication efficiency and operation in the presence of malicious users (Corrigan-Gibbs et al., 2013). However, if even one user aborts too early, existing protocols must restart from scratch, which can be very expensive (Kwon, 2015). Pairwise blinding in a modulo-addition-based encryption scheme has been explored, but existing schemes are neither efficient for vectors nor robust to even a single failure (Ács and Castelluccia, 2011; Goryczka and Xiong, 2015). Other schemes (e.g., those based on the Paillier cryptosystem (Rastogi and Nath, 2010)) are very computationally expensive.


  • Abadi et al. [2016] Martín Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. arXiv preprint arXiv:1607.00133, 2016.
  • Ács and Castelluccia [2011] Gergely Ács and Claude Castelluccia. I have a DREAM! (DiffeRentially privatE smArt Metering). In International Workshop on Information Hiding, pages 118–132. Springer, 2011.
  • Chaum [1988] David Chaum. The dining cryptographers problem: unconditional sender and recipient untraceability. Journal of Cryptology, 1(1):65–75, 1988.
  • Chen et al. [2016] Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed synchronous SGD. In ICLR Workshop Track, 2016.
  • Corrigan-Gibbs et al. [2013] Henry Corrigan-Gibbs, David Isaac Wolinsky, and Bryan Ford. Proactively accountable anonymous messaging in verdict. In Proceedings of the 22nd USENIX Conference on Security, pages 147–162. USENIX Association, 2013.
  • Costan and Devadas [2016] Victor Costan and Srinivas Devadas. Intel SGX explained. Cryptology ePrint Archive, Report 2016/086, 2016.
  • Costan et al. [2015] Victor Costan, Ilia Lebedev, and Srinivas Devadas. Sanctum: Minimal hardware extensions for strong software isolation. Cryptology ePrint Archive, Report 2015/564, 2015. http://eprint.iacr.org.
  • Fredrikson et al. [2015] Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 1322–1333. ACM, 2015.
  • Golle and Juels [2004] Philippe Golle and Ari Juels. Dining cryptographers revisited. In International Conference on the Theory and Applications of Cryptographic Techniques, pages 456–473. Springer, 2004.
  • Goodfellow et al. [2016] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016.
  • Goodman et al. [2002] Joshua Goodman, Gina Venolia, Keith Steury, and Chauncey Parker. Language modeling for soft keyboards. In Proceedings of the 7th international conference on Intelligent user interfaces, pages 194–195. ACM, 2002.
  • Goryczka and Xiong [2015] Slawomir Goryczka and Li Xiong. A comprehensive comparison of multiparty secure additions with differential privacy. 2015.
  • Kwon [2015] Young Hyun Kwon. Riffle: An efficient communication system with strong anonymity. PhD thesis, Massachusetts Institute of Technology, 2015.
  • McMahan et al. [2016] H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas. Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629, 2016.
  • Rastogi and Nath [2010] Vibhor Rastogi and Suman Nath. Differentially private aggregation of distributed time-series with transformation and encryption. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of data, pages 735–746. ACM, 2010.
  • Shamir [1979] Adi Shamir. How to share a secret. Communications of the ACM, 22(11):612–613, 1979.
  • Shokri and Shmatikov [2015] Reza Shokri and Vitaly Shmatikov. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 1310–1321. ACM, 2015.
  • Suh et al. [2003] G Edward Suh, Dwaine Clarke, Blaise Gassend, Marten Van Dijk, and Srinivas Devadas. Aegis: architecture for tamper-evident and tamper-resistant processing. In Proceedings of the 17th annual international conference on Supercomputing, pages 160–171. ACM, 2003.