Byzantine-Robust and Privacy-Preserving Framework for FedML

05/05/2021
by Hanieh Hashemi, et al.

Federated learning has emerged as a popular paradigm for collaboratively training a model from data distributed among a set of clients. This learning setting presents, among others, two unique challenges: how to protect the privacy of the clients' data during training, and how to ensure the integrity of the trained model. We propose a two-pronged solution that addresses both challenges under a single framework. First, we propose to create secure enclaves using a trusted execution environment (TEE) within the server. Each client can then encrypt their gradients and send them to a verifiable enclave. The gradients are decrypted within the enclave without fear of privacy breaches. However, performing robustness checks inside a TEE is computationally prohibitive. Hence, in the second step, we introduce a novel gradient encoding that allows the TEE to encode the gradients and offload the Byzantine check computations to accelerators such as GPUs. Our proposed approach provides theoretical bounds on information leakage and offers a significant speed-up over the baseline in our empirical evaluation.
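The workflow described above (clients encrypt gradients, the enclave decrypts and encodes them, and the robustness check is offloaded to an untrusted accelerator) can be illustrated with a minimal sketch. The stand-ins below are assumptions for illustration only, not the paper's scheme or API: a per-client one-time pad plays the role of client-to-enclave encryption, a random projection plays the role of the gradient encoding, and a simple norm-based outlier test plays the role of the offloaded Byzantine check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins (hypothetical, not the paper's cryptographic scheme).
def encrypt(grad, pad):
    return grad + pad                     # client side: pad known only to the enclave

def decrypt(blob, pad):
    return blob - pad                     # inside the TEE: recover plaintext gradient

def gpu_byzantine_check(encoded):
    """Stand-in for the check offloaded to the untrusted GPU:
    flag rows whose norm is far from the median norm."""
    norms = np.linalg.norm(encoded, axis=1)
    return {i for i, n in enumerate(norms) if n > 3 * np.median(norms)}

# --- One training round, following the workflow in the abstract ---
num_clients, model_dim, code_dim = 8, 64, 32
pads = rng.normal(size=(num_clients, model_dim))     # keys shared with the enclave
grads = rng.normal(size=(num_clients, model_dim))
grads[0] *= 50                                       # one Byzantine client

# Clients encrypt their gradients before sending them to the server.
uploads = [encrypt(g, p) for g, p in zip(grads, pads)]

# Inside the enclave: decrypt, then encode with a random projection so the
# heavy check can run on the GPU without exposing raw per-client gradients.
decrypted = np.stack([decrypt(u, p) for u, p in zip(uploads, pads)])
A = rng.normal(size=(code_dim, model_dim)) / np.sqrt(code_dim)
encoded = decrypted @ A.T                            # distances roughly preserved

suspects = gpu_byzantine_check(encoded)              # offloaded computation
kept = np.delete(decrypted, list(suspects), axis=0)
global_update = kept.mean(axis=0)                    # aggregate the surviving updates
print("flagged clients:", suspects)
```

In this sketch the accelerator only ever sees the encoded gradients, which is the point of the second step: the expensive pairwise comparisons run outside the enclave while the raw per-client updates stay inside trusted memory.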

