Byzantine-Robust and Privacy-Preserving Framework for FedML

05/05/2021
by Hanieh Hashemi, et al.

Federated learning has emerged as a popular paradigm for collaboratively training a model from data distributed among a set of clients. This setting presents, among others, two unique challenges: how to protect the privacy of the clients' data during training, and how to ensure the integrity of the trained model. We propose a two-pronged solution that addresses both challenges under a single framework. First, we create secure enclaves using a trusted execution environment (TEE) within the server. Each client encrypts its gradients and sends them to verifiable enclaves, where the gradients are decrypted without fear of privacy breaches. However, performing robustness checks inside a TEE is computationally prohibitive. Hence, in the second step, we introduce a novel gradient encoding that enables the TEE to encode the gradients and offload the Byzantine check computations to accelerators such as GPUs. Our approach provides theoretical bounds on information leakage and offers a significant speed-up over the baseline in empirical evaluation.
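The abstract does not spell out which Byzantine check is offloaded to the GPU, but a common distance-based robust aggregation rule (a Krum-style filter, used here purely as an illustrative stand-in, not the paper's actual encoding scheme) can sketch the kind of computation involved. The function name `byzantine_filter` and the parameter `n_byzantine` are assumptions for this sketch:

```python
import numpy as np

def byzantine_filter(gradients, n_byzantine):
    """Illustrative Krum-style Byzantine filter (not the paper's method).

    Each gradient is scored by the sum of squared distances to its
    n - n_byzantine - 2 nearest neighbors; the gradients with the
    lowest scores are kept and averaged.
    """
    n = len(gradients)
    k = n - n_byzantine - 2          # neighbors counted per score
    G = np.stack(gradients)          # shape (n, d)
    # Pairwise squared Euclidean distances between all gradients.
    d2 = ((G[:, None, :] - G[None, :, :]) ** 2).sum(axis=-1)
    scores = []
    for i in range(n):
        others = np.delete(d2[i], i)           # drop self-distance
        scores.append(np.sort(others)[:k].sum())
    keep = np.argsort(scores)[: n - n_byzantine]
    return G[keep].mean(axis=0)
```

The pairwise-distance matrix is the expensive part for high-dimensional gradients, which is why this kind of check benefits from GPU acceleration rather than running inside a TEE.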

Related research

- Gradient Leakage Defense with Key-Lock Module for Federated Learning (05/06/2023): Federated Learning (FL) is a widely adopted privacy-preserving machine l...
- FedVS: Straggler-Resilient and Privacy-Preserving Vertical Federated Learning for Split Models (04/26/2023): In a vertical federated learning (VFL) system consisting of a central se...
- Layer-wise Characterization of Latent Information Leakage in Federated Learning (10/17/2020): Training a deep neural network (DNN) via federated learning allows parti...
- Fair Marketplace for Secure Outsourced Computations (08/29/2018): The cloud computing paradigm offers clients ubiquitous and on demand acc...
- BOBA: Byzantine-Robust Federated Learning with Label Skewness (08/27/2022): In federated learning, most existing techniques for robust aggregation a...
- TOFU: Towards Obfuscated Federated Updates by Encoding Weight Updates into Gradients from Proxy Data (01/21/2022): Advances in Federated Learning and an abundance of user data have enable...
- DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware (06/30/2022): Privacy and security-related concerns are growing as machine learning re...
