Byzantine-Resistant Secure Aggregation for Federated Learning Based on Coded Computing and Vector Commitment

02/20/2023
by Tayyebeh Jahani-Nezhad, et al.

In this paper, we propose an efficient secure aggregation scheme for federated learning that is protected against Byzantine attacks and privacy leakage. Processing individual updates to manage adversarial behavior, while preserving the privacy of data against colluding nodes, requires some form of secure secret sharing. However, the communication load of secret sharing long update vectors can be prohibitively high. To resolve this issue, in the proposed scheme, local updates are partitioned into smaller sub-vectors and shared using ramp secret sharing. However, this sharing method does not admit bilinear computations, such as the pairwise distance calculations needed by outlier-detection algorithms. To overcome this limitation, each user runs another round of ramp sharing, with a different embedding of the data in the sharing polynomial. This technique, motivated by ideas from coded computing, enables secure computation of pairwise distances. In addition, to maintain the integrity and privacy of the local updates, the proposed scheme uses a vector commitment method in which the commitment size remains constant (i.e., it does not grow with the length of the local update) while still allowing verification of the secret-sharing process.
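To make the two core ideas concrete, the following is a minimal, self-contained sketch of (i) ramp secret sharing of one sub-vector of an update and (ii) the second sharing round with a different embedding, which lets nodes compute an inner product, and hence pairwise distances, directly on their shares. The field size, the evaluation points, and the reversed-coefficient embedding are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch only: field size, evaluation points, and the
# reversed-coefficient embedding are assumptions, not the paper's
# exact construction. Real-valued updates would first be quantized
# into field elements.
import random

P = 2**31 - 1  # prime field modulus (assumed for this sketch)

def eval_poly(coeffs, x):
    """Evaluate a polynomial (low-order coefficients first) at x over GF(P)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def ramp_share_ascending(u, t):
    """f(x) = u_0 + u_1 x + ... + u_{k-1} x^{k-1}, plus t random masking
    coefficients on x^k .. x^{k+t-1}; any t shares reveal nothing about u."""
    return list(u) + [random.randrange(P) for _ in range(t)]

def ramp_share_descending(v, t):
    """Second round: g(x) embeds v in reverse order, so the x^{k-1}
    coefficient of f(x) * g(x) equals the inner product <u, v>."""
    return list(reversed(v)) + [random.randrange(P) for _ in range(t)]

def interpolate(xs, ys):
    """Lagrange interpolation over GF(P); returns coefficients, low first."""
    coeffs = [0] * len(xs)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis, denom = [1], 1
        for j, xj in enumerate(xs):
            if j == i:
                continue
            # multiply the basis polynomial by (x - xj)
            nxt = [0] * (len(basis) + 1)
            for m, c in enumerate(basis):
                nxt[m + 1] = (nxt[m + 1] + c) % P
                nxt[m] = (nxt[m] - xj * c) % P
            basis = nxt
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for m, c in enumerate(basis):
            coeffs[m] = (coeffs[m] + scale * c) % P
    return coeffs

k, t = 4, 2                      # sub-vector length, collusion threshold
u = [random.randrange(100) for _ in range(k)]   # one sub-vector of user A
v = [random.randrange(100) for _ in range(k)]   # one sub-vector of user B
n = 2 * (k + t) - 1              # nodes needed: f*g has degree 2(k+t-1)
alphas = list(range(1, n + 1))   # distinct evaluation points, one per node

f = ramp_share_ascending(u, t)
g = ramp_share_descending(v, t)
# Each node multiplies its two received shares locally (a bilinear
# operation on shares); the masks only pollute coefficients of degree >= k.
products = [eval_poly(f, a) * eval_poly(g, a) % P for a in alphas]
h = interpolate(alphas, products)
assert h[k - 1] == sum(a * b for a, b in zip(u, v)) % P  # recovered <u, v>
```

A pairwise squared distance then follows from three such inner products, since ||u - v||^2 = <u, u> - 2<u, v> + <v, v>, so an outlier-detection step can in principle run entirely on shares without exposing any individual update.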
