β-Stochastic Sign SGD: A Byzantine Resilient and Differentially Private Gradient Compressor for Federated Learning

10/03/2022
by Ming Xiang, et al.

Federated Learning (FL) is a nascent privacy-preserving learning framework in which participating clients keep their local data on-device throughout model training. Scarce communication resources and data heterogeneity are two defining characteristics of FL. Moreover, an FL system is often deployed in a harsh environment, leaving the clients vulnerable to Byzantine attacks. To the best of our knowledge, no existing gradient compressor simultaneously achieves quantitative Byzantine resilience and privacy preservation. In this paper, we fill this gap by revisiting stochastic sign SGD <cit.>. We propose β-stochastic sign SGD, whose gradient compressor encodes a client's gradient information in sign bits subject to the privacy budget β>0. We show that as long as β>0, β-stochastic sign SGD converges in the presence of partial client participation and mobile Byzantine faults, and hence achieves quantifiable Byzantine resilience and differential privacy simultaneously. In sharp contrast, when β=0, the compressor is not differentially private. Notably, in the special case where each of the stochastic gradients involved is bounded with known bounds, our gradient compressor with β=0 coincides with the compressor proposed in <cit.>. As a byproduct, we show that when the clients report sign messages, the popular information aggregation rules simple mean, trimmed mean, median, and majority vote produce identical output signs. Our theory is corroborated by experiments on the MNIST and CIFAR-10 datasets.
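To make the two ingredients of the abstract concrete, here is a minimal Python sketch of (i) a one-bit stochastic sign compressor in this family and (ii) the observation that, for ±1 client messages, mean, trimmed mean, median, and majority vote agree in sign. The specific acceptance probability (b + x)/(2b) with b = max|x| + β is an assumption chosen only so that β = 0 with a known gradient bound recovers the sto-sign style quantizer of <cit.>; the paper's exact compressor and its differential-privacy analysis may differ, and all function names below are illustrative.

```python
import numpy as np


def beta_stochastic_sign(grad, beta, rng=None):
    """Hypothetical one-bit stochastic sign compressor (illustration only).

    Each coordinate x is sent as +1 with probability (b + x) / (2b) and as
    -1 otherwise, where b = max|x| + beta.  With beta = 0 and a known bound
    b on |x|, this form reduces to a sto-sign style quantizer; the actual
    compressor in the paper may use a different probability.
    Assumes b > 0 (i.e. beta > 0 or a nonzero gradient).
    """
    rng = np.random.default_rng() if rng is None else rng
    b = np.max(np.abs(grad)) + beta
    p_plus = (b + grad) / (2.0 * b)          # probability of emitting +1
    return np.where(rng.random(grad.shape) < p_plus, 1.0, -1.0)


def aggregate_signs(sign_matrix):
    """For ±1 messages, mean, trimmed mean, median, and majority vote
    yield the same coordinate-wise output sign."""
    majority = np.sign(np.sum(sign_matrix, axis=0))        # majority vote
    mean_sign = np.sign(np.mean(sign_matrix, axis=0))      # sign of simple mean
    median_sign = np.sign(np.median(sign_matrix, axis=0))  # sign of median
    sorted_vals = np.sort(sign_matrix, axis=0)
    trimmed_sign = np.sign(np.mean(sorted_vals[1:-1], axis=0))  # trim one per end
    assert np.array_equal(majority, mean_sign)
    assert np.array_equal(majority, median_sign)
    assert np.array_equal(majority, trimmed_sign)
    return majority


# Toy usage: five clients compress the same stochastic gradient.
rng = np.random.default_rng(0)
g = rng.normal(size=10)
msgs = np.stack([beta_stochastic_sign(g, beta=1.0, rng=rng) for _ in range(5)])
print(aggregate_signs(msgs))
```

In this sketch, larger β flattens the probability of each sign toward 1/2, which is the intuition behind trading accuracy of the transmitted sign for privacy; the aggregation check simply verifies on the toy data that the four rules agree coordinate-wise, as stated in the abstract.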

