Private and Communication-Efficient Edge Learning: A Sparse Differential Gaussian-Masking Distributed SGD Approach

01/12/2020
by Xin Zhang, et al.

With the rise of machine learning (ML) and the proliferation of smart mobile devices, recent years have witnessed a surge of interest in performing ML in wireless edge networks. In this paper, we consider the problem of jointly improving data privacy and communication efficiency in distributed edge learning, both of which are critical performance metrics in wireless edge computing. Toward this end, we propose a new decentralized stochastic gradient method with sparse differential Gaussian-masked stochastic gradients (SDM-DSGD) for non-convex distributed edge learning. Our main contributions are three-fold: i) we theoretically establish privacy and communication-efficiency guarantees for SDM-DSGD that outperform all existing work; ii) we show that SDM-DSGD improves the fundamental training-privacy trade-off by two orders of magnitude over the state of the art; and iii) we reveal theoretical insights and offer practical design guidelines on the interaction between privacy preservation and communication efficiency, two conflicting performance goals. We conduct extensive experiments with a variety of learning models on the MNIST and CIFAR-10 datasets to verify our theoretical findings. Collectively, our results contribute to the theory and algorithm design of distributed edge learning.
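Although the full construction is in the paper itself, the abstract already names the quantity each node communicates: a sparse differential of its local stochastic gradient with a Gaussian mask added for differential privacy. The Python sketch below illustrates that idea in a minimal, single-node form; the top-k sparsifier, the noise scale sigma, the reference-vector bookkeeping, and all function names are assumptions made for illustration and are not the paper's exact algorithm.

import numpy as np


def top_k_sparsify(v, k):
    """Keep only the k largest-magnitude entries of v (assumed sparsifier)."""
    out = np.zeros_like(v)
    if k > 0:
        idx = np.argpartition(np.abs(v), -k)[-k:]
        out[idx] = v[idx]
    return out


def sdm_message(grad, reference, k, sigma, rng):
    """Build one node's transmission for a single round (illustrative sketch).

    grad      : current local stochastic gradient
    reference : running value already conveyed to neighbors (differential encoding)
    k         : number of coordinates kept by the sparsifier
    sigma     : standard deviation of the Gaussian mask added for privacy
    """
    diff = grad - reference                  # encode only the change since last round
    sparse_diff = top_k_sparsify(diff, k)    # compress the communication
    support = sparse_diff != 0
    noise = np.zeros_like(grad)
    # Assumption: the Gaussian mask is added only on the transmitted coordinates
    # so the message stays sparse; the paper's exact masking may differ.
    noise[support] = rng.normal(0.0, sigma, size=int(support.sum()))
    message = sparse_diff + noise
    new_reference = reference + sparse_diff  # update the differential reference
    return message, new_reference


# Toy usage on a random 10-dimensional gradient
rng = np.random.default_rng(0)
grad = rng.normal(size=10)
msg, ref = sdm_message(grad, np.zeros(10), k=3, sigma=0.1, rng=rng)
print("transmitted message:", msg)

In a full decentralized run, each node would apply this routine to its own gradient, exchange the masked sparse messages with its neighbors over the network graph, and then perform a consensus-style model update; those steps are omitted here.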


