Sketching for First Order Method: Efficient Algorithm for Low-Bandwidth Channel and Vulnerability

10/15/2022
by Zhao Song, et al.

Sketching is one of the most fundamental tools in large-scale machine learning. It enables savings in runtime and memory by randomly compressing the original large problem into lower dimensions. In this paper, we propose a novel sketching scheme for first-order methods in the large-scale distributed learning setting, such that the communication cost between distributed agents is reduced while convergence of the algorithms is still guaranteed. Given gradient information in a high dimension d, an agent transmits the compressed information obtained by applying a sketching matrix R ∈ ℝ^{s×d} with s ≪ d, and the receiver de-compresses it via the de-sketching matrix R^⊤ to "recover" the information in the original dimension. Using such a framework, we develop algorithms for federated learning with lower communication costs. However, such random sketching does not directly protect the privacy of local data. We show that the gradient leakage problem still exists after applying the sketching technique by exhibiting a specific gradient attack. As a remedy, we prove rigorously that adding random noise to the gradient information makes the algorithm differentially private, yielding a first-order approach for federated learning tasks that is both communication-efficient and differentially private. Our sketching scheme generalizes to other learning settings and may be of independent interest in its own right.
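The sketch/de-sketch round trip described in the abstract can be illustrated with a minimal NumPy sketch. This assumes a Gaussian sketching matrix scaled by 1/√s, a common choice that makes R^⊤Rg an unbiased estimator of g; the paper's exact construction of R may differ.

```python
import numpy as np

def sketch_gradient(g, R):
    """Compress a d-dimensional gradient down to s dimensions (s << d)."""
    return R @ g  # shape (s,): this is what the agent transmits

def desketch(msg, R):
    """Approximately recover the gradient via the de-sketching matrix R^T."""
    return R.T @ msg  # shape (d,)

d, s = 10_000, 100
rng = np.random.default_rng(0)

# Gaussian sketch with entries N(0, 1/s), so that E[R^T R] = I_d and
# R^T R g is an unbiased estimate of the original gradient g.
R = rng.normal(scale=1.0 / np.sqrt(s), size=(s, d))

g = rng.normal(size=d)                # a local gradient at one agent
compressed = sketch_gradient(g, R)    # only s floats cross the channel
recovered = desketch(compressed, R)   # receiver-side reconstruction

# Alignment with the true gradient; close to 1 in expectation.
print(np.dot(recovered, g) / np.dot(g, g))
```

Communication drops from d to s floats per round, at the cost of reconstruction noise that the convergence analysis must absorb.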
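The differential-privacy remedy amounts to bounding each gradient's sensitivity and adding random noise before sketching and transmission. A hedged illustration using the standard Gaussian mechanism follows; the function name and the clip_norm/sigma parameters are ours for illustration, and the noise scale required for a given privacy budget follows from the paper's analysis, not this sketch.

```python
import numpy as np

def privatize_gradient(g, clip_norm=1.0, sigma=1.0, rng=None):
    """Clip the gradient to bound its L2 sensitivity, then add isotropic
    Gaussian noise (Gaussian mechanism) before sketching/transmission."""
    rng = rng or np.random.default_rng()
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # ||g|| <= clip_norm
    return g + rng.normal(scale=sigma * clip_norm, size=g.shape)
```

Because the sketching matrix R is a fixed linear map, applying it after this step preserves the differential-privacy guarantee of the noised gradient (post-processing), which is what makes the combined scheme both communication-efficient and private.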


