Confined Gradient Descent: Privacy-preserving Optimization for Federated Learning

04/27/2021
by Yanjun Zhang, et al.

Federated learning (FL) enables multiple participants to collaboratively train a model without aggregating the training data. Although the training data are kept within each participant and the local gradients can be securely synthesized, recent studies have shown that such privacy protection is insufficient: the global model parameters that have to be shared for optimization can leak information about the training data. In this work, we propose Confined Gradient Descent (CGD), which enhances the privacy of federated learning by eliminating the sharing of global model parameters. CGD exploits the fact that gradient descent can start from a set of discrete points and converge to another set in the neighborhood of the global minimum of the objective function. It lets the participants train independently on their local data and securely share only the sum of their local gradients. We formally demonstrate CGD's privacy enhancement over traditional FL, proving that less information is exposed in CGD. CGD also guarantees the desired model accuracy: we theoretically establish a convergence rate for CGD and prove that the loss of each participant's proprietary model, relative to a model trained on the aggregated training data, is bounded. Extensive experimental results on two real-world datasets show that the performance of CGD is comparable with that of centralized learning, with marginal differences in validation loss (mostly within 0.05) and accuracy (mostly within 1%).
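To make the protocol concrete, the following is a minimal sketch of how the CGD update described in the abstract could look, assuming a least-squares objective. The names (`secure_sum`, `local_gradient`, `confined_gradient_descent`) are illustrative, not the authors' implementation, and the secure-summation step is abstracted to a plain sum where a real deployment would use a cryptographic aggregation protocol.

```python
import numpy as np

def secure_sum(vectors):
    """Placeholder for a secure aggregation protocol. In a real deployment
    this sum would be computed cryptographically (e.g. via additive secret
    sharing), so no participant sees another's individual gradient."""
    return np.sum(vectors, axis=0)

def local_gradient(w, X, y):
    """Gradient of a least-squares loss on one participant's local data
    (an illustrative choice of objective)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def confined_gradient_descent(datasets, dim, lr=0.01, steps=1000, seed=0):
    """Each participant starts from its own random point and keeps its own
    'confined' model; only the sum of local gradients is exchanged. Because
    every participant applies the same summed update, the confined models
    descend together toward a set of points in the neighborhood of the
    global minimum, without any global parameters ever being shared."""
    rng = np.random.default_rng(seed)
    # One independent, never-shared parameter vector per participant.
    models = [rng.normal(size=dim) for _ in datasets]
    for _ in range(steps):
        grads = [local_gradient(w, X, y) for w, (X, y) in zip(models, datasets)]
        g = secure_sum(grads)  # the only value exchanged each round
        models = [w - lr * g for w in models]
    return models

if __name__ == "__main__":
    # Toy run: three participants holding random shards of one linear task.
    rng = np.random.default_rng(1)
    w_true = rng.normal(size=5)
    datasets = []
    for _ in range(3):
        X = rng.normal(size=(50, 5))
        datasets.append((X, X @ w_true))
    models = confined_gradient_descent(datasets, dim=5, lr=0.01, steps=2000)
    # Each confined model converges to a distinct point near the global
    # minimum; the offsets reflect the never-shared random starting points.
    print([float(np.linalg.norm(w - w_true)) for w in models])
```

In this sketch, the only quantity crossing the trust boundary each round is the aggregated gradient; the confined models themselves, including their random starting points, never leave the participants, which is the source of CGD's claimed privacy gain over parameter-sharing FL.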


