Tailoring Gradient Methods for Differentially-Private Distributed Optimization

02/02/2022
by   Yongqiang Wang, et al.

Decentralized optimization is gaining increased traction due to its widespread applications in large-scale machine learning and multi-agent systems. However, the same mechanism that enables its success, i.e., information sharing among participating agents, also leads to the disclosure of individual agents' private information, which is unacceptable when sensitive data are involved. As differential privacy is becoming a de facto standard for privacy preservation, results have recently emerged that integrate differential privacy with distributed optimization. Although such differential-privacy based approaches are efficient in both computation and communication, directly incorporating differential-privacy design into existing distributed optimization approaches significantly compromises optimization accuracy. In this paper, we redesign and tailor gradient methods for differentially-private distributed optimization, proposing two differential-privacy oriented gradient methods that ensure both privacy and optimality. We prove that the proposed distributed algorithms ensure almost sure convergence to an optimal solution under any persistent and variance-bounded differential-privacy noise, which, to the best of our knowledge, has not been reported before. The first algorithm builds on static-consensus based gradient methods and shares only one variable in each iteration. The second algorithm builds on dynamic-consensus (gradient-tracking) based distributed optimization methods and is, hence, applicable to general directed interaction graph topologies. Numerical comparisons with existing counterparts confirm the effectiveness of the proposed approaches.
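To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of a static-consensus distributed gradient method in which each agent shares a single noise-perturbed copy of its state per iteration. A decaying consensus weight attenuates the injected differential-privacy noise while a decaying stepsize drives optimization; all function names, weight sequences, and parameter choices here are illustrative assumptions.

```python
import numpy as np

def dp_distributed_gradient(grads, W, x0, noise_scale=0.1, iters=200, seed=0):
    """Sketch of a differentially-private consensus gradient method.

    grads: per-agent gradient callables; W: doubly stochastic mixing matrix.
    Each agent shares exactly one Laplace-perturbed variable per iteration.
    """
    rng = np.random.default_rng(seed)
    n = len(grads)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))  # one state row per agent
    for k in range(1, iters + 1):
        gamma = 1.0 / k                               # decaying consensus weight
        alpha = 1.0 / k                               # decaying stepsize
        shared = X + rng.laplace(0.0, noise_scale, X.shape)  # DP-noised shares
        G = np.array([g(X[i]) for i, g in enumerate(grads)])
        X = X + gamma * (W @ shared - X) - alpha * G
    return X.mean(axis=0)

# Toy problem: agent i holds f_i(x) = (x - i)^2, so the minimizer of
# sum_i f_i is the mean of the targets (1.5 for four agents).
grads = [lambda x, t=t: 2.0 * (x - t) for t in range(4)]
W = np.full((4, 4), 0.25)   # complete-graph uniform averaging
x_hat = dp_distributed_gradient(grads, W, x0=[0.0])
```

Because the noise multiplies the decaying weight `gamma`, its cumulative effect on the average state is square-summable, which is the intuition behind convergence under persistent, variance-bounded noise; the exact weight conditions in the paper may differ.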


research
11/14/2022

A Robust Dynamic Average Consensus Algorithm that Ensures both Differential Privacy and Accurate Convergence

We propose a new dynamic average consensus algorithm that is robust to i...
research
06/25/2023

Locally Differentially Private Distributed Online Learning with Guaranteed Optimality

Distributed online learning is gaining increased traction due to its uni...
research
03/19/2019

Differentially Private Consensus-Based Distributed Optimization

Data privacy is an important concern in learning, when datasets contain ...
research
01/31/2020

Locally Private Distributed Reinforcement Learning

We study locally differentially private algorithms for reinforcement lea...
research
06/15/2018

Customized Local Differential Privacy for Multi-Agent Distributed Optimization

Real-time data-driven optimization and control problems over networks ma...
research
07/18/2020

Tighter Generalization Bounds for Iterative Differentially Private Learning Algorithms

This paper studies the relationship between generalization and privacy p...
research
05/22/2020

Secure and Differentially Private Bayesian Learning on Distributed Data

Data integration and sharing maximally enhance the potential for novel a...
