Secure Distributed Optimization Under Gradient Attacks

10/28/2022
by Shuhua Yu, et al.

In this paper, we study secure distributed optimization against arbitrary gradient attacks in multi-agent networks. In distributed optimization there is no central server to coordinate local updates, and each agent can communicate only with its neighbors over a predefined network. We consider the scenario where, out of n networked agents, a fixed but unknown fraction ρ of the agents are under arbitrary gradient attack: their stochastic gradient oracles return arbitrary information in order to derail the optimization process. The goal is to minimize the sum of the local objective functions of the unattacked agents. We propose a distributed stochastic gradient method that combines local variance reduction and clipping (CLIP-VRG). We show that, in a connected network, when the unattacked local objective functions are convex and smooth, share a common minimizer, and have a strongly convex sum, CLIP-VRG yields almost sure convergence of the iterates at all agents to the exact minimizer of the sum cost. We quantify a tight upper bound on the fraction ρ of attacked agents, in terms of problem parameters such as the condition number of the associated sum cost, that guarantees exact convergence of CLIP-VRG, and we characterize its asymptotic convergence rate. Finally, we empirically demonstrate the effectiveness of the proposed method under gradient attacks on both a synthetic dataset and image classification datasets.
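To make the ingredients of the abstract concrete, the following is a minimal, hypothetical sketch of one synchronous round combining the three pieces the abstract names: an SVRG-style local variance-reduced gradient, norm clipping to bound the influence of an attacked gradient oracle, and consensus mixing with neighbors. The function names (`clip_vrg_step`, `clip`), the mixing-matrix formulation, and the step-size/threshold choices are illustrative assumptions, not the paper's actual algorithm or parameters.

```python
import numpy as np

def clip(v, tau):
    """Scale v down so its Euclidean norm is at most tau."""
    norm = np.linalg.norm(v)
    return v if norm <= tau else (tau / norm) * v

def clip_vrg_step(x, W, grads, full_grads, x_ref, tau, alpha):
    """One synchronous round for all n agents (illustrative sketch).

    x:          (n, d) current iterates, one row per agent
    W:          (n, n) doubly stochastic mixing matrix of the network
    grads:      grads(i, y) returns agent i's stochastic gradient at y
                (the same sample is meant to be evaluated at x[i] and
                x_ref[i] so the noise cancels, SVRG-style)
    full_grads: (n, d) full local gradients at the reference points
    x_ref:      (n, d) reference (snapshot) points
    tau:        clipping threshold; alpha: step size
    """
    x_next = np.zeros_like(x)
    for i in range(x.shape[0]):
        # SVRG-style variance-reduced gradient estimate
        g = grads(i, x[i]) - grads(i, x_ref[i]) + full_grads[i]
        # Clipping bounds how far an attacked oracle can push the update
        g = clip(g, tau)
        # Average with neighbors, then take a local gradient step
        x_next[i] = W[i] @ x - alpha * g
    return x_next
```

On a toy problem where every honest agent holds f_i(x) = ½‖x − b‖² (so the local objectives share the common minimizer b, as the convergence result assumes), iterating this step contracts all agents toward b.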


