The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication

02/02/2021
by Blake Woodworth, et al.

We resolve the min-max complexity of distributed stochastic convex optimization (up to a logarithmic factor) in the intermittent communication setting, where M machines work in parallel over R rounds of communication to optimize the objective, and during each round each machine may sequentially compute K stochastic gradient estimates. We present a novel lower bound together with a matching upper bound, establishing an optimal algorithm.
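To make the setting concrete, the following is a minimal simulation of the intermittent communication structure: M machines each take K sequential stochastic gradient steps between each of R communication rounds. It uses Local SGD on a synthetic quadratic purely as an illustration of the M, R, K structure; this is not the optimal algorithm established by the paper, and all problem parameters (dimension, step size, noise level) are hypothetical.

```python
# Minimal sketch of the intermittent communication setting:
# M machines, R communication rounds, K sequential stochastic
# gradients per machine per round. Local SGD is used only to
# illustrate the structure; it is NOT the paper's optimal algorithm.
import numpy as np

rng = np.random.default_rng(0)

d = 10          # problem dimension (hypothetical)
M = 4           # parallel machines
R = 20          # communication rounds
K = 5           # stochastic gradients per machine per round
lr = 0.05       # step size (hypothetical)
noise = 0.1     # gradient noise level (hypothetical)

x_star = rng.normal(size=d)   # minimizer of f(x) = 0.5 * ||x - x_star||^2

def stochastic_grad(x):
    """Unbiased stochastic gradient estimate of f at x."""
    return (x - x_star) + noise * rng.normal(size=d)

x = np.zeros(d)                          # shared iterate, synced each round
for r in range(R):
    local = []
    for m in range(M):                   # each machine works independently...
        x_m = x.copy()
        for k in range(K):               # ...taking K sequential SGD steps
            x_m -= lr * stochastic_grad(x_m)
        local.append(x_m)
    x = np.mean(local, axis=0)           # one communication: average iterates

print("suboptimality:", 0.5 * np.linalg.norm(x - x_star) ** 2)
```

Minibatch SGD, a standard baseline in this setting, has the same outer structure but replaces the K sequential local steps with K gradient evaluations at the shared iterate, averaged into a single step per round.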


