Lower Bounds for Parallel and Randomized Convex Optimization

11/05/2018
by Jelena Diakonikolas, et al.

We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation. We show that the answer is negative for both deterministic and randomized algorithms applied to essentially any of the interesting geometries and nonsmooth, weakly-smooth, or smooth objective functions. In particular, we show that it is not possible to obtain a polylogarithmic (in the sequential complexity of the problem) number of parallel rounds with a polynomial (in the dimension) number of queries per round. In the majority of these settings and when the dimension of the space is polynomial in the inverse target accuracy, our lower bounds match the oracle complexity of sequential convex optimization, up to at most a logarithmic factor in the dimension, which makes them (nearly) tight. Prior to our work, lower bounds for parallel convex optimization algorithms were only known in a small fraction of the settings considered in this paper, mainly applying to Euclidean (ℓ_2) and ℓ_∞ spaces. It is unclear whether the arguments used in this prior work can be extended to general ℓ_p spaces. Hence, our work provides a more general approach for proving lower bounds in the setting of parallel convex optimization. Moreover, as a consequence of our proof techniques, we obtain new anti-concentration bounds for convex combinations of Rademacher sequences that may be of independent interest.
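To make the query model concrete, below is a minimal, hedged sketch of the parallel local-oracle setting the abstract refers to: in each round the algorithm submits a batch of query points and receives first-order local information (value and subgradient) for all of them at once, and complexity is counted in rounds rather than individual queries. The objective `f(x) = max_i x_i`, the batching strategy, and all function names here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def local_oracle(x):
    """First-order local oracle for the toy nonsmooth objective
    f(x) = max_i x_i: returns the value and a subgradient.
    (Illustrative hard-instance family, not the paper's construction.)"""
    i = int(np.argmax(x))
    g = np.zeros_like(x)
    g[i] = 1.0
    return x[i], g

def parallel_subgradient(dim, rounds, queries_per_round, step=0.1, seed=0):
    """Toy parallel method: each round probes several small random
    perturbations of the current iterate (one oracle round answers
    the whole batch), then steps along the subgradient at the best
    probed point. Returns the final objective value."""
    rng = np.random.default_rng(seed)
    x = np.ones(dim)
    for _ in range(rounds):
        batch = [x + 0.01 * rng.standard_normal(dim)
                 for _ in range(queries_per_round)]
        answers = [local_oracle(y) for y in batch]  # one parallel round
        _, g = min(answers, key=lambda t: t[0])     # best probe's subgradient
        x = x - step * g
    return local_oracle(x)[0]

# Total oracle cost is rounds * queries_per_round, but round count is
# the resource the lower bounds above address: even polynomially many
# queries per round cannot reduce the rounds to polylogarithmic.
```

The point of the sketch is only the accounting: batching queries within a round is free in this model, and the paper's result says that this freedom does not buy a polylogarithmic number of rounds.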


