Information-constrained optimization: can adaptive processing of gradients help?

by Jayadev Acharya, et al.

We revisit first-order optimization under local information constraints such as local privacy, gradient quantization, and computational constraints that limit access to a few coordinates of the gradient. In this setting, the optimization algorithm is not allowed to directly access the complete output of the gradient oracle, but only receives limited information about it, subject to the local information constraints. We study the role of adaptivity in processing the gradient output to obtain this limited information from it. We consider optimization for both convex and strongly convex functions and obtain tight or nearly tight lower bounds on the convergence rate when adaptive gradient processing is allowed. Prior work was restricted to convex functions and allowed only nonadaptive processing of gradients. For both of these function classes, and for the three information constraints mentioned above, our lower bounds imply that adaptive processing of gradients cannot outperform nonadaptive processing in most regimes of interest. We complement these results by exhibiting a natural optimization problem under information constraints for which adaptive processing of gradients strictly outperforms nonadaptive processing.
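To make the setting concrete, the following is a minimal sketch of nonadaptive gradient processing under a communication constraint: projected SGD where every gradient is passed through the same fixed few-bit stochastic quantizer before the update. The quantizer and the toy objective are illustrative assumptions, not the paper's construction; "nonadaptive" here means the quantization rule does not depend on past queries or iterates.

```python
import numpy as np

np.random.seed(0)  # deterministic randomized rounding for reproducibility

def quantize(g, bits=2, clip=1.0):
    """Fixed (nonadaptive) stochastic quantizer: each coordinate is
    clipped to [-clip, clip] and randomly rounded to one of 2**bits
    levels, so the optimizer sees only a few bits per coordinate.
    Illustrative scheme only, not the paper's exact mechanism."""
    levels = 2 ** bits - 1
    g = np.clip(g, -clip, clip)
    scaled = (g + clip) / (2 * clip) * levels   # map to [0, levels]
    low = np.floor(scaled)
    # randomized rounding keeps the quantized gradient unbiased
    q = low + (np.random.rand(*g.shape) < (scaled - low))
    return q / levels * (2 * clip) - clip       # map back to [-clip, clip]

def sgd_quantized(grad_fn, x0, steps=2000, lr=0.05, bits=2):
    """SGD in which the algorithm never sees the raw gradient,
    only its quantized (information-constrained) version."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * quantize(grad_fn(x), bits=bits)
    return x

# Toy strongly convex objective f(x) = ||x - 0.3||^2 / 2, minimizer 0.3
x_star = sgd_quantized(lambda x: x - 0.3, x0=np.zeros(3))
print(x_star)  # iterates hover near [0.3, 0.3, 0.3]
```

Because the quantizer is unbiased, SGD still converges to a neighborhood of the minimizer; the quantization noise (which grows as `bits` shrinks) is exactly the kind of information loss whose cost the lower bounds in the paper quantify, and an adaptive scheme would be free to change the quantizer from round to round based on what it has seen.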




Adaptive Gradient Methods for Constrained Convex Optimization

We provide new adaptive first-order methods for constrained convex optim...

Distributed Non-Convex First-Order Optimization and Information Processing: Lower Complexity Bounds and Rate Optimal Algorithms

We consider a class of distributed non-convex optimization problems ofte...

Finite Precision Stochastic Optimization -- Accounting for the Bias

We consider first order stochastic optimization where the oracle must qu...


Making Progress Based on False Discoveries

We consider the question of adaptive data analysis within the framework ...

Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks

We consider the task of minimizing the sum of smooth and strongly convex...

Distributed Zero-Order Optimization under Adversarial Noise

We study the problem of distributed zero-order optimization for a class ...