Online Stochastic Convex Optimization: Wasserstein Distance Variation

06/02/2020
by Iman Shames, et al.

Distributionally robust optimization is typically studied for a fixed set of distributions rather than for time-varying distributions that can drift significantly over time, as happens, for instance, in finance and sociology due to the underlying expansion of the economy and the evolution of demographics. This motivates identifying conditions on probability distributions, expressed via the Wasserstein distance, that can be used to model time-varying environments. These conditions can then be combined with online stochastic optimization to adapt decisions over time. We consider an online proximal-gradient method to track the minimizers of expectations of smooth convex functions parameterised by a random variable whose probability distribution evolves continuously over time, at a rate comparable to that at which the decision maker acts. We revisit the notions of estimation and tracking error inspired by the systems and control literature and bound them under strong convexity, Lipschitz continuity of the gradient, and bounds on the distributional drift characterised by the Wasserstein distance. Further, noting that computing projections onto general feasible sets may not be amenable to online implementation due to computational constraints, we propose an exact penalty method. This allows us to relax the uniform boundedness assumption on the gradient and to establish dynamic regret bounds for the tracking and estimation errors. We also introduce a constraint-tightening approach and relate the amount of tightening to the probability of satisfying the constraints.
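To make the setting concrete, the following is a minimal sketch, not the paper's exact algorithm: a one-dimensional online proximal-gradient update tracking the minimizer of an expected quadratic loss while the underlying distribution drifts. The step size `alpha`, drift rate `drift`, noise level, and the box constraint (whose proximal operator is a projection) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def prox_box(x, lo=-1.0, hi=1.0):
    """Proximal operator of the indicator of [lo, hi], i.e. projection."""
    return np.clip(x, lo, hi)

alpha = 0.5    # step size (the quadratic loss below is 1-strongly convex, 1-smooth)
drift = 0.01   # per-step shift of the mean of P_t; here W1(P_t, P_{t+1}) = drift
mean_t = 0.0   # mean of the time-varying distribution P_t (illustrative)
x = 0.9        # decision variable

tracking_errors = []
for t in range(200):
    mean_t += drift                              # distribution drifts each step
    z = mean_t + 0.1 * rng.standard_normal()     # one sample from P_t
    grad = x - z                                 # stochastic gradient of E[(x - z)^2 / 2]
    x = prox_box(x - alpha * grad)               # online proximal-gradient update
    # distance to the constrained time-varying minimizer, clip(mean_t, -1, 1)
    tracking_errors.append(abs(x - np.clip(mean_t, -1.0, 1.0)))

print(float(np.mean(tracking_errors[-50:])))
```

With a bounded drift and bounded gradient noise, the iterate does not converge to a point but settles into a neighbourhood of the moving minimizer, which is the qualitative behaviour the tracking-error bounds in the abstract quantify.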

