Online Stochastic Convex Optimization: Wasserstein Distance Variation

06/02/2020
by Iman Shames, et al.

Distributionally-robust optimization is often studied for a fixed set of distributions rather than for time-varying distributions that can drift significantly over time, which is, for instance, the case in finance and sociology due to the underlying expansion of the economy and the evolution of demographics. This motivates understanding conditions on probability distributions, expressed via the Wasserstein distance, that can be used to model time-varying environments. These conditions can then be used in conjunction with online stochastic optimization to adapt the decisions. We consider an online proximal-gradient method to track the minimizers of expectations of smooth convex functions parameterised by a random variable whose probability distributions continuously evolve over time, at a rate similar to that at which the decision maker acts. We revisit the concepts of estimation and tracking error inspired by the systems and control literature and provide bounds for them under strong convexity, Lipschitzness of the gradient, and bounds on the probability distribution drift characterised by the Wasserstein distance. Further, noting that computing projections onto general feasible sets might not be amenable to online implementation due to computational constraints, we propose an exact penalty method. Doing so allows us to relax the uniform boundedness of the gradient and to establish dynamic regret bounds for the tracking and estimation errors. We further introduce a constraint-tightening approach and relate the amount of tightening to the probability of satisfying the constraints.
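For context, the Wasserstein distance invoked above is the standard optimal-transport metric between probability distributions. The definition below is standard; the drift inequality shown after it is only an illustrative form of the kind of condition the abstract describes, since the abstract does not state the paper's exact assumption.

```latex
% Wasserstein distance of order p between distributions mu and nu,
% with Gamma(mu, nu) the set of couplings of mu and nu:
W_p(\mu,\nu) = \left( \inf_{\gamma \in \Gamma(\mu,\nu)}
    \int \lVert x - y \rVert^p \,\mathrm{d}\gamma(x,y) \right)^{1/p}
% Illustrative drift condition on the time-varying distributions P_t
% (the paper's exact condition is not stated in this abstract):
% W_p(P_t, P_{t+1}) \le \rho \quad \text{for all } t.
```

The following is a minimal, illustrative sketch of the kind of online proximal-gradient tracking loop the abstract describes, not the paper's algorithm. The quadratic loss f(x; xi) = 0.5*||x - xi||^2, the Gaussian drift model, the box feasible set, and all parameter values are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2                    # decision dimension
eta = 0.3                # step size (here f is 1-smooth, so eta < 2 suffices)
lo, hi = -1.0, 1.0       # box constraints defining the feasible set

def grad(x, samples):
    # Minibatch estimate of the gradient of E[f(x; xi)] for f = 0.5*||x - xi||^2.
    return np.mean(x[None, :] - samples, axis=0)

def project(x):
    # Euclidean projection onto [lo, hi]^d, i.e. the prox of the box indicator.
    return np.clip(x, lo, hi)

x = np.zeros(d)
mean = np.zeros(d)       # drifting mean of the sampling distribution
for t in range(101):
    mean = mean + 0.01 * rng.standard_normal(d)        # slow distribution drift
    samples = mean + 0.1 * rng.standard_normal((8, d)) # fresh minibatch at time t
    x = project(x - eta * grad(x, samples))            # online proximal-gradient step
    if t % 25 == 0:
        # Tracking error w.r.t. the instantaneous minimiser (known here by design).
        print(t, np.linalg.norm(x - project(mean)))
```

For a box, the proximal step is a cheap coordinate-wise clip; for general feasible sets the projection can dominate the per-round budget, which is what motivates the exact-penalty variant mentioned in the abstract: the projection is replaced by a (sub)gradient step on the objective plus a sufficiently weighted penalty on constraint violation.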

Related research

Inexact Online Proximal-gradient Method for Time-varying Convex Optimization (10/04/2019)
This paper considers an online proximal-gradient method to track the min...

Online Convex Optimization with Stochastic Constraints: Zero Constraint Violation and Bandit Feedback (01/26/2023)
This paper studies online convex optimization with stochastic constraint...

Rockafellian Relaxation in Optimization under Uncertainty: Asymptotically Exact Formulations (04/10/2022)
In practice, optimization models are often prone to unavoidable inaccura...

Wasserstein Distance Measure Machines (03/01/2018)
This paper presents a distance-based discriminative framework for learni...

Stochastic optimization under time drift: iterate averaging, step decay, and high probability guarantees (08/16/2021)
We consider the problem of minimizing a convex function that is evolving...

Leveraging Predictions in Smoothed Online Convex Optimization via Gradient-based Algorithms (11/25/2020)
We consider online convex optimization with time-varying stage costs and...
