Algorithms for stochastic optimization with expectation constraints

04/13/2016
by Guanghui Lan, et al.

This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with an expectation constraint on either the decision variables or the problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with an expectation constraint on the decision variables. We show that this algorithm exhibits the optimal O(1/√N) rate of convergence, in terms of both the optimality gap and the constraint violation, when the objective and constraint functions are generally convex, where N denotes the number of iterations. Moreover, we show that this rate of convergence can be improved to O(1/N) if the objective and constraint functions are strongly convex. We then present a variant of CSA, namely the cooperative stochastic parameter approximation (CSPA) algorithm, to deal with the situation in which the expectation constraint is defined over the problem parameters, and show that it exhibits a rate of convergence similar to that of CSA. It is worth noting that CSA and CSPA are primal methods that require neither iterations in the dual space nor estimates of the size of the dual variables. To the best of our knowledge, this is the first time that such optimal SA methods for solving expectation-constrained stochastic optimization have been presented in the literature.
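To make the alternating idea behind CSA concrete, the expectation-constrained problem can be written (with notation assumed here, not taken verbatim from the paper) as min_{x ∈ X} E[F(x, ξ)] subject to E[G(x, ξ)] ≤ 0, where X is a closed convex set. The sketch below is a minimal Python illustration of a cooperative-SA-style loop on a toy instance: at each iteration it checks a noisy estimate of the constraint value and takes a projected stochastic subgradient step on either the constraint or the objective. The problem data, the feasible set, the step sizes gamma_t, the tolerances eta_t, and the averaging rule are all illustrative placeholders chosen for readability, not the precise schedules analyzed in the paper.

    # Illustrative sketch of a cooperative-SA-style loop for
    #   min_{x in X} E[F(x, xi)]  s.t.  E[G(x, xi)] <= 0.
    # Step sizes, tolerances, and the toy instance are placeholder assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    d, N = 5, 2000

    def proj_ball(x, radius=10.0):
        # Euclidean projection onto X = {x : ||x|| <= radius} (illustrative choice of X)
        nrm = np.linalg.norm(x)
        return x if nrm <= radius else x * (radius / nrm)

    # Toy data: F(x, xi) = ||x - xi||^2 with xi ~ N(c, I); G(x, xi) = a^T x - b + noise,
    # so the expectation constraint reduces to a^T x <= b.
    c = np.ones(d)
    a = rng.standard_normal(d)
    b = 0.5

    def obj_grad(x):
        xi = c + rng.standard_normal(d)
        return 2.0 * (x - xi)            # stochastic subgradient of the objective

    def cons_val_and_grad(x):
        noise = 0.1 * rng.standard_normal()
        return a @ x - b + noise, a      # noisy constraint value and its subgradient

    x = np.zeros(d)
    num, den = np.zeros(d), 0.0
    for t in range(1, N + 1):
        gamma, eta = 1.0 / np.sqrt(t), 1.0 / np.sqrt(t)   # placeholder O(1/sqrt(t)) schedules
        g_val, g_grad = cons_val_and_grad(x)
        if g_val <= eta:
            h = obj_grad(x)              # constraint estimate acceptable: step on the objective
            if t > N // 2:               # average only over later "objective" iterations
                num += gamma * x
                den += gamma
        else:
            h = g_grad                   # otherwise: step to reduce constraint violation
        x = proj_ball(x - gamma * h)

    x_bar = num / den if den > 0 else x
    print("a^T x_bar - b =", a @ x_bar - b)   # approximate constraint satisfaction

The point of the sketch is the primal nature of the scheme: no dual iterates or bounds on dual variables appear anywhere in the loop, in line with the abstract's remark about CSA and CSPA being primal methods.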

