A stochastic linearized proximal method of multipliers for convex stochastic optimization with expectation constraints

06/22/2021 ∙ by Liwei Zhang, et al.

This paper considers the problem of minimizing a convex expectation function subject to a set of convex inequality expectation constraints. We present a computable stochastic-approximation-type algorithm, namely the stochastic linearized proximal method of multipliers, to solve this convex stochastic optimization problem. The algorithm can be viewed roughly as a hybrid of stochastic approximation and the traditional proximal method of multipliers. Under mild conditions, we show that, when the algorithm's parameters are properly chosen, it attains an O(K^{-1/2}) expected convergence rate for both objective reduction and constraint violation, where K denotes the number of iterations. Moreover, with high probability, the algorithm has an O(log(K) K^{-1/2}) constraint violation bound and an O(log^{3/2}(K) K^{-1/2}) objective bound. Preliminary numerical results demonstrate the performance of the proposed algorithm.
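The problem class the abstract describes can be written as follows; this is a hedged sketch in my own notation (the symbols f, g_i, ξ, the feasible set X, and the number of constraints m are assumptions, not taken from the paper):

```latex
\min_{x \in \mathcal{X}} \; \mathbb{E}_{\xi}\bigl[ f(x,\xi) \bigr]
\quad \text{subject to} \quad
\mathbb{E}_{\xi}\bigl[ g_i(x,\xi) \bigr] \le 0, \qquad i = 1,\dots,m,
```

where X is a closed convex set and f(·, ξ) and each g_i(·, ξ) are convex in x, so that both the objective and the constraints are expectations that can only be accessed through stochastic samples of ξ.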
