Nonconvex Stochastic Nested Optimization via Stochastic ADMM

11/12/2019 ∙ by Zhongruo Wang, et al.

We consider the stochastic nested composition optimization problem, where the objective is a composition of two expected-value functions. We propose a stochastic ADMM to solve this composite objective. To find an ϵ-stationary point, i.e., a point where the expected norm of the subgradient of the corresponding augmented Lagrangian is smaller than ϵ, the total sample complexity of our method is O(ϵ^-3) in the online case and O((2N_1 + N_2) + (2N_1 + N_2)^1/2 ϵ^-2) in the finite-sum case. This computational complexity matches that of the proximal method proposed in <cit.>, but our algorithm handles the more general setting in which the proximal mapping of the penalty is not easy to compute.
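To make the setup concrete, the following is a minimal runnable sketch of a stochastic ADMM iteration on a toy nested objective F(x) = f(E_w[g_w(x)]) plus an l1 penalty handled through the split z = x. All names (`g_sample`, `soft_threshold`), the toy quadratic instance, and the linearized x-update are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Toy nested objective (assumed for illustration):
#   inner map:  g_w(x) = (A + noise_w) @ x,  so E_w[g_w(x)] = A @ x
#   outer func: f(y)   = 0.5 * ||y - b||^2
# plus a penalty lam * ||z||_1 attached via the ADMM split z = x.
A = rng.standard_normal((d, d))
b = rng.standard_normal(d)
lam, rho, lr = 0.1, 1.0, 0.05

def g_sample(x):
    # unbiased stochastic estimate of the inner expectation A @ x
    return (A + 0.01 * rng.standard_normal((d, d))) @ x

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
z = np.zeros(d)
u = np.zeros(d)  # scaled dual variable
for _ in range(2000):
    # x-update: one stochastic (linearized) gradient step on the
    # augmented Lagrangian, using a sampled inner-function value
    y_hat = g_sample(x)
    grad = A.T @ (y_hat - b) + rho * (x - z + u)
    x = x - lr * grad
    # z-update: exact prox of the l1 penalty (soft-thresholding)
    z = soft_threshold(x + u, lam / rho)
    # dual ascent on the constraint x - z = 0
    u = u + x - z

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
obj0 = 0.5 * np.linalg.norm(b) ** 2  # objective at the starting point x = 0
```

The z-update is a closed-form prox here only because the penalty is l1; the point of the ADMM splitting in the paper's setting is that the penalty's prox can instead be handled through the constraint when it is not cheap to evaluate directly.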


