Towards Gradient Free and Projection Free Stochastic Optimization

10/08/2018
by Anit Kumar Sahu, et al.

This paper focuses on the problem of constrained stochastic optimization. A zeroth-order Frank-Wolfe algorithm is proposed which, in addition to retaining the projection-free nature of the vanilla Frank-Wolfe algorithm, is also gradient free. Under convexity and smoothness assumptions, we show that the proposed algorithm converges to the optimal objective value at a rate of O(1/T^{1/3}), where T denotes the iteration count. In particular, the primal sub-optimality gap is shown to have a dimension dependence of O(d^{1/3}), which is the best-known dimension dependence among all zeroth-order optimization algorithms that use one directional derivative per iteration. For non-convex functions, we show that the Frank-Wolfe gap decays as O(d^{1/3} T^{-1/4}). Experiments on black-box optimization setups demonstrate the efficacy of the proposed algorithm.
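To make the gradient-free, projection-free template concrete, below is a minimal sketch of a zeroth-order Frank-Wolfe loop. It assumes a noisy black-box evaluation `oracle(x)`, an l1-ball constraint set (whose linear minimization oracle has a closed form), and one random directional derivative per iteration; the smoothing parameter `mu` and the step-size and averaging schedules are illustrative placeholders, not the paper's exact choices.

```python
import numpy as np

def lmo_l1_ball(g, r):
    """Linear minimization oracle for the l1-ball of radius r:
    argmin_{||s||_1 <= r} <g, s> puts all mass on the coordinate
    of g with the largest magnitude."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -r * np.sign(g[i])
    return s

def zo_frank_wolfe(oracle, x0, r, T, mu=1e-3, seed=0):
    """Sketch of a zeroth-order Frank-Wolfe loop (illustrative schedules)."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    g = np.zeros_like(x)
    for t in range(1, T + 1):
        # Two evaluations along one random direction u estimate the
        # directional derivative; scaling by u gives a gradient surrogate.
        u = rng.standard_normal(x.size)
        ghat = (oracle(x + mu * u) - oracle(x)) / mu * u
        # Averaging damps the variance of the single-direction estimate.
        rho = 1.0 / t ** (2.0 / 3.0)          # averaging weight (illustrative)
        g = (1 - rho) * g + rho * ghat
        # Linear step toward an extreme point of the constraint set;
        # the convex combination stays feasible, so no projection is needed.
        s = lmo_l1_ball(g, r)
        gamma = 2.0 / (t + 8) ** (2.0 / 3.0)  # FW step size (illustrative)
        x = x + gamma * (s - x)
    return x

# Example: minimize a noisy quadratic over the l1-ball of radius 1.
if __name__ == "__main__":
    A = np.diag(np.arange(1.0, 6.0))
    b = np.ones(5)
    noisy_f = lambda x: 0.5 * x @ A @ x - b @ x + 1e-3 * np.random.randn()
    print(zo_frank_wolfe(noisy_f, np.zeros(5), r=1.0, T=2000))
```

Because each update moves along `s - x` with a step size in (0, 1), the iterate remains a convex combination of feasible points, which is what makes the method projection free; the function-value differences replace gradient access, which is what makes it gradient free.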
