kFW: A Frank-Wolfe style algorithm with stronger subproblem oracles

06/29/2020 · by Lijun Ding, et al.

This paper proposes a new variant of Frank-Wolfe (FW), called kFW. Standard FW suffers from slow convergence: iterates often zig-zag as update directions oscillate around the extreme points of the constraint set. kFW overcomes this problem by using two stronger subproblem oracles in each iteration. The first is a k linear optimization oracle (kLOO) that computes the k best update directions, rather than just one. The second is a k direction search (kDS) that minimizes the objective over the constraint set represented by the k best update directions and the previous iterate. When the problem solution admits a sparse representation, both oracles are cheap to compute, and kFW converges quickly for smooth convex objectives over several constraint sets of interest: kFW converges within finitely many iterations, namely 4L_f^3 D^4 / (γδ^2), on polytopes and group norm balls, and converges linearly on spectrahedra and nuclear norm balls. Numerical experiments validate the effectiveness of kFW and demonstrate an order-of-magnitude speedup over existing approaches.
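To make the two oracles concrete, here is a minimal numpy sketch of one kFW-style iteration for a smooth quadratic minimized over the probability simplex. On the simplex, the extreme points are the standard basis vectors, so the kLOO reduces to picking the k coordinates with the smallest gradient entries; the kDS step is then solved by projected gradient descent on the convex-combination weights. This is an illustrative sketch under these simplifying assumptions, not the paper's implementation; all function names and subroutine choices (e.g. the simplex projection used for the inner solve) are the author of this sketch's own.

```python
import numpy as np

def project_simplex(w):
    # Euclidean projection onto the probability simplex (standard sort-based routine).
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(w) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(w - theta, 0)

def kfw_step(x, grad_f, k=3, inner_iters=200, lr=0.1):
    """One kFW-style iteration over the simplex (illustrative sketch).

    kLOO: the k best extreme points of the simplex are the basis
          vectors for the k smallest gradient entries.
    kDS:  minimize f over conv{e_{i_1}, ..., e_{i_k}, x} via projected
          gradient descent on the convex-combination weights.
    """
    g = grad_f(x)
    idx = np.argsort(g)[:k]                      # kLOO: k best update directions
    atoms = np.vstack([np.eye(len(x))[idx], x])  # k vertices plus the previous iterate
    w = np.full(k + 1, 1.0 / (k + 1))            # weights over the k+1 atoms
    for _ in range(inner_iters):                 # kDS: inner projected-gradient solve
        y = w @ atoms
        w = project_simplex(w - lr * (atoms @ grad_f(y)))
    return w @ atoms

# Example: minimize f(x) = ||x - c||^2 over the simplex, where c lies in the simplex.
c = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
f = lambda x: np.sum((x - c) ** 2)
grad_f = lambda x: 2 * (x - c)
x = np.full(5, 0.2)
for _ in range(20):
    x = kfw_step(x, grad_f, k=3)
```

Because the previous iterate is always one of the kDS atoms, the objective is non-increasing across iterations, which is one way the extra oracle strength suppresses the zig-zagging of standard FW.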
