Accelerating Frank-Wolfe Algorithm using Low-Dimensional and Adaptive Data Structures

07/19/2022
by   Zhao Song, et al.

In this paper, we study how to speed up the Frank-Wolfe algorithm, a conditional gradient method. We develop and employ two novel inner product search data structures, improving on the prior fastest algorithm of [Shrivastava, Song and Xu, NeurIPS 2021].

* The first data structure applies a low-dimensional random projection to reduce the problem to a lower dimension, then uses an efficient inner product search data structure. It has preprocessing time Õ(nd^(ω-1) + dn^(1+o(1))) and per-iteration cost Õ(d + n^ρ) for a small constant ρ.

* The second data structure leverages recent developments in adaptive inner product search data structures that can output estimates of all inner products. It has preprocessing time Õ(nd) and per-iteration cost Õ(d + n).

The first algorithm improves on the state of the art (preprocessing time Õ(d^2 n^(1+o(1))) and per-iteration cost Õ(dn^ρ)) in all cases, while the second provides an even faster preprocessing time and is suitable when the number of iterations is small.
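To give a rough feel for the first idea, the sketch below runs Frank-Wolfe over a finite set of atoms, where the linear minimization oracle is answered via inner products computed in a randomly projected low-dimensional space. This is a toy illustration only: the Gaussian sketch, atom set, objective, and step size are illustrative assumptions, not the paper's actual data structure or guarantees.

```python
import numpy as np

def frank_wolfe_projected(atoms, grad_f, x0, steps=100, proj_dim=32, seed=0):
    """Frank-Wolfe over conv(atoms), with the linear minimization oracle
    approximated by searching inner products in a lower-dimensional space.
    A toy sketch of the dimension-reduction idea, not the paper's method."""
    rng = np.random.default_rng(seed)
    n, d = atoms.shape
    # Johnson-Lindenstrauss-style Gaussian sketch: d -> proj_dim
    S = rng.standard_normal((d, proj_dim)) / np.sqrt(proj_dim)
    atoms_low = atoms @ S              # project all atoms once (preprocessing)
    x = x0.copy()
    for t in range(steps):
        g = grad_f(x)
        # approximate LMO: argmin_i <atoms[i], g>, using projected inner products
        i = int(np.argmin(atoms_low @ (S.T @ g)))
        gamma = 2.0 / (t + 2)          # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * atoms[i]
    return x

# toy usage: minimize ||x - b||^2 over the probability simplex
# (atoms = standard basis vectors; the optimum b lies inside the simplex)
d = 50
atoms = np.eye(d)
b = np.full(d, 1.0 / d)
x = frank_wolfe_projected(atoms, lambda x: 2 * (x - b), x0=atoms[0], steps=500)
```

Because each iterate is a convex combination of atoms, `x` always stays in the simplex; the projected inner products only perturb which vertex the oracle selects, which is the error the paper's data structures control.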


