Using Taylor-Approximated Gradients to Improve the Frank-Wolfe Method for Empirical Risk Minimization

08/30/2022
by Zikai Xiong, et al.

The Frank-Wolfe method has become increasingly useful in statistical and machine learning applications, due to the structure-inducing properties of its iterates, and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of Empirical Risk Minimization, one of the fundamental optimization problems in statistical and machine learning, the per-iteration computational cost of Frank-Wolfe methods typically grows linearly in the number of data observations n, in stark contrast to typical stochastic projection methods. To reduce this dependence on n, we exploit the second-order smoothness of typical smooth loss functions (for example, the least squares loss and the logistic loss) and propose amending the Frank-Wolfe method with Taylor-series-approximated gradients, including variants for both the deterministic and stochastic settings. Compared with current state-of-the-art methods in the regime where the optimality tolerance ε is sufficiently small, our methods simultaneously reduce the dependence on large n and attain the optimal convergence rates of Frank-Wolfe methods, in both the convex and non-convex settings. We also propose a novel adaptive step-size approach for which we have computational guarantees. Finally, we present computational experiments showing that our methods deliver very significant speed-ups over existing methods on real-world datasets, for both convex and non-convex binary classification problems.
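The mechanism behind the abstract's claim can be illustrated with a small sketch: for ERM with a linear predictor, an exact gradient costs O(nd), but a first-order Taylor expansion of the gradient around a periodically refreshed snapshot point can be evaluated in O(d^2) per iteration once the snapshot gradient and Hessian have been formed. The sketch below is a minimal illustration under assumed choices, not the authors' algorithm: the logistic-loss ERM problem, the l1-ball feasible set, the snapshot schedule, and all names (taylor_fw, tau, snapshot_every) are assumptions made for the example.

```python
import numpy as np

def logistic_grad_hessian(X, y, x):
    """Exact gradient and Hessian of logistic-loss ERM at x; costs O(n d^2)."""
    margins = (X @ x) * y                     # y_i * a_i^T x, shape (n,)
    sigma = 1.0 / (1.0 + np.exp(margins))     # sigma(-y_i a_i^T x)
    grad = -(X.T @ (sigma * y)) / len(y)      # (1/n) sum_i -sigma_i y_i a_i
    w = sigma * (1.0 - sigma)                 # per-example curvature weights
    hess = (X.T * w) @ X / len(y)             # (1/n) sum_i w_i a_i a_i^T
    return grad, hess

def l1_ball_lmo(grad, tau):
    """Linear minimization oracle over the l1 ball {v : ||v||_1 <= tau}."""
    j = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[j] = -tau * np.sign(grad[j])
    return v

def taylor_fw(X, y, tau, iters=200, snapshot_every=20):
    """Frank-Wolfe with Taylor-approximated gradients (illustrative sketch)."""
    n, d = X.shape
    x = np.zeros(d)                           # feasible start inside the l1 ball
    for t in range(iters):
        if t % snapshot_every == 0:
            # Expensive snapshot: exact gradient and Hessian, O(n d^2).
            x0 = x.copy()
            g0, H0 = logistic_grad_hessian(X, y, x0)
        # Cheap Taylor-approximated gradient, O(d^2), independent of n.
        g = g0 + H0 @ (x - x0)
        v = l1_ball_lmo(g, tau)
        gamma = 2.0 / (t + 2.0)               # standard open-loop FW step size
        x = (1.0 - gamma) * x + gamma * v
    return x
```

In this sketch the O(n d^2) snapshot cost is amortized over snapshot_every iterations whose per-step cost is independent of n, which is the sense in which the dependence on n is reduced; the paper's actual deterministic and stochastic variants and its adaptive step-size rule differ in the details.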


