Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study

08/25/2017
by Peng Xu, et al.

The resurgence of deep learning, as a highly effective machine learning paradigm, has brought back to life the old optimization question of non-convexity. Indeed, the challenges related to the large-scale nature of many modern machine learning applications are severely exacerbated by the inherent non-convexity of the underlying models. In this light, efficient optimization algorithms that can be effectively applied to such large-scale, non-convex learning problems are highly desired. To date, however, the bulk of research has been almost entirely restricted to the class of first-order algorithms. This is despite the fact that employing curvature information, e.g., in the form of the Hessian, can yield effective methods with desirable convergence properties for non-convex problems, e.g., avoiding saddle points and converging to local minima. The conventional wisdom in the machine learning community is that the application of second-order methods, i.e., those that employ the Hessian as well as gradient information, can be highly inefficient. Consequently, first-order algorithms, such as stochastic gradient descent (SGD), have taken center stage for solving such machine learning problems. Here, we aim to address this misconception by considering efficient and stochastic variants of Newton's method, namely sub-sampled trust-region and cubic regularization, whose theoretical convergence properties have recently been established in [Xu 2017]. Using a variety of experiments, we empirically evaluate the performance of these methods for solving non-convex machine learning applications. In doing so, we highlight the shortcomings of first-order methods, e.g., high sensitivity to hyper-parameters such as step-size and undesirable behavior near saddle points, and showcase the advantages of employing curvature information as an effective remedy.
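As a concrete illustration of the cubic-regularization variant mentioned in the abstract, the sketch below implements a sub-sampled cubic-regularized Newton iteration in plain NumPy: at each step the Hessian is estimated from a random subset of the component functions, and the cubic model g^T s + (1/2) s^T H s + (sigma/3) ||s||^3 is minimized approximately by gradient descent before taking the step. The function names (subsampled_cubic_newton, grad_full, hess_i), the toy non-linear least-squares problem, and the choice of sub-problem solver are illustrative assumptions, not the authors' implementation; see [Xu 2017] for the actual algorithms and their analysis.

```python
import numpy as np

def subsampled_cubic_newton(x0, grad_full, hess_i, n, sample_size,
                            sigma=1.0, n_iters=50, sub_steps=50, sub_lr=0.1,
                            seed=None):
    """Sketch of sub-sampled cubic-regularized Newton.

    Each iteration estimates the Hessian from `sample_size` randomly chosen
    component functions and approximately minimizes the cubic model
        m(s) = g^T s + 0.5 s^T H s + (sigma/3) ||s||^3
    with gradient descent, then takes the step x <- x + s.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iters):
        g = grad_full(x)                                   # exact (or mini-batch) gradient
        idx = rng.choice(n, size=sample_size, replace=False)
        H = np.mean([hess_i(x, i) for i in idx], axis=0)   # sub-sampled Hessian
        s = np.zeros_like(x)
        for _ in range(sub_steps):                         # approximate cubic sub-problem solve
            m_grad = g + H @ s + sigma * np.linalg.norm(s) * s
            s -= sub_lr * m_grad
        x += s
    return x

# Hypothetical demo: non-convex non-linear least squares,
# f_i(x) = (sigmoid(a_i^T x) - y_i)^2, averaged over i = 1..n.
n, d = 200, 10
rng = np.random.default_rng(0)
A = rng.normal(size=(n, d))
y = (A @ rng.normal(size=d) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_full(x):
    p = sigmoid(A @ x)
    return (2 * (p - y) * p * (1 - p)) @ A / n

def hess_i(x, i):
    p = sigmoid(A[i] @ x)
    w = 2 * (p * (1 - p)) ** 2 + 2 * (p - y[i]) * p * (1 - p) * (1 - 2 * p)
    return w * np.outer(A[i], A[i])

def loss(x):
    return np.mean((sigmoid(A @ x) - y) ** 2)

x0 = np.zeros(d)
x_opt = subsampled_cubic_newton(x0, grad_full, hess_i, n=n, sample_size=50, seed=1)
print(f"loss: {loss(x0):.4f} -> {loss(x_opt):.4f}")
```

The trust-region variant studied in the paper uses the same sub-sampled quadratic model but replaces the cubic penalty with a hard constraint ||s|| <= Delta on the step, with the radius Delta adjusted from iteration to iteration.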


