Structured Stochastic Quasi-Newton Methods for Large-Scale Optimization Problems

by Minghan Yang, et al.

In this paper, we consider large-scale finite-sum nonconvex problems arising from machine learning. Since the Hessian is often a summation of a relatively cheap and accessible part and an expensive or even inaccessible part, a stochastic quasi-Newton matrix is constructed using as much partial Hessian information as possible. By further exploiting low-rank structures based on the Nyström approximation, the computation of the quasi-Newton direction is affordable. To make full use of the gradient estimation, we also develop an extra-step strategy for this framework. Global convergence to a stationary point in expectation and a local superlinear convergence rate are established under some mild assumptions. Numerical experiments on logistic regression, deep autoencoder networks and deep learning problems show that the efficiency of our proposed method is at least comparable to that of state-of-the-art methods.
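The abstract's key computational idea is that a Nyström approximation gives the curvature matrix a low-rank factorization, so applying a regularized inverse to a gradient costs far less than a dense solve. The following is a minimal NumPy sketch of that idea only, not the authors' algorithm: the matrix `H`, the sizes `n` and `k`, and the damping `mu` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric positive-definite stand-in for the "cheap,
# accessible" part of the Hessian (not the paper's actual construction).
n, k = 200, 20
A = rng.standard_normal((n, n))
H = A @ A.T / n + np.eye(n)

# Nyström approximation from k sampled columns: H ~= C W^{-1} C^T.
idx = rng.choice(n, size=k, replace=False)
C = H[:, idx]                  # n x k sampled columns
W = H[np.ix_(idx, idx)]        # k x k intersection block (SPD here)
H_nys = C @ np.linalg.solve(W, C.T)

# Write the approximation as U U^T with U of rank k, so a damped
# system (H_nys + mu I) d = g solves in O(n k^2) via Woodbury.
L = np.linalg.cholesky(np.linalg.inv(W))
U = C @ L                      # U U^T == C W^{-1} C^T
mu = 1e-1
g = rng.standard_normal(n)
small = mu * np.eye(k) + U.T @ U
d = (g - U @ np.linalg.solve(small, U.T @ g)) / mu

# Sanity check against the direct n x n solve.
d_direct = np.linalg.solve(H_nys + mu * np.eye(n), g)
assert np.allclose(d, d_direct)
```

The Woodbury identity is what makes the low-rank structure pay off: the only dense factorization is of the k-by-k matrix `small`, rather than the full n-by-n system.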


Exact and Inexact Subsampled Newton Methods for Optimization

The paper studies the solution of stochastic optimization problems in wh...

PNKH-B: A Projected Newton-Krylov Method for Large-Scale Bound-Constrained Optimization

We present PNKH-B, a projected Newton-Krylov method with a low-rank appr...

Using Multilevel Circulant Matrix Approximate to Speed Up Kernel Logistic Regression

Kernel logistic regression (KLR) is a classical nonlinear classifier in ...

Stochastic Damped L-BFGS with Controlled Norm of the Hessian Approximation

We propose a new stochastic variance-reduced damped L-BFGS algorithm, wh...

Sketchy Empirical Natural Gradient Methods for Deep Learning

In this paper, we develop an efficient sketchy empirical natural gradien...

Efficiently Using Second Order Information in Large l1 Regularization Problems

We propose a novel general algorithm LHAC that efficiently uses second-o...

Variance-Reduced Stochastic Quasi-Newton Methods for Decentralized Learning: Part I

In this work, we investigate stochastic quasi-Newton methods for minimiz...