
SAN: Stochastic Average Newton Algorithm for Minimizing Finite Sums

06/19/2021
by Jiabin Chen, et al.

We present a principled approach for designing stochastic Newton methods for solving finite sum optimization problems. Our approach has two steps. First, we rewrite the stationarity conditions as a system of nonlinear equations that associates each data point with a new row. Second, we apply a subsampled Newton-Raphson method to solve this system of nonlinear equations. By design, methods developed using our approach are incremental, in that they require only a single data point per iteration. Using our approach, we develop a new Stochastic Average Newton (SAN) method, which is incremental and cheap to implement when solving regularized generalized linear models. We show through extensive numerical experiments that SAN requires no knowledge about the problem and no parameter tuning, while remaining competitive with classical variance-reduced gradient methods such as SAG and SVRG.
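
For intuition only, here is a minimal sketch of the "one data point per iteration" idea on an L2-regularized logistic regression (a regularized generalized linear model): at each step a single example is sampled and a damped Newton-Raphson step is taken on that example's regularized loss. The function names, the damping value, and the toy data below are illustrative assumptions; this is not the exact SAN update from the paper, which instead works on the reformulated system of per-data-point equations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def single_sample_newton_step(w, x_i, y_i, lam, damping=0.5):
    """One damped Newton-Raphson step using a single data point of an
    L2-regularized logistic regression loss.

    Illustrative sketch only: a plain single-sample Newton step does not
    converge to the finite-sum solution; SAN's reformulation of the
    stationarity conditions and its averaging are what address this.
    """
    p = sigmoid(x_i @ w)
    grad = (p - y_i) * x_i + lam * w                      # per-sample gradient
    hess = p * (1 - p) * np.outer(x_i, x_i) + lam * np.eye(w.size)  # per-sample Hessian
    return w - damping * np.linalg.solve(hess, grad)

# Toy usage: random data, one sampled index per iteration (hypothetical setup)
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

w = np.zeros(d)
for t in range(1000):
    i = rng.integers(n)   # only a single data point is touched per iteration
    w = single_sample_newton_step(w, X[i], y[i], lam=0.1)
```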

Related Research

06/26/2018
Quasi-Newton approaches to Interior Point Methods for quadratic problems
Interior Point Methods (IPM) rely on the Newton method for solving syste...

05/17/2017
An Investigation of Newton-Sketch and Subsampled Newton Methods
The concepts of sketching and subsampling have recently received much at...

06/22/2020
Sketched Newton-Raphson
We propose a new globally convergent stochastic second order method. Our...

05/14/2014
Newton-Type Iterative Solver for Multiple View L2 Triangulation
In this note, we show that the L2 optimal solutions to most real multipl...

06/30/2023
Solving nonlinear ODEs with the ultraspherical spectral method
We extend the ultraspherical spectral method to solving nonlinear ODE bo...

02/17/2020
Stochastic Gauss-Newton Algorithms for Nonconvex Compositional Optimization
We develop two new stochastic Gauss-Newton algorithms for solving a clas...