Second-order Neural Network Training Using Complex-step Directional Derivative

09/15/2020
by Siyuan Shen, et al.

While the superior performance of second-order optimization methods such as Newton's method is well known, they are hardly used in practice for deep learning because neither assembling the Hessian matrix nor calculating its inverse is feasible for large-scale problems. Existing second-order methods resort to various diagonal or low-rank approximations of the Hessian, which often fail to capture the curvature information needed to produce a substantial improvement. On the other hand, when training becomes batch-based (i.e., stochastic), noisy second-order information easily contaminates the training procedure unless expensive safeguards are employed. In this paper, we adopt a numerical algorithm for second-order neural network training. We tackle the practical obstacle of Hessian calculation by using the complex-step finite difference (CSFD), a numerical procedure that adds an imaginary perturbation to the function to compute derivatives. CSFD is highly robust, efficient, and accurate (as accurate as the analytic result). This makes it possible to apply essentially any known second-order optimization method to deep learning training. Based on it, we design an effective Newton-Krylov procedure. The key mechanism is to terminate the stochastic Krylov iteration as soon as a disturbing direction is found, so that unnecessary computation is avoided. During the optimization, we monitor the approximation error of the Taylor expansion to adjust the step size. This strategy combines the advantages of line search and trust-region methods, allowing our method to preserve good local and global convergence at the same time. We have tested our method on various deep learning tasks. The experiments show that our method outperforms existing methods, and it often converges an order of magnitude faster. We believe our method will inspire a wide range of new algorithms for deep learning and numerical optimization.
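The complex-step finite difference described in the abstract replaces a real perturbation with an imaginary one: for an analytic function, Im(f(x + i*h*v))/h gives the directional derivative with no subtractive cancellation, and applying the same imaginary step to the gradient yields Hessian-vector products, which is all a Krylov inner solver needs. The sketch below only illustrates that mechanism and is not the paper's implementation; the toy objective f, its analytic gradient grad_f, and the step size h = 1e-30 are assumptions made for this example.

import numpy as np

def f(x):
    # toy smooth objective; any expression built from analytic operations works
    return np.sum(np.exp(x) * np.sin(x))

def grad_f(x):
    # analytic gradient of f above, written so it also accepts complex input
    return np.exp(x) * (np.sin(x) + np.cos(x))

def csfd_directional_derivative(func, x, v, h=1e-30):
    # complex-step finite difference along direction v:
    # Im(f(x + i*h*v)) / h ~= grad f(x) . v, with no subtractive cancellation,
    # so h can be tiny and the result matches the analytic value to machine precision
    return np.imag(func(x + 1j * h * v)) / h

def csfd_hessian_vector_product(grad, x, v, h=1e-30):
    # applying the same complex step to the gradient yields H(x) @ v
    # without ever assembling the Hessian matrix
    return np.imag(grad(x + 1j * h * v)) / h

x = np.array([0.3, -1.2, 0.7])
v = np.array([1.0, 0.5, -2.0])
print(csfd_directional_derivative(f, x, v))       # ~= grad_f(x) @ v
print(csfd_hessian_vector_product(grad_f, x, v))  # ~= Hessian(x) @ v

In a Newton-Krylov setting, Hessian-vector products of this kind are the only second-order quantities a conjugate-gradient-style inner solver requires, which is what makes the approach feasible at deep learning scale.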
