Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information

09/11/2021
by Majid Jahani, et al.

We present a novel adaptive optimization algorithm for large-scale machine learning problems. Equipped with low-cost estimates of local curvature and Lipschitz smoothness, our method dynamically adapts both the search direction and the step size. The search direction is the gradient preconditioned by a well-scaled diagonal matrix that captures local curvature information. Our methodology removes the tedious task of learning-rate tuning: the learning rate is updated automatically without introducing an extra hyperparameter. We provide convergence guarantees for a comprehensive collection of optimization problems, including convex, strongly convex, and nonconvex problems, in both deterministic and stochastic regimes. We also conduct an extensive empirical evaluation on standard machine learning problems, demonstrating our algorithm's versatility and its strong performance compared to state-of-the-art first-order and second-order methods.
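To make the "doubly adaptive" idea concrete, here is a minimal sketch of an update of this flavor, not the paper's exact algorithm: a coordinate-wise secant ratio plays the role of the diagonal curvature preconditioner, and a local Lipschitz-smoothness estimate along the last step sets the step size with no tuned learning rate. The function name `doubly_adaptive_step`, the `eps` safeguard, and the toy quadratic are illustrative assumptions.

```python
import numpy as np

def doubly_adaptive_step(x, grad_fn, x_prev, g_prev, eps=1e-8):
    # Hypothetical sketch, not the paper's exact method: the gradient is
    # preconditioned by a diagonal curvature estimate, and the step size
    # comes from a local Lipschitz-smoothness estimate.
    g = grad_fn(x)
    s = x - x_prev                      # iterate displacement
    y = g - g_prev                      # gradient displacement
    # Coordinate-wise secant ratio as a well-scaled diagonal preconditioner,
    # clipped away from zero so its inverse stays bounded.
    d = np.clip(np.abs(y) / (np.abs(s) + eps), eps, None)
    # Local Lipschitz estimate along the last step sets the step size.
    lip = np.linalg.norm(y) / (np.linalg.norm(s) + eps)
    x_new = x - (1.0 / (lip + eps)) * (g / d)
    return x_new, g

# Toy usage: minimize f(x) = 0.5 * x^T A x, whose minimizer is the origin.
A = np.diag([1.0, 10.0, 100.0])
grad_fn = lambda x: A @ x
x_prev = np.ones(3)
g_prev = grad_fn(x_prev)
x = x_prev - 1e-3 * g_prev              # small warm-up step seeds the estimates
for _ in range(500):
    x_new, g = doubly_adaptive_step(x, grad_fn, x_prev, g_prev)
    x_prev, g_prev, x = x, g, x_new
print(x)                                # shrinks geometrically toward zero
```

On this ill-conditioned quadratic the secant ratio recovers the diagonal of A exactly, so the preconditioned direction rescales every coordinate uniformly; the point of the sketch is only to show the two adaptive ingredients (curvature-aware direction, Lipschitz-driven step size) working together without a hand-tuned learning rate.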

Related research

06/06/2020
SONIA: A Symmetric Blockwise Truncated Optimization Algorithm
This work presents a new algorithm for empirical risk minimization. The ...

12/23/2014
ADASECANT: Robust Adaptive Secant Method for Stochastic Gradient
Stochastic gradient algorithms have been the main focus of large-scale l...

06/01/2022
Stochastic Gradient Methods with Preconditioned Updates
This work considers non-convex finite sum minimization. There are a numb...

11/29/2021
Adaptive First- and Second-Order Algorithms for Large-Scale Machine Learning
In this paper, we consider both first- and second-order techniques to ad...

10/26/2022
Adaptive scaling of the learning rate by second order automatic differentiation
In the context of the optimization of Deep Neural Networks, we propose t...

08/25/2017
Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study
The resurgence of deep learning, as a highly effective machine learning ...

01/15/2020
Resolving learning rates adaptively by locating Stochastic Non-Negative Associated Gradient Projection Points using line searches
Learning rates in stochastic neural network training are currently deter...
