Extra-Newton: A First Approach to Noise-Adaptive Accelerated Second-Order Methods

11/03/2022
by Kimon Antonakopoulos, et al.

This work proposes a universal and adaptive second-order method for minimizing second-order smooth, convex functions. Our algorithm achieves O(σ/√T) convergence when the oracle feedback is stochastic with variance σ^2, and improves to O(1/T^3) convergence with deterministic oracles, where T is the number of iterations. Our method also interpolates between these rates without knowing the nature of the oracle a priori, which is enabled by a parameter-free adaptive step-size that requires no knowledge of the smoothness modulus, the variance bound, or the diameter of the constraint set. To our knowledge, this is the first universal algorithm with such global guarantees in the second-order optimization literature.
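
The paper gives the full Extra-Newton algorithm; as a rough illustration of how a parameter-free, noise-adaptive step-size can serve both stochastic and deterministic oracles with the same code, here is a minimal first-order extra-gradient sketch with an AdaGrad-style step-size. The oracle interface, the diameter guess D, and the weighted averaging below are illustrative assumptions, not the authors' second-order method.

```python
import numpy as np

def adaptive_extragradient(grad_oracle, x0, T, D=1.0):
    """Minimal sketch: extra-gradient with an AdaGrad-style step-size.

    ILLUSTRATIVE ONLY: a first-order stand-in for the paper's
    second-order updates. `grad_oracle` may return exact or noisy
    gradients; the step-size adapts to whichever it observes.
    """
    x = x0.astype(float)
    accum = 0.0                          # running sum of squared gradient gaps
    avg, weight = np.zeros_like(x), 0.0
    for t in range(1, T + 1):
        eta = D / np.sqrt(1.0 + accum)   # parameter-free adaptive step-size
        g = grad_oracle(x)
        x_lead = x - eta * g             # exploration (leading) step
        g_lead = grad_oracle(x_lead)
        x = x - eta * g_lead             # extra-gradient update
        accum += np.linalg.norm(g_lead - g) ** 2
        avg += t * x_lead                # weighted iterate averaging
        weight += t
    return avg / weight

# Hypothetical usage: the same code handles both regimes.
# sigma = 0.0 gives a deterministic oracle; sigma > 0 a stochastic one.
sigma = 0.1
oracle = lambda x: 2.0 * x + sigma * np.random.randn(*x.shape)
x_hat = adaptive_extragradient(oracle, np.ones(5), T=500)
```

Intuitively, with deterministic feedback the gradient gaps shrink and the step-size stabilizes, while with noisy feedback the accumulated gaps grow with σ^2, slowing the step-size automatically; this is the interpolation mechanism the abstract describes, here in its simplest first-order form.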

Related research

10/30/2019  UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization
We propose a novel adaptive, accelerated algorithm for the stochastic co...

06/06/2022  Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches
Stochastic variance reduction has proven effective at accelerating first...

11/29/2021  Adaptive First- and Second-Order Algorithms for Large-Scale Machine Learning
In this paper, we consider both first- and second-order techniques to ad...

07/26/2019  A simple Newton method for local nonsmooth optimization
Superlinear convergence has been an elusive goal for black-box nonsmooth...

10/11/2019  Fast and Furious Convergence: Stochastic Second Order Methods under Interpolation
We consider stochastic second order methods for minimizing strongly-conv...

02/20/2020  Second-order Conditional Gradients
Constrained second-order convex optimization algorithms are the method o...

10/01/2018  A simple parameter-free and adaptive approach to optimization under a minimal local smoothness assumption
We study the problem of optimizing a function under a budgeted number of...
