Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality

05/15/2021
by Jonathan Lacotte et al.

We propose a randomized algorithm with quadratic convergence rate for convex optimization problems with a self-concordant, composite, strongly convex objective function. Our method is based on performing an approximate Newton step using a random projection of the Hessian. Our first contribution is to show that, at each iteration, the embedding dimension (or sketch size) can be as small as the effective dimension of the Hessian matrix. Leveraging this novel fundamental result, we design an algorithm with a sketch size proportional to the effective dimension and which exhibits a quadratic rate of convergence. This result dramatically improves on the classical linear-quadratic convergence rates of state-of-the-art sub-sampled Newton methods. However, in most practical cases, the effective dimension is not known beforehand, and this raises the question of how to pick a sketch size as small as the effective dimension while preserving a quadratic convergence rate. Our second and main contribution is thus to propose an adaptive sketch size algorithm with quadratic convergence rate and which does not require prior knowledge or estimation of the effective dimension: at each iteration, it starts with a small sketch size, and increases it until quadratic progress is achieved. Importantly, we show that the embedding dimension remains proportional to the effective dimension throughout the entire path and that our method achieves state-of-the-art computational complexity for solving convex optimization programs with a strongly convex component.
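The abstract describes the method only at a high level; the following is a minimal NumPy sketch of the adaptive idea, not the paper's exact algorithm. It takes a Newton step against a Gaussian sketch of a Hessian square-root factor and, within each iteration, doubles the sketch size whenever the step fails a sufficient-decrease test (a simple stand-in for the paper's self-concordance-based quadratic-progress criterion). All function names, the choice of Gaussian embedding, and the line-search constants are illustrative assumptions.

```python
import numpy as np

def sketched_newton_step(g, A, m, rng, lam=0.0):
    """Solve ((S A)^T (S A) + lam*I) step = -g, where A is a Hessian
    square-root factor (A.T @ A = data-term Hessian) and S is an
    m x n Gaussian embedding. Returns the step and its approximate
    Newton decrement g^T H_s^{-1} g."""
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    SA = S @ A
    H_s = SA.T @ SA + lam * np.eye(A.shape[1])
    step = np.linalg.solve(H_s, -g)
    return step, float(-g @ step)

def adaptive_newton_sketch(f, grad, hess_sqrt, x, lam=0.0, m0=8,
                           tol=1e-9, iters=30, seed=0):
    """Adaptive loop: each outer iteration starts from a small sketch
    size m0 and doubles it until the sketched step makes sufficient
    progress, mirroring the grow-until-quadratic-progress idea."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        g, A = grad(x), hess_sqrt(x)
        m = m0
        while True:
            step, dec = sketched_newton_step(g, A, m, rng, lam)
            if dec <= tol:                      # approximate optimality
                return x
            t, fx = 1.0, f(x)                   # backtracking line search
            while t > 1e-4 and f(x + t * step) > fx - 0.25 * t * dec:
                t *= 0.5
            if f(x + t * step) <= fx - 0.25 * t * dec or m >= A.shape[0]:
                x = x + t * step                # enough progress: accept
                break
            m *= 2                              # otherwise grow the sketch
    return x
```

As a usage sketch (again an assumed example, not from the paper), consider l2-regularized logistic regression: the data-term Hessian is X^T D X with D = diag(p(1-p))/n, so sqrt(D) X is a valid Hessian square root, and the strongly convex term lam*I is added exactly rather than sketched:

```python
n, d, lam = 2000, 50, 1e-3
rng = np.random.default_rng(1)
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)

f = lambda w: np.logaddexp(0.0, -y * (X @ w)).mean() + 0.5 * lam * (w @ w)

def grad(w):
    # sigmoid(-y * margin), written via tanh for numerical stability
    p = 0.5 * (1.0 - np.tanh(0.5 * y * (X @ w)))
    return -(X.T @ (y * p)) / n + lam * w

def hess_sqrt(w):
    p = 0.5 * (1.0 - np.tanh(0.5 * y * (X @ w)))
    return np.sqrt(p * (1.0 - p) / n)[:, None] * X

w_hat = adaptive_newton_sketch(f, grad, hess_sqrt, np.zeros(d), lam=lam)
```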

Related research

- Fast Convex Quadratic Optimization Solvers with Adaptive Sketching-based Preconditioners (04/29/2021): We consider least-squares problems with quadratic regularization and pro...
- Newton Sketch: A Linear-time Optimization Algorithm with Linear-Quadratic Convergence (05/09/2015): We propose a randomized second-order method for optimization known as th...
- On the Robustness of CountSketch to Adaptive Inputs (02/28/2022): CountSketch is a popular dimensionality reduction technique that maps ve...
- Sketch-and-Project Meets Newton Method: Global 𝒪(k^-2) Convergence with Low-Rank Updates (05/22/2023): In this paper, we propose the first sketch-and-project Newton method wit...
- Asymptotic Convergence Rate and Statistical Inference for Stochastic Sequential Quadratic Programming (05/27/2022): We apply a stochastic sequential quadratic programming (StoSQP) algorith...
- Convergence Rate of the (1+1)-Evolution Strategy with Success-Based Step-Size Adaptation on Convex Quadratic Functions (03/02/2021): The (1+1)-evolution strategy (ES) with success-based step-size adaptatio...
- Convergence Analysis of the Randomized Newton Method with Determinantal Sampling (10/25/2019): We analyze the convergence rate of the Randomized Newton Method (RNM) in...
