OverSketched Newton: Fast Convex Optimization for Serverless Systems

03/21/2019
by Vipul Gupta et al.

Motivated by recent developments in serverless systems for large-scale machine learning, as well as improvements in scalable randomized matrix algorithms, we develop OverSketched Newton, a randomized Hessian-based optimization algorithm for solving large-scale smooth and strongly convex problems in serverless systems. OverSketched Newton leverages matrix sketching ideas from Randomized Numerical Linear Algebra to compute the Hessian approximately. These sketching methods provide inbuilt resiliency against stragglers, which are characteristic of serverless architectures. We establish that OverSketched Newton has a linear-quadratic convergence rate, and we empirically validate our results by solving large-scale supervised learning problems on real-world datasets. Experiments demonstrate a reduction of ~50% in total running time on AWS Lambda, compared to state-of-the-art distributed optimization schemes.
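To illustrate the core idea of a sketched Hessian Newton step, here is a minimal single-machine sketch in NumPy for l2-regularized logistic regression. All names (`sketched_newton`, `sketch_dim`) are my own; the actual OverSketched Newton distributes the sketched matrix multiply over serverless workers using the straggler-resilient OverSketch scheme, whereas this toy uses a plain Gaussian sketch as a stand-in.

```python
import numpy as np

def sketched_newton(A, y, lam=1e-3, sketch_dim=100, iters=15, seed=0):
    """Newton's method for l2-regularized logistic regression, with the
    Hessian formed from a random sketch of its square-root factor.
    This is an illustrative simplification, not the paper's algorithm:
    the paper sketches the distributed Hessian computation with
    OverSketch; here a Gaussian sketch stands in for it."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(A @ x, -30, 30)))  # predictions
        g = A.T @ (p - y) / n + lam * x                     # exact gradient (cheap)
        w = p * (1.0 - p) / n                               # per-sample Hessian weights
        C = np.sqrt(w)[:, None] * A                         # square-root factor: H = C^T C + lam*I
        S = rng.standard_normal((sketch_dim, n)) / np.sqrt(sketch_dim)
        SC = S @ C                                          # sketched factor, s x d with s << n
        H_approx = SC.T @ SC + lam * np.eye(d)              # approximate Hessian
        x = x - np.linalg.solve(H_approx, g)                # Newton step with sketched Hessian
    return x
```

The gradient is computed exactly (it is cheap relative to the Hessian), which is what yields the linear-quadratic convergence behavior described in the abstract: the sketch only perturbs the second-order information.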


Related research

07/07/2020 · A Distributed Cubic-Regularized Newton Method for Smooth Convex Optimization over Networks
We propose a distributed, cubic-regularized Newton method for large-scal...

05/11/2018 · Randomized Smoothing SVRG for Large-scale Nonsmooth Convex Optimization
In this paper, we consider the problem of minimizing the average of a la...

05/09/2015 · Newton Sketch: A Linear-time Optimization Algorithm with Linear-Quadratic Convergence
We propose a randomized second-order method for optimization known as th...

12/22/2019 · Modeling Hessian-vector products in nonlinear optimization: New Hessian-free methods
In this paper, we suggest two ways of calculating interpolation models f...

10/25/2019 · Convergence Analysis of the Randomized Newton Method with Determinantal Sampling
We analyze the convergence rate of the Randomized Newton Method (RNM) in...

07/15/2021 · Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
In second-order optimization, a potential bottleneck can be computing th...

08/19/2021 · Using Multilevel Circulant Matrix Approximate to Speed Up Kernel Logistic Regression
Kernel logistic regression (KLR) is a classical nonlinear classifier in ...
