A Distributed Quasi-Newton Algorithm for Primal and Dual Regularized Empirical Risk Minimization

12/12/2019

by Ching-pei Lee, et al.

We propose a communication- and computation-efficient distributed optimization algorithm that uses second-order information to solve empirical risk minimization (ERM) problems with a nonsmooth regularization term. Our algorithm is applicable to both the primal and the dual ERM problem. Existing second-order and quasi-Newton methods for this problem either do not work well in the distributed setting or work only for specific regularizers. Our algorithm uses successive quadratic approximations of the smooth part, and we describe how to maintain an approximation of the (generalized) Hessian and solve subproblems efficiently in a distributed manner. When applied to the distributed dual ERM problem, unlike state-of-the-art methods that use only the block-diagonal part of the Hessian, our approach exploits global curvature information and is thus orders of magnitude faster. The proposed method enjoys global linear convergence for a broad range of non-strongly convex problems, including the most commonly used ERMs, and therefore requires lower communication complexity. It also converges on non-convex problems, so it has the potential to be used in applications such as deep learning. Computational results demonstrate that our method significantly improves on the communication cost and running time of current state-of-the-art methods.
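To illustrate the general idea of "successive quadratic approximations of the smooth part" with a quasi-Newton Hessian approximation, the following is a minimal single-machine sketch (not the authors' distributed algorithm): it minimizes f(w) + λ‖w‖₁ by repeatedly building the model gᵀd + ½dᵀBd + λ‖w+d‖₁ with a dense BFGS matrix B, solving the subproblem approximately by proximal-gradient steps on the model, and line-searching on the true objective. All names and the ‖d‖²-based sufficient-decrease test are illustrative choices, not taken from the paper.

```python
import numpy as np

def prox_l1(x, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def solve_subproblem(w, g, B, lam, n_iter=100):
    # Approximately minimize the quadratic model
    #   q(d) = g'd + 0.5 d'Bd + lam * ||w + d||_1
    # by proximal-gradient steps with step size 1/L, L >= ||B||_2.
    L = np.linalg.norm(B, 2) + 1e-12
    u = w.copy()                      # u = w + d, starting from d = 0
    for _ in range(n_iter):
        grad_model = g + B @ (u - w)  # gradient of the smooth model part at u
        u = prox_l1(u - grad_model / L, lam / L)
    return u - w

def prox_bfgs(f, grad, w0, lam, max_iter=100, tol=1e-10):
    # Proximal quasi-Newton sketch for min_w f(w) + lam * ||w||_1.
    w = w0.copy()
    B = np.eye(w.size)                # dense BFGS approximation of the Hessian
    g = grad(w)
    F = lambda v: f(v) + lam * np.sum(np.abs(v))
    for _ in range(max_iter):
        d = solve_subproblem(w, g, B, lam)
        if np.linalg.norm(d) < tol:
            break
        # Backtracking line search on the actual (regularized) objective;
        # the ||d||^2 decrease test is a simple heuristic for this sketch.
        t, F0 = 1.0, F(w)
        while F(w + t * d) > F0 - 1e-4 * t * np.dot(d, d) and t > 1e-10:
            t *= 0.5
        w_new = w + t * d
        g_new = grad(w_new)
        s, y = w_new - w, g_new - g
        if np.dot(s, y) > 1e-10:      # curvature condition; skip update otherwise
            Bs = B @ s
            B += np.outer(y, y) / np.dot(y, s) - np.outer(Bs, Bs) / np.dot(s, Bs)
        w, g = w_new, g_new
    return w
```

In the distributed setting described in the abstract, the key extra work is maintaining the quasi-Newton approximation and solving this subproblem with low communication across machines; the dense-B version above is only meant to show the per-iteration structure.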


