A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization

03/04/2018
by Ching-pei Lee, et al.

In this paper, we propose a communication- and computation-efficient distributed optimization algorithm that uses second-order information to solve ERM problems with a nonsmooth regularization term. Existing second-order and quasi-Newton methods for this problem either do not work well in the distributed setting or work only for specific regularizers. Our algorithm uses successive quadratic approximations, and we describe how to maintain an approximation of the Hessian and solve the subproblems efficiently in a distributed manner. The proposed method enjoys global linear convergence for a broad range of non-strongly convex problems, including the most commonly used ERMs, and thus requires lower communication complexity. Empirical results also demonstrate that our method significantly improves on communication cost and running time over the current state-of-the-art methods.
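To make the "successive quadratic approximations" idea concrete, the sketch below shows one common single-machine instantiation of this family of methods (a proximal quasi-Newton / proximal Newton scheme for an L1-regularized least-squares loss), not the paper's distributed algorithm. At each outer step it builds the quadratic model q(d) = gᵀd + ½dᵀHd + λ‖w+d‖₁ around the current iterate and solves it approximately with proximal-gradient inner iterations. For simplicity H is the exact Hessian here; the paper instead maintains and applies a quasi-Newton approximation of H in a distributed manner. The function names and problem setup are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def solve_subproblem(w, grad, H, lam, inner_iters=100):
    """Approximately minimize q(d) = grad^T d + 0.5 d^T H d + lam*||w + d||_1.

    Works in the variable z = w + d with proximal-gradient steps of
    size 1/L, where L = ||H||_2 bounds the model's curvature.
    """
    L = np.linalg.norm(H, 2)
    z = w.copy()
    for _ in range(inner_iters):
        model_grad = grad + H @ (z - w)
        z = soft_threshold(z - model_grad / L, lam / L)
    return z - w  # update direction d

def prox_newton(A, b, lam, outer_iters=20):
    """Successive quadratic approximation for 0.5*||Aw - b||^2 + lam*||w||_1."""
    w = np.zeros(A.shape[1])
    H = A.T @ A  # exact Hessian; a quasi-Newton method would approximate this
    for _ in range(outer_iters):
        grad = A.T @ (A @ w - b)
        w = w + solve_subproblem(w, grad, H, lam)
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = rng.standard_normal(40)
w = prox_newton(A, b, lam=0.5)
```

In the distributed setting the data matrix is partitioned across machines, so the expensive pieces (the gradient and Hessian-vector products inside `solve_subproblem`) are what must be computed and communicated efficiently; the paper's contribution is in doing exactly that with a quasi-Newton approximation.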


research
12/12/2019

A Distributed Quasi-Newton Algorithm for Primal and Dual Regularized Empirical Risk Minimization

We propose a communication- and computation-efficient distributed optimi...
research
08/20/2021

L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method

This work proposes a distributed algorithm for solving empirical risk mi...
research
10/16/2021

Fast Projection onto the Capped Simplex with Applications to Sparse Regression in Bioinformatics

We consider the problem of projecting a vector onto the so-called k-capp...
research
01/30/2023

Robust empirical risk minimization via Newton's method

We study a variant of Newton's method for empirical risk minimization, w...
research
11/26/2013

Practical Inexact Proximal Quasi-Newton Method with Global Complexity Analysis

Recently several methods were proposed for sparse optimization which mak...
research
03/22/2017

Weight Design of Distributed Approximate Newton Algorithms for Constrained Optimization

Motivated by economic dispatch and linearly-constrained resource allocat...
research
07/22/2019

Practical Newton-Type Distributed Learning using Gradient Based Approximations

We study distributed algorithms for expected loss minimization where the...
