Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization

07/06/2021
by Deeksha Adil, et al.

We provide several algorithms for constrained optimization of a large class of convex problems, including softmax, ℓ_p regression, and logistic regression. Central to our approach is the notion of width reduction, a technique that has proven immensely useful in the context of maximum flow [Christiano et al., STOC'11] and, more recently, ℓ_p regression [Adil et al., SODA'19], improving the iteration complexity from O(m^1/2) to Õ(m^1/3), where m is the number of rows of the design matrix and each iteration amounts to a linear system solve. A considerable drawback, however, is that these methods require both problem-specific potentials and individually tailored analyses. As our main contribution, we initiate a new direction of study by presenting the first unified approach to achieving m^1/3-type rates. Notably, our method goes beyond these previously considered problems to more broadly capture quasi-self-concordant losses, a class that has recently generated much interest and includes the well-studied problem of logistic regression, among others. To do so, we develop a unified width-reduction method that carefully handles these losses via a more general set of potentials. Additionally, we directly achieve m^1/3-type rates in the constrained setting without the need for any explicit acceleration schemes, thus naturally complementing recent work based on a ball-oracle approach [Carmon et al., NeurIPS'20].
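For context, a standard definition of this loss class from the broader literature (stated here for orientation; the paper may use a slightly different normalization) is that a convex, three-times differentiable f is M-quasi-self-concordant if

\[
\bigl|\nabla^3 f(x)[h,h,h]\bigr| \;\le\; M\,\|h\|_2\,\nabla^2 f(x)[h,h] \qquad \text{for all } x, h .
\]

For example, the logistic loss f(t) = \log(1+e^t) has f''(t) = \sigma(t)\bigl(1-\sigma(t)\bigr) and f'''(t) = f''(t)\bigl(1-2\sigma(t)\bigr), so |f'''(t)| \le f''(t) and the logistic loss is quasi-self-concordant with M = 1.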
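To make the width-reduction template concrete, below is a minimal Python sketch of a Christiano et al.-style multiplicative-weights loop with a width check. The function name, parameter values, and stopping rule are illustrative placeholders, not the paper's algorithm, which additionally maintains the potential-function bookkeeping that yields the m^1/3 rate.

```python
import numpy as np

def width_reduced_sketch(A, b, num_iters=200, rho=10.0, beta=2.0):
    """Schematic width-reduced multiplicative-weights loop (illustrative only)."""
    m, n = A.shape
    w = np.ones(m)                      # one weight per row of the design matrix
    x_acc = np.zeros(n)
    accepted = 0
    for _ in range(num_iters):
        # Each iteration costs one weighted least-squares solve,
        # i.e. a single linear system in n variables.
        AtW = A.T * w                   # A^T diag(w) without materializing diag(w)
        x = np.linalg.solve(AtW @ A + 1e-10 * np.eye(n), AtW @ b)
        r = A @ x - b                   # residuals play the role of per-row "congestion"
        if np.max(np.abs(r)) <= rho:
            x_acc += x                  # low width: take a progress step
            accepted += 1
        else:
            # High width: instead of stepping, boost the weights of the
            # over-congested rows and re-solve -- the width-reduction step.
            w[np.abs(r) > rho] *= beta
    return x_acc / max(accepted, 1)

# Hypothetical usage on a small random instance:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
x = width_reduced_sketch(A, b)
```

The point the sketch isolates is the one stated in the abstract: each iteration amounts to a linear system solve, progress steps are taken only when the residual width is small, and otherwise the weights of over-congested rows are boosted.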


