Efficient improper learning for online logistic regression

03/18/2020
by Rémi Jézéquel, et al.

We consider the setting of online logistic regression and study the regret with respect to the ℓ2-ball of radius B. It is known (see [Hazan et al., 2014]) that any proper algorithm with regret logarithmic in the number of samples (denoted n) necessarily suffers an exponential multiplicative constant in B. In this work, we design an efficient improper algorithm that avoids this exponential constant while preserving a logarithmic regret. Indeed, [Foster et al., 2018] showed that the lower bound does not apply to improper algorithms and proposed a strategy based on exponential weights, but with prohibitive computational complexity. Our new algorithm, based on regularized empirical risk minimization with surrogate losses, satisfies a regret scaling as O(B log(Bn)) with a per-round time complexity of order O(d^2).
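The abstract's ingredients (an online second-order method over the ℓ2-ball whose per-round cost is O(d^2) in the dimension d) can be illustrated with a rough sketch. The code below is NOT the paper's algorithm: it is a hypothetical online Newton-style logistic learner, where a rank-one Sherman-Morrison update of the inverse curvature matrix keeps each round at O(d^2). All names (`OnlineLogistic`, `lam`) and the synthetic data stream are invented for illustration.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


class OnlineLogistic:
    """Hypothetical online Newton-style logistic learner (illustration only).

    Maintains the inverse of a regularized second-order matrix and
    updates it with a rank-one Sherman-Morrison step, so each round
    costs O(d^2) time and memory in the dimension d.
    """

    def __init__(self, d, lam=1.0):
        self.w = np.zeros(d)
        self.A_inv = np.eye(d) / lam  # inverse of the regularizer lam * I

    def predict(self, x):
        return sigmoid(self.w @ x)

    def update(self, x, y):
        # y in {0, 1}; gradient of the logistic loss at the current w
        p = self.predict(x)
        g = (p - y) * x
        # Logistic-loss curvature weight at the current point
        c = p * (1.0 - p)
        # Sherman-Morrison rank-one update of A_inv: O(d^2)
        Ax = self.A_inv @ x
        self.A_inv -= c * np.outer(Ax, Ax) / (1.0 + c * (x @ Ax))
        # Newton-style step, again O(d^2)
        self.w -= self.A_inv @ g


# Usage on a synthetic stream (invented data, fixed seed)
rng = np.random.default_rng(0)
d = 5
w_star = np.ones(d)  # ground-truth parameter for the simulated labels
learner = OnlineLogistic(d)
for _ in range(500):
    x = rng.normal(size=d)
    y = 1.0 if rng.random() < sigmoid(w_star @ x) else 0.0
    learner.update(x, y)
```

After a few hundred rounds the learner's weight vector aligns with the direction generating the labels. The O(d^2) cost comes entirely from the matrix-vector products and the outer-product update; no d x d matrix is ever inverted from scratch.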


Related research:

- Mixability made efficient: Fast online multiclass logistic regression (10/08/2021). Mixability has been shown to be a powerful tool to obtain algorithms wit...
- Logarithmic Regret for parameter-free Online Logistic Regression (02/26/2019). We consider online optimization procedures in the context of logistic re...
- Agnostic Learnability of Halfspaces via Logistic Loss (01/31/2022). We investigate approximation guarantees provided by logistic regression ...
- Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization (07/06/2021). We provide several algorithms for constrained optimization of a large cl...
- Exploiting the Surrogate Gap in Online Multiclass Classification (07/24/2020). We present Gaptron, a randomized first-order algorithm for online multic...
- Efficiently Sampling Multiplicative Attribute Graphs Using a Ball-Dropping Process (02/27/2012). We introduce a novel and efficient sampling algorithm for the Multiplica...
- Logistic Regression: The Importance of Being Improper (03/25/2018). Learning linear predictors with the logistic loss---both in stochastic a...
