On Lipschitz Continuity and Smoothness of Loss Functions in Learning to Rank

05/03/2014
by   Ambuj Tewari, et al.

In binary classification and regression problems, it is well understood that Lipschitz continuity and smoothness of the loss function play key roles in governing generalization error bounds for empirical risk minimization algorithms. In this paper, we show how these two properties affect generalization error bounds in the learning to rank problem. Learning to rank involves vector-valued predictions, and therefore the choice of the norm with respect to which Lipschitz continuity and smoothness are defined becomes crucial. Choosing the ℓ_∞ norm in our definition of Lipschitz continuity allows us to improve existing bounds. Furthermore, under smoothness assumptions, our choice enables us to prove rates that interpolate between 1/√(n) and 1/n. Application of our results to ListNet, a popular learning to rank method, gives state-of-the-art performance guarantees.
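For reference, the two properties are usually defined as follows for a loss φ : ℝ^m → ℝ acting on a vector of m predicted scores (this is the standard convention; the paper's own definitions may differ in minor details). The norm on the gradient difference is the dual norm, which for ℓ_∞ is ℓ_1:

    |φ(s) − φ(s′)| ≤ L ‖s − s′‖_∞           (L-Lipschitz continuity w.r.t. ℓ_∞)
    ‖∇φ(s) − ∇φ(s′)‖_1 ≤ H ‖s − s′‖_∞       (H-smoothness w.r.t. ℓ_∞)

The appeal of measuring in ℓ_∞ is that, for many ranking losses, the constants L and H stay bounded (or grow only mildly) as the number m of documents per query grows; this, presumably, is the mechanism by which the ℓ_∞ choice improves on existing bounds.

To make the ListNet application concrete, here is a minimal sketch of the ListNet top-one loss, the cross-entropy between the softmax distributions induced by the relevance labels and the predicted scores. The NumPy implementation and names below are our own illustration, not code from the paper:

    import numpy as np

    def softmax(v):
        # Shift by the max for numerical stability before exponentiating.
        e = np.exp(v - np.max(v))
        return e / e.sum()

    def listnet_top_one_loss(scores, relevance):
        # Cross-entropy between the top-one probability distributions
        # induced by ground-truth relevance and predicted scores.
        # Both arguments are 1-D arrays of length m (one entry per
        # document for a single query).
        p_true = softmax(relevance)
        p_pred = softmax(scores)
        return -np.sum(p_true * np.log(p_pred))

    # Example: one query with three documents.
    scores = np.array([2.0, 0.5, -1.0])
    relevance = np.array([1.0, 0.0, 0.0])
    print(listnet_top_one_loss(scores, relevance))

Because this loss is a cross-entropy of softmaxes, it is both Lipschitz and smooth in the score vector, which is what makes it amenable to the smoothness-based analysis the abstract describes.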


