Pairwise Learning via Stagewise Training in Proximal Setting

08/08/2022
by   Hilal AlQuabeh, et al.

Pairwise objectives are an important and essential part of machine learning. Examples of machine learning approaches that use pairwise objective functions include differential networks in face recognition, metric learning, bipartite learning, multiple kernel learning, and maximization of the area under the curve (AUC). Compared to pointwise learning, the effective sample size of pairwise learning grows quadratically with the number of training examples, and so does its computational complexity. Researchers have mostly addressed this challenge with online learning systems. Recent work has, however, proposed adaptive-sample-size training for smooth loss functions as a better strategy in terms of convergence and complexity, but without a comprehensive theoretical analysis. In a separate line of research, importance sampling has attracted considerable interest in finite-sum pointwise minimization, because the variance of the stochastic gradient can slow convergence considerably. In this paper, we combine adaptive sample size and importance sampling techniques for pairwise learning, with convergence guarantees for nonsmooth convex pairwise loss functions. In particular, the model is trained stochastically on an expanding training set for a predefined number of iterations derived from stability bounds. In addition, we show that sampling opposite-label instances at each iteration reduces the variance of the gradient and hence accelerates convergence. Experiments on a broad variety of datasets in AUC maximization confirm the theoretical results.
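The scheme described above can be illustrated with a minimal sketch: stagewise SGD on a pairwise hinge loss (a nonsmooth convex pairwise loss, as in AUC maximization), where each stage enlarges the active training set and each iteration samples one positive and one negative instance (an opposite-label pair). This is an illustrative toy, not the paper's exact algorithm; the stage schedule, step size, and function names (`stagewise_pairwise_sgd`, `pairwise_hinge_grad`) are assumptions for the sake of the example.

```python
import numpy as np

def pairwise_hinge_grad(w, x_pos, x_neg):
    # Subgradient of the pairwise hinge loss max(0, 1 - w.(x_pos - x_neg)).
    diff = x_pos - x_neg
    if 1.0 - w @ diff > 0:
        return -diff
    return np.zeros_like(w)

def stagewise_pairwise_sgd(X, y, n_stages=4, iters_per_stage=200, lr=0.1, seed=0):
    """Stagewise training sketch: each stage roughly doubles the active
    sample size and runs a fixed number of SGD iterations on pairs formed
    from one positive and one negative instance, warm-starting from the
    previous stage's solution."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    m = max(4, n // (2 ** (n_stages - 1)))   # initial subset size (assumed schedule)
    for _ in range(n_stages):
        idx = np.arange(min(m, n))           # current (expanded) training set
        pos = idx[y[idx] == 1]
        neg = idx[y[idx] == -1]
        for _ in range(iters_per_stage):
            i = rng.choice(pos)              # sample an opposite-label pair
            j = rng.choice(neg)
            w -= lr * pairwise_hinge_grad(w, X[i], X[j])
        m *= 2                               # expand the set for the next stage
    return w
```

On linearly separable two-class data, the returned `w` ranks positives above negatives, i.e. achieves high empirical AUC; sampling one instance from each class guarantees every iteration touches an informative pair, which is the variance-reduction intuition behind opposite-instance sampling.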


