Algorithmic Foundation of Deep X-Risk Optimization

06/01/2022
by Tianbao Yang, et al.

X-risk is a term introduced to represent a family of compositional measures or objectives, in which each data point is compared with a set of data points, explicitly or implicitly, to define a risk function. It includes many widely used measures and objectives, e.g., AUROC, AUPRC, partial AUROC, NDCG, MAP, top-K NDCG, top-K MAP, listwise losses, p-norm push, top push, precision/recall at top-K positions, precision at a certain recall level, and contrastive objectives. While these measures/objectives and their optimization algorithms have been studied in the machine learning, computer vision, and information retrieval literature, among others, optimizing them for deep learning poses unique challenges. In this technical report, we survey our recent rigorous efforts on deep X-risk optimization (DXO), focusing on its algorithmic foundation. We introduce a class of techniques for optimizing X-risks in deep learning, formulating DXO as three special families of non-convex optimization problems: non-convex min-max optimization, non-convex compositional optimization, and non-convex bilevel optimization. For each family, we present strong baseline algorithms and their complexities, which we hope will motivate further research on improving the existing results. Discussions of the presented results and future directions are given at the end. Efficient algorithms for optimizing a variety of X-risks are implemented in the LibAUC library at www.libauc.org.
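To make the X-risk structure concrete, here is a minimal PyTorch sketch (illustrative only, not LibAUC's API) of one member of this family: a squared-hinge surrogate for AUROC, in which every positive score is compared against the full set of negative scores. The function name and the `margin` parameter are assumptions made for this example.

```python
import torch

def pairwise_auc_surrogate(scores: torch.Tensor, labels: torch.Tensor,
                           margin: float = 1.0) -> torch.Tensor:
    """Squared-hinge surrogate for 1 - AUROC (illustrative sketch).

    Each positive example's score is compared with the set of all
    negative scores -- the defining trait of an X-risk, where a data
    point is measured against a comparison set rather than in isolation.
    """
    pos = scores[labels == 1]                    # scores of positives
    neg = scores[labels == 0]                    # scores of negatives
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)   # all (pos, neg) score gaps
    return torch.clamp(margin - diff, min=0).pow(2).mean()

# Usage: the surrogate is differentiable, so it can drive gradient descent.
scores = torch.tensor([0.9, 0.2, 0.7, 0.4], requires_grad=True)
labels = torch.tensor([1, 0, 1, 0])
loss = pairwise_auc_surrogate(scores, labels)
loss.backward()
```

Because the objective averages over all positive-negative pairs, it does not decompose over individual examples; naive mini-batch gradients are therefore biased, which is what motivates the min-max, compositional, and bilevel reformulations surveyed in the report.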
