On Tilted Losses in Machine Learning: Theory and Applications

09/13/2021
by   Tian Li, et al.

Exponential tilting is a technique commonly used in fields such as statistics, probability, information theory, and optimization to create parametric distribution shifts. Despite its prevalence in related fields, tilting has not seen widespread use in machine learning. In this work, we aim to bridge this gap by exploring the use of tilting in risk minimization. We study a simple extension to ERM – tilted empirical risk minimization (TERM) – which uses exponential tilting to flexibly tune the impact of individual losses. The resulting framework has several useful properties: we show that TERM can increase or decrease the influence of outliers to enable fairness or robustness, respectively; that it has variance-reduction properties that can benefit generalization; and that it can be viewed as a smooth approximation to a superquantile method. Our work makes rigorous connections between TERM and related objectives, such as Value-at-Risk, Conditional Value-at-Risk, and distributionally robust optimization (DRO). We develop batch and stochastic first-order optimization methods for solving TERM, provide convergence guarantees for the solvers, and show that the framework can be solved efficiently relative to common alternatives. Finally, we demonstrate that TERM can be used for a multitude of applications in machine learning, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance. Despite the straightforward modification TERM makes to traditional ERM objectives, we find that the framework can consistently outperform ERM and deliver competitive performance with state-of-the-art, problem-specific approaches.
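As a rough illustration of the idea (not code from the paper), the tilted empirical risk replaces the plain average of losses with a log-sum-exp weighted by a tilt parameter t: roughly (1/t) log((1/N) Σᵢ exp(t·ℓᵢ)). The sketch below, with a hypothetical `tilted_loss` helper in NumPy, shows how t > 0 magnifies large (outlier) losses while t < 0 suppresses them, with t → 0 recovering ordinary ERM:

```python
import numpy as np

def tilted_loss(losses, t):
    """Tilted empirical risk: (1/t) * log(mean(exp(t * losses))).

    t > 0 inflates the influence of large losses (fairness / min-max flavor);
    t < 0 deflates it (robustness to outliers);
    t -> 0 recovers the ordinary ERM average.
    """
    losses = np.asarray(losses, dtype=float)
    # log-sum-exp computed pairwise for numerical stability
    lse = np.logaddexp.reduce(t * losses)
    return (lse - np.log(len(losses))) / t

losses = np.array([0.1, 0.2, 5.0])   # last entry plays the role of an outlier
erm = losses.mean()
pos = tilted_loss(losses, t=10.0)    # pulled toward the max loss
neg = tilted_loss(losses, t=-10.0)   # pulled toward the min loss
```

Here `pos > erm > neg`: a positive tilt behaves like a smooth max over losses (approximating superquantile/CVaR-style objectives), while a negative tilt behaves like a smooth min, which is what lets one objective interpolate between fairness- and robustness-oriented behavior.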


