Learning Effective Loss Functions Efficiently

06/28/2019
by Matthew Streeter, et al.

We consider the problem of learning a loss function which, when minimized over a training dataset, yields a model that approximately minimizes a validation error metric. Though learning an optimal loss function is NP-hard, we present an anytime algorithm that is asymptotically optimal in the worst case, and is provably efficient in an idealized "easy" case. Experimentally, we show that this algorithm can be used to tune loss function hyperparameters orders of magnitude faster than state-of-the-art alternatives. We also show that our algorithm can be used to learn novel and effective loss functions on-the-fly during training.
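Since only the abstract is available here, the following is a minimal sketch of the problem setup it describes, not the paper's algorithm: an outer loop searches over a one-parameter family of training losses (an interpolation weight `alpha` between squared and logistic loss) and keeps the setting whose trained model has the lowest validation error. The toy data, the `alpha` parameterization, and the naive grid-search outer loop are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data with labels in {-1, +1}.
X_train = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y_train = np.sign(X_train @ w_true + 0.5 * rng.normal(size=200))
X_val = rng.normal(size=(100, 5))
y_val = np.sign(X_val @ w_true + 0.5 * rng.normal(size=100))

def loss_grad(w, X, y, alpha):
    """Gradient of alpha * squared loss + (1 - alpha) * logistic loss."""
    margins = y * (X @ w)
    g_sq = X.T @ (X @ w - y) / len(y)                                    # squared-loss term
    g_log = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)   # logistic-loss term
    return alpha * g_sq + (1.0 - alpha) * g_log

def train_model(alpha, steps=500, lr=0.1):
    """Inner problem: minimize the parameterized training loss by gradient descent."""
    w = np.zeros(X_train.shape[1])
    for _ in range(steps):
        w -= lr * loss_grad(w, X_train, y_train, alpha)
    return w

def validation_error(w):
    """0-1 error on the held-out set: the metric the outer loop actually targets."""
    return np.mean(np.sign(X_val @ w) != y_val)

# Naive outer loop: grid search over the loss hyperparameter. The paper's
# contribution is an anytime algorithm that performs this search far more
# efficiently; the grid here is only a stand-in for illustration.
best_alpha, best_err = None, np.inf
for alpha in np.linspace(0.0, 1.0, 11):
    err = validation_error(train_model(alpha))
    if err < best_err:
        best_alpha, best_err = alpha, err
print(f"best alpha={best_alpha:.1f}, validation error={best_err:.3f}")
```

The sketch only fixes the interface the abstract implies: an inner minimization over a candidate training loss, and an outer evaluation of the trained model on a separate validation metric.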


Related research

- Population-Based Training for Loss Function Optimization (02/11/2020): Metalearning of deep neural network (DNN) architectures and hyperparamet...
- Effective Regularization Through Loss-Function Metalearning (10/02/2020): Loss-function metalearning can be used to discover novel, customized los...
- Learning Embedding of 3D models with Quadric Loss (07/24/2019): Sharp features such as edges and corners play an important role in the p...
- Homography-Based Loss Function for Camera Pose Regression (05/04/2022): Some recent visual-based relocalization algorithms rely on deep learning...
- Optimal solutions to the isotonic regression problem (04/09/2019): In general, the solution to a regression problem is the minimizer of a g...
- Bayesian Sampling Bias Correction: Training with the Right Loss Function (06/24/2020): We derive a family of loss functions to train models in the presence of ...
- Program Synthesis Over Noisy Data with Guarantees (03/08/2021): We explore and formalize the task of synthesizing programs over noisy da...
