Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization

05/27/2019
by Santiago Gonzalez, et al.

As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have led to significant increases in performance. This paper shows that loss functions can be optimized with metalearning as well, and that doing so results in similar improvements. The method, Genetic Loss-function Optimization (GLO), discovers loss functions de novo and optimizes them for a target task. Leveraging techniques from genetic programming, GLO builds loss functions hierarchically from a set of operators and leaf nodes. These functions are repeatedly recombined and mutated to find an optimal structure, and a covariance-matrix adaptation evolution strategy (CMA-ES) is then used to find optimal coefficients. Networks trained with GLO loss functions are found to outperform networks trained with the standard cross-entropy loss on standard image classification tasks. Training with these new loss functions requires fewer steps, results in lower test error, and allows smaller datasets to be used. Loss-function optimization thus provides a new dimension of metalearning and constitutes an important step towards AutoML.
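
The two-stage procedure the abstract describes (genetic programming over expression trees, followed by CMA-ES coefficient tuning) can be made concrete with a short sketch. The sketch below is illustrative only: the operator set, tree depths, and helper names (random_tree, mutate, crossover) are assumptions rather than the paper's configuration, and the fitness function is a cheap stand-in that scores how closely a candidate tracks cross-entropy. In GLO itself, a candidate's fitness is the validation performance of a network trained with that loss.

```python
# Illustrative sketch of GLO-style loss-function evolution (not the
# paper's exact configuration: operator set, depths, and the stand-in
# fitness below are assumptions for demonstration).
import math
import random

# Building blocks: (arity, implementation). GLO uses a richer set.
OPERATORS = {
    "+": (2, lambda a, b: a + b),
    "-": (2, lambda a, b: a - b),
    "*": (2, lambda a, b: a * b),
    "log": (1, lambda a: math.log(max(a, 1e-12))),
}
LEAVES = ["y_true", "y_pred", "1"]


def random_tree(depth=3):
    """Grow a random expression tree over the operators and leaves."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    op = random.choice(list(OPERATORS))
    arity = OPERATORS[op][0]
    return [op] + [random_tree(depth - 1) for _ in range(arity)]


def evaluate(tree, y_true, y_pred):
    """Evaluate a tree on a single (target, prediction) pair."""
    if tree == "y_true":
        return y_true
    if tree == "y_pred":
        return y_pred
    if tree == "1":
        return 1.0
    op, *args = tree
    return OPERATORS[op][1](*(evaluate(a, y_true, y_pred) for a in args))


def mutate(tree):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(depth=2)
    new = list(tree)
    i = random.randrange(1, len(new))
    new[i] = mutate(new[i])
    return new


def random_subtree(tree):
    if not isinstance(tree, list) or random.random() < 0.3:
        return tree
    return random_subtree(random.choice(tree[1:]))


def crossover(a, b):
    """Graft a random subtree of `b` into a random slot of `a`."""
    if not isinstance(a, list):
        return random_subtree(b)
    child = list(a)
    child[random.randrange(1, len(child))] = random_subtree(b)
    return child


def fitness(tree):
    """Stand-in fitness: closeness to cross-entropy on sampled points.
    In GLO, fitness is instead the validation performance of a network
    trained with the candidate loss (far more expensive)."""
    err = 0.0
    for _ in range(64):
        y = random.choice([0.0, 1.0])
        p = random.uniform(0.01, 0.99)
        target = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        try:
            err += (evaluate(tree, y, p) - target) ** 2
        except (ValueError, OverflowError, ZeroDivisionError):
            return float("-inf")  # numerically invalid candidate
    return -err


# Simple generational loop: keep an elite, refill by crossover + mutation.
population = [random_tree() for _ in range(50)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]
    population = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(40)
    ]
best = max(population, key=fitness)
print("best loss-function tree:", best)
# A second stage would tune numeric coefficients inside `best` with
# CMA-ES (e.g., via the `cma` package), mirroring GLO's coefficient step.
```

The split into two stages reflects a common division of labor in evolutionary search: genetic programming handles the discrete choice of structure, while a continuous optimizer such as CMA-ES is better suited to fine-tuning real-valued coefficients within a fixed structure.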

Related research

01/31/2020 · Evolving Loss Functions With Multivariate Taylor Polynomial Parameterizations
Loss function optimization for neural networks has recently emerged as a...

02/11/2020 · Population-Based Training for Loss Function Optimization
Metalearning of deep neural network (DNN) architectures and hyperparamet...

08/05/2022 · CROLoss: Towards a Customizable Loss for Retrieval Models in Recommender Systems
In large-scale recommender systems, retrieving top N relevant candidates...

04/24/2022 · The Multiscale Structure of Neural Network Loss Functions: The Effect on Optimization and Origin
Local quadratic approximation has been extensively used to study the opt...

08/13/2023 · Effect of Choosing Loss Function when Using T-batching for Representation Learning on Dynamic Networks
Representation learning methods have revolutionized machine learning on ...

06/07/2023 · Loss Functions for Behavioral Game Theory
Behavioral game theorists all use experimental data to evaluate predicti...

06/17/2021 · CIRA Guide to Custom Loss Functions for Neural Networks in Environmental Sciences – Version 1
Neural networks are increasingly used in environmental science applicati...
