Multi-Loss Weighting with Coefficient of Variations

09/03/2020
by Rick Groenendijk, et al.

Many interesting tasks in machine learning and computer vision are learned by optimising an objective function defined as a weighted linear combination of multiple losses. The final performance is sensitive to choosing the correct (relative) weights for these losses. Finding a good set of weights is often done by treating them as hyper-parameters and running an extensive grid search, which is computationally expensive. In this paper, the weights are instead defined from properties observed while training the model: the specific batch loss, the running mean of each loss, and the variance of each loss. An additional advantage is that these weights evolve during training, rather than remaining static. In the literature, loss weighting is mostly studied in a multi-task learning setting, where different tasks receive different weights. However, there is a plethora of single-task multi-loss problems that can benefit from automatic loss weighting. In this paper, it is shown that these multi-task approaches do not work on single tasks. Instead, a method is proposed that automatically and dynamically tunes loss weights throughout training, specifically for single-task multi-loss problems. The method incorporates a measure of uncertainty to balance the losses. The validity of the approach is shown empirically for different tasks on multiple datasets.
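The idea of weighting each loss by statistics gathered during training can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it keeps a running mean and variance per loss (via Welford's algorithm) and sets each weight proportional to that loss's coefficient of variation (standard deviation divided by mean), normalised to sum to one. The class and method names are hypothetical.

```python
import math

class CoVWeighting:
    """Illustrative coefficient-of-variation loss weighting (not the
    paper's exact method). A loss whose value fluctuates strongly
    relative to its mean receives a larger weight."""

    def __init__(self, num_losses):
        self.n = 0                       # number of batches seen
        self.mean = [0.0] * num_losses   # running mean per loss
        self.m2 = [0.0] * num_losses     # running sum of squared deviations

    def weights(self, losses):
        # Update running statistics with the current batch losses
        # (Welford's online algorithm).
        self.n += 1
        for i, loss in enumerate(losses):
            delta = loss - self.mean[i]
            self.mean[i] += delta / self.n
            self.m2[i] += delta * (loss - self.mean[i])

        # Coefficient of variation per loss, guarding against
        # division by zero on degenerate inputs.
        covs = []
        for i in range(len(losses)):
            var = self.m2[i] / self.n if self.n > 1 else 0.0
            cov = math.sqrt(var) / self.mean[i] if self.mean[i] > 0 else 0.0
            covs.append(cov)

        total = sum(covs)
        if total == 0.0:
            # First batch (or constant losses): fall back to uniform weights.
            return [1.0 / len(losses)] * len(losses)
        return [c / total for c in covs]

# Usage: combine per-batch losses with the dynamic weights.
cw = CoVWeighting(num_losses=2)
for batch_losses in [(1.0, 4.0), (1.2, 3.0), (0.8, 5.0)]:
    w = cw.weights(batch_losses)
    total_loss = sum(wi * li for wi, li in zip(w, batch_losses))
```

Because the weights are recomputed every batch from the observed statistics, they adapt as training progresses, which is the behaviour the abstract contrasts with static, grid-searched loss weights.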

