Fast Line Search for Multi-Task Learning

10/02/2021
by Andrey Filatov, et al.

Multi-task learning is a powerful method for solving several tasks jointly by learning a robust shared representation. Optimizing a multi-task model is harder than optimizing a single-task one because the tasks can conflict with each other. Theoretical results guarantee convergence to the optimal point when the step size is chosen through line search, but line search is usually avoided in practice because of its large computational overhead. We propose a novel idea for line search algorithms in multi-task learning: search for the step size in the latent representation space instead of the parameter space. We examine this idea with backtracking line search and compare the resulting fast backtracking algorithm against classical backtracking and against gradient methods with a constant learning rate on MNIST, CIFAR-10, and Cityscapes tasks. A systematic empirical study shows that the proposed method reaches a more accurate solution faster than the traditional backtracking approach, while remaining competitive with the constant learning rate method in both computational time and performance.
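For orientation, below is a minimal sketch of the classical backtracking (Armijo) line search that serves as the baseline here: the step size is repeatedly shrunk until a sufficient-decrease condition holds. The function names, constants, and the toy quadratic problem are illustrative assumptions, not code from the paper; in the actual multi-task setting each loss evaluation requires a full forward pass through the shared network, which is the overhead the proposed latent-space variant aims to reduce.

    # Sketch of classical backtracking (Armijo) line search on a toy problem.
    # All names and the quadratic objective are illustrative assumptions.
    import numpy as np

    def backtracking_step(loss_fn, grad_fn, params, direction,
                          alpha0=1.0, beta=0.5, c=1e-4, max_iter=20):
        """Shrink alpha until the Armijo sufficient-decrease condition
        loss(params + alpha*direction) <= loss(params) + c*alpha*grad^T direction
        holds, then return the accepted step size."""
        alpha = alpha0
        loss0 = loss_fn(params)
        slope = grad_fn(params) @ direction  # directional derivative (negative for descent)
        for _ in range(max_iter):
            if loss_fn(params + alpha * direction) <= loss0 + c * alpha * slope:
                break
            alpha *= beta  # reject the candidate step and try a smaller one
        return alpha

    if __name__ == "__main__":
        # Toy quadratic loss; each loss_fn call stands in for the forward pass
        # that makes classical backtracking expensive on real networks.
        A = np.diag([1.0, 10.0])
        loss_fn = lambda w: 0.5 * w @ A @ w
        grad_fn = lambda w: A @ w
        w = np.array([1.0, 1.0])
        for _ in range(50):
            d = -grad_fn(w)  # steepest-descent direction
            w = w + backtracking_step(loss_fn, grad_fn, w, d) * d
        print("final loss:", loss_fn(w))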


