Maximum Roaming Multi-Task Learning

06/17/2020
by   Lucas Pascal, et al.

Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance. Nonetheless, the joint optimization of parameters with respect to multiple tasks remains an active research topic. Partitioning the parameters between tasks, whether the partitions are disjoint or overlapping, has proven to be an efficient way to relax the optimization constraints over the shared weights. However, one drawback of this approach is that it can weaken the inductive bias generally set up by joint task optimization. In this work, we present a novel way to partition the parameter space without weakening the inductive bias. Specifically, we propose Maximum Roaming, a dropout-inspired method that randomly varies the parameter partitioning, while forcing each parameter to visit as many tasks as possible at a regulated frequency, so that the network fully adapts to each update. We study the properties of our method through experiments on a variety of visual multi-task datasets. Experimental results suggest that the regularization brought by roaming has more impact on performance than usual partitioning optimization strategies. The overall method is flexible, easily applicable, provides superior regularization, and consistently achieves improved performance compared to recent multi-task learning formulations.
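To make the partitioning-and-roaming idea concrete, here is a minimal NumPy sketch of per-task binary masks over a shared layer's units, with a periodic update that swaps one active unit per task for a not-yet-visited one. The class name, the keep probability, and the single-swap update rule are illustrative assumptions, not the paper's exact formulation.

import numpy as np

class RoamingPartition:
    """Per-task binary masks over a shared layer's units, with a periodic
    "roaming" update so every unit eventually serves every task.
    Illustrative sketch only; names and the update rule are assumptions."""

    def __init__(self, n_units, n_tasks, keep_prob=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        # Random initial partition: each task keeps ~keep_prob of the units.
        self.masks = self.rng.random((n_tasks, n_units)) < keep_prob
        # Record which units each task has used so far ("visited").
        self.visited = self.masks.copy()

    def roam(self):
        # Called at a regulated frequency during training.
        for t in range(self.masks.shape[0]):
            unvisited = np.flatnonzero(~self.visited[t])
            active = np.flatnonzero(self.masks[t])
            if unvisited.size == 0 or active.size == 0:
                continue  # task t has already visited every unit
            # Swap one active unit for an unvisited one, so units roam.
            self.masks[t, self.rng.choice(active)] = False
            new = self.rng.choice(unvisited)
            self.masks[t, new] = True
            self.visited[t, new] = True

    def apply(self, features, task):
        # Mask a layer's features for the given task (broadcasts over batch).
        return features * self.masks[task]

In a training loop, roam() would be invoked every fixed number of iterations, leaving enough steps between updates for the network to adapt to each new partition, which is the "regulated frequency" the abstract refers to.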


Related research

11/19/2019  Adaptive Activation Network and Functional Regularization for Efficient and Flexible Deep Multi-Task Learning
Multi-task learning (MTL) is a common paradigm that seeks to improve the...

08/12/2019  Feature Partitioning for Efficient Multi-Task Architectures
Multi-task learning holds the promise of less data, parameters, and time...

10/23/2018  Meta-Learning Multi-task Communication
In this paper, we describe a general framework: Parameters Read-Write Ne...

09/29/2021  Partitioning Cloud-based Microservices (via Deep Learning)
Cloud-based software has many advantages. When services are divided into...

02/02/2022  Multi-Task Learning as a Bargaining Game
In Multi-task learning (MTL), a joint model is trained to simultaneously...

04/14/2022  Leveraging convergence behavior to balance conflicting tasks in multi-task learning
Multi-Task Learning is a learning paradigm that uses correlated tasks to...

11/08/2022  Nimbus: Toward Speed Up Function Signature Recovery via Input Resizing and Multi-Task Learning
Function signature recovery is important for many binary analysis tasks ...
