DynaShare: Task and Instance Conditioned Parameter Sharing for Multi-Task Learning

05/26/2023
by Elahe Rahimian, et al.

Multi-task networks rely on effective parameter sharing to achieve robust generalization across tasks. In this paper, we present a novel parameter sharing method for multi-task learning that conditions parameter sharing on both the task and the intermediate feature representations at inference time. In contrast to traditional parameter sharing approaches, which fix or learn a deterministic sharing pattern during training and apply the same pattern to all examples during inference, we propose to dynamically decide which parts of the network to activate based on both the task and the input instance. Our approach learns a hierarchical gating policy consisting of a task-specific policy for coarse layer selection and gating units for individual input instances, which work together to determine the execution path at inference time. Experiments on the NYU v2, Cityscapes and MIMIC-III datasets demonstrate the potential of the proposed approach and its applicability across problem domains.
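To make the hierarchical gating concrete, below is a minimal sketch (not the authors' released code) of task- and instance-conditioned execution over a stack of shared blocks: a learnable per-task policy makes the coarse per-layer selection, lightweight gating units read each instance's intermediate features, and a block runs only when both decisions agree. All names here (DynaShareSketch, InstanceGate, task_policy) are hypothetical, and the straight-through Gumbel-softmax is one common way to train such discrete gates, assumed for illustration.

```python
# Hypothetical sketch of task- and instance-conditioned gating (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceGate(nn.Module):
    """Per-block gating unit: maps intermediate features to a skip/execute decision."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, 2)  # logits for {skip, execute}

    def forward(self, x):
        logits = self.fc(self.pool(x).flatten(1))      # (batch, 2)
        # Straight-through Gumbel-softmax keeps the discrete choice
        # differentiable during training; at test time it is a hard argmax.
        return F.gumbel_softmax(logits, hard=True)[:, 1]  # (batch,)

class DynaShareSketch(nn.Module):
    def __init__(self, blocks, num_tasks, channels):
        super().__init__()
        # `blocks` are assumed channel- and resolution-preserving (e.g. residual blocks),
        # so a skipped block can be replaced by the identity.
        self.blocks = nn.ModuleList(blocks)
        # Task-specific policy: one pair of logits per (task, block) for coarse layer selection.
        self.task_policy = nn.Parameter(torch.zeros(num_tasks, len(blocks), 2))
        self.gates = nn.ModuleList(InstanceGate(channels) for _ in blocks)

    def forward(self, x, task_id):
        for block, gate, logits in zip(self.blocks, self.gates, self.task_policy[task_id]):
            task_on = F.gumbel_softmax(logits, hard=True)[1]  # scalar: task-level choice
            inst_on = gate(x)                                 # (batch,): instance-level choice
            g = (task_on * inst_on).view(-1, 1, 1, 1)         # execute only if both agree
            x = g * block(x) + (1 - g) * x                    # gated residual execution
        return x
```

Because both gates are hard at inference, the executed path varies with the task id and with the individual input, which is the behavior the abstract describes.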

