
Learning Task Grouping and Overlap in Multi-task Learning

by   Abhishek Kumar, et al.

In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature, and the overlap in the sparsity patterns of two tasks controls the amount of sharing between them. Our model is based on the assumption that task parameters within a group lie in a low-dimensional subspace, but allows tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.
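The parameter model described in the abstract can be sketched in a few lines of numpy: each task's weight vector is a sparse linear combination of shared basis tasks, and two tasks share information exactly when their sparsity patterns overlap. This is a minimal illustrative construction, not the authors' implementation; the dimensions, coefficient values, and variable names are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T = 10, 4, 3  # feature dim, number of latent basis tasks, number of tasks

# L: dictionary of latent basis tasks (each column is one basis task).
L = rng.standard_normal((d, k))

# S: sparse coefficient matrix; column t holds task t's combination weights.
# Tasks 0 and 1 overlap in basis 1, so they share information through it;
# task 2 uses a disjoint basis and is effectively learned independently.
S = np.array([
    [1.0, 0.0, 0.0],   # basis 0 -> task 0 only
    [0.5, 0.7, 0.0],   # basis 1 -> shared by tasks 0 and 1
    [0.0, 1.2, 0.0],   # basis 2 -> task 1 only
    [0.0, 0.0, 0.9],   # basis 3 -> task 2 only
])

# Each task parameter vector is a sparse linear combination of the bases:
# w_t = L @ s_t, collected column-wise into W.
W = L @ S              # shape (d, T)

# Overlap in the sparsity patterns controls the amount of sharing.
support = S != 0
overlap_01 = int(np.sum(support[:, 0] & support[:, 1]))  # tasks 0 and 1 share 1 basis
overlap_02 = int(np.sum(support[:, 0] & support[:, 2]))  # tasks 0 and 2 share none
print(W.shape, overlap_01, overlap_02)
```

In the full framework, L and the sparse coefficients in S would be learned jointly from data (e.g. with a sparsity-inducing penalty on S), rather than fixed by hand as above.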



