Towards Interpretable Multi-Task Learning Using Bilevel Programming

09/11/2020
by Francesco Alesiani, et al.

Interpretable multi-task learning can be expressed as learning a sparse graph of task relationships based on the prediction performance of the learned models. Since many natural phenomena exhibit sparse structures, enforcing sparsity on the learned models reveals the underlying task relationships. Moreover, different degrees of sparsification of a fully connected graph uncover various types of structures, such as cliques, trees, lines, clusters, or fully disconnected graphs. In this paper, we propose a bilevel formulation of multi-task learning that induces sparse graphs, thus revealing the underlying task relationships, and an efficient method for its computation. We show empirically, on synthetic and real data, that the induced sparse graph improves the interpretability of the learned models and of their relationships without sacrificing generalization performance. Code at https://bit.ly/GraphGuidedMTL
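The core idea of coupling per-task models through a learnable sparse task graph can be illustrated with a minimal sketch. This is not the authors' algorithm, only a simplified alternating scheme under assumed choices: a lower-level step fits each task's linear model with Laplacian-style coupling to its graph neighbors, and an upper-level step re-estimates and sparsifies the edge weights from pairwise parameter distances. All data, variable names, and hyperparameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: T related linear-regression tasks sharing structure.
T, d, n = 4, 5, 30
w_shared = rng.normal(size=d)
tasks = []
for t in range(T):
    X = rng.normal(size=(n, d))
    w_t = w_shared + 0.1 * rng.normal(size=d)  # tasks are perturbations of a shared model
    y = X @ w_t + 0.01 * rng.normal(size=n)
    tasks.append((X, y))

lam, thresh = 0.1, 0.1          # graph-coupling strength, edge sparsification threshold
W = np.zeros((T, d))            # per-task parameters, one row per task
A = np.ones((T, T)) - np.eye(T) # start from a fully connected task graph

for _ in range(50):
    # Lower level: fit each task with a graph (Laplacian) penalty
    #   ||X w_t - y||^2 + lam * sum_j A[t, j] * ||w_t - w_j||^2,
    # which has the ridge-like closed-form solve below.
    for t in range(T):
        X, y = tasks[t]
        deg = A[t].sum()
        H = X.T @ X + lam * deg * np.eye(d)
        W[t] = np.linalg.solve(H, X.T @ y + lam * (A[t] @ W))
    # Upper level: rebuild edge weights from pairwise parameter distances,
    # then sparsify by thresholding small affinities to zero.
    D = np.array([[np.sum((W[i] - W[j]) ** 2) for j in range(T)] for i in range(T)])
    A = np.maximum(np.exp(-D) - thresh, 0.0)
    np.fill_diagonal(A, 0.0)

print("edges kept:", int((A > 0).sum() // 2))
```

Because the affinity matrix is built from symmetric distances, the learned graph stays symmetric; raising `thresh` prunes more edges, moving the graph from fully connected toward disconnected components, which is the sparsification spectrum the abstract refers to.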

