Distributed Stochastic Multi-Task Learning with Graph Regularization

02/11/2018
by Weiran Wang, et al.

We propose methods for distributed graph-based multi-task learning that are based on weighted averaging of messages from other machines. Uniform averaging or a diminishing stepsize in these methods would yield consensus (single-task) learning. We show how simply skewing the averaging weights or controlling the stepsize allows learning different, but related, tasks on the different machines.
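The scheme in the abstract can be illustrated with a minimal simulation. The sketch below is an assumption-laden toy, not the paper's algorithm: each machine minimizes a hypothetical quadratic local objective, one round mixes parameters with a weight matrix `W` and then takes a local gradient step. A uniform `W` drives the machines toward a common (consensus) solution, while skewing `W` toward each machine's own parameters lets them settle on different, but related, task solutions.

```python
import numpy as np

def run(W, targets, lr=0.1, steps=500):
    """Toy distributed learning loop (illustrative only).

    Machine i minimizes f_i(x) = 0.5 * (x - targets[i])**2.
    Each round: weighted averaging of the other machines' parameters
    via W, followed by a local gradient step.
    """
    x = np.zeros_like(targets)
    for _ in range(steps):
        x = W @ x                    # weighted averaging of messages
        x = x - lr * (x - targets)   # local gradient step on f_i
    return x

m = 3
targets = np.array([0.0, 1.0, 2.0])  # hypothetical per-machine task optima

# Uniform averaging: machines are pulled to a near-consensus solution.
W_uniform = np.full((m, m), 1.0 / m)
x_consensus = run(W_uniform, targets)

# Skewed weights (mostly keep own parameters): related but distinct tasks.
alpha = 0.9
W_skewed = alpha * np.eye(m) + (1 - alpha) * np.full((m, m), 1.0 / m)
x_multitask = run(W_skewed, targets)
```

With the uniform weights the per-machine solutions cluster tightly around the average of the task optima, whereas the skewed weights leave each machine much closer to its own optimum while still shrinking toward the others, which is the multi-task effect the abstract describes.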


