Multi-Task Averaging

07/21/2011
by Sergey Feldman, et al.

We present a multi-task learning approach that jointly estimates the means of multiple independent data sets. The proposed multi-task averaging (MTA) algorithm yields a convex combination of the single-task maximum-likelihood estimates. We derive the optimal minimum-risk estimator and the minimax estimator, and show that, although these depend on unknown quantities, they can be estimated efficiently from the data. Simulations and real-data experiments demonstrate that MTA estimators often outperform both single-task and James-Stein estimators.
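
To make the estimator concrete, below is a minimal NumPy sketch of one instance of this idea: per-task sample means are regularized toward one another through a graph Laplacian, and the closed-form solution applies a convex-combination matrix W to the single-task estimates. The function name mta_constant_similarity, the all-ones similarity matrix A, and the fixed regularization weight gamma are illustrative assumptions, not the paper's prescribed, data-driven choices.

import numpy as np

def mta_constant_similarity(task_samples, gamma=1.0):
    # Hypothetical helper sketching multi-task averaging; gamma and the
    # all-ones similarity matrix below are illustrative assumptions.
    T = len(task_samples)
    y_bar = np.array([x.mean() for x in task_samples])      # single-task MLEs
    var = np.array([x.var(ddof=1) for x in task_samples])   # sample variances
    n = np.array([len(x) for x in task_samples], float)

    A = np.ones((T, T))                # assumed: all task pairs equally similar
    L = np.diag(A.sum(axis=1)) - A     # graph Laplacian of the similarity matrix
    S = np.diag(var / n)               # variance of each single-task sample mean

    # Closed-form solution of the Laplacian-regularized least-squares
    # objective. W is a convex-combination matrix: since L @ ones = 0,
    # each row of W sums to 1.
    W = np.linalg.inv(np.eye(T) + (gamma / T) * S @ L)
    return W @ y_bar

# Toy usage: three related tasks whose sample means are shrunk toward
# each other rather than estimated independently.
rng = np.random.default_rng(0)
samples = [rng.normal(m, 1.0, size=20) for m in (0.0, 0.5, 1.0)]
print(mta_constant_similarity(samples))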
