Multi-task learning on the edge: cost-efficiency and theoretical optimality

10/09/2021
by Sami Fakhry, et al.

This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) that is (i) theoretically optimal for Gaussian mixture models and (ii) computationally cheap and scalable. Supporting experiments on synthetic and real benchmark data demonstrate that significant energy gains can be obtained with no loss in performance.

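The full algorithm is in the paper; as a rough illustration of the ingredients the abstract names (a supervised PCA projection computed cheaply on each device, applied to related Gaussian-mixture classification tasks), here is a minimal Python sketch. The HSIC-style SPCA variant, the QR merge of per-task subspaces, and the nearest-class-mean classifier are illustrative assumptions, not necessarily the authors' method.

    import numpy as np

    def supervised_pca(X, y, k):
        """One common supervised-PCA variant (after Barshan et al., 2011):
        return the k directions of X most dependent on the labels y."""
        n = len(X)
        H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
        L = (y[:, None] == y[None, :]).astype(float)   # delta kernel on labels
        M = X.T @ H @ L @ H @ X                        # symmetric d x d target matrix
        w, V = np.linalg.eigh(M)                       # eigenvalues in ascending order
        return V[:, np.argsort(w)[::-1][:k]]           # top-k eigenvectors, shape (d, k)

    # Toy data: two related binary Gaussian-mixture tasks (the paper's setting).
    rng = np.random.default_rng(0)
    d, n = 50, 200
    mu = rng.standard_normal(d)                        # class-mean direction shared across tasks
    tasks = []
    for shift in (0.0, 0.3):                           # related but not identical tasks
        y = rng.integers(0, 2, size=n)
        X = rng.standard_normal((n, d)) + (2 * y[:, None] - 1) * (mu + shift)
        tasks.append((X, y))

    # Each device computes its projection locally and shares only U (d x k);
    # exchanging these small matrices instead of raw samples is the kind of
    # communication saving that the energy-gain claim rests on.
    Us = [supervised_pca(X, y, k=2) for X, y in tasks]
    U, _ = np.linalg.qr(np.hstack(Us))                 # merge into one orthonormal basis

    for t, (X, y) in enumerate(tasks):
        Z = X @ U                                      # project into the shared subspace
        m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
        pred = (np.linalg.norm(Z - m1, axis=1)
                < np.linalg.norm(Z - m0, axis=1)).astype(int)
        print(f"task {t}: accuracy = {(pred == y).mean():.2f}")

The point of the sketch is the communication pattern: each device transmits a d x k matrix rather than its raw data, and all downstream work happens in the shared low-dimensional subspace.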

Related research

11/01/2021 · PCA-based Multi Task Learning: a Random Matrix Approach
The article proposes and theoretically analyses a computationally effici...

02/10/2022 · Adaptive and Robust Multi-task Learning
We study the multi-task learning problem that aims to simultaneously ana...

10/22/2022 · Adaptive Data Fusion for Multi-task Non-smooth Optimization
We study the problem of multi-task non-smooth optimization that arises u...

09/13/2012 · Minimax Multi-Task Learning and a Generalized Loss-Compositional Paradigm for MTL
Since its inception, the modus operandi of multi-task learning (MTL) has...

07/18/2012 · On the Statistical Efficiency of ℓ_1,p Multi-Task Learning of Gaussian Graphical Models
In this paper, we present ℓ_1,p multi-task structure learning for Gaussi...

09/03/2020 · Large Dimensional Analysis and Improvement of Multi Task Learning
Multi Task Learning (MTL) efficiently leverages useful information conta...

11/27/2018 · Kernel-based Multi-Task Contextual Bandits in Cellular Network Configuration
Cellular network configuration plays a critical role in network performa...