Improved Active Multi-Task Representation Learning via Lasso

06/05/2023
by   Yiping Wang, et al.

To leverage the copious amount of data from source tasks and overcome the scarcity of target-task samples, representation learning based on multi-task pretraining has become a standard approach in many applications. However, most existing works design their source task selection strategies from a purely empirical perspective. Recently, <cit.> gave the first active multi-task representation learning (A-MTRL) algorithm, which adaptively samples from source tasks and can provably reduce the total sample complexity using the L2-regularized target-source relevance parameter ν^2. However, their approach is theoretically suboptimal in terms of total source sample complexity and is less practical in real-world scenarios where sparse selection of source tasks is desired. In this paper, we address both issues. Specifically, we show the strict dominance of the L1-regularized-relevance-based (ν^1-based) strategy by proving a lower bound for the ν^2-based strategy. When ν^1 is unknown, we propose a practical algorithm that uses the LASSO program to estimate ν^1; this algorithm recovers the optimal result of the known-ν^1 case. Beyond our sample complexity results, we also characterize the potential of the ν^1-based strategy in sample-cost-sensitive settings. Finally, we provide experiments on real-world computer vision datasets that illustrate the effectiveness of the proposed method.
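
The core computational step sketched in the abstract, estimating a sparse target-source relevance vector with an L1-penalized (Lasso) regression and then steering source sampling toward tasks with large estimated relevance, can be illustrated with a small toy example. The snippet below is a minimal sketch under simplifying assumptions: a linear model, hypothetical per-task predictors (`source_directions`), and a proportional-to-|ν̂| budget allocation. It is not the authors' algorithm or its exact allocation rule.

```python
# Minimal sketch (not the paper's implementation): estimate a sparse
# target-source relevance vector via the Lasso, then allocate a source
# sampling budget in proportion to the estimated relevance magnitudes.
# The feature construction and the allocation rule are illustrative
# assumptions, not the paper's precise procedure.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy setup: T source tasks, each contributing a d-dimensional direction
# (e.g., a linear predictor learned on that source task), and a small
# labelled target-task dataset (x_i, y_i).
T, d, n_target = 20, 50, 100
source_directions = rng.normal(size=(T, d))   # hypothetical per-task predictors
x_target = rng.normal(size=(n_target, d))
true_nu = np.zeros(T)
true_nu[[2, 7, 11]] = [1.0, -0.5, 0.8]        # sparse ground-truth relevance
y_target = x_target @ (source_directions.T @ true_nu) + 0.1 * rng.normal(size=n_target)

# Regress target labels on the source-task "features" <x_i, w_t>;
# the L1 penalty encourages a sparse relevance estimate nu_hat.
features = x_target @ source_directions.T     # shape (n_target, T)
lasso = Lasso(alpha=0.05, fit_intercept=False)
lasso.fit(features, y_target)
nu_hat = lasso.coef_

# Allocate a total source-sample budget proportionally to |nu_hat|
# (an illustrative allocation; the paper derives its own rule).
budget = 10_000
weights = np.abs(nu_hat)
weights = weights / weights.sum() if weights.sum() > 0 else np.full(T, 1 / T)
allocation = np.round(budget * weights).astype(int)
print("estimated support:", np.nonzero(nu_hat)[0])
print("samples per source task:", allocation)
```

In this sketch, only the few source tasks with nonzero estimated relevance receive a meaningful share of the budget, which is the sparse source-task selection behavior the ν^1-based strategy is meant to enable.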

