Active Representation Learning for General Task Space with Applications in Robotics

06/15/2023
by Yifang Chen, et al.

Representation learning based on multi-task pretraining has become a powerful approach in many domains. In particular, task-aware representation learning aims to learn an optimal representation for a specific target task by sampling data from a set of source tasks, while task-agnostic representation learning seeks to learn a universal representation for a class of tasks. In this paper, we propose a general and versatile algorithmic and theoretic framework for active representation learning, where the learner optimally chooses which source tasks to sample from. This framework, together with a tractable meta algorithm, accommodates nearly arbitrary target and source task spaces (from discrete to continuous), covers both the task-aware and task-agnostic settings, and is compatible with deep representation learning practice. We provide several instantiations under this framework, from bilinear and feature-based nonlinear models to the general nonlinear case. In the bilinear case, by leveraging the non-uniform spectrum of the task representation and the calibrated source-target relevance, we prove that the sample complexity to achieve ε-excess risk on the target scales as (k^*)^2 ‖v^*‖_2^2 ε^{-2}, where k^* is the effective dimension of the target and ‖v^*‖_2^2 ∈ (0,1] represents the connection between the source and target task spaces. Compared to passive sampling, this can reduce the sample complexity to as little as a 1/d_W fraction, where d_W is the task-space dimension. Finally, we demonstrate different instantiations of our meta algorithm on synthetic datasets and robotics problems, from pendulum simulations to real-world drone flight datasets. On average, our algorithms outperform baselines by 20%-70%.
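The abstract gives no pseudocode, so the following is only a hedged toy sketch of the general idea of active source-task selection in a bilinear setting, not the paper's actual meta algorithm. All names here (`B_true`, `estimate_relevance`, `allocate`) and the relevance heuristic (absolute label correlation on a probe set) are invented for illustration; the paper's calibrated source-target relevance is a different, theoretically grounded quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bilinear setting: labels follow y = x^T B w_t with a shared representation B.
d, k, n_src = 16, 4, 6                               # input dim, latent dim, #source tasks
B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]    # shared low-rank representation
W_src = rng.normal(size=(k, n_src))                  # per-source-task heads
mix = np.array([0.9, 0.1, 0.0, 0.0, 0.0, 0.0])
w_tgt = W_src @ mix                                  # target head overlaps tasks 0 and 1

def estimate_relevance(n_probe=200, noise=0.1):
    """Crude relevance proxy: |correlation| between each source task's
    noisy labels and the target's noisy labels on a shared probe set."""
    X = rng.normal(size=(n_probe, d))
    y_tgt = X @ B_true @ w_tgt + noise * rng.normal(size=n_probe)
    rel = np.empty(n_src)
    for t in range(n_src):
        y_t = X @ B_true @ W_src[:, t] + noise * rng.normal(size=n_probe)
        rel[t] = abs(np.corrcoef(y_t, y_tgt)[0, 1])
    return rel

def allocate(relevance, budget):
    """Split a labeling budget across source tasks proportionally to relevance,
    instead of uniformly as a passive learner would."""
    p = relevance / relevance.sum()
    return np.round(p * budget).astype(int)

rel = estimate_relevance()
counts = allocate(rel, budget=1000)   # active allocation vs. 1000/6 per task passively
print(counts)
```

In this sketch, source tasks whose labels correlate more strongly with the target receive a larger share of the sampling budget; the intuition matches the abstract's claim that exploiting source-target relevance can cut sample complexity relative to uniform (passive) sampling.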


