Multi-Task Learning with Sequence-Conditioned Transporter Networks

09/15/2021
by   Michael H. Lim, et al.

Enabling robots to solve multiple manipulation tasks has a wide range of industrial applications. While learning-based approaches enjoy flexibility and generalizability, scaling these approaches to solve compositional tasks remains a challenge. In this work, we aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling. First, we propose a new benchmark suite specifically aimed at compositional tasks, MultiRavens, which allows defining custom task combinations through task modules that are inspired by industrial tasks and exemplify the difficulties in vision-based learning and planning methods. Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling and can efficiently learn to solve multi-task long-horizon problems. Our analysis suggests not only that the new framework significantly improves pick-and-place performance on 10 novel multi-task benchmark problems, but also that multi-task learning with weighted sampling can vastly improve learning and agent performance on individual tasks.
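To make the weighted-sampling idea concrete, here is a minimal illustrative sketch, not the paper's exact scheme: during multi-task training, tasks on which the agent currently performs worst are sampled more often. The function name `sample_task` and the `temperature` parameter are assumptions introduced for this example.

```python
import random

def sample_task(tasks, success_rates, temperature=1.0):
    """Sample a training task, favoring tasks with low recent success.

    tasks: list of task identifiers.
    success_rates: dict mapping task id -> success rate in [0, 1].
    temperature: sharpens (>1) or flattens (<1) the weighting.
    """
    # Weight each task by its failure rate (1 - success rate).
    weights = [(1.0 - success_rates[t]) ** temperature for t in tasks]
    if sum(weights) == 0:
        # Every task is already solved: fall back to uniform sampling.
        return random.choice(tasks)
    return random.choices(tasks, weights=weights, k=1)[0]
```

For example, with `success_rates = {"stack": 0.9, "route": 0.2}`, the `route` task receives weight 0.8 versus 0.1 for `stack`, so most training episodes are drawn from the task the agent still fails.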

Related research

- 11/12/2021 — Multi-task Learning for Compositional Data via Sparse Network Lasso: A network lasso enables us to construct a model for each sample, which i...
- 02/20/2017 — Learning to Multi-Task by Active Sampling: One of the long-standing challenges in Artificial Intelligence for learn...
- 02/11/2018 — Distributed Stochastic Multi-Task Learning with Graph Regularization: We propose methods for distributed graph-based multi-task learning that ...
- 08/01/2018 — Seq2Seq and Multi-Task Learning for joint intent and content extraction for domain specific interpreters: This study evaluates the performances of an LSTM network for detecting a...
- 12/09/2021 — New Tight Relaxations of Rank Minimization for Multi-Task Learning: Multi-task learning has been observed by many researchers, which suppose...
- 07/26/2023 — Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition: We present a framework for robot skill acquisition, which 1) efficiently...
- 09/13/2012 — Minimax Multi-Task Learning and a Generalized Loss-Compositional Paradigm for MTL: Since its inception, the modus operandi of multi-task learning (MTL) has...
