Task Selection Policies for Multitask Learning

07/14/2019

by John Glover, et al.

One of the questions that arises when designing models that learn to solve multiple tasks simultaneously is how much of the available training budget should be devoted to each individual task. We refer to any formalized approach to addressing this problem (learned or otherwise) as a task selection policy. In this work we provide an empirical evaluation of the performance of some common task selection policies in a synthetic bandit-style setting, as well as on the GLUE benchmark for natural language understanding. We connect task selection policy learning to existing work on automated curriculum learning and off-policy evaluation, and suggest a method based on counterfactual estimation that leads to improved model performance in our experimental settings.
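To make the idea concrete, a bandit-style task selection policy can be sketched as follows. This is a minimal illustration only, not the authors' method: it assumes an EXP3-style selector (the `Exp3TaskSelector` class, its `lr` and `explore` parameters, and the scalar per-task reward signal are all illustrative choices) in which the chosen task's weight is updated with an inverse-propensity reward estimate, the same counterfactual-estimation idea the abstract refers to.

```python
import math
import random

class Exp3TaskSelector:
    """EXP3-style bandit over training tasks: sample a task each step,
    then update weights from an importance-weighted (counterfactual)
    reward estimate, so the estimate stays unbiased under the sampling
    distribution."""

    def __init__(self, n_tasks, lr=0.1, explore=0.05):
        self.n_tasks = n_tasks
        self.lr = lr            # step size for weight updates
        self.explore = explore  # uniform exploration mixture
        self.log_weights = [0.0] * n_tasks

    def probs(self):
        # Softmax over log-weights (max-subtracted for numerical
        # stability), mixed with a uniform exploration floor.
        m = max(self.log_weights)
        exps = [math.exp(w - m) for w in self.log_weights]
        z = sum(exps)
        return [(1 - self.explore) * e / z + self.explore / self.n_tasks
                for e in exps]

    def select(self):
        # Sample a task index from the current distribution; return it
        # together with its sampling probability (the propensity).
        p = self.probs()
        r, acc = random.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if r <= acc:
                return i, pi
        return self.n_tasks - 1, p[-1]

    def update(self, task, prob, reward):
        # Inverse-propensity correction: only the selected task's weight
        # moves, scaled by 1/prob so rarely chosen tasks are not
        # systematically under-credited.
        self.log_weights[task] += self.lr * reward / prob

# Toy usage: three tasks where training on task 2 yields the largest
# (hypothetical) learning-progress reward; the policy concentrates on it.
random.seed(0)
selector = Exp3TaskSelector(n_tasks=3)
mean_reward = [0.1, 0.2, 0.8]
for _ in range(2000):
    task, prob = selector.select()
    selector.update(task, prob, mean_reward[task])
```

In a real multitask setup the `reward` would be some measured training signal (e.g. a loss decrease) rather than a fixed constant, and the logged `(task, prob, reward)` tuples are exactly what off-policy evaluation methods consume.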
