ConTinTin: Continual Learning from Task Instructions

03/16/2022
by Wenpeng Yin, et al.

The mainstream machine learning paradigms for NLP often work with two underlying presumptions. First, the target task is predefined and static; a system merely needs to learn to solve it exclusively. Second, the supervision of a task mainly comes from a set of labeled examples. A question arises: how can we build a system that keeps learning new tasks from their instructions? This work defines a new learning paradigm, ConTinTin (Continual Learning from Task Instructions), in which a system learns a sequence of new tasks one by one, each explained by a piece of textual instruction. The system is required to (i) generate the expected outputs of a new task by learning from its instruction, (ii) transfer the knowledge acquired from upstream tasks to help solve downstream tasks (i.e., forward-transfer), and (iii) retain or even improve performance on earlier tasks after learning new tasks (i.e., backward-transfer). We study this new problem on a stream of more than 60 tasks, each equipped with an instruction. Technically, our method InstructionSpeak contains two strategies that make full use of task instructions to improve forward-transfer and backward-transfer: one is to learn from negative outputs, the other is to re-visit the instructions of previous tasks. To our knowledge, this is the first work to study ConTinTin in NLP. Beyond the problem formulation and our promising approach, this work also contributes rich analyses that help the community better understand this novel learning problem.
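
To make the setting concrete, below is a minimal Python sketch of a ConTinTin-style learning loop with the two InstructionSpeak strategies paraphrased as code. The Task and InstructionLearner interfaces, the replay window replay_k, and the accuracy metric are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of the ConTinTin setting: a stream of instruction-defined
# tasks learned one by one, with the two InstructionSpeak-style strategies
# (learning from negative outputs, re-visiting earlier instructions).
# All interfaces here are hypothetical stand-ins, not the paper's code.
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    instruction: str                                  # textual task definition
    positives: list = field(default_factory=list)     # (input, correct output)
    negatives: list = field(default_factory=list)     # (input, wrong output)
    test_set: list = field(default_factory=list)      # (input, gold output)


class InstructionLearner:
    """Hypothetical instruction-following model wrapper."""

    def update(self, task, use_negatives=True):
        # Fine-tune on the instruction and its examples; negative outputs
        # serve as an extra signal for what *not* to generate.
        ...

    def predict(self, instruction, x):
        ...

    def evaluate(self, task):
        # Simple accuracy over the task's held-out test set.
        preds = [self.predict(task.instruction, x) for x, _ in task.test_set]
        golds = [y for _, y in task.test_set]
        return sum(p == g for p, g in zip(preds, golds)) / max(len(golds), 1)


def contintin_loop(model, stream, replay_k=2):
    """Learn a stream of instruction-defined tasks one by one."""
    seen = []
    for task in stream:
        # Forward-transfer: score the new task *before* training on it,
        # i.e., using only knowledge carried over from upstream tasks.
        forward = model.evaluate(task)

        # Strategy 1: learn from the instruction, including negative outputs.
        model.update(task, use_negatives=True)

        # Strategy 2: re-visit the instructions of a few earlier tasks so
        # that new learning does not overwrite them.
        for old in seen[-replay_k:]:
            model.update(old, use_negatives=False)

        seen.append(task)
        # Backward-transfer: re-score every earlier task after the update.
        backward = {t.name: model.evaluate(t) for t in seen[:-1]}
        print(task.name, forward, backward)
```

Re-visiting only the most recent replay_k instructions keeps the replay cost bounded per step; a real system could instead sample from the full task history, and which earlier instructions to revisit is exactly the kind of design decision the paper's analyses speak to.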

Related research

Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning (03/18/2023)
Task semantics can be expressed by a set of input-to-output examples or ...

Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation (05/23/2023)
Instruction tuning has emerged to enhance the capabilities of large lang...

XDA: Accurate, Robust Disassembly with Transfer Learning (10/02/2020)
Accurate and robust disassembly of stripped binaries is challenging. The...

On the Effectiveness of Equivariant Regularization for Robust Online Continual Learning (05/05/2023)
Humans can learn incrementally, whereas neural networks forget previousl...

Learning from Task Descriptions (11/16/2020)
Typically, machine learning systems solve new tasks by training on thous...

Interactively shaping robot behaviour with unlabeled human instructions (02/05/2019)
In this paper, we propose a framework that enables a human teacher to sh...

Continual Learning for Grounded Instruction Generation by Observing Human Following Behavior (08/10/2021)
We study continual learning for natural language instruction generation,...
