Robustness of Learning from Task Instructions

12/07/2022
by Jiasheng Gu, et al.

Traditional supervised learning mostly works on individual tasks and requires training on a large set of task-specific examples. This paradigm seriously hinders the development of task generalization, since preparing a task-specific example set is costly. To build a system that can quickly and easily generalize to new tasks, task instructions have recently been adopted as an emerging form of supervision. These instructions give the model the definition of the task and allow it to produce the appropriate answer based on the instruction and the input. However, task instructions are often expressed in different forms, and this variability can be viewed along two dimensions: first, some instructions are short sentences oriented toward pretrained language models (PLMs), such as prompts, while others are paragraphs oriented toward humans, such as those on Amazon MTurk; second, different end-users are likely to describe the same task with instructions of different textual expressions. A robust system for task generalization should be able to handle any new task regardless of this variability in instructions. However, system robustness in dealing with instruction-driven task generalization is still unexplored. This work investigates system robustness when the instructions of new tasks are (i) maliciously manipulated, (ii) paraphrased, or (iii) expressed at different levels of conciseness. To our knowledge, this is the first work that systematically studies how robust a PLM is when it is supervised by instructions that vary along these factors.
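The evaluation setting described above — one task, several instruction variants, and a check on how stable the model's performance stays across them — can be sketched as follows. The `dummy_model` function, the instruction variants, and the robustness gap metric are illustrative assumptions for this sketch, not the paper's actual models, perturbations, or metrics.

```python
# Sketch: measuring how robust an instruction-following model is when the
# same task is described by different instruction variants.
# `dummy_model` is a toy stand-in for a real pretrained language model.

def dummy_model(instruction: str, text: str) -> str:
    """Toy classifier: labels the input by a keyword, ignoring the instruction.
    A real PLM would condition its answer on both instruction and input."""
    return "positive" if "good" in text.lower() else "negative"

# Three ways the same sentiment task might be described, loosely mirroring
# the paper's factors of variability (original, paraphrased, more concise).
instruction_variants = {
    "original": "Classify the sentiment of the review as positive or negative.",
    "paraphrased": "Decide whether the review expresses a positive or a negative opinion.",
    "concise": "Sentiment: positive or negative?",
}

examples = [
    ("This movie was really good!", "positive"),
    ("Terrible plot and bad acting.", "negative"),
]

def accuracy(instruction: str) -> float:
    correct = sum(dummy_model(instruction, x) == y for x, y in examples)
    return correct / len(examples)

scores = {name: accuracy(instr) for name, instr in instruction_variants.items()}

# One simple robustness measure: the spread of accuracy across variants.
# A robust system keeps this gap small.
robustness_gap = max(scores.values()) - min(scores.values())
print(scores, robustness_gap)
```

Because the toy model ignores the instruction entirely, its gap is zero by construction; with a real PLM, the gap across manipulated, paraphrased, and more or less concise instructions is exactly what the study probes.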


