LINGO: Visually Debiasing Natural Language Instructions to Support Task Diversity

04/12/2023
by Anjana Arunkumar, et al.

Cross-task generalization is a significant outcome that defines mastery in natural language understanding. Humans show a remarkable aptitude for it and can solve many different types of tasks given definitions in the form of textual instructions and a small set of examples. Recent work with pre-trained language models mimics this learning style: users can define and exemplify a task for the model to attempt as a series of natural language prompts or instructions. While prompting approaches have led to higher cross-task generalization than traditional supervised learning, analyzing 'bias' in the task instructions given to the model is a difficult problem and has thus remained relatively unexplored. For instance, are we truly modeling a task, or are we modeling a user's instructions? To help investigate this, we develop LINGO, a novel visual analytics interface that supports an effective, task-driven workflow to (1) help identify bias in natural language task instructions, (2) alter (or create) task instructions to reduce bias, and (3) evaluate pre-trained model performance on debiased task instructions. To robustly evaluate LINGO, we conduct a user study with both novice and expert instruction creators over a dataset of 1,616 linguistic tasks and their natural language instructions, spanning 55 different languages. For both user groups, LINGO promotes the creation of tasks that are more difficult for pre-trained models, exhibit higher linguistic diversity, and contain lower instruction bias. We additionally discuss how the insights learned in developing and evaluating LINGO can aid the design of future dashboards that aim to minimize the effort involved in prompt creation across multiple domains.
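
As a concrete illustration of the prompting setup described in the abstract, the following minimal Python sketch assembles a task definition and a few worked examples into an instruction-style prompt. The sentiment task, field layout, and print-only usage are illustrative assumptions, not the paper's actual data format or model API:

    # Illustrative only: a hypothetical sentiment task posed as a natural
    # language instruction plus a small set of worked examples.
    definition = (
        "Given a sentence in English, decide whether its sentiment "
        "is 'positive' or 'negative'."
    )
    examples = [
        ("The film was a joy from start to finish.", "positive"),
        ("I regret buying this blender.", "negative"),
    ]
    query = "The service was slow, but the food made up for it."

    # Assemble the prompt as definition, exemplars, then the unanswered query.
    prompt = f"Definition: {definition}\n\n"
    for text, label in examples:
        prompt += f"Input: {text}\nOutput: {label}\n\n"
    prompt += f"Input: {query}\nOutput:"

    # In practice this string would be sent to a pre-trained language model;
    # here it is only printed, to avoid assuming a specific model API.
    print(prompt)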

Related research

The Turking Test: Can Language Models Understand Instructions? (10/22/2020)
Supervised machine learning provides the learner with a set of input-out...

Improving Cross-Task Generalization with Step-by-Step Instructions (05/08/2023)
Instruction tuning has been shown to be able to improve cross-task gener...

Controlled Text Generation with Natural Language Instructions (04/27/2023)
Large language models generate fluent texts and can follow natural langu...

Did You Read the Instructions? Rethinking the Effectiveness of Task Definitions in Instruction Learning (06/01/2023)
Large language models (LLMs) have shown impressive performance in follow...

Prompting with Pseudo-Code Instructions (05/19/2023)
Prompting with natural language instructions has recently emerged as a p...

Real-Time Visual Feedback to Guide Benchmark Creation: A Human-and-Metric-in-the-Loop Workflow (02/09/2023)
Recent research has shown that language models exploit 'artifacts' in be...

Evaluation of African American Language Bias in Natural Language Generation (05/23/2023)
We evaluate how well LLMs understand African American Language (AAL) in ...
