
FLEX: Unifying Evaluation for Few-Shot NLP

07/15/2021
by Jonathan Bragg, et al.
Allen Institute for Artificial Intelligence

Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even whether they outperform simple baselines. We formulate desiderata for an ideal few-shot NLP benchmark and present FLEX, the first benchmark, public leaderboard, and framework that provides unified, comprehensive measurement for few-shot NLP techniques. FLEX incorporates and introduces new best practices for few-shot evaluation, including measurement of four transfer settings, textual labels for zero-shot evaluation, and a principled approach to benchmark design that optimizes statistical accuracy while keeping evaluation costs accessible to researchers without large compute resources. In addition, we present UniFew, a simple yet strong prompt-based model for few-shot learning that unifies the pretraining and finetuning prompt formats, eschewing the complex machinery of recent prompt-based approaches for adapting downstream task formats to language model pretraining objectives. We demonstrate that, despite its simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.
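The idea of unifying the pretraining and finetuning prompt formats can be illustrated with a small sketch. The snippet below is not the authors' code: it casts a classification example as a multiple-choice question so that the downstream input matches a QA-style pretraining format. It assumes the HuggingFace transformers library and a UnifiedQA-style seq2seq checkpoint; the checkpoint name, prompt template, and field separators are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch (not the authors' implementation) of a UniFew-style prompt:
# a classification example is rewritten as a multiple-choice question so the
# finetuning input matches a QA-style pretraining format.
# Assumptions: HuggingFace transformers is installed, a UnifiedQA-style
# checkpoint is available, and the template below is illustrative.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "allenai/unifiedqa-t5-small"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def classify(text, labels):
    """Return the generated answer string for a multiple-choice style prompt."""
    # Render the class labels as lettered answer choices,
    # e.g. "(a) positive (b) negative".
    choices = " ".join(
        "({}) {}".format(chr(ord("a") + i), label) for i, label in enumerate(labels)
    )
    # Question, choices, and context are joined with newlines as field
    # separators (an illustrative choice of separator).
    prompt = "\n".join(["which category best describes the text?", choices, text])
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(classify("The movie was a delight from start to finish.",
               ["positive", "negative"]))
```

In a few-shot setting, the same template would also be used for the support examples during finetuning, which is what keeps the downstream and pretraining formats aligned.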

Related Research

10/19/2022

Continued Pretraining for Better Zero- and Few-Shot Promptability

Recently introduced language model prompting methods can achieve high ac...
04/06/2021

Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark

Meta and transfer learning are two successful families of approaches to ...
09/17/2020

FewJoint: A Few-shot Learning Benchmark for Joint Language Understanding

Few-shot learning (FSL) is one of the key future steps in machine learn...
09/09/2021

Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning

Recent prompt-based approaches allow pretrained language models to achie...
04/28/2022

On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model

Many recent studies on large-scale language models have reported success...
12/21/2022

JASMINE: Arabic GPT Models for Few-Shot Learning

Task agnostic generative pretraining (GPT) has recently proved promising...
10/13/2020

With Little Power Comes Great Responsibility

Despite its importance to experimental design, statistical power (the pr...

Code Repositories

flex

Few-shot NLP benchmark for unified, rigorous eval

unifew

Unifew: Unified Fewshot Learning Model