
FLEX: Unifying Evaluation for Few-Shot NLP

by Jonathan Bragg et al.
Allen Institute for Artificial Intelligence

Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best, or even whether they outperform simple baselines. We formulate desiderata for an ideal few-shot NLP benchmark and present FLEX, the first benchmark, public leaderboard, and framework that provides unified, comprehensive measurement for few-shot NLP techniques. FLEX incorporates and introduces new best practices for few-shot evaluation, including measurement of four transfer settings, textual labels for zero-shot evaluation, and a principled approach to benchmark design that optimizes statistical accuracy while keeping evaluation costs accessible to researchers without large compute resources. In addition, we present UniFew, a simple yet strong prompt-based model for few-shot learning which unifies the pretraining and finetuning prompt formats, eschewing the complex machinery of recent prompt-based approaches for adapting downstream task formats to language model pretraining objectives. We demonstrate that despite its simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.
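To make the "unified prompt format" idea concrete, the following is a minimal sketch (not the authors' code): it assumes a UnifiedQA-style multiple-choice QA format in which a classification task's textual labels become answer options, so finetuning prompts resemble the question-answering format seen during pretraining. The helper name, option markers, and separators are illustrative assumptions.

```python
def to_unified_prompt(question: str, choices: list[str], context: str = "") -> str:
    """Cast a classification example as a multiple-choice QA prompt.

    The task's textual labels become lettered answer options; the exact
    markers and newline separators here are assumptions, not the paper's spec.
    """
    letters = "ABCDEFGHIJ"
    options = " ".join(f"({letters[i]}) {c}" for i, c in enumerate(choices))
    parts = [question, options]
    if context:
        parts.append(context)
    return "\n".join(parts)


# Example: a sentiment task rendered in the QA-style format.
prompt = to_unified_prompt(
    "Is the sentiment of this review positive or negative?",
    ["positive", "negative"],
    "The food was delightful and the service impeccable.",
)
```

The model would then generate (or score) one of the option strings as its answer, so no task-specific head or verbalizer engineering is needed.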




Continued Pretraining for Better Zero- and Few-Shot Promptability

Recently introduced language model prompting methods can achieve high ac...

Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark

Meta and transfer learning are two successful families of approaches to ...

FewJoint: A Few-shot Learning Benchmark for Joint Language Understanding

Few-shot learning (FSL) is one of the key future steps in machine learn...

Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning

Recent prompt-based approaches allow pretrained language models to achie...

On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model

Many recent studies on large-scale language models have reported success...

JASMINE: Arabic GPT Models for Few-Shot Learning

Task agnostic generative pretraining (GPT) has recently proved promising...

With Little Power Comes Great Responsibility

Despite its importance to experimental design, statistical power (the pr...

Code Repositories


Few-shot NLP benchmark for unified, rigorous eval



Unifew: Unified Fewshot Learning Model
