Curriculum: A Broad-Coverage Benchmark for Linguistic Phenomena in Natural Language Understanding

04/13/2022 · by Zeming Chen, et al.

In the age of large transformer language models, linguistic evaluation plays an important role in diagnosing models' abilities and limitations in natural language understanding. However, current evaluation methods have significant shortcomings. In particular, they do not reveal how well a language model captures the distinct linguistic skills essential for language understanding and reasoning. They therefore fail to map out the aspects of language understanding that remain challenging for existing models, which makes it hard to discover potential limitations in models and datasets. In this paper, we introduce Curriculum, a new format of NLI benchmark for evaluating broad-coverage linguistic phenomena. Curriculum contains a collection of datasets covering 36 major types of linguistic phenomena, together with an evaluation procedure for diagnosing how well a language model captures the reasoning skills required for each type of phenomenon. We show that this linguistic-phenomena-driven benchmark can serve as an effective tool for diagnosing model behavior and verifying model learning quality. In addition, our experiments provide insight into the limitations of existing benchmark datasets and state-of-the-art models, which may encourage future research on redesigning datasets, model architectures, and learning objectives.
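The abstract does not spell out the evaluation procedure itself, but the core idea of diagnosing a model per linguistic phenomenon can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not Curriculum's actual interface: the example layout (premise/hypothesis/label plus a `phenomenon` tag) and the `model.predict` API are hypothetical.

```python
from collections import defaultdict

def evaluate_by_phenomenon(model, examples):
    """Report NLI accuracy broken down by linguistic phenomenon.

    Sketch only: assumes each example is a dict with "premise",
    "hypothesis", "label", and a "phenomenon" tag, and that the model
    exposes a predict(premise, hypothesis) -> label method.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        pred = model.predict(ex["premise"], ex["hypothesis"])  # assumed API
        total[ex["phenomenon"]] += 1
        if pred == ex["label"]:
            correct[ex["phenomenon"]] += 1
    # Per-phenomenon accuracy makes weak spots visible that a single
    # aggregate score would hide.
    return {ph: correct[ph] / total[ph] for ph in total}
```

A per-phenomenon breakdown like this is what lets a broad-coverage benchmark localize failures (e.g., a model strong on lexical inference but weak on quantifiers) rather than reporting one aggregate accuracy.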

