This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish

11/23/2022
by Łukasz Augustyniak, et al.

The availability of compute and data to train larger and larger language models (LMs) increases the demand for robust methods of benchmarking the true progress of LM training. Recent years witnessed significant progress in standardized benchmarking for English: benchmarks such as GLUE, SuperGLUE, or KILT have become de facto standard tools to compare large language models. Following the trend of replicating GLUE for other languages, the KLEJ benchmark was released for Polish. In this paper, we evaluate the progress in benchmarking for low-resource languages. We note that only a handful of languages have such comprehensive benchmarks, and that there is a wide gap between the number of tasks covered by benchmarks for resource-rich languages such as English or Chinese and those available for the rest of the world. To help narrow this gap, we introduce LEPISZCZE (the Polish word for glew, the Middle English predecessor of glue), a new, comprehensive benchmark for Polish NLP with a large variety of tasks and a high-quality operationalization of the benchmarking process. We design LEPISZCZE with flexibility in mind: adding new models, datasets, and tasks is kept as simple as possible, while the benchmark still offers data versioning and model tracking. In the first run of the benchmark, we conduct 13 experiments (task and dataset pairs) based on the five most recent LMs for Polish, reusing five datasets from the existing Polish benchmark and adding eight novel datasets. Apart from LEPISZCZE itself, the paper's main contribution is the set of insights and lessons learned while creating the benchmark for Polish, offered as a blueprint for designing similar benchmarks for other low-resource languages.
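To make the notion of "experiments as task and dataset pairs run over several LMs" more concrete, below is a minimal, hypothetical Python sketch of how such a benchmark grid could be organized with basic result tracking. The Experiment class, run_benchmark, fake_evaluate, and the dataset/model identifiers are illustrative assumptions for this sketch only; they are not the actual LEPISZCZE API.

```python
# Hypothetical sketch: organizing a benchmark as (task, dataset, model) triples
# with simple metric tracking. Names and identifiers are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Experiment:
    task: str                      # e.g. "sentiment", "ner"
    dataset: str                   # dataset identifier (illustrative)
    model: str                     # pretrained LM checkpoint name (illustrative)
    metrics: Dict[str, float] = field(default_factory=dict)


def run_benchmark(
    experiments: List[Experiment],
    evaluate: Callable[[Experiment], Dict[str, float]],
) -> List[Experiment]:
    """Run every (task, dataset, model) triple and record its metrics."""
    for exp in experiments:
        exp.metrics = evaluate(exp)
    return experiments


if __name__ == "__main__":
    # Toy evaluator standing in for real fine-tuning and scoring.
    def fake_evaluate(exp: Experiment) -> Dict[str, float]:
        return {"f1": 0.0}

    grid = [
        Experiment(task="sentiment", dataset="polemo2", model="herbert-base"),
        Experiment(task="ner", dataset="kpwr-ner", model="herbert-base"),
    ]
    for exp in run_benchmark(grid, fake_evaluate):
        print(exp.task, exp.dataset, exp.model, exp.metrics)
```

In a setup like this, adding a new model, dataset, or task amounts to appending another triple to the grid, which is the kind of flexibility the abstract describes; data versioning and model tracking would sit behind the evaluate callable rather than in the grid itself.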


