Thresh: A Unified, Customizable and Deployable Platform for Fine-Grained Text Evaluation

08/14/2023
by David Heineman, et al.

Fine-grained, span-level human evaluation has emerged as a reliable and robust method for evaluating text generation tasks such as summarization, simplification, machine translation and news generation, and the derived annotations have been useful for training automatic metrics and improving language models. However, existing annotation tools implemented for these evaluation frameworks lack the adaptability to be extended to different domains or languages, or to modify annotation settings according to user needs. Moreover, the absence of a unified annotated data format inhibits research in multi-task learning. In this paper, we introduce Thresh, a unified, customizable and deployable platform for fine-grained evaluation. By simply creating a YAML configuration file, users can build and test an annotation interface for any framework within minutes, all in one web browser window. To facilitate collaboration and sharing, Thresh provides a community hub that hosts a collection of fine-grained frameworks and corresponding annotations made and collected by the community, covering a wide range of NLP tasks. For deployment, Thresh offers multiple options for annotation projects of any scale, from small manual inspections to large crowdsourcing efforts. Additionally, we introduce a Python library to streamline the entire process, from typology design and deployment to annotation processing. Thresh is publicly accessible at https://thresh.tools.
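To give a sense of the YAML-driven workflow the abstract describes, here is a minimal configuration sketch for a span-level edit typology. The field names below (template_name, instructions, edits, name, label, color) are illustrative assumptions, not the exact Thresh configuration schema:

```yaml
# Illustrative sketch only -- field names are assumptions,
# not the verified Thresh schema.
template_name: simplification_eval
instructions: "Highlight each edit in the output and label its type."
edits:
  - name: deletion
    label: Deletion
    color: red
  - name: insertion
    label: Insertion
    color: green
  - name: substitution
    label: Substitution
    color: blue
```

In this style of setup, each entry under edits would define one annotation category that appears as a selectable highlight type in the browser interface.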
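The abstract also mentions processing collected annotations in Python. As a hedged sketch of what downstream processing might look like, the snippet below tallies span-level edits by category from a JSON export. The schema (source, target, edits, category, target_span) is hypothetical and uses only the standard library, not the actual Thresh Python API:

```python
import json
from collections import Counter

# Hypothetical export format: a list of annotated examples, each with an
# "edits" list of span-level annotations. Field names are illustrative,
# not the actual Thresh schema.
SAMPLE = """
[
  {"source": "The cat sat.", "target": "A cat sat down.",
   "edits": [{"category": "substitution", "target_span": [0, 1]},
             {"category": "insertion", "target_span": [10, 14]}]},
  {"source": "Hello world.", "target": "Hello, world.",
   "edits": [{"category": "insertion", "target_span": [5, 6]}]}
]
"""

def edit_category_counts(annotations):
    """Tally span-level edits by category across all annotated examples."""
    counts = Counter()
    for example in annotations:
        for edit in example.get("edits", []):
            counts[edit["category"]] += 1
    return counts

annotations = json.loads(SAMPLE)
print(edit_category_counts(annotations))
```

Aggregations like this are the kind of step a unified data format makes trivial: the same counting code works across tasks because every framework's annotations share one structure.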

