Discourse-Based Evaluation of Language Understanding

07/19/2019
by Damien Sileo, et al.

We introduce DiscEval, a compilation of 11 evaluation datasets with a focus on discourse, which can be used to evaluate English Natural Language Understanding when considering meaning as use. We make the case that evaluation with discourse tasks is overlooked and that Natural Language Inference (NLI) pretraining may not lead to the learning of truly universal representations. DiscEval can also be used as supplementary training data for multi-task learning-based systems, and it is publicly available, along with the code for gathering and preprocessing the datasets.
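As a rough illustration of how a discourse-focused benchmark like this might be consumed, the sketch below loads one task as (text, label) pairs and scores a sentence classifier by accuracy. The task file names and the tab-separated layout are assumptions made for this example only; the actual DiscEval release defines its own data format and loading code.

```python
import csv
from typing import Callable, Dict, List, Tuple

def load_task(tsv_path: str) -> List[Tuple[str, str]]:
    """Load one discourse task as (text, label) pairs.

    The text<TAB>label layout is hypothetical; consult the DiscEval
    repository for the real file format.
    """
    with open(tsv_path, encoding="utf-8") as f:
        return [(row[0], row[1])
                for row in csv.reader(f, delimiter="\t")
                if len(row) >= 2]

def evaluate(predict: Callable[[str], str],
             examples: List[Tuple[str, str]]) -> float:
    """Accuracy of a sentence classifier on one task."""
    correct = sum(predict(text) == label for text, label in examples)
    return correct / max(len(examples), 1)

# Hypothetical task files standing in for DiscEval's 11 datasets.
TASKS = ["discourse_relations.tsv", "speech_acts.tsv", "sarcasm.tsv"]

def evaluate_all(predict: Callable[[str], str]) -> Dict[str, float]:
    """Score the same classifier across every task in the suite."""
    return {task: evaluate(predict, load_task(task)) for task in TASKS}
```

Averaging per-task scores from such a loop is one simple way to compare how well different pretrained encoders capture discourse-level information.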

Related research

04/07/2020
KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding
Natural language inference (NLI) and semantic textual similarity (STS) a...

09/01/2023
When Do Discourse Markers Affect Computational Sentence Understanding?
The capabilities and use cases of automatic natural language processing ...

07/27/2018
Concept Tagging for Natural Language Understanding: Two Decadelong Algorithm Development
Concept tagging is a type of structured learning needed for natural lang...

11/30/2015
Ask, and shall you receive?: Understanding Desire Fulfillment in Natural Language Text
The ability to comprehend wishes or desires and their fulfillment is imp...

05/01/2020
Language (Re)modelling: Towards Embodied Language Understanding
While natural language understanding (NLU) is advancing rapidly, today's...

08/19/2022
Coarse-to-Fine: Hierarchical Multi-task Learning for Natural Language Understanding
Generalized text representations are the foundation of many natural lang...

03/16/2021
Robustly Optimized and Distilled Training for Natural Language Understanding
In this paper, we explore multi-task learning (MTL) as a second pretrain...
