
Discourse-Based Evaluation of Language Understanding

by Damien Sileo, et al.

We introduce DiscEval, a compilation of 11 evaluation datasets focused on discourse, for evaluating English Natural Language Understanding when meaning is considered as use. We argue that evaluation with discourse tasks is overlooked and that Natural Language Inference (NLI) pretraining may not lead to truly universal representations. DiscEval can also serve as supplementary training data for multi-task learning systems, and it is publicly available alongside the code for gathering and preprocessing the datasets.
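To illustrate the evaluation setting the abstract describes, here is a minimal sketch of scoring a model across a suite of discourse classification tasks and macro-averaging the results. The task names, labels, and toy model below are hypothetical stand-ins, not DiscEval's actual datasets or API; DiscEval itself bundles 11 such datasets.

```python
# Sketch: per-task accuracy over a suite of discourse tasks,
# plus a macro-average across tasks. All data here is a toy stand-in.

def accuracy(preds, golds):
    """Fraction of predictions matching gold labels."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def evaluate_suite(model, suite):
    """Return {task_name: accuracy} for a dict of (inputs, labels) pairs."""
    return {
        task: accuracy([model(x) for x in inputs], labels)
        for task, (inputs, labels) in suite.items()
    }

# Toy "model": predicts a discourse relation from a surface cue.
toy_model = lambda text: "contrast" if "but" in text else "continuation"

# Two tiny hypothetical discourse tasks (DiscEval has 11 real ones).
suite = {
    "discourse_relation": (
        ["it rained but we went", "and then we left"],
        ["contrast", "continuation"],
    ),
    "marker_prediction": (
        ["cheap but sturdy"],
        ["contrast"],
    ),
}

scores = evaluate_suite(toy_model, suite)
macro_avg = sum(scores.values()) / len(scores)
```

The same per-task loop doubles as a multi-task training signal: instead of scoring predictions, each task's examples can be mixed into a shared training batch, which is how supplementary training data would typically be used.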




