Text-based NP Enrichment

09/24/2021
by Yanai Elazar, et al.

Understanding the relations between entities denoted by NPs in text is a critical part of human-like natural language understanding. However, only a fraction of such relations is covered by current NLP tasks and models. In this work, we establish the task of text-based NP enrichment (TNE), that is, enriching each NP with all the preposition-mediated relations that hold between it and the other NPs in the text. The relations are represented as triplets, each denoting two NPs linked via a preposition. Humans recover such relations seamlessly, while current state-of-the-art models struggle with them due to the implicit nature of the problem. We build the first large-scale dataset for the problem, provide the formal framing and scope of the annotation, analyze the data, and report the results of fine-tuned neural language models on the task, demonstrating the challenge it poses to current technology. To foster further research into this challenging text-understanding problem, we created a webpage with the data, a data-exploration UI, code, models, and a demo at yanaiela.github.io/TNE/.
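As a rough illustration of the triplet representation described in the abstract, the Python sketch below encodes a single NP-preposition-NP link. The class name NPRelation, its fields, and the example sentence are illustrative assumptions for exposition only, not the actual schema of the TNE dataset.

# Illustrative sketch of an NP-preposition-NP triplet (not the dataset's actual schema).
from dataclasses import dataclass

@dataclass
class NPRelation:
    anchor: str       # the NP being enriched
    preposition: str  # the mediating preposition
    complement: str   # the other NP it is linked to

# Hypothetical example text: "We toured the museum. The paintings were stunning."
# The second NP carries an implicit, preposition-mediated link to the first:
relations = [
    NPRelation(anchor="the paintings", preposition="in", complement="the museum"),
]

for r in relations:
    print(f"({r.anchor}) --[{r.preposition}]--> ({r.complement})")

Note that the relation above is never stated explicitly in the text, which is the kind of implicit link the abstract points to as difficult for current models.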

Related research

09/15/2021
Can Machines Read Coding Manuals Yet? – A Benchmark for Building Better Language Models for Code Understanding
Code understanding is an increasingly important application of Artificia...

04/10/2017
Pay Attention to Those Sets! Learning Quantification from Images
Major advances have recently been made in merging language and vision re...

04/26/2020
Assessing Discourse Relations in Language Generation from Pre-trained Language Models
Recent advances in NLP have been attributed to the emergence of large-sc...

05/23/2023
Can Large Language Models Infer and Disagree Like Humans?
Large Language Models (LLMs) have shown stellar achievements in solving ...

04/22/2021
Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand?
Language models trained on billions of tokens have recently led to unpre...

09/02/2023
Multilingual Text Representation
Modern NLP breakthrough includes large multilingual models capable of pe...
