Representing Inferences and their Lexicalization

12/14/2021
by David McDonald, et al.

We have recently begun a project to develop a more effective and efficient way to marshal inferences from background knowledge to facilitate deep natural language understanding. The meaning of a word is taken to be the entities, predications, presuppositions, and potential inferences that it adds to an ongoing situation. As words compose, the minimal model in the situation evolves to limit and direct inference. At this point we have developed our computational architecture and applied it to real text. Our focus has been on proving the feasibility of our design.
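The abstract does not give implementation details, so the following is a minimal, purely illustrative sketch in Python of one way the described view of lexical meaning could be modeled: each lexical entry contributes entities, predications, and potential inference rules to a growing situation model, and composition fires only the inferences the current model licenses. All names here (Entity, Predication, LexicalEntry, SituationModel) are assumptions for illustration, not the paper's actual architecture.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: word meaning as the entities, predications,
# and potential inferences a word adds to an ongoing situation.

@dataclass(frozen=True)
class Entity:
    name: str

@dataclass(frozen=True)
class Predication:
    relation: str
    args: tuple  # tuple of Entity, so instances stay hashable

@dataclass
class LexicalEntry:
    word: str
    entities: list = field(default_factory=list)         # entities introduced
    predications: list = field(default_factory=list)     # predications asserted
    inference_rules: list = field(default_factory=list)  # callables: model -> set of Predication

class SituationModel:
    """Minimal model that evolves as words compose, limiting inference."""
    def __init__(self):
        self.entities = set()
        self.predications = set()

    def incorporate(self, entry: LexicalEntry):
        self.entities.update(entry.entities)
        self.predications.update(entry.predications)
        # Fire only the inferences licensed by the current state of the model.
        for rule in entry.inference_rules:
            self.predications.update(rule(self))

# Usage: composing "Fido barks" -- the noun introduces an entity, the verb
# adds a predication over it plus a rule licensing a further inference.
fido = Entity("Fido")
noun = LexicalEntry("Fido", entities=[fido])
verb = LexicalEntry(
    "barks",
    predications=[Predication("barks", (fido,))],
    inference_rules=[lambda m: {Predication("dog", (e,))
                                for e in m.entities
                                if Predication("barks", (e,)) in m.predications}],
)

model = SituationModel()
for entry in (noun, verb):
    model.incorporate(entry)
print(model.predications)  # contains barks(Fido) and the inferred dog(Fido)
```

The point of the sketch is the composition loop: each new word narrows what the situation model contains, which in turn limits and directs which inference rules can fire.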

Related research

Exploring Transitivity in Neural NLI Models through Veridicality (01/26/2021)
Despite the recent success of deep neural networks in natural language p...

On Measuring and Mitigating Biased Inferences of Word Embeddings (08/25/2019)
Word embeddings carry stereotypical connotations from the text they are ...

The KITMUS Test: Evaluating Knowledge Integration from Multiple Sources in Natural Language Understanding Systems (12/15/2022)
Many state-of-the-art natural language understanding (NLU) models are ba...

Language That Matters: Statistical Inferences for Polarity Identification in Natural Language (06/21/2017)
Information forms the basis for all human behavior, including the ubiqui...

A Kind Introduction to Lexical and Grammatical Aspect, with a Survey of Computational Approaches (08/18/2022)
Aspectual meaning refers to how the internal temporal structure of situa...

Enumerating Independent Linear Inferences (11/09/2021)
A linear inference is a valid inequality of Boolean algebra in which eac...

Uncertain Natural Language Inference (09/06/2019)
We propose a refinement of Natural Language Inference (NLI), called Unce...
