Lexicosyntactic Inference in Neural Models

08/19/2018
by Aaron Steven White, et al.

We investigate neural models' ability to capture lexicosyntactic inferences: inferences triggered by the interaction of lexical and syntactic information. We take the task of event factuality prediction as a case study and build a factuality judgment dataset for all English clause-embedding verbs in various syntactic contexts. We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.
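As a rough illustration of the probing setup described in the abstract, the sketch below shows one way to score a factuality predictor against gold judgments for clause-embedding verbs in different syntactic frames. The file name, column names, and the naive lexical baseline are hypothetical placeholders, not the authors' dataset format or models; a [-3, 3] rating scale (certainly did not happen to certainly happened) is assumed here because it is common in event factuality work.

```python
# Minimal sketch of evaluating an event factuality predictor against
# gold human judgments. File name, column names, and the baseline
# heuristic are illustrative assumptions, not the paper's actual code.
import csv
from statistics import mean

from scipy.stats import pearsonr  # correlation between predicted and gold ratings

# Verbs that often signal non-factual embedded clauses (toy list for the baseline).
NONFACTIVE_HINTS = {"think", "believe", "want", "hope", "deny", "doubt"}


def predict_factuality(sentence: str) -> float:
    """Naive lexical baseline: +3.0 (certainly happened) unless the sentence
    contains a verb that commonly signals non-factuality, in which case 0.0.
    A real system would replace this with a trained neural model."""
    tokens = set(sentence.lower().split())
    return 0.0 if tokens & NONFACTIVE_HINTS else 3.0


def evaluate(path: str = "factuality_judgments.csv") -> None:
    """Compare model scores with mean human ratings on a [-3, 3] scale."""
    gold, pred = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Each row is assumed to pair a clause-embedding verb in a
            # particular syntactic frame with a mean annotator rating.
            gold.append(float(row["mean_rating"]))
            pred.append(predict_factuality(row["sentence"]))

    mae = mean(abs(g - p) for g, p in zip(gold, pred))
    r, _ = pearsonr(gold, pred)
    print(f"MAE: {mae:.3f}  Pearson r: {r:.3f}")


if __name__ == "__main__":
    evaluate()
```

Breaking the error analysis down by verb and syntactic frame, rather than reporting only aggregate correlation, is what makes the systematic errors mentioned above visible.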

Related research

04/30/2020 · Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?
Despite the success of language models using neural networks, it remains...

04/06/2018 · Neural models of factuality
We present two neural models for event factuality prediction, which yiel...

09/14/2021 · NOPE: A Corpus of Naturally-Occurring Presuppositions in English
Understanding language requires grasping not only the overtly stated con...

04/27/2019 · HELP: A Dataset for Identifying Shortcomings of Neural Models in Monotonicity Reasoning
Large crowdsourced datasets are widely used for training and evaluating ...

06/15/2019 · Can neural networks understand monotonicity reasoning?
Monotonicity reasoning is one of the important reasoning skills for any ...

10/12/2020 · Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models
Humans can learn structural properties about a word from minimal experie...

11/24/2022 · Tapping the Potential of Coherence and Syntactic Features in Neural Models for Automatic Essay Scoring
In the prompt-specific holistic score prediction task for Automatic Essa...
