AMR4NLI: Interpretable and robust NLI measures from semantic graphs

06/01/2023
by Juri Opitz et al.

The task of natural language inference (NLI) asks whether a given premise (expressed in natural language) entails a given natural-language hypothesis. NLI benchmarks contain human ratings of entailment, but the meaning relationships driving these ratings are not formalized. Can the underlying sentence-pair relationships be made more explicit in an interpretable yet robust fashion? We compare semantic structures for representing premise and hypothesis, including sets of contextualized embeddings and semantic graphs (Abstract Meaning Representations), and measure whether the hypothesis is a semantic substructure of the premise using interpretable metrics. Our evaluation on three English benchmarks finds value in both contextualized embeddings and semantic graphs; moreover, they provide complementary signals and can be leveraged together in a hybrid model.
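The core idea of the abstract, testing whether the hypothesis is a semantic substructure of the premise, can be illustrated with two simple asymmetric coverage scores. The sketch below is not the paper's actual metric: the BERTScore-recall-style embedding coverage, the AMR triple-containment score, and the toy inputs are all illustrative assumptions.

```python
# Hedged sketch of "hypothesis contained in premise" coverage scores.
# These are illustrative stand-ins, not the metrics used in the paper.

import numpy as np

def embedding_coverage(premise_vecs, hypothesis_vecs):
    """Mean, over hypothesis token vectors, of the best cosine match
    against any premise token vector (a recall-style coverage score)."""
    P = np.asarray(premise_vecs, dtype=float)
    H = np.asarray(hypothesis_vecs, dtype=float)
    # Normalize rows so the dot product equals cosine similarity.
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    H = H / np.linalg.norm(H, axis=1, keepdims=True)
    sims = H @ P.T  # |H| x |P| cosine-similarity matrix
    return float(sims.max(axis=1).mean())

def triple_coverage(premise_triples, hypothesis_triples):
    """Fraction of hypothesis AMR triples that also appear in the
    premise graph (a precision-style containment score)."""
    prem, hyp = set(premise_triples), set(hypothesis_triples)
    if not hyp:
        return 1.0
    return len(hyp & prem) / len(hyp)

if __name__ == "__main__":
    # Toy AMR-like (concept, relation, concept) triples; purely illustrative.
    premise = {("give-01", ":ARG0", "man"),
               ("give-01", ":ARG1", "book"),
               ("give-01", ":ARG2", "child")}
    hypothesis = {("give-01", ":ARG1", "book")}
    print("triple coverage:", triple_coverage(premise, hypothesis))

    # Toy "contextualized" vectors; in practice these would come from an encoder.
    rng = np.random.default_rng(0)
    prem_vecs = rng.normal(size=(5, 8))
    hyp_vecs = prem_vecs[:2] + 0.01 * rng.normal(size=(2, 8))
    print("embedding coverage:", round(embedding_coverage(prem_vecs, hyp_vecs), 3))
```

A hybrid score, in the spirit of the abstract's hybrid model, could then combine the two signals, for example as a weighted average of the embedding-based and graph-based coverage values.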

