Using syntactical and logical forms to evaluate textual inference competence

05/10/2019
by Felipe Salvatore et al.

In light of recent breakthroughs in transfer learning for Natural Language Processing, much progress has been made on Natural Language Inference. Several models now report high accuracy on popular inference datasets such as SNLI, MNLI, and SciTail. At the same time, there are various indications that these datasets can be exploited through simple linguistic patterns. This complicates our understanding of how well machine learning models actually solve the complex task of textual inference. We propose a new set of tasks that require specific capacities over linguistic logical forms such as: i) Boolean coordination, ii) quantifiers, iii) definite description, and iv) counting operators. By evaluating a model on our stratified dataset, we can better pinpoint its specific inferential difficulties on each kind of textual structure. We evaluate two kinds of neural models that implicitly exploit language structure: recurrent models and the Transformer network BERT. We show that, although BERT clearly generalizes better over most logical forms, there is still room for improvement when it comes to counting operators.
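To make the stratified tasks concrete, below is a minimal sketch of how premise-hypothesis pairs for two of the listed logical forms (Boolean coordination and counting operators) could be generated from templates. The vocabulary, templates, and labels are illustrative assumptions, not the authors' actual generation procedure.

    # Hypothetical template-based generation of NLI examples, in the spirit
    # of the stratified tasks described in the abstract. All names and
    # templates here are illustrative assumptions.
    import random

    PEOPLE = ["Alice", "Bob", "Carol", "Dave"]
    VERBS = ["visited", "called", "greeted"]

    def boolean_coordination_example():
        """Premise with a conjunction; the hypothesis keeps only one
        conjunct, so entailment holds (p AND q entails p)."""
        a, b, c = random.sample(PEOPLE, 3)
        verb = random.choice(VERBS)
        premise = f"{a} and {b} {verb} {c}."
        hypothesis = f"{a} {verb} {c}."
        return premise, hypothesis, "entailment"

    def counting_example():
        """Premise fixes an exact count; a hypothesis asserting a larger
        lower bound is contradicted ('exactly two' vs 'at least three')."""
        subj = random.choice(PEOPLE)
        verb = random.choice(VERBS)
        premise = f"Exactly two people {verb} {subj}."
        hypothesis = f"At least three people {verb} {subj}."
        return premise, hypothesis, "contradiction"

    if __name__ == "__main__":
        for generate in (boolean_coordination_example, counting_example):
            p, h, label = generate()
            print(f"P: {p}\nH: {h}\nlabel: {label}\n")

A dataset built this way is stratified by construction: because each generator targets one logical form, a model's error rate on its examples isolates that form's difficulty, which is how per-structure weaknesses (such as BERT's on counting operators) can be pinpointed.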
