Developmental Negation Processing in Transformer Language Models

04/29/2022
by Antonio Laverghetta Jr., et al.

Reasoning using negation is known to be difficult for transformer-based language models. While previous studies have used the tools of psycholinguistics to probe a transformer's ability to reason over negation, none have focused on the types of negation studied in developmental psychology. We explore how well transformers can process such categories of negation, by framing the problem as a natural language inference (NLI) task. We curate a set of diagnostic questions for our target categories from popular NLI datasets and evaluate how well a suite of models reason over them. We find that models perform consistently better only on certain categories, suggesting clear distinctions in how they are processed.
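A minimal sketch of the NLI-style evaluation described above, assuming an off-the-shelf NLI-finetuned model from the Hugging Face hub (roberta-large-mnli is an illustrative choice, not the authors' model) and a hypothetical negation diagnostic pair:

```python
# Sketch: score a negation diagnostic as an NLI problem with a pretrained model.
# The model name and the premise/hypothesis pair below are illustrative assumptions,
# not items from the authors' curated diagnostic set.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # assumption: any NLI-finetuned checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Hypothetical diagnostic item: the hypothesis negates the premise's predicate,
# so the expected NLI label is contradiction.
premise = "The child is holding a ball."
hypothesis = "The child is not holding a ball."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # expected: CONTRADICTION
```

Accuracy per negation category can then be computed by running every diagnostic item through this loop and comparing predictions against gold NLI labels.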


