
Developmental Negation Processing in Transformer Language Models

04/29/2022
by Antonio Laverghetta Jr., et al.
University of South Florida

Reasoning using negation is known to be difficult for transformer-based language models. While previous studies have used the tools of psycholinguistics to probe a transformer's ability to reason over negation, none have focused on the types of negation studied in developmental psychology. We explore how well transformers can process such categories of negation, by framing the problem as a natural language inference (NLI) task. We curate a set of diagnostic questions for our target categories from popular NLI datasets and evaluate how well a suite of models reason over them. We find that models perform consistently better only on certain categories, suggesting clear distinctions in how they are processed.
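To make the NLI framing concrete, the sketch below scores a single negation diagnostic with an off-the-shelf NLI checkpoint from the HuggingFace transformers library. The checkpoint (roberta-large-mnli) and the premise/hypothesis pair are illustrative assumptions, not the paper's actual model suite or curated diagnostics.

    # Minimal sketch: probing negation via NLI, assuming a HuggingFace
    # checkpoint fine-tuned on MNLI. Illustrative only.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_name = "roberta-large-mnli"  # hypothetical choice; any NLI model works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    model.eval()

    # A negation diagnostic framed as premise/hypothesis: a model that
    # processes the negation correctly should label this a contradiction.
    premise = "The child is not holding the ball."
    hypothesis = "The child is holding the ball."

    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    pred = logits.argmax(dim=-1).item()
    print(model.config.id2label[pred])  # CONTRADICTION / NEUTRAL / ENTAILMENT

Running a probe like this over a set of pairs tagged by negation category, and comparing per-category accuracy across models, mirrors the evaluation design the abstract describes.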


Related research

05/12/2022
Predicting Human Psychometric Properties Using Computational Language Models
Transformer-based language models (LMs) continue to achieve state-of-the...

11/28/2021
ORCHARD: A Benchmark For Measuring Systematic Generalization of Multi-Hierarchical Reasoning
The ability to reason with multiple hierarchical structures is an attrac...

06/12/2021
Can Transformer Language Models Predict Psychometric Properties?
Transformer-based language models (LMs) continue to advance state-of-the...

09/30/2020
TaxiNLI: Taking a Ride up the NLU Hill
Pre-trained Transformer-based neural architectures have consistently ach...

04/12/2022
What do Toothbrushes do in the Kitchen? How Transformers Think our World is Structured
Transformer-based models are now predominant in NLP. They outperform app...

06/03/2021
Auto-tagging of Short Conversational Sentences using Transformer Methods
The problem of categorizing short speech sentences according to their se...