Language models are not naysayers: An analysis of language models on negation benchmarks

06/14/2023
by Thinh Hung Truong, et al.

Negation has been shown to be a major bottleneck for masked language models, such as BERT. However, whether this finding still holds for larger-sized auto-regressive language models (“LLMs”) has not been studied comprehensively. With the ever-increasing volume of research and applications of LLMs, we take a step back to evaluate the ability of current-generation LLMs to handle negation, a fundamental linguistic phenomenon that is central to language understanding. We evaluate different LLMs – including the open-source GPT-neo, GPT-3, and InstructGPT – against a wide range of negation benchmarks. Through systematic experimentation with varying model sizes and prompts, we show that LLMs have several limitations including insensitivity to the presence of negation, an inability to capture the lexical semantics of negation, and a failure to reason under negation.
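The negation probing described above is typically built on minimal pairs: sentence pairs that differ only in the presence of a negation marker, so that a model's insensitivity to negation shows up as near-identical predictions on both members. A minimal sketch of constructing such pairs is below; the template and lexical items are hypothetical illustrations, not drawn from the paper's actual benchmarks.

```python
# Illustrative sketch (not the authors' code): building affirmative/negated
# cloze-style minimal pairs of the kind used in negation probing benchmarks.
# The "A X is (not) a Y." template and the items below are hypothetical.

def make_minimal_pair(subject: str, category: str) -> tuple:
    """Return an (affirmative, negated) pair differing only in 'not'."""
    affirmative = f"A {subject} is a {category}."
    negated = f"A {subject} is not a {category}."
    return affirmative, negated

# A model that assigns similar continuations or likelihoods to both members
# of each pair is failing to register the negation.
pairs = [make_minimal_pair(s, c) for s, c in [("robin", "bird"), ("hammer", "tool")]]
for aff, neg in pairs:
    print(aff, "|", neg)
```

In practice, each pair would be fed to the LLM as a prompt (or scored with the model's token likelihoods), and the gap between the model's answers on the affirmative and negated versions measures its sensitivity to negation.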

