The Singleton Fallacy: Why Current Critiques of Language Models Miss the Point

02/08/2021
by Magnus Sahlgren, et al.

This paper discusses the current critique of neural network-based Natural Language Understanding (NLU) solutions known as language models. We argue that much of the current debate rests on an argumentation error that we will refer to as the singleton fallacy: the assumption that language, meaning, and understanding are single and uniform phenomena that are unobtainable by (current) language models. By contrast, we will argue that there are many different types of language use, meaning, and understanding, and that (current) language models are built with the explicit purpose of acquiring and representing one type of structural understanding of language. We will argue that such structural understanding may cover several different modalities, and as such can handle several different types of meaning. Our position is that we currently see no theoretical reason why such structural knowledge would be insufficient to count as "real" understanding.
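To make the notion of structural understanding concrete: a masked language model acquires its knowledge solely by predicting words from their surrounding context, i.e. from the distributional structure of language. The minimal sketch below illustrates this kind of prediction; the library and checkpoint (Hugging Face transformers, bert-base-uncased) are assumptions made for the illustration and are not prescribed by the paper.

```python
# Illustrative sketch only: a masked language model filling a slot purely
# from the distributional structure it learned during pretraining.
# The library and model choice (transformers, bert-base-uncased) are
# assumptions for this example, not part of the paper.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate fillers for the masked position from context
# alone; no grounding, perception, or reference is involved.
for prediction in unmasker("The capital of Sweden is [MASK]."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```

Whether a high-ranking filler in such a slot counts as "real" understanding is precisely the question the paper takes up.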

Related research

09/26/2017 - Input-to-Output Gate to Improve RNN Language Models
This paper proposes a reinforcing method that refines the output layers ...

04/25/2023 - On the Computation of Meaning, Language Models and Incomprehensible Horrors
We integrate foundational theories of meaning with a mathematical formal...

08/10/2023 - Do Language Models Refer?
What do language models (LMs) do with language? Everyone agrees that the...

10/19/2022 - Language Models Understand Us, Poorly
Some claim language models understand us. Others won't hear it. To clari...

06/14/2023 - Language models are not naysayers: An analysis of language models on negation benchmarks
Negation has been shown to be a major bottleneck for masked language mod...

08/05/2022 - Meaning without reference in large language models
The widespread success of large language models (LLMs) has been met with...

12/20/2022 - Measure More, Question More: Experimental Studies on Transformer-based Language Models and Complement Coercion
Transformer-based language models have shown strong performance on an ar...
