Transparency Helps Reveal When Language Models Learn Meaning

10/14/2022
by Zhaofeng Wu et al.

Many current NLP systems are built on language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our systematic experiments with synthetic data reveal that, in languages where all expressions have context-independent denotations (i.e., languages with strong transparency), both autoregressive and masked language models successfully learn to emulate semantic relations between expressions. However, when denotations are made context-dependent, with the language otherwise unmodified, this ability degrades. Turning to natural language, our experiments with a specific phenomenon, referential opacity, add to the growing body of evidence that current language models do not represent natural language semantics well. We show that this failure relates to the context-dependent nature of natural language form-meaning mappings.
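To make the contrast concrete, here is a minimal illustrative sketch (not the paper's actual experimental setup): a toy expression language whose denotation function is context-independent (strongly transparent), alongside a variant of the same surface language whose leaf denotations shift with their surrounding context, breaking transparency. The depth-based shift is an invented assumption purely for illustration.

```python
def denote_transparent(expr):
    """Strong transparency: an expression's denotation is the same
    no matter where it appears. Expressions are nested tuples,
    e.g. ("add", ("num", 2), ("num", 3))."""
    op = expr[0]
    if op == "num":
        return expr[1]
    if op == "add":
        return denote_transparent(expr[1]) + denote_transparent(expr[2])
    raise ValueError(f"unknown operator: {op}")


def denote_contextual(expr, depth=0):
    """Same surface language, but a leaf's denotation now depends on its
    context (here, hypothetically, its nesting depth), so identical
    subexpressions can denote different values in different positions."""
    op = expr[0]
    if op == "num":
        return expr[1] + depth  # context-dependent denotation
    if op == "add":
        return (denote_contextual(expr[1], depth + 1)
                + denote_contextual(expr[2], depth + 1))
    raise ValueError(f"unknown operator: {op}")


e = ("add", ("num", 2), ("num", 3))
print(denote_transparent(e))  # 5: leaves denote their literal values
print(denote_contextual(e))   # 7: each leaf sits at depth 1, shifting it by 1
```

Under the transparent semantics, the subexpression `("num", 2)` denotes 2 wherever it occurs; under the contextual semantics, its value changes when it is embedded more deeply, which is the kind of form-meaning mapping the abstract argues is harder for language models to emulate.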


Related research

- Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? (04/22/2021)
- Towards Understanding What Code Language Models Learned (06/20/2023)
- How Good are Commercial Large Language Models on African Languages? (05/11/2023)
- Evidence of Meaning in Language Models Trained on Programs (05/18/2023)
- Symbolic and Language Agnostic Large Language Models (08/27/2023)
- On the Computation of Meaning, Language Models and Incomprehensible Horrors (04/25/2023)
- Limits for Learning with Language Models (06/21/2023)
