Integrating Linguistic Theory and Neural Language Models

07/20/2022
by Bai Li, et al.

Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is generally achieved by leveraging massive amounts of training data, and rarely by encoding explicit linguistic knowledge into neural models. This has led many to question the relevance of linguistics to modern natural language processing. In this dissertation, I present several case studies to illustrate how theoretical linguistics and neural language models remain relevant to each other. Language models are useful to linguists because they provide an objective tool for measuring semantic distance, which is difficult to do with traditional methods. Conversely, linguistic theory contributes to language modelling research by providing frameworks and sources of data for probing language models for specific aspects of language understanding.

This thesis contributes three studies that explore different aspects of the syntax-semantics interface in language models. In the first part, I apply language models to the problem of word class flexibility: using mBERT as a source of semantic distance measurements, I present evidence in favour of analyzing word class flexibility as a directional process. In the second part, I propose a method to measure surprisal at intermediate layers of language models; my experiments show that sentences containing morphosyntactic anomalies trigger surprisal at earlier layers than sentences containing semantic and commonsense anomalies do. In the third part, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, my thesis develops new connections between natural language processing, linguistic theory, and psycholinguistics, offering fresh perspectives for the interpretation of language models.
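To make the first study's use of mBERT as a semantic distance measure concrete, here is a minimal sketch of one way to compare a noun usage and a verb usage of a flexible word via contextual embeddings. It assumes the HuggingFace transformers library; the pooling strategy, layer choice, and cosine distance are illustrative assumptions, not necessarily the thesis's exact setup.

    # Sketch: semantic distance between two usages of a word, via mBERT.
    import torch
    from torch.nn.functional import cosine_similarity
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoModel.from_pretrained("bert-base-multilingual-cased")
    model.eval()

    def word_embedding(sentence: str, word: str) -> torch.Tensor:
        """Mean-pool the final-layer vectors of the word's subword tokens."""
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
        # Locate the first occurrence of the word's subword token sequence.
        word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
        ids = enc["input_ids"][0].tolist()
        for i in range(len(ids) - len(word_ids) + 1):
            if ids[i:i + len(word_ids)] == word_ids:
                return hidden[i:i + len(word_ids)].mean(dim=0)
        raise ValueError(f"{word!r} not found in {sentence!r}")

    # Distance between a noun usage and a verb usage of "walk":
    noun_vec = word_embedding("We went for a walk by the river.", "walk")
    verb_vec = word_embedding("They walk to school every day.", "walk")
    distance = 1 - cosine_similarity(noun_vec, verb_vec, dim=0).item()
    print(f"cosine distance: {distance:.3f}")

Aggregating such distances over many noun and verb usages of the same lemma is one way to quantify the semantic shift involved in word class flexibility.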
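For the second study, one plausible way to operationalize surprisal at intermediate layers is to fit a simple density model to token vectors at each layer using reference text, then score new sentences by negative log-likelihood. The sketch below uses a diagonal Gaussian and bert-base-cased; both are assumptions made for brevity (a realistic run would need a much larger reference corpus), and this is a reconstruction of the idea, not the thesis's exact procedure.

    # Sketch: per-layer "surprisal" as Gaussian negative log-likelihood.
    import numpy as np
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
    model.eval()

    def layer_vectors(sentences):
        """Return one array of stacked token vectors per layer."""
        per_layer = None
        for s in sentences:
            enc = tokenizer(s, return_tensors="pt")
            with torch.no_grad():
                hidden = model(**enc).hidden_states  # embeddings + 12 layers
            if per_layer is None:
                per_layer = [[] for _ in hidden]
            for i, h in enumerate(hidden):
                per_layer[i].append(h[0].numpy())
        return [np.concatenate(v) for v in per_layer]

    reference = ["The cat sat on the mat.", "She reads a book every night."]
    ref_layers = layer_vectors(reference)

    # Fit a diagonal Gaussian per layer (full covariance needs more data).
    params = [(v.mean(axis=0), v.var(axis=0) + 1e-6) for v in ref_layers]

    def layerwise_surprisal(sentence):
        """Mean negative log-density of the sentence's tokens at each layer."""
        test = layer_vectors([sentence])
        scores = []
        for (mu, var), v in zip(params, test):
            nll = 0.5 * (((v - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
            scores.append(nll.mean())
        return scores

    for layer, s in enumerate(layerwise_surprisal("The cat sat on the the mat.")):
        print(f"layer {layer:2d}: {s:.1f}")

Comparing these per-layer scores for morphosyntactically anomalous sentences against semantically or commonsensically anomalous ones is what lets the study ask where in the network each anomaly type becomes surprising.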
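For the third study, psycholinguistic sentence-sorting designs cross verbs with argument structure constructions and ask whether representations group by construction or by verb. The sketch below checks this with cosine similarity over mean-pooled BERT sentence vectors; the example sentences (in the style of Bencini and Goldberg's sorting stimuli) and the similarity measure are illustrative assumptions.

    # Sketch: do sentence embeddings cluster by construction or by verb?
    import torch
    from itertools import combinations
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    model = AutoModel.from_pretrained("bert-base-cased")
    model.eval()

    # Verb x construction grid (transitive, ditransitive,
    # caused-motion, resultative); sentences are illustrative.
    sentences = {
        ("throw", "transitive"): "Anita threw the hammer.",
        ("throw", "ditransitive"): "Chris threw Linda the pencil.",
        ("throw", "caused-motion"): "Pat threw the keys onto the roof.",
        ("throw", "resultative"): "Lyn threw the box apart.",
        ("slice", "transitive"): "Barbara sliced the bread.",
        ("slice", "ditransitive"): "Jennifer sliced Terry an apple.",
        ("slice", "caused-motion"): "Meg sliced the ham onto the plate.",
        ("slice", "resultative"): "Nancy sliced the tire open.",
    }

    def embed(sentence):
        """Mean-pool final-layer token vectors into a sentence vector."""
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            h = model(**enc).last_hidden_state[0]
        return h.mean(dim=0)

    vecs = {k: embed(s) for k, s in sentences.items()}

    def mean_sim(pairs):
        sims = [torch.cosine_similarity(vecs[a], vecs[b], dim=0).item()
                for a, b in pairs]
        return sum(sims) / len(sims)

    keys = list(vecs)
    same_cxn = [(a, b) for a, b in combinations(keys, 2)
                if a[1] == b[1] and a[0] != b[0]]
    same_verb = [(a, b) for a, b in combinations(keys, 2)
                 if a[0] == b[0] and a[1] != b[1]]

    print(f"same construction, different verb: {mean_sim(same_cxn):.3f}")
    print(f"same verb, different construction: {mean_sim(same_verb):.3f}")

If same-construction pairs score reliably higher than same-verb pairs, that is one kind of evidence that the model encodes argument structure constructions over and above lexical identity.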

Related research

A Precis of Language Models are not Models of Language (05/16/2022)
Natural Language Processing is one of the leading application areas in t...

How is BERT surprised? Layerwise detection of linguistic anomalies (05/16/2021)
Transformer language models have shown remarkable ability in detecting w...

Testing AI performance on less frequent aspects of language reveals insensitivity to underlying meaning (02/23/2023)
Advances in computational methods and big data availability have recentl...

Visually Analyzing Contextualized Embeddings (09/05/2020)
In this paper we introduce a method for visually analyzing contextualize...

Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese (05/02/2020)
We examine a methodology using neural language models (LMs) for analyzin...

Neural reality of argument structure constructions (02/24/2022)
In lexicalist linguistic theories, argument structure is assumed to be p...

What does BERT learn about prosody? (04/25/2023)
Language models have become nearly ubiquitous in natural language proces...
