Frequency effects in Linear Discriminative Learning

06/19/2023
by Maria Heitmeier, et al.

Word frequency is a strong predictor in most lexical processing tasks, so any model of word recognition needs to account for how word frequency effects arise. The Discriminative Lexicon Model (DLM; Baayen et al., 2018a, 2019) models lexical processing with linear mappings between words' forms and their meanings. So far, these mappings could either be obtained incrementally via error-driven learning, a computationally expensive process that captures frequency effects, or with an efficient but frequency-agnostic closed-form solution that models the theoretical endstate of learning (EL), in which all words are learned optimally. In this study we show how an efficient yet frequency-informed mapping between form and meaning can be obtained (frequency-informed learning; FIL). We find that FIL approximates an incremental solution well while being computationally much cheaper. FIL shows relatively low type accuracy but high token accuracy, demonstrating that the model correctly processes most of the word tokens speakers encounter in daily life. We use FIL to model reaction times in the Dutch Lexicon Project (Keuleers et al., 2010) and find that it predicts the S-shaped relationship between frequency and mean reaction time well, but underestimates the variance of reaction times for low-frequency words. Compared to EL, FIL also better accounts for priming effects in an auditory lexical decision task in Mandarin Chinese (Lee, 2007). Finally, we used ordered data from CHILDES (Brown, 1973; Demuth et al., 2006) to compare mappings obtained with FIL and with incremental learning. The two mappings are highly correlated, but with FIL some nuances based on word ordering effects are lost. Our results show how frequency effects in a learning model can be simulated efficiently by means of a closed-form solution, and they raise questions about how best to account for low-frequency words in cognitive models.
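To make the contrast between the closed-form endstate mapping and a frequency-informed mapping concrete, the following minimal numpy sketch compares the three kinds of solutions the abstract refers to. It assumes the DLM's usual setup of a linear map F solving C F ≈ S for a cue (form) matrix C and a semantic matrix S, with EL as ordinary least squares; FIL is sketched here as frequency-weighted least squares, and incremental learning as the Widrow-Hoff rule. The matrix sizes, cue probabilities, frequencies, and learning rate are invented for illustration and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_words, n_cues, n_dims = 8, 5, 4          # toy sizes: more words than cues, so the fit is imperfect

C = (rng.random((n_words, n_cues)) < 0.5).astype(float)    # binary form-cue matrix
C[np.arange(n_words), rng.integers(0, n_cues, n_words)] = 1.0  # ensure every word has at least one cue
S = rng.normal(size=(n_words, n_dims))                      # toy semantic vectors
freq = np.array([900.0, 300.0, 120.0, 40.0, 12.0, 5.0, 2.0, 1.0])  # hypothetical token frequencies

# Endstate of learning (EL): ordinary least squares, every word type weighted equally.
# F_el = (C'C)^+ C'S
F_el = np.linalg.pinv(C.T @ C) @ C.T @ S

# Frequency-informed learning (FIL), sketched as frequency-weighted least squares:
# each word's contribution is weighted by its token frequency, W = diag(freq).
W = np.diag(freq)
F_fil = np.linalg.pinv(C.T @ W @ C) @ C.T @ W @ S

# Incremental error-driven learning (Widrow-Hoff rule) for comparison:
# word tokens are presented one at a time, sampled in proportion to their frequency.
eta = 0.01
F_inc = np.zeros((n_cues, n_dims))
for i in rng.choice(n_words, size=5000, p=freq / freq.sum()):
    c, s = C[i:i+1], S[i:i+1]
    F_inc += eta * c.T @ (s - c @ F_inc)

# Per-word reconstruction error of the predicted semantic vectors.
for name, F in [("EL", F_el), ("FIL", F_fil), ("incremental", F_inc)]:
    err = np.linalg.norm(C @ F - S, axis=1)
    print(name, np.round(err, 3))

The printed per-word errors are meant to illustrate the type/token trade-off described above: the frequency-weighted solutions fit high-frequency words more closely, at the cost of the rarest ones, whereas the unweighted EL solution spreads its error evenly across word types.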


