Reanalyzing L2 Preposition Learning with Bayesian Mixed Effects and a Large Language Model

02/16/2023
by Jakob Prange et al.

We use both Bayesian and neural models to dissect a data set of Chinese learners' pre- and post-interventional responses to two tests measuring their understanding of English prepositions. The results mostly replicate previous findings from frequentist analyses and newly reveal crucial interactions between student ability, task type, and stimulus sentence. Given the sparsity of the data and the high diversity among learners, the Bayesian method proves most useful, but we also see potential in using language model probabilities as predictors of grammaticality and learnability.
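The abstract names two methods that can be made concrete with short sketches. First, a minimal sketch of a Bayesian mixed-effects model in the spirit of the analysis, fit with the bambi library on synthetic data; the column names, formula, and likelihood family are illustrative assumptions, not the paper's actual specification.

```python
# Hypothetical sketch: Bayesian mixed-effects logistic regression with
# random intercepts for students and stimulus sentences. The data schema
# below is synthetic and assumed, not the paper's.
import numpy as np
import pandas as pd
import bambi as bmb

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "student": rng.integers(0, 20, n).astype(str),      # learner ID
    "sentence": rng.integers(0, 10, n).astype(str),     # stimulus item ID
    "task_type": rng.choice(["cloze", "judgment"], n),  # the two tests
    "correct": rng.integers(0, 2, n),                   # response accuracy
})

# Fixed effect of task type; random intercepts capture per-student
# ability and per-sentence difficulty.
model = bmb.Model(
    "correct ~ task_type + (1|student) + (1|sentence)",
    data=df,
    family="bernoulli",
)
idata = model.fit(draws=1000, chains=2)
```

Second, a sketch of scoring stimulus sentences with a pretrained language model so that its probabilities can serve as grammaticality predictors. GPT-2 and the mean per-token log-probability metric are assumptions; the abstract does not name a specific model or scoring scheme.

```python
# Hypothetical sketch: sentence probability under a causal LM as a
# grammaticality predictor. Model choice (GPT-2) is an assumption.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Mean per-token log-probability of `sentence` under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the forward pass returns the mean
        # cross-entropy over next-token predictions.
        loss = model(ids, labels=ids).loss
    return -loss.item()

# Contrasting preposition choices, e.g.:
print(sentence_log_prob("She arrived at the station."))
print(sentence_log_prob("She arrived on the station."))
```

Scores like these could then enter the mixed-effects model above as a per-sentence fixed-effect predictor.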
