Probing for targeted syntactic knowledge through grammatical error detection

10/28/2022
by Christopher Davis, et al.

Targeted studies testing knowledge of subject-verb agreement (SVA) indicate that pre-trained language models encode syntactic information. We assert that if models robustly encode subject-verb agreement, they should be able to identify not only when agreement is correct but also when it is incorrect. To that end, we propose grammatical error detection as a diagnostic probe for evaluating token-level contextual representations for their knowledge of SVA. We evaluate contextual representations at each layer of five pre-trained English language models: BERT, XLNet, GPT-2, RoBERTa, and ELECTRA. We leverage publicly available annotated training data from both English second-language learners and Wikipedia edits, and report results on manually crafted stimuli for subject-verb agreement. We find that the masked language models linearly encode information relevant to the detection of SVA errors, while the autoregressive models perform on par with our baseline. However, we also observe a divergence in performance when probes are trained on different training sets and when they are evaluated on different syntactic constructions, suggesting that the information pertaining to SVA error detection is not robustly encoded.
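To make the probing setup concrete, the sketch below trains a linear diagnostic probe over frozen per-layer contextual representations to flag tokens involved in an SVA error. This is an illustrative reconstruction, not the authors' released code: the model name (bert-base-cased), the probed layer, the toy sentence pair, and the per-token labeling rule are all assumptions for demonstration.

```python
# Minimal sketch of a layer-wise linear probe for SVA error detection.
# Assumptions (not from the paper): model, layer index, toy data, labels.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
model.eval()

# Toy minimal pair: the first sentence contains an SVA error.
sentences = ["The keys to the cabinet is on the table.",
             "The keys to the cabinet are on the table."]

LAYER = 8  # probe a single layer here; the paper probes every layer

feats, labels = [], []
for sent in sentences:
    enc = tokenizer(sent, return_tensors="pt")
    with torch.no_grad():
        # hidden_states[LAYER] has shape (batch, seq_len, dim)
        hidden = model(**enc).hidden_states[LAYER][0]
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    for tok, vec in zip(tokens, hidden):
        feats.append(vec.numpy())
        # Hypothetical labeling rule: the mismatched verb "is" is the error.
        labels.append(1 if tok == "is" else 0)

# A linear classifier over frozen representations serves as the probe.
probe = LogisticRegression(max_iter=1000).fit(feats, labels)
print("train accuracy:", probe.score(feats, labels))
```

In the paper's actual setting, such a probe would be trained per layer and per model on the learner and Wikipedia-edit data, then evaluated on the manually crafted SVA stimuli.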

Related research

Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models (10/25/2022)
Structural probing work has found evidence for latent syntactic informat...

An analysis of the utility of explicit negative examples to improve the syntactic abilities of neural language models (04/06/2020)
We explore the utilities of explicit negative examples in training neura...

Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models (06/10/2021)
Targeted syntactic evaluations have demonstrated the ability of language...

Evaluating German Transformer Language Models with Syntactic Agreement Tests (07/07/2020)
Pre-trained transformer language models (TLMs) have recently refashioned...

Does BERT Really Agree? Fine-grained Analysis of Lexical Dependence on a Syntactic Task (04/14/2022)
Although transformer-based Neural Language Models demonstrate impressive...

Assessing BERT's Syntactic Abilities (01/16/2019)
I assess the extent to which the recently introduced BERT model captures...

Frequency Effects on Syntactic Rule Learning in Transformers (09/14/2021)
Pre-trained language models perform well on a variety of linguistic task...
