Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization

11/04/2020
by   Tristan Thrush, et al.

Previous studies of the syntactic abilities of deep learning models have not targeted the relationship between the strength of a grammatical generalization and the amount of evidence to which the model is exposed during training. We address this issue by deploying a novel-word-learning paradigm to test BERT's few-shot learning capabilities for two aspects of English verbs: alternations and classes of selectional preferences. For the former, we fine-tune BERT on a single frame in a verbal-alternation pair and ask whether the model expects the novel verb to occur in its sister frame. For the latter, we fine-tune BERT on an incomplete selectional network of verbal objects and ask whether it expects unattested but plausible verb/object pairs. We find that BERT makes robust grammatical generalizations after just one or two instances of a novel word in fine-tuning. In the verbal-alternation tests, the model displays behavior consistent with a transitivity bias: verbs seen only a few times are expected to take direct objects, but verbs seen with direct objects are not expected to occur intransitively.
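The core of the alternation test can be pictured with a short sketch. The following is a minimal illustration assuming the Hugging Face transformers library; the nonce verb "daxed", the dative-alternation frames, the masking scheme, and the optimizer settings are assumptions chosen for exposition, not the authors' reported procedure.

    # Minimal sketch of the few-shot novel-verb paradigm (illustrative only).
    import torch
    from transformers import BertForMaskedLM, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased")

    # Introduce a nonce verb and grow the embedding matrix to cover it.
    tokenizer.add_tokens(["daxed"])
    model.resize_token_embeddings(len(tokenizer))
    verb_id = tokenizer.convert_tokens_to_ids("daxed")

    # Fine-tune on a single attested frame (double-object dative): mask the
    # novel verb and compute the MLM loss at that position only.
    attested = "the teacher daxed the student a book ."
    enc = tokenizer(attested, return_tensors="pt")
    labels = enc["input_ids"].clone()
    pos = (labels[0] == verb_id).nonzero().item()
    enc["input_ids"][0, pos] = tokenizer.mask_token_id
    labels[labels != verb_id] = -100  # ignore every other position

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    for _ in range(3):  # a few steps stand in for "one or two instances"
        loss = model(**enc, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Probe: log-probability of the nonce verb at a masked verb slot.
    def verb_logprob(sentence: str) -> float:
        ids = tokenizer(sentence, return_tensors="pt")["input_ids"]
        p = (ids[0] == verb_id).nonzero().item()
        ids[0, p] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=ids).logits
        return torch.log_softmax(logits[0, p], dim=-1)[verb_id].item()

    model.eval()
    # Compare the unattested sister frame (prepositional dative) against the
    # frame the verb was actually trained on.
    print(verb_logprob("the teacher daxed a book to the student ."))  # sister
    print(verb_logprob("the teacher daxed the student a book ."))     # attested

If the model generalizes as the abstract reports, the sister-frame score should approach the attested-frame score after fine-tuning, even though the nonce verb was never seen in that frame.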
