Linguistically-Informed Transformations (LIT): A Method for Automatically Generating Contrast Sets
Although large-scale pretrained language models, such as BERT and RoBERTa, have achieved superhuman performance on in-distribution test sets, their performance suffers on out-of-distribution test sets (e.g., on contrast sets). Building contrast sets often requires human-expert annotation, which is expensive and hard to scale. In this work, we propose a Linguistically-Informed Transformation (LIT) method to automatically generate contrast sets, which enables practitioners to explore linguistic phenomena of interest as well as to compose different phenomena. Experiments with our method on SNLI and MNLI show that current pretrained language models, although claimed to contain sufficient linguistic knowledge, struggle on our automatically generated contrast sets. Furthermore, we improve the models' performance on the contrast sets by applying LIT to augment the training data, without affecting performance on the original data.
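To make the idea of automatically generated contrast sets concrete, the sketch below shows what a single rule-based transformation of an NLI example could look like. This is not the authors' LIT implementation: the NLIExample class, the toy negation rule, and the label-flipping logic are hypothetical simplifications used only to illustrate how a linguistic transformation can perturb an input and update its label.

```python
# Minimal illustrative sketch of a rule-based contrast-set transformation
# for NLI. NOT the LIT method from the paper; the rule below is a toy
# example that only handles hypotheses of the form "X is Y".

from dataclasses import dataclass


@dataclass
class NLIExample:
    premise: str
    hypothesis: str
    label: str  # "entailment", "contradiction", or "neutral"


def negate_hypothesis(example: NLIExample) -> NLIExample:
    """Toy transformation: insert a negation into the hypothesis and
    flip the label (entailment <-> contradiction)."""
    if " is " not in example.hypothesis:
        return example  # rule does not apply; leave the example unchanged
    negated = example.hypothesis.replace(" is ", " is not ", 1)
    flipped = {"entailment": "contradiction",
               "contradiction": "entailment"}.get(example.label, "neutral")
    return NLIExample(example.premise, negated, flipped)


if __name__ == "__main__":
    original = NLIExample(
        premise="A man is playing a guitar on stage.",
        hypothesis="A man is performing music.",
        label="entailment",
    )
    contrast = negate_hypothesis(original)
    print(original)
    print(contrast)  # hypothesis negated, label flipped to "contradiction"
```

In the paper's framing, transformations like this would be linguistically informed and composable, so that practitioners can target specific phenomena and combine them; the generated pairs can then be used both for evaluation and for augmenting the training data.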