
- CLiMP: A Benchmark for Chinese Language Model Evaluation
  Linguistically informed analyses of language models (LMs) contribute to ...
- When Do You Need Billions of Words of Pretraining Data?
  NLP is currently dominated by general-purpose pretrained language models...
- Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)
  One reason pretraining on self-supervised linguistic tasks is effective ...
- Can neural networks acquire a structural bias from raw linguistic data?
  We evaluate whether BERT, a widely used neural network for sentence proc...
- Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature and PRESupposition
  Natural language inference (NLI) is an increasingly important task for n...
- BLiMP: A Benchmark of Linguistic Minimal Pairs for English
  We introduce The Benchmark of Linguistic Minimal Pairs (shortened to BLi... (see the minimal-pair scoring sketch after this list)
- Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs
  Though state-of-the-art sentence representation models can perform tasks...
- Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments
  Recent pretrained sentence encoders achieve state-of-the-art results on ...
- Verb Argument Structure Alternations in Word and Sentence Embeddings
  Verbs occur in different syntactic environments, or frames. We investiga...
- Neural Network Acceptability Judgments
  In this work, we explore the ability of artificial neural networks to ju...
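
Several of the benchmarks listed above (BLiMP and CLiMP in particular) are built from minimal pairs, and language models are typically scored by whether they assign higher probability to the acceptable sentence in each pair. The snippet below is a minimal sketch of that comparison using GPT-2 via the Hugging Face transformers library; the model choice and the example pair are illustrative assumptions, not taken from either benchmark.

```python
# Minimal sketch: score one linguistic minimal pair with GPT-2.
# Model choice and example sentences are illustrative, not drawn from BLiMP/CLiMP.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability GPT-2 assigns to the tokens of a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy over
        # the predicted tokens; scale by the number of predictions and negate
        # to recover the summed log-probability.
        outputs = model(**inputs, labels=inputs["input_ids"])
    num_predictions = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * num_predictions

acceptable = "The cats that the dog chases are hungry."
unacceptable = "The cats that the dog chases is hungry."

# The model gets the pair "right" if the acceptable sentence scores higher.
print(sentence_log_prob(acceptable) > sentence_log_prob(unacceptable))
```

Summing token log-probabilities over the whole sentence (rather than averaging) is the usual forced-choice criterion for such pairs, since the two members of a minimal pair differ only minimally in length.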