We present the call for papers for the BabyLM Challenge: Sample-efficien...
We propose reconstruction probing, a new analysis method for contextuali...
Language models are often trained on text alone, without additional grou...
Rapid progress in machine learning for natural language processing has t...
For a natural language understanding benchmark to be useful in research,...
Understanding language requires grasping not only the overtly stated con...
Crowdsourcing is widely used to create data for common natural language ...
Many crowdsourced NLP datasets contain systematic gaps and biases that a...
Linguistically informed analyses of language models (LMs) contribute to ...
NLP is currently dominated by general-purpose pretrained language models...
One reason pretraining on self-supervised linguistic tasks is effective ...
We evaluate whether BERT, a widely used neural network for sentence proc...
Natural language inference (NLI) is an increasingly important task for n...
We introduce The Benchmark of Linguistic Minimal Pairs (shortened to BLi...
Though state-of-the-art sentence representation models can perform tasks...
Recent pretrained sentence encoders achieve state of the art results on ...
Verbs occur in different syntactic environments, or frames. We investiga...
In this work, we explore the ability of artificial neural networks to ju...