Most interpretability research in NLP focuses on understanding the behav...
The impact of randomness on model training is poorly understood. How do ...
Large language models (LLMs) have achieved widespread success on a varie...
Pretrained language models often generate outputs that are not in line w...
The potential for pre-trained large language models (LLMs) to use natura...
Given the recent impressive accomplishments of language models (LMs) for...
Language models (LMs) are pretrained to imitate internet text, including...
We present the results of the NLP Community Metasurvey. Run from May to ...
Summarization datasets are often assembled either by scraping naturally ...
In modern interactive speech-based systems, speech is consumed and trans...
Pretrained language models often do not perform tasks in ways that are i...
Current QA systems can generate reasonable-sounding yet false answers wi...
To enable building and testing models on long-document comprehension, we...
More capable language models increasingly saturate existing task benchma...
It is well documented that NLP models learn social biases present in the...
Structured information about entities is critical for many semantic pars...