Despite recent advances in the field of explainability, much remains unk...
A primary criticism towards language models (LMs) is their inscrutabilit...
Large Language Models (LLMs) have driven extraordinary improvements in N...
Training data attribution (TDA) methods offer to trace a model's predict...
Three-dimensional trajectories, or the 3D position and rotation of objec...
Many tasks can be described as compositions over subroutines. Though mod...
Prompts have been the center of progress in advancing language models' z...
Large-scale models combining text and images have made incredible progre...
AlphaZero, an approach to reinforcement learning that couples neural net...
The extent to which text-only language models (LMs) learn to represent t...
Lexical semantics and cognitive science point to affordances (i.e. the a...
Distributional models learn representations of words from text, but are ...
Vision-language pretrained models have achieved impressive performance o...
We present a novel corpus of 445 human- and computer-generated documents...
Linguistic representations derived from text alone have been criticized ...
Pre-trained language models perform well on a variety of linguistic task...
Pretrained language models have been shown to encode relational informat...
Recently, a boom of papers has shown extraordinary progress in few-shot...
Experiments with pretrained models such as BERT are often based on a sin...
Many Question-Answering (QA) datasets contain unanswerable questions, bu...
We present a system that enables robots to interpret spatial language as...
We introduce a new dataset for training and evaluating grounded language...
Pre-trained models have revolutionized natural language understanding. H...
When communicating, people behave consistently across conversational rol...
In politics, neologisms are frequently invented for partisan objectives....
Natural language object retrieval is a highly useful yet challenging tas...
Neural models often exploit superficial ("weak") features to achieve goo...
While there has been much recent work studying how linguistic informatio...
Oftentimes, we specify tasks for a robot using temporal language that c...
Contextualized representation models such as ELMo (Peters et al., 2018a)...
Pre-trained text encoders have rapidly advanced the state of the art on ...
We introduce a set of nine challenge tasks that test for the understandi...
Machine learning systems can often achieve high performance on a test se...
Work on the problem of contextualized word representation -- the develop...
We release a corpus of 43 million atomic edits across 8 languages. These...
We present a large scale unified natural language inference (NLI) datase...