
- Does Putting a Linguist in the Loop Improve NLU Data Collection?
  Many crowdsourced NLP datasets contain systematic gaps and biases that a...
- COGS: A Compositional Generalization Challenge Based on Semantic Interpretation
  Natural language is characterized by compositionality: the meaning of a ...
- Universal linguistic inductive biases via meta-learning
  How do learners acquire languages from the limited data available to the...
- How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
  This position paper describes and critiques the Pretraining-Agnostic Ide...
- Cross-Linguistic Syntactic Evaluation of Word Prediction Models
  A range of studies have concluded that neural word prediction models can...
- Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs
  Sequence-based neural networks show significant sensitivity to syntactic...
- Syntactic Data Augmentation Increases Robustness to Inference Heuristics
  Pretrained neural models such as BERT, when fine-tuned to perform natura...
- Syntactic Structure from Deep Learning
  Modern deep neural networks achieve impressive performance in engineerin...
- Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks
  Learners that are exposed to the same training data might generalize dif...
- BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance
  If the same neural architecture is trained multiple times on the same da...
- Discovering the Compositional Structure of Vector Representations with Role Learning Networks
  Neural networks (NNs) are able to perform tasks that rely on composition...
- Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
  Neural language models (LMs) perform well on tasks that require sensitiv...
- Quantity doesn't buy quality syntax with neural language models
  Recurrent neural networks can learn to predict upcoming words remarkably...
- Probing What Different NLP Tasks Teach Machines about Function Word Comprehension
  We introduce a set of nine challenge tasks that test for the understandi...
- Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop
  The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techn...
- Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages
  How do typological properties such as word order and morphological case ...
- Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
  Machine learning systems can often achieve high performance on a test se...
- Human few-shot learning of compositional instructions
  People learn in fast and flexible ways that have not been emulated by ma...
- RNNs Implicitly Implement Tensor Product Representations
  Recurrent neural networks (RNNs) can learn continuous vector representat...
- Non-entailed subsequences as a challenge for natural language inference
  Neural network models have shown great success at natural language infer...
- Can Entropy Explain Successor Surprisal Effects in Reading?
  Human reading behavior is sensitive to surprisal: more predictable words...
- What can linguistics and deep learning contribute to each other?
  Joe Pater's target article calls for greater interaction between neural ...
- A Neural Model of Adaptation in Reading
  It has been argued that humans rapidly adapt their lexical and syntactic...
- Targeted Syntactic Evaluation of Language Models
  We present a dataset for evaluating the grammaticality of the prediction...
- Distinct patterns of syntactic agreement errors in recurrent networks and humans
  Determining the correct form of a verb in context requires an understand...
- Colorless green recurrent networks dream hierarchically
  Recurrent neural networks (RNNs) have achieved impressive results in a v...
- Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks
  Syntactic rules in human language usually refer to the hierarchical stru...
- Phonological (un)certainty weights lexical activation
  Spoken word recognition involves at least two basic computations. First ...
- Exploring the Syntactic Abilities of RNNs with Multi-task Learning
  Recent work has explored the syntactic abilities of RNNs using the subje...
- Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
  The success of long short-term memory (LSTM) neural networks in language...
- Issues in evaluating semantic spaces using word analogies
  The offset method for solving word analogies has become a standard evalu...