
- Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
  While research on explaining predictions of open-domain QA systems (ODQA...
- Facebook AI's WMT20 News Translation Task Submission
  This paper describes Facebook AI's submission to WMT20 shared news trans...
- Generating Fact Checking Briefs
  Fact checking at scale is difficult – while the number of active fact ch...
- Multilingual AMR-to-Text Generation
  Generating text from structured data is challenging because it requires ...
- Beyond English-Centric Multilingual Machine Translation
  Existing work in translation demonstrated the potential of massively mul...
- Nearest Neighbor Machine Translation
  We introduce k-nearest-neighbor machine translation (kNN-MT), which pred...
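The kNN-MT idea can be summarized in a few lines: pair decoder hidden states seen on parallel data with the target tokens they produced, retrieve the k nearest cached states at each decoding step, and interpolate the retrieved token distribution with the base model's. A minimal sketch follows, assuming a pre-built datastore; the distance temperature and interpolation weight `lam` are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def knn_interpolate(hidden, keys, values, model_logits, k=8, temperature=10.0, lam=0.5):
    """Mix the base model's next-token distribution with one built from retrieved examples.

    hidden:       [d]      decoder state at the current step
    keys:         [N, d]   cached decoder states (the datastore keys)
    values:       [N]      target-token ids paired with each cached state (LongTensor)
    model_logits: [V]      base MT model logits for this step
    """
    vocab_size = model_logits.size(0)

    # Retrieve the k nearest cached states by squared L2 distance
    # (the paper searches the datastore with FAISS; exhaustive search keeps the sketch short).
    dists = ((keys - hidden) ** 2).sum(dim=-1)            # [N]
    knn_dists, knn_idx = dists.topk(k, largest=False)     # [k]

    # Turn negative distances into a distribution over the retrieved target tokens.
    knn_weights = F.softmax(-knn_dists / temperature, dim=-1)
    p_knn = torch.zeros(vocab_size).index_add_(0, values[knn_idx], knn_weights)

    # Interpolate with the base model's distribution.
    p_model = F.softmax(model_logits, dim=-1)
    return lam * p_knn + (1.0 - lam) * p_model
```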
- KILT: a Benchmark for Knowledge Intensive Language Tasks
  Challenging problems such as open-domain question answering, fact checki...
- Multilingual Translation with Extensible Multilingual Pretraining and Finetuning
  Recent work demonstrates the potential of multilingual pretraining of cr...
- Open-Domain Conversational Agents: Current Progress, Open Problems, and Future Directions
  We present our view of what is necessary to build an engaging open-domai...
- Multi-Dimensional Gender Bias Classification
  Machine learning models are trained to find patterns in data. NLP models...
- Multilingual Unsupervised Sentence Simplification
  Progress in Sentence Simplification has been hindered by the lack of sup...
- Augmenting Transformers with KNN-Based Composite Memory for Dialogue
  Various machine learning tasks can benefit from access to external infor...
- Training with Quantization Noise for Extreme Model Compression
  We tackle the problem of producing compact models, maximizing their accu...
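A rough sketch of the quantization-noise idea: on each forward pass, a random subset of weights is replaced by its quantized value (with a straight-through estimator) while the rest stay in full precision, so the network learns to tolerate the quantizer it will meet at inference. The simple int8-style fake quantization and noise rate `p` below are assumptions for illustration; the paper also covers product quantization applied block-wise.

```python
import torch

def quant_noise_weight(weight, p=0.1, bits=8):
    """Fake-quantize a random fraction p of the entries of `weight` (training-time noise).

    At inference all weights would be quantized; during training only a random subset is,
    and the straight-through trick keeps gradients flowing through the rounded entries.
    """
    # Simple symmetric int8-style fake quantization with a per-tensor scale.
    qmax = 2 ** (bits - 1) - 1
    scale = weight.detach().abs().max() / qmax + 1e-12
    quantized = torch.round(weight / scale).clamp(-qmax - 1, qmax) * scale
    quantized = weight + (quantized - weight).detach()  # straight-through estimator

    mask = (torch.rand_like(weight) < p).float()        # entries that receive the "noise"
    return mask * quantized + (1.0 - mask) * weight

# Illustrative use inside a layer's forward pass:
#   w = quant_noise_weight(self.linear.weight, p=0.1)
#   out = torch.nn.functional.linear(x, w, self.linear.bias)
```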
- Accessing Higher-level Representations in Sequential Transformers with Feedback Memory
  Transformers are feedforward networks that can process input tokens in p...
- Generating Interactive Worlds with Text
  Procedurally generating cohesive and interesting game environments is ch...
- Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
  Models often easily learn biases present in the training data, and their...
- Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs
  Query-based open-domain NLP tasks require information synthesis from lon...
- Reducing Transformer Depth on Demand with Structured Dropout
  Overparameterized transformer networks have obtained state of the art re...
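The structured-dropout ("LayerDrop") idea lends itself to a compact sketch: during training, whole transformer layers are skipped at random, which regularizes the network and lets you prune layers at inference without retraining. The wrapper class and drop rate below are illustrative, not fairseq's implementation.

```python
import torch
import torch.nn as nn

class LayerDropEncoder(nn.Module):
    """Stack of layers where each layer is skipped with probability p during training."""

    def __init__(self, layers, p=0.2):
        super().__init__()
        self.layers = nn.ModuleList(layers)  # e.g. nn.TransformerEncoderLayer instances
        self.p = p

    def forward(self, x):
        for layer in self.layers:
            # Skip each layer independently at train time; at inference, keep all
            # layers or drop a fixed subset to trade accuracy for speed.
            if self.training and torch.rand(1).item() < self.p:
                continue
            x = layer(x)
        return x
```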
- ELI5: Long Form Question Answering
  We introduce the first large-scale corpus for long-form question answeri...
- GLOSS: Generative Latent Optimization of Sentence Representations
  We propose a method to learn unsupervised sentence representations in a ...
- fairseq: A Fast, Extensible Toolkit for Sequence Modeling
  fairseq is an open-source sequence modeling toolkit that allows research...
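For a sense of how the toolkit is used in practice, here is a small usage sketch based on fairseq's torch.hub integration. The exact hub identifier, keyword arguments, and available pretrained models vary across fairseq releases, so treat the names below as illustrative rather than authoritative.

```python
# Illustrative fairseq usage via torch.hub (model name and kwargs may differ by release;
# this particular model expects the sacremoses and fastBPE packages to be installed).
import torch

en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de.single_model",
    tokenizer="moses",
    bpe="fastbpe",
)
en2de.eval()
print(en2de.translate("Machine translation is fun!"))
```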
- Learning to Speak and Act in a Fantasy Text Adventure Game
  We introduce a large scale crowdsourced text adventure game as a researc...
- Strategies for Structuring Story Generation
  Writers generally rely on plans or sketches to write long stories, but m...
- Pay Less Attention with Lightweight and Dynamic Convolutions
  Self-attention is a useful mechanism to build generative models for lang...
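A lightweight convolution is, at its core, a depthwise 1D convolution whose kernel is softmax-normalized and shared across groups of channels ("heads"); the dynamic variant additionally predicts the kernel from the current timestep. The sketch below covers only the lightweight case, with head count, kernel size, and padding chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightConv(nn.Module):
    """Depthwise 1D convolution with softmax-normalized, head-shared kernels."""

    def __init__(self, channels, kernel_size=3, heads=4):
        super().__init__()
        assert channels % heads == 0
        self.channels, self.kernel_size, self.heads = channels, kernel_size, heads
        self.weight = nn.Parameter(torch.randn(heads, kernel_size))

    def forward(self, x):
        # x: [batch, channels, time]
        # Normalize each head's kernel with a softmax over the kernel width.
        w = F.softmax(self.weight, dim=-1)                            # [heads, k]
        # Share each head's kernel across its channels: one kernel per channel.
        w = w.repeat_interleave(self.channels // self.heads, dim=0)   # [channels, k]
        w = w.unsqueeze(1)                                            # [channels, 1, k]
        # Depthwise convolution: groups == channels.
        return F.conv1d(x, w, padding=self.kernel_size // 2, groups=self.channels)
```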
- Wizard of Wikipedia: Knowledge-Powered Conversational Agents
  In open-domain dialogue intelligent agents should exhibit the use of kno...
- Hierarchical Neural Story Generation
  We explore story generation: creative systems that can build coherent an...
- Controllable Abstractive Summarization
  Current models for document summarization ignore user preferences such a...
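One way to make a summarizer controllable, in the spirit of this paper, is to condition it on control tokens prepended to the source, for example a desired length bucket. The helper name and bucket boundaries below are hypothetical; they only show the conditioning mechanism.

```python
# Hypothetical helper: prepend a length-control token to the source sequence so a
# summarizer can be conditioned on the desired output length (bucket edges are made up).
def add_length_control(source_tokens, desired_length):
    buckets = [(0, 30, "<len_short>"), (30, 90, "<len_medium>"), (90, float("inf"), "<len_long>")]
    control = next(tok for lo, hi, tok in buckets if lo <= desired_length < hi)
    return [control] + source_tokens

print(add_length_control(["the", "report", "says", "..."], desired_length=25))
# ['<len_short>', 'the', 'report', 'says', '...']
```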
- Prior matters: simple and general methods for evaluating and improving topic quality in topic modeling
  Latent Dirichlet Allocation (LDA) models trained without stopword remova...
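As the title suggests, the choice of Dirichlet priors can substitute for some aggressive preprocessing. A quick way to experiment with that is to let an off-the-shelf LDA implementation learn asymmetric priors instead of fixing symmetric ones; the gensim calls below are real, while the toy corpus and settings are only illustrative.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["topic", "models", "need", "good", "priors"],
        ["the", "prior", "over", "document", "topic", "proportions", "matters"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               alpha="auto",   # learn an asymmetric document-topic prior
               eta="auto")     # learn the topic-word prior as well
print(lda.show_topics())
```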
- Language Modeling with Gated Convolutional Networks
  The predominant approach to language modeling to date is based on recur...
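The building block behind these models is the gated linear unit, h = (X*W + b) ⊙ sigmoid(X*V + c), where a sigmoid gate modulates the output of a linear convolution. Below is a minimal causal version; channel counts and kernel size are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvBlock(nn.Module):
    """Causal 1D convolution followed by a gated linear unit (GLU)."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        # One convolution produces both the candidate activations and the gates.
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size)

    def forward(self, x):
        # x: [batch, channels, time]; pad on the left only so no position sees the future.
        x = F.pad(x, (self.kernel_size - 1, 0))
        a, b = self.conv(x).chunk(2, dim=1)
        return a * torch.sigmoid(b)   # gated linear unit
```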